International Journal of Advances in Engineering & Technology (IJAET)
March Issue, Volume 3, Issue 1
URL: http://www.ijaet.org
E-mail: editor@ijaet.org

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

Table of Contents (Vol. 3, Issue 1, March 2012)

1. Robust Fault Tolerant Control with Sensor Faults for a Four-Rotor Helicopter. Hicham Khebbache, Belkacem Sait and Fouad Yacef. pp. 1-13
2. Scrutinize the Abrasion Phenomenon with the Brunt Forces in the Joints of the Robots and Proposed A New Way to Decrease That. Khashayar Teimoori and Mahdi Pirhayati. pp. 14-20
3. Knowledge Management in Success of ERP Systems. Usman Musa Zakari Usman, Mohammad Nazir Ahmad. pp. 21-28
4. A New Proposal Erica+ Switch Algorithm for Traffic Management. Ehab Aziz Khalil. pp. 29-40
5. Automated Test Jig for Uniformity Evaluation of Luminaries. Deepa Ramane, Jayashri Bangali, Arvind Shaligram. pp. 41-47
6. Tracking of Intruders on Geographical Map using IDS Alert. Manish Kumar, M. Hanumanthappa, T. V. Suresh Kumar. pp. 48-54
7. Design and Analysis of Multidielectric Layer Microstrip Antenna with Varying Superstrate Layer Characteristics. Samir Dev Gupta, Amit Singh. pp. 55-68
8. Low Cost Intelligent Arm Actuator Using Pneumatic Artificial Muscle with Biofeedback Control. Salman Afghani, Yasir Raza, Bilal Haider. pp. 69-75
9. Two Stage Discrete Time Extended Kalman Filter Scheme for Micro Air Vehicle. Sadia Riaz and Ali Usman. pp. 76-83
10. Identification of Suspicious Regions to Detect Oral Cancers at an Earlier Stage – A Literature Survey. K. Anuradha and K. Sankaranarayanan. pp. 84-91
11. Waveform Analysis of Pulse Wave Detected in the Fingertip with PPG. Subhash Bharati & Girmallappa Gidveer. pp. 92-100
12. Slotted PIFA with Edge Feed for Wireless Applications. T. Anita Jones Mary, T. Joyce Selva Hephzibah, C. S. Ravichandran. pp. 101-110
13. A Novel Design of Multiband Square Patch Antenna Embedded with Gasket Fractal Slot for WLAN & WiMAX Communication. Amit K. Panda and Asit K. Panda. pp. 111-116
14. Implementation of the Triple-DES Block Cipher using VHDL. Sai Praveen Venigalla, M. Nagesh Babu, Srinivas Boddu, G. Santhi Swaroop Vemana. pp. 117-128
15. Computational Investigation of Performance Characteristics in a C-shape Diffusing Duct. Prasanta. K. Sinha, A. N. Mullick, B. Halder and B. Majumdar. pp. 129-136
16. Guidelines in Selecting a Programming Language and a Database Management System. Onkar Dipak Joshi, Virajit A. Gundale, Sachin M. Jagdale. pp. 137-144
17. Audio Steganalysis of LSB Audio using Moments and Multiple Regression Model. Souvik Bhattacharyya and Gautam Sanyal. pp. 145-160
18. Analysis and Comparison of Combinational Circuits by using Low Power Techniques. Suparshya Babu Sukhavasi, Susrutha Babu Sukhavasi, Vijaya Bhaskar M, B Rajesh Kumar. pp. 161-174
19. High Quality Design and Methodology Aspects to Enhance Large Scale Web Services. Suryakant B Patil, Sachin Chavan, Preeti Patil. pp. 175-185
20. Energy Efficient Cluster Based Key Management Technique for Wireless Sensor Networks. T. Lalitha and R. Umarani. pp. 186-190
21. On Product Summability of Fourier Series using Matrix Euler Method. B. P. Padhy, Banitamani Mallik, U. K. Misra and Mahendra Misra. pp. 191-196
22. Material Handling and Supply Chain Management in Fertilizer Production – A Case Study. T. K. Jack. pp. 197-199
23. VLSI Implementation of Error Tolerance Analysis for Pipeline based DWT in JPEG 2000 Encoder. Rajamanickam. G & Jayamani. S. pp. 200-208
24. Location Based Services in Android. Ch. Radhika Rani, A. Praveen Kumar, D. Adarsh, K. Krishna Mohan, K. V. Kiran. pp. 209-220
25. A Robust Design and Simulation of Efficient Micromechatronic System using FSV Controlled Auxiliary Damped PMLSM. Sarin CR, Ajai M, Santhosh Krishnan, Arul Gandhi. pp. 221-232
26. Influence of Filler Material on Glass Fiber/Epoxy Composite Laminates During Drilling. M. C. Murugesh and K. Sadashivappa. pp. 233-239
27. LFSR Test Pattern for Fault Detection and Diagnosis for FPGA CLB Cells. Fazal Noorbasha, K. Harikishore, Ch. Hemanth, A. Sivasairam, V. Vijaya Raju. pp. 240-246
28. Fuzzy Based Rotor Ground Fault Location Method for Synchronous Machines. Mohanraj. M, Rani Thottungal and Manobalan. M. pp. 247-254
29. Turning Parameter Optimization for Surface Roughness of ASTM A242 Type-1 Alloys Steel by Taguchi Method. Jitendra Verma, Pankaj Agrawal, Lokesh Bajpai. pp. 255-261
30. Prediction of Heat-Release Patterns for Modeling Diesel Engine Performance and Emissions. B. Venkateswara Rao and G. Amba Prasad Rao. pp. 262-269
31. Multi-Objective Optimization of Cutting Parameters for Surface Roughness and Metal Removal Rate in Surface Grinding using Response Surface Methodology. M. Janardhan and A. Gopala Krishna. pp. 270-283
32. Harmonic Reduction in Cascaded Multilevel Inverter with Reduced Number of Switches using Genetic Algorithms. C. Udhaya Shankar, J. Thamizharasi, Rani Thottungal, N. Nithyadevi. pp. 284-294
33. Fingerprint Based Gender Identification using Frequency Domain Analysis. Ritu Kaur and Susmita Ghosh Mazumdar. pp. 295-299
34. Applying the Genetic Algorithms of Sorting the Elitist Non-Decisive Solutions in the Case Study "Resource Allocation". Khashayar Teimoori. pp. 300-305
35. Power Conditioning in Battery Chargers using Shunt Active Power Filter through Neural Network. P. Thirumoorthi, Jyothis Francis and N. Yadaiah. pp. 306-314
36. Data Leakage Detection. Archana Vaidya, Prakash Lahange, Kiran More, Shefali Kachroo & Nivedita Pandey. pp. 315-321
37. Extraction of Visual and Acoustic Features of the Driver for Real-Time Driver Monitoring System. Sandeep Kotte. pp. 322-333
38. Review on Theory of Constraints. CH. Lakshmi Tulasi, A. Ramakrishna Rao. pp. 334-344
39. Turbo Brake Assists – TURBOTOR: Conceptual Development and Evaluation. G. Ramkumar and Amrit Om Nayak. pp. 345-353
40. Structural Refinement by Rietveld Method and Magnetic Study of Nanocrystalline Cu-Zn Ferrites. K. S. Lohar, S. M. Patange, Sagar E. Shirsath, V. S. Surywanshi, S. S. Gaikwad, Santosh S. Jadhav and Nilesh Kulkarni. pp. 354-361
41. Implementation and Control of Different Multilevel Inverter Topologies for Current Waveform Improvement. T. Shanthi and K. Karthikadevi. pp. 362-370
42. Performance Evolution of High Speed Network Carrying Multimedia. Ehab Aziz Khalil. pp. 371-380
43. Application of Newton-Based OPF in Deregulated Power Systems. T. Nireekshana, G. Kesava Rao and S. Siva Naga Raju. pp. 381-394
44. Secured Energy Efficient Hybrid Routing in Wireless Sensor Networks. Deepika Srikumar and Seethalakshmi Vijaykumar. pp. 395-404
45. Co-operative P2P Information Exchange (cPIE) using Clustering Approach in Wireless Network. S. Nithya & K. Palanivel. pp. 405-417
46. Performance Evaluation of Object Recognition Using Skeletal Shock Graph: Challenges and Future Prospects. Pankaj Manohar Pandit, Sudhir Gangadharrao Akojwar. pp. 418-426
47. An Integrated Color and Hand Gesture Recognition Control for Wireless Robot. Deepali Suryawanshi and C. S. Khandelwal. pp. 427-435
48. Conventional Ethanol Reforming Technology Developments for the Production of Hydrogen. Prashant Tayade, Vilas Sapkal, Chandrasekhar Rode, Rajendra Sapkal. pp. 436-450
49. An Efficient Bit Rate Performance of Serial-Serial Multiplier with 1's Asynchronous Counter. P. Rajesh and B. Gopinath. pp. 451-459
50. Controller Area Network Data Extraction for Automobile. Tejal Farkande and S. N. Pawar. pp. 460-465
51. Comparative Study of Experimental and Theoretical Load Carrying Capacity of Stone Column with and without Encasement of Geosynthetics. Kameshwar Rao Tallapragada & Golait Y. S. pp. 466-476
52. A Novel Face Recognition Approach using a Multimodal Feature Vector. Jhilik Bhattacharya, Nattami Sekhar Lakshmi Prabha, Somajyoti Majumder, Gautam Sanyal. pp. 477-486
53. Influence of Geofabrics in the Construction of Pavements on Expansive Clayey Subgrades. A. V. Narasimha Rao, D. Neeraja. pp. 487-493
54. E-Learning System Analysis using Smart Aided Tools through Web Services. M. Balakrishnan and K. Duraiswamy. pp. 494-499
55. The Impact of Electronic Customer Relationship Management on Consumer's Behavior. Usman Musa Zakari Usman, Abdullah Nabeel Jalal, Mahdi Alhaji Musa. pp. 500-504
56. Performance & Emission Studies on a Single Cylinder DI Diesel Engine Fueled with Diesel & Rice Bran Oil Methyl Ester Blends. Bhabani Prasanna Pattanaik, Basanta Kumar Nanda and Probir Kumar Bose. pp. 505-513
57. Power Flow Analysis of Three Phase Unbalanced Radial Distribution System. Puthireddy Umapathi Reddy, Sirigiri Sivanagaraju, Prabandhamkam Sangameswararaju. pp. 514-524
58. Hyperspectral Image Classification using M-Band Wavelet Transform. K. Kavitha, S. Arivazhagan, R. Dhivya Priya. pp. 525-533
59. Modeling and Controller Design of an Industrial Oil-Fired Boiler Plant. Orosun Rapheal and Adamu Sunusi Sani. pp. 534-541
60. Internet Based Remote Monitoring and Control System. Monita N. Jadhav and G. R. Gidveer. pp. 542-548
61. Advanced Control Techniques in Variable Speed Stand Alone Wind Turbine System. V. Sharmila Deve and S. Karthiga. pp. 549-557
62. Analysis of Unified Power Quality Conditioner during Power Quality Improvement. B. Sridhar, Rajkumar Jhapte. pp. 558-565
63. Design of Multi Bit LFSR PNRG and Performance Comparison on FPGA using VHDL. Amit Kumar Panda, Praveena Rajput, Bhawna Shukla. pp. 566-571
64. Numerical Analysis of the Viscoelastic Fluid in Plane Poiseuille Flow. N. Khorasani and B. Mirzalou. pp. 572-580
65. Laser Diode Modelling for Wireless Power Transmission. Emanuele Scivittaro and Anna Gina Perri. pp. 581-591
66. Effect of Process Parameters on the Physical Properties of Wires Produced by Friction Extrusion Method. A. Hosseini, E. Azarsa, B. Davoodi, Y. Ardahani. pp. 592-597
67. Fabrication of PP/Al2O3 Surface Nanocomposite via Novel Friction Stir Processing Approach. Shahram Alyali, Amir Mostafapour, Ehsan Azarsa. pp. 598-605
68. Failed Steam Traps: First Steps to Replacement. Kant E Kanyarusoke and Ian Noble-Jack. pp. 606-617

Members of IJAET Fraternity: pp. A-H


ROBUST FAULT TOLERANT CONTROL WITH SENSOR FAULTS FOR A FOUR-ROTOR HELICOPTER

Hicham Khebbache¹, Belkacem Sait¹ and Fouad Yacef²
¹ Automatic Laboratory of Setif (LAS), Electrical Engineering Department, Setif University, Algeria
² Electrical Engineering Study and Modelling Laboratory (LAMEL), Automatic Control Department, Jijel University, Algeria

ABSTRACT
This paper considers the control problem for an underactuated quadrotor UAV system in the presence of sensor faults. A dynamic model of the quadrotor is presented, taking into account the various physical phenomena that can influence the dynamics of a flying structure. Subsequently, a new control strategy based on a robust integral backstepping approach using sliding mode, and taking sensor faults into account, is developed. A Lyapunov-based stability analysis shows that the proposed control design preserves the stability of the closed-loop dynamics of the quadrotor UAV even in the presence of sensor failures. Numerical simulation results are provided to show the good tracking performance of the proposed control laws.

KEYWORDS: Backstepping control, Dynamic modelling, Fault tolerant control (FTC), Nonlinear systems, Robust control, Quadrotor, Unmanned aerial vehicles (UAV).

I. INTRODUCTION

Unmanned aerial vehicles (UAVs) have attracted growing interest thanks to recent technological advances, especially those related to instrumentation. These advances have made possible the design of powerful systems (mini drones) endowed with real autonomous navigation capacities at reasonable cost. Despite the real progress made, researchers must still deal with serious difficulties related to the control of such systems, particularly in the presence of atmospheric turbulence. In addition, the navigation problem is complex and requires the perception of an often constrained and evolving environment, especially in the case of low-altitude flights. In contrast to terrestrial mobile robots, for which it is often possible to limit the model to kinematics, the control of aerial robots (quadrotors) requires dynamics in order to account for gravity effects and aerodynamic forces [12]. In [8-15], the authors propose a control law based on the choice of a stabilizing Lyapunov function ensuring the desired tracking of trajectories along the (X,Z) axes and the roll angle only. The authors in [4] do not take into account friction due to the aerodynamic torques, nor drag forces. They propose control laws based on backstepping and on sliding mode control in order to stabilize the complete system (i.e. translation and orientation). In [18-19], the authors take the gyroscopic effects into account and show that the classical model-independent PD controller can asymptotically stabilize the attitude of the quadrotor aircraft. Moreover, they use a new Lyapunov function, which leads to an exponentially stabilizing controller based upon the PD2 control and the compensation of Coriolis and gyroscopic torques, while in [11] the authors develop a PID controller in order to stabilize altitude.
The authors in [5] propose a control algorithm based upon sliding mode combined with a backstepping approach, allowing the tracking of the various desired trajectories expressed in terms of the centre-of-mass coordinates along the (X,Y,Z) axes and the yaw angle. In [10], the authors use a controller design based on a backstepping approach. Moreover, they introduce two neural nets to estimate the aerodynamic components, one for aerodynamic forces and one for aerodynamic moments. In [9], the authors propose a hybrid backstepping control technique combined with the Frenet-Serret theory (backstepping + FST) for attitude stabilization, which includes estimation of the desired angular acceleration (within the control law) as a

function of the aircraft velocity. However, none of the previously proposed control strategies take into account failures affecting the sensors of the quadrotor, which makes them very limited and induces undesired behaviour of the quadrotor, or even instability, after the occurrence of sensor faults. In this paper, we consider the stabilization problem of the quadrotor aircraft in the presence of sensor failures. The dynamic model describing the quadrotor aircraft motions is presented, accounting for the various parameters that affect the dynamics of a flying structure, such as friction due to the aerodynamic torques, drag forces along the (X,Y,Z) axes and gyroscopic effects. Subsequently, a new control strategy based on a robust integral backstepping approach using sliding mode and taking sensor faults into account is proposed and compared with a classical backstepping approach. This control strategy includes two compensation techniques: the first uses an integral action, and the second uses an additional term containing the "sign" function to compensate the effect of sensor faults. Finally, all synthesized control laws are highlighted by simulations, and a comparison between the control strategies developed in this paper is performed. These simulations show the inefficiency of the classical backstepping approach after the occurrence of sensor faults, whereas the new control strategy gives fairly satisfactory results despite the presence of these sensor faults.

II. MODELLING

2.1. Quadrotor dynamic modelling

The quadrotor has four propellers in cross configuration. The two pairs of propellers {1,3} and {2,4}, as described in Fig. 1, turn in opposite directions. By varying the rotor speeds, one can change the lift force and create motion. Thus, increasing or decreasing the four propellers' speeds together generates vertical motion. Changing the speeds of propellers 2 and 4 conversely produces roll rotation coupled with lateral motion; pitch rotation and the corresponding lateral motion result from conversely modifying the speeds of propellers 1 and 3. Yaw rotation is more subtle, as it results from the difference in counter-torque between the two pairs of propellers. Let E(0,x,y,z) denote an inertial frame, and B(0',X,Y,Z) denote a frame rigidly attached to the quadrotor, as shown in Fig. 1.

Figure 1. Quadrotor configuration

Using Euler angles parameterization [14], the airframe orientation in space is given by a rotation matrix R from RB to RE.

R = | cψcθ    sφsθcψ − sψcφ    cφsθcψ + sψsφ |
    | sψcθ    sφsθsψ + cψcφ    cφsθsψ − sφcψ |        (1)
    | −sθ     sφcθ             cφcθ          |

where c and s denote the trigonometric functions cos and sin respectively. The dynamic equations based on the Newton-Euler formalism are given by:

ζ̇ = ν
m ζ̈ = Ff + Ft + Fg                                    (2)
Ṙ = R S(Ω)
I Ω̇ = −Ω ∧ IΩ − Mgh − Ma + Mf

S(Ω) is a skew-symmetric matrix; for a given vector Ω = [Ω1, Ω2, Ω3]ᵀ it is defined as follows:

S(Ω) = |  0    −Ω3    Ω2 |
       |  Ω3    0    −Ω1 |                            (3)
       | −Ω2    Ω1    0  |
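As an illustrative sketch (not part of the original paper), the rotation matrix (1) and the skew-symmetric operator (3) can be transcribed directly; the function names are ours:

```python
import numpy as np

def rotation(phi, theta, psi):
    """Rotation matrix R of eq. (1), from the body frame to the inertial frame."""
    c, s = np.cos, np.sin
    return np.array([
        [c(psi)*c(theta), s(phi)*s(theta)*c(psi) - s(psi)*c(phi), c(phi)*s(theta)*c(psi) + s(psi)*s(phi)],
        [s(psi)*c(theta), s(phi)*s(theta)*s(psi) + c(psi)*c(phi), c(phi)*s(theta)*s(psi) - s(phi)*c(psi)],
        [-s(theta),       s(phi)*c(theta),                        c(phi)*c(theta)],
    ])

def skew(omega):
    """Skew-symmetric matrix S(Omega) of eq. (3), so that skew(a) @ b = a x b."""
    w1, w2, w3 = omega
    return np.array([[0.0, -w3,  w2],
                     [w3,  0.0, -w1],
                     [-w2, w1,  0.0]])

R = rotation(0.1, -0.2, 0.3)
# Sanity checks: R is orthonormal with det(R) = 1, and skew() reproduces the cross product.
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
a, b = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))
```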

The approximate model of the quadrotor can be rewritten as:

ζ̇ = ν
m ζ̈ = R × [0, 0, Σᵢ₌₁⁴ b ωᵢ²]ᵀ + K_ft ν − m g e_z                             (4)
Ṙ = R S(Ω)
I Ω̇ = −Ω ∧ IΩ − Ω ∧ Jr [0, 0, Σᵢ₌₁⁴ (−1)ⁱ⁺¹ ωᵢ]ᵀ − K_fa Ω² + Mf

where:
m: total mass of the structure
ζ = [x, y, z]ᵀ: position vector
ν: linear velocity
φ: roll angle
θ: pitch angle
ψ: yaw angle
Ω: angular velocity
ωᵢ: angular rotor speed
Ff: resultant of the forces generated by the four rotors
Ft: resultant of the drag forces along the (x,y,z) axes
Fg: gravity force
Mgh: resultant of the torques due to the gyroscopic effects
Ma: resultant of the aerodynamic friction torques
Mf: moment developed by the quadrotor according to the body-fixed frame
K_ft(x,y,z): translational drag coefficients
K_fa(x,y,z): aerodynamic friction coefficients
I(x,y,z): body inertia
Jr: rotor inertia
b: lift coefficient
d: drag coefficient
l: distance from the rotors to the centre of mass of the quadrotor aircraft

The moment developed by the quadrotor according to the body-fixed frame along an axis is the difference between the torque generated by each propeller on the other axis.
Mf = | lb (ω4² − ω2²)              |
     | lb (ω3² − ω1²)              |                  (5)
     | d (ω1² − ω2² + ω3² − ω4²)   |

The complete dynamic model which governs the quadrotor is as follows [5-6]:

φ̈ = ((Iy − Iz)/Ix) θ̇ ψ̇ − (Jr/Ix) Ωr θ̇ − (K_fax/Ix) φ̇² + (l/Ix) u2
θ̈ = ((Iz − Ix)/Iy) φ̇ ψ̇ + (Jr/Iy) Ωr φ̇ − (K_fay/Iy) θ̇² + (l/Iy) u3
ψ̈ = ((Ix − Iy)/Iz) θ̇ φ̇ − (K_faz/Iz) ψ̇² + (1/Iz) u4                          (6)
ẍ = −(K_ftx/m) ẋ + (1/m) ux u1
ÿ = −(K_fty/m) ẏ + (1/m) uy u1
z̈ = −(K_ftz/m) ż − g + (cos(φ) cos(θ)/m) u1

with

ux = cos φ cos ψ sin θ + sin φ sin ψ                                          (7)
uy = cos φ sin θ sin ψ − sin φ cos ψ

The system's inputs are taken as u1, u2, u3, u4, with Ωr a disturbance, obtaining:

u1 = b (ω1² + ω2² + ω3² + ω4²)
u2 = lb (ω4² − ω2²)
u3 = lb (ω3² − ω1²)                                                           (8)
u4 = d (ω1² − ω2² + ω3² − ω4²)
Ωr = ω1 − ω2 + ω3 − ω4
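For illustration, the mapping (8) from rotor speeds to control inputs can be sketched as follows; the numerical values of b, d and l are arbitrary placeholders, not the paper's:

```python
def mixing(omega, b=2.98e-5, d=3.23e-7, l=0.23):
    """Rotor speeds (w1..w4) -> control inputs (u1..u4) and disturbance Omega_r, eq. (8)."""
    w1, w2, w3, w4 = omega
    u1 = b * (w1**2 + w2**2 + w3**2 + w4**2)   # total thrust
    u2 = l * b * (w4**2 - w2**2)               # roll torque
    u3 = l * b * (w3**2 - w1**2)               # pitch torque
    u4 = d * (w1**2 - w2**2 + w3**2 - w4**2)   # yaw (counter-torque difference)
    omega_r = w1 - w2 + w3 - w4                # residual rotor speed (gyroscopic disturbance)
    return u1, u2, u3, u4, omega_r

# In hover all four rotors spin at the same speed: only the thrust u1 remains.
u1, u2, u3, u4, omega_r = mixing([300.0, 300.0, 300.0, 300.0])
assert u2 == u3 == u4 == 0.0 and omega_r == 0.0 and u1 > 0.0
```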

From (7) it is easy to show that:

φd = arcsin( ux sin(ψd) − uy cos(ψd) )
θd = arcsin( (ux cos(ψd) + uy sin(ψd)) / cos(φd) )                            (9)
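A direct transcription of (9), which converts the virtual inputs (ux, uy) and the desired yaw ψd into the desired roll and pitch angles; the function name is ours, and the round-trip check below uses (7):

```python
import math

def desired_angles(ux, uy, psi_d):
    """Desired roll and pitch angles from eq. (9)."""
    phi_d = math.asin(ux * math.sin(psi_d) - uy * math.cos(psi_d))
    theta_d = math.asin((ux * math.cos(psi_d) + uy * math.sin(psi_d)) / math.cos(phi_d))
    return phi_d, theta_d

# Round trip: build ux, uy from a known attitude via eq. (7), then recover it.
phi, theta, psi = 0.2, -0.1, 0.4
ux = math.cos(phi) * math.cos(psi) * math.sin(theta) + math.sin(phi) * math.sin(psi)
uy = math.cos(phi) * math.sin(theta) * math.sin(psi) - math.sin(phi) * math.cos(psi)
phi_d, theta_d = desired_angles(ux, uy, psi)
assert abs(phi_d - phi) < 1e-12 and abs(theta_d - theta) < 1e-12
# With ux = uy = 0 (pure vertical thrust), the desired attitude is level flight.
assert desired_angles(0.0, 0.0, 0.5) == (0.0, 0.0)
```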
2.2. Rotor dynamic model


The rotors are driven by DC motors, with the well-known equations:

v = R i + L di/dt + ke ω
km i = Jr dω/dt + Cs + kr ω²                                                  (10)

As we have a small motor with a very low inductance, we can obtain the voltage to be applied to each motor as follows [11]:

vi = (1/η) ( ω̇i + μ0 ωi² + μ1 ωi + μ2 ),   i ∈ {1, …, 4}                      (11)

with:

μ0 = kr/Jr,   μ1 = ke km/(Jr R),   μ2 = Cs/Jr   and   η = km/(Jr R)

where vi is the motor input, ω the angular speed, kr the load torque constant, ke and km respectively the electrical and mechanical torque constants, and Cs the solid friction.
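Equation (11) amounts to an inverse model of the rotor: given a required speed and acceleration, it returns the voltage to apply. A minimal sketch with illustrative motor constants (not the paper's values):

```python
def motor_voltage(omega, omega_dot,
                  kr=6.0e-5, ke=5.2e-3, km=5.2e-3, Cs=4.7e-4, Jr=2.8e-5, R=0.67):
    """Voltage v_i to apply to rotor i for speed omega and acceleration omega_dot, eq. (11)."""
    mu0 = kr / Jr              # aerodynamic load term
    mu1 = ke * km / (Jr * R)   # back-EMF / electrical term
    mu2 = Cs / Jr              # solid friction term
    eta = km / (Jr * R)        # input gain
    return (omega_dot + mu0 * omega**2 + mu1 * omega + mu2) / eta

# Holding a steady speed (omega_dot = 0) requires a strictly positive voltage,
# and a faster steady speed requires more voltage.
assert motor_voltage(250.0, 0.0) > 0.0
assert motor_voltage(300.0, 0.0) > motor_voltage(250.0, 0.0)
```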


III. CONTROL STRATEGIES OF QUADROTOR

3.1. Backstepping control of quadrotor

The model (6) presented in the first part of this paper can be written in the state-space form:

Ẋ = f(X, U)                                                                   (12)

where X ∈ ℝⁿ is the state vector of the system, such that:

X = [x1, …, x12]ᵀ = [φ, φ̇, θ, θ̇, ψ, ψ̇, x, ẋ, y, ẏ, z, ż]ᵀ                     (13)

From (6) and (13) we obtain:

f(X,U) = [ x2,
           a1 x4 x6 + a2 x2² + a3 Ωr x4 + b1 u2,
           x4,
           a4 x2 x6 + a5 x4² + a6 Ωr x2 + b2 u3,
           x6,
           a7 x2 x4 + a8 x6² + b3 u4,
           x8,
           a9 x8 + (1/m) ux u1,                                               (14)
           x10,
           a10 x10 + (1/m) uy u1,
           x12,
           a11 x12 + (cos(φ) cos(θ)/m) u1 − g ]ᵀ

with

a1 = (Iy − Iz)/Ix,   a2 = −K_fax/Ix,   a3 = −Jr/Ix
a4 = (Iz − Ix)/Iy,   a5 = −K_fay/Iy,   a6 = Jr/Iy
a7 = (Ix − Iy)/Iz,   a8 = −K_faz/Iz                                           (15)
a9 = −K_ftx/m,   a10 = −K_fty/m,   a11 = −K_ftz/m
b1 = l/Ix,   b2 = l/Iy,   b3 = 1/Iz

The adopted control strategy is summarized in the control of two subsystems: the first relates to the orientation control, taking into account the position control along the (X,Y) axes, while the second is the attitude control, as shown in the synoptic scheme below:

[Block diagram: the position controller receives the references (zd, xd, yd) and produces u1 together with the virtual inputs (ux, uy); a passage block converts (ux, uy) and ψd into the desired angles (φd, θd), which feed the orientation controller producing (u2, u3, u4); the four inputs u1, u2, u3, u4 drive the quadrotor, whose output Y is fed back.]

Figure 2. Synoptic scheme of the proposed control strategy

Using the backstepping approach for the control-law synthesis, we summarize all the stages of calculation concerning the tracking errors and Lyapunov functions (refer to [6] for more details) in the following way:

ei = x_id − x_i                                         i ∈ {1,3,5,7,9,11}    (16)
ei = ẋ_(i−1)d + k_(i−1) e_(i−1) − x_i                   i ∈ {2,4,6,8,10,12}

and

Vi = ½ ei²                                              i ∈ {1,3,5,7,9,11}    (17)
Vi = V_(i−1) + ½ ei²                                    i ∈ {2,4,6,8,10,12}

such that ki > 0, i ∈ {1,…,12}. The synthesized stabilizing control laws are as follows:

u2 = (1/b1) ( −a1 x4 x6 − a2 x2² − a3 Ωr x4 + φ̈d + k1 (−k1 e1 + e2) + k2 e2 + e1 )
u3 = (1/b2) ( −a4 x2 x6 − a5 x4² − a6 Ωr x2 + θ̈d + k3 (−k3 e3 + e4) + k4 e4 + e3 )
u4 = (1/b3) ( −a7 x2 x4 − a8 x6² + ψ̈d + k5 (−k5 e5 + e6) + k6 e6 + e5 )       (18)
ux = (m/u1) ( −a9 x8 + ẍd + k7 (−k7 e7 + e8) + k8 e8 + e7 ),        u1 ≠ 0
uy = (m/u1) ( −a10 x10 + ÿd + k9 (−k9 e9 + e10) + k10 e10 + e9 ),   u1 ≠ 0
u1 = ( m / (cos(x1) cos(x3)) ) ( g − a11 x12 + z̈d + k11 (−k11 e11 + e12) + k12 e12 + e11 )
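As a minimal sketch of how one line of (18) is evaluated in practice, here is the altitude law u1 for the (z, ż) subchain, with a11 = −K_ftz/m from (15); the gains and physical parameters are arbitrary illustrative values, not the paper's:

```python
import math

def altitude_law(z, z_dot, z_d, zd_dot, zd_ddot, phi, theta,
                 m=0.65, Kftz=3.25e-3, g=9.81, k11=4.0, k12=4.0):
    """Backstepping altitude input u1, last line of (18)."""
    a11 = -Kftz / m
    e11 = z_d - z                            # position tracking error, eq. (16)
    e12 = zd_dot + k11 * e11 - z_dot         # velocity tracking error, eq. (16)
    return (m / (math.cos(phi) * math.cos(theta))) * (
        g - a11 * z_dot + zd_ddot + k11 * (-k11 * e11 + e12) + k12 * e12 + e11)

# Level hover exactly on the reference (all errors zero): u1 reduces to the weight m*g.
u1 = altitude_law(z=1.0, z_dot=0.0, z_d=1.0, zd_dot=0.0, zd_ddot=0.0, phi=0.0, theta=0.0)
assert abs(u1 - 0.65 * 9.81) < 1e-9
```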

3.2. Robust fault tolerant control with sensor faults of quadrotor

The choice of this method is not fortuitous, considering the major advantages it presents:

• It ensures Lyapunov stability.
• It ensures the handling of all system nonlinearities.
• It ensures robustness and all the properties of the desired dynamics in the presence of sensor faults.

The complete model, obtained by adding the sensor faults to the model (12), can be written in the state-space form:

Ẋ = f(X, U)
Y = C X + Es Fs                                                               (19)

where Y ∈ ℝᵖ is the measured output vector, Fs ∈ ℝ^q is the sensor fault vector, and C ∈ ℝ^(p×n) and Es ∈ ℝ^(p×q) are respectively the observation matrix and the sensor fault distribution matrix, such that:

y = [x1, x2 + fs1, x3, x4 + fs2, x5, x6 + fs3, x7, x8 + fs4, x9, x10 + fs5, x11, x12 + fs6]ᵀ    (20)

Remark 1: In our contribution, only the velocity sensor faults are considered; they are assumed to be bounded and slowly varying in time (i.e. |fsi| ≤ fs0 and ḟsi ≈ 0).

Using this control strategy as a recursive algorithm, one can synthesize the control laws forcing the system to follow the desired trajectory. For the first step we consider the tracking error:

e1 = x1d − x1                                                                 (21)

and we use the Lyapunov theorem, considering the Lyapunov function V1, positive definite with a negative semi-definite time derivative:

V1 = ½ e1² + ½ d̃1²                                                            (22)

And it’s time derivative is then:

& & V&1 = e1 ( x 1d − ( y 2 − f s 1 ) ) + d%1 ( −ς1 )
The stabilization of e1 can be obtained by introducing a new virtual control y 2

(23)

( y 2 )d α1 = φ&d + k 1e1 + λ1ς1

(24)

In order to compensate the effect of the velocity sensor fault of roll motion, an integral term is introduced which can eliminate the tracking error. We take: (25) ς1 = ∫ e1dt It results that:

 k λ   e1  %T % V&1 = e1 −k 1e1 − λ1d%1 + d%1 ( −e1 ) = − e1 d%1  1 1    = −e1 Q1e1 1 0   d%1   k 1 and λ1 are chosen so as to make the definite matrix positive Q1 , which means that, V&1 ≤ 0 let us proceed to a variable change by making: & e 2 = x 1d + k 1e1 + λ1ς1 − y 2 For the second step we consider the augmented Lyapunov function: 1 2 V 2 =V 1 + e 2 2 & = e − k e − λ d% + e + d% ( −e ) + e ( β − b u ) V

(

)

(

)

(26)

(27)

(28) (29) (30) (31)

2

1

(

1 1

1

2

)

1

2

s1

1 2

Such as
2 && βs 1 = ( x 1d + k 1 (−k 1e1 + e2 ) − λ1e1 − a1 y 4 y 6 − a2 y 2 − a3Ωr y 4 ) + ∆βs 1

and if:

∆β s 1 = − k 1λ1d% + a1 (f s 3 y 4 + f s 2 y 6 − f s 2 f s 3 ) + a2 (f s 1 y 2 − f s 2 ) + a3Ω r f s 2 ≤ λ1 1
with: ( βs1) is the unknown part related to velocity sensor faults. The control input u 2 is then extracted satisfying V&2 ≤ 0 1 && 2 u 2 = (φd + k 1 ( − k 1e1 + e 2 ) + (1 − λ1 )e1 + k 2e 2 −a4 y 2 y 6 − a5 y 4 − a6 Ω r y 2 + λ2sign (e 2 ) ) b1 The term k 2e 2 with k 2 > 0 is added to stabilize (e1 , e 2 ) . The same steps are followed to extract u 3 , u 4 ,u x , u y and u1 .

(32)

 1 && θd + k 3 ( − k 3e 3 + e 4 ) + (1 − λ3 )e 3 + k 4e 4 −a4 y 2 y 6 − a5 y 42 − a6 Ω r y 2 + λ4sign (e 4 ) ) u 3 = b2   1 2 && u 4 = (ψ d + k 5 ( − k 5e 5 + e 6 ) + (1 − λ5 )e 5 + k 6e 6 −a7 y 2 y 4 − a8 y 6 + λ6 sign (e 6 ) ) b3  (33)  m  && /u 1 ≠ 0 u x = ( x d + k 7 ( − k 7e 7 + e 8 ) + (1 − λ7 )e 7 + k 8e 8 −a9 y 8 + λ8sign (e 8 ) ) u1   m && /u 1 ≠ 0 u y = ( y d + k 9 ( − k 9e 9 + e10 ) + (1 − λ9 )e 9 + k 10e10 −a10 y 10 + λ10sign (e10 ) ) u1   m && u 1 = ( z d + k 11 (− k 11e11 + e12 ) + (1 − λ11 )e11 + k 12e12 − a11 y 12 + g + λ12sign (e12 ) ) cos( x 1 ) cos( x 3 )  

(

with

x id − x i i ∈[3,5,7,9,11]  ei =  x (i −1)d + k (i −1)e(i −1) + λ(i −1)ς(i −1) − y i i ∈[ 4,6,8,10,12] &
and

(34)

ςi = ∫ ei dt                                                 i ∈ {3,5,7,9,11}      (35)

The corresponding Lyapunov functions are given by:

Vi = ½ ei² + ½ d̃j²                      i ∈ {3,5,7,9,11}, j ∈ {2,…,6}              (36)
Vi = V_(i−1) + ½ ei²                    i ∈ {4,6,8,10,12}

such that

d̃j = dj − ςi = fsj/λi − ςi              i ∈ {3,5,7,9,11}, j ∈ {2,…,6}
Qj = | ki  λj | > 0                     i ∈ {3,5,7,9,11}, j ∈ {2,…,6}              (37)
     | 1   0  |
ki > 0                                  i ∈ {4,6,8,10,12}

To synthesize stabilizing control laws in the presence of velocity sensor faults, the following necessary condition must be verified:

|Δβsi| = |βsi − βsin| ≤ λi              i ∈ {2,…,6}                                (38)

where the Δβsi represent the unknown parts related to the velocity sensor faults.
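To make the structure of (33) concrete, here is a sketch of the fault-tolerant altitude law u1, showing the three ingredients the text describes: the integral term ς (eq. (35), accumulated by the caller), the (1 − λ) reshaped error feedback, and the λ·sign(e) switching term. The gains, physical parameters and function name are our own illustrative choices:

```python
import math

def rftcs_altitude_law(y11, y12, zeta11, z_d, zd_dot, zd_ddot, phi, theta,
                       m=0.65, Kftz=3.25e-3, g=9.81,
                       k11=4.0, k12=4.0, lam11=0.5, lam12=1.0):
    """Fault-tolerant altitude input u1, last line of (33).

    y11, y12 : measured z and z-dot (y12 may carry the sensor fault fs6)
    zeta11   : running integral of e11 (eq. (35)), updated by the caller
    """
    a11 = -Kftz / m
    e11 = z_d - y11                                   # eq. (34), odd index
    e12 = zd_dot + k11 * e11 + lam11 * zeta11 - y12   # eq. (34), even index
    sign_e12 = 0.0 if e12 == 0 else math.copysign(1.0, e12)
    return (m / (math.cos(phi) * math.cos(theta))) * (
        zd_ddot + k11 * (-k11 * e11 + e12) + (1 - lam11) * e11 + k12 * e12
        - a11 * y12 + g + lam12 * sign_e12)

# On the reference with no fault and zero integral, only gravity remains,
# so the fault-tolerant law coincides with the nominal backstepping value m*g.
u1 = rftcs_altitude_law(1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0)
assert abs(u1 - 0.65 * 9.81) < 1e-9
```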
IV. SIMULATION RESULTS

In order to evaluate the performance of the controllers developed in this paper, we performed two simulation tests. In the first test, four sensor faults fsj, j ∈ {1,2,3,6}, are added to the angular velocities and to the linear velocity along the Z axis of our system, with amplitudes of 100% of their maximum values (i.e. max(xi), i ∈ {2,4,6,12}) at the instants 15 s, 20 s, 25 s and 30 s respectively. In the second test, the same sensor faults are added at the same instants, but their amplitudes are increased to 200%. The data of the tested quadrotor are reported in the appendix [11].
[Figure 3 plots, for Test 1, the desired and measured velocities (classical backstepping vs. RFTCS) over 0-50 s: (a) evolution of the angular velocity of roll, (b) angular velocity of pitch, (c) angular velocity of yaw, (d) linear velocity along the X axis, (e) linear velocity along the Y axis, (f) linear velocity along the Z axis.]

Figure 3. Tracking simulation results of the angular and linear velocities, Test 1.

[Figure 4 plots, for Test 2, the same six quantities as Figure 3: (a) angular velocity of roll, (b) angular velocity of pitch, (c) angular velocity of yaw, (d)-(f) linear velocities along the X, Y and Z axes.]

Figure 4. Tracking simulation results of the angular and linear velocities, Test 2.

Figures 3 and 4 show at the outset a very good tracking of the desired velocities. Upon the appearance of the sensor faults, however, the measurements of the angular velocities and of the linear velocity along the Z axis (illustrated respectively by (a), (b), (c) and (f)) deviate from their desired velocities, by 100% of their maximum values in Test 1 and by 200% of these maximum values in Test 2, for both control strategies developed in this paper, which gives wrong information about the velocities of our system.
[Figure 5 comprises six panels plotted against time (0–50 s), each comparing the desired signal with the Backstepping and RFTCS responses: (a) evolution of roll angle, (b) evolution of pitch angle, (c) evolution of yaw angle, (d) evolution of x position, (e) evolution of y position, (f) evolution of z position.]

Figure 5. Tracking simulation results of trajectories along roll (φ), pitch (θ), yaw angle (ψ) and the (X, Y, Z) axes, Test 1.

[Figure 6 comprises six panels plotted against time (0–50 s), each comparing the desired signal with the Backstepping and RFTCS responses: (a) evolution of roll angle, (b) evolution of pitch angle, (c) evolution of yaw angle, (d) evolution of x position, (e) evolution of y position, (f) evolution of z position.]

Figure 6. Tracking simulation results of trajectories along roll (φ), pitch (θ), yaw angle (ψ) and the (X, Y, Z) axes, Test 2.

According to Figures 5 and 6, both control strategies achieve very good tracking of the desired trajectories at the outset; but after the occurrence of sensor faults in the angular velocities and in the linear velocity along Z, the actual trajectories in roll angle, pitch angle, yaw angle, and z position (illustrated respectively by (a), (b), (c), and (f)) under the classical backstepping approach deviate from their desired values, which demonstrates the inefficiency of this control approach after the occurrence of velocity sensor faults. However, the actual trajectories under the new control strategy converge to their desired trajectories after transient peaks of low amplitude caused by the appearance of the velocity sensor faults.
[Figure 7 comprises four panels plotted against time (0–50 s), each comparing the Backstepping and RFTCS inputs: (a) evolution of input control u1, (b) evolution of input control u2, (c) evolution of input control u3, (d) evolution of input control u4.]

Figure 7. Simulation results of all controllers, Test 1.

[Figure 8 comprises four panels plotted against time (0–50 s), each comparing the Backstepping and RFTCS inputs: (a) evolution of input control u1, (b) evolution of input control u2, (c) evolution of input control u3, (d) evolution of input control u4.]

Figure 8. Simulation results of all input controls, Test 2.

From Figures 7 and 8, it is clear that the classical backstepping approach gives smooth control inputs, with transient peaks during the appearance of the sensor faults in the angular velocities and in the linear velocity along Z. The control inputs of the RFTCS, however, are characterized at the outset by very fast switching caused by the use of the discontinuous compensation term "sign", together with transient peaks during the appearance of the velocity sensor faults; this chattering disappears after the appearance of the sensor fault in the linear velocity along Z.
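The chattering caused by a discontinuous "sign" compensation term, and its attenuation by a smooth approximation, can be illustrated with a small sketch. This is not the authors' implementation: the gain k and the boundary-layer width eps are arbitrary illustrative values, and tanh is one common smooth replacement for sign:

```python
import numpy as np

def sign_compensation(s, k):
    # Discontinuous term: switches instantaneously at s = 0,
    # which produces high-frequency chattering in the control input.
    return -k * np.sign(s)

def smooth_compensation(s, k, eps=0.05):
    # Smooth approximation: behaves like -k*sign(s) for |s| >> eps
    # but is continuous near s = 0, attenuating chattering.
    return -k * np.tanh(s / eps)

# Compare both terms on a sliding variable oscillating around zero
s_values = np.array([-0.2, -0.01, 0.0, 0.01, 0.2])
u_sign = sign_compensation(s_values, k=1.0)
u_smooth = smooth_compensation(s_values, k=1.0)
print(u_sign)    # jumps abruptly between -1, 0 and 1
print(u_smooth)  # varies continuously between about -1 and 1
```

Near s = 0 the discontinuous term flips between ±k at every sign change of the sliding variable, which is the fast switching visible in the RFTCS inputs; the smooth term trades a small boundary layer of width eps for a continuous control signal.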
[Figure 9 comprises two 3D plots comparing the reference trajectory with the real trajectories obtained using Backstepping and RFTCS: (a) evolution of position along the (X, Y, Z) axes, Test 1; (b) evolution of position along the (X, Y, Z) axes, Test 2.]

Figure 9. Global trajectory of the quadrotor in 3D.

The simulation results given in Figure 9 show the efficiency of the robust fault tolerant control strategy developed in this paper: it clearly exhibits good performance and robustness in terms of stability and tracking, with respect to the backstepping approach, after the occurrence of velocity sensor faults.

V. CONCLUSION AND FUTURE WORKS

In this paper, we proposed a new fault tolerant control strategy based on the backstepping approach and accounting for velocity sensor faults. We started with the development of the dynamic model of the quadrotor, taking into account the different physical phenomena that can influence the evolution of the system in space, and then synthesized stabilizing control laws in the presence of

velocity sensor faults. The simulation results have shown that the backstepping approach leaves the system unable to follow its reference after the appearance of velocity sensor faults, which illustrates the sensitivity of this control technique to sensor failures. However, the simulation results also show the high efficiency of the robust fault tolerant control strategy developed in this paper, which preserves the performance and stability of the quadrotor during a malfunction of its velocity sensors. As future work, we hope to develop other fault tolerant control strategies in order to eliminate the chattering phenomenon in the control inputs, while maintaining the performance and stability of the system, and to implement them on a real system.

APPENDIX
m = 0.486 kg
g = 9.806 m/s²
l = 0.25 m
b = 2.9842 × 10⁻⁵ N/rad/s
d = 3.2320 × 10⁻⁷ N·m/rad/s
Jr = 2.8385 × 10⁻⁵ kg·m²
I(x,y,z) = diag(3.8278; 3.8288; 7.6566) × 10⁻³ kg·m²
Kfa(x,y,z) = diag(5.5670; 5.5670; 6.3540) × 10⁻⁴ N/rad/s
Kft(x,y,z) = diag(5.5670; 5.5670; 6.3540) × 10⁻⁴ N/m/s
µ0 = 0.0122, µ1 = 6.0612, µ2 = 189.63, η = 280.19
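For convenience in reproducing the simulations, the appendix parameters can be gathered into a small structure. The dictionary below is only a sketch (the key names are our own); as a quick consistency check, the hover thrust m·g ≈ 4.77 N matches the level around which the input u1 evolves in Figures 7 and 8:

```python
# Quadrotor model parameters from the appendix (SI units).
params = {
    "m": 0.486,        # mass [kg]
    "g": 9.806,        # gravity [m/s^2]
    "l": 0.25,         # arm length [m]
    "b": 2.9842e-5,    # thrust coefficient [N/rad/s]
    "d": 3.2320e-7,    # drag coefficient [N.m/rad/s]
    "Jr": 2.8385e-5,   # rotor inertia [kg.m^2]
    "I": (3.8278e-3, 3.8288e-3, 7.6566e-3),    # body inertias Ix, Iy, Iz [kg.m^2]
    "Kfa": (5.5670e-4, 5.5670e-4, 6.3540e-4),  # aerodynamic friction [N/rad/s]
    "Kft": (5.5670e-4, 5.5670e-4, 6.3540e-4),  # translational drag [N/m/s]
}

# Thrust needed to balance gravity at hover
hover_thrust = params["m"] * params["g"]
print(round(hover_thrust, 3))  # 4.766
```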



Authors

Hicham KHEBBACHE is a graduate (Magister) student in Automatic Control at the Electrical Engineering Department of Setif University, ALGERIA. He received the Engineer degree in Automatic Control from Jijel University, ALGERIA, in 2009. He is now with the Automatic Laboratory of Setif (LAS). His research interests include aerial robotics, linear and nonlinear control, robust control, fault tolerant control (FTC), diagnosis, and fault detection and isolation (FDI).

Belkacem SAIT is an Associate Professor at Setif University and a member of the Automatic Laboratory of Setif (LAS), ALGERIA. He received the Engineer degree in Electrical Engineering from the National Polytechnic School of Algiers (ENP) in 1987, the Magister degree in Instrumentation and Control from HCR of Algiers in 1992, and the Ph.D. in Automatic Product from Setif University in 2007. His research interests include discrete event systems, hybrid systems, Petri nets, fault tolerant control (FTC), diagnosis, and fault detection and isolation (FDI).

Fouad YACEF is currently a Ph.D. student at the Automatic Control Department of Jijel University, ALGERIA. He received the Engineer degree in Automatic Control from Jijel University, ALGERIA in 2009, and Magister degree in Control and Command from Military Polytechnic School (EMP), Algiers, in November 2011. His research interests include Aerial robotics, Linear and Nonlinear control, LMI optimisation, Analysis and design of intelligent control systems.


SCRUTINIZE THE ABRASION PHENOMENON WITH THE BRUNT FORCES IN THE JOINTS OF THE ROBOTS AND PROPOSED A NEW WAY TO DECREASE THAT
Khashayar Teimoori and Mahdi Pirhayati
Department of Mechanical and Aerospace Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran

ABSTRACT
In this paper the influence of brunt (sudden impact) forces on robot joints is studied, and effective ways to reduce wear in the joints are suggested. During the movements of a robot there is considerable friction between the main body and the articulation. We propose that placing a viscoelastic material between the articulation and the main body can reduce the abrasion that occurs when brunt or sudden impact forces arise. To analyze this problem we studied these unpredictable forces and show how the problem can be mitigated. At the end of this article we conclude that viscoelastic materials have useful properties for many applications, especially in robot joints, because one of their best characteristics is that they rebound when forces act on them.

KEYWORDS: Articulation, Brunt Forces, Viscoelastic Material (VEM), Joint, Torque, Shoulder roll

I. INTRODUCTION

In any robot there are many articulations rotating in the joints, and through this process the abrasion phenomenon occurs. The wear increases especially when brunt or impact forces act on the legs or hands of the robot.

Figure.1: Species of forces and torques that can be applied in the joints [1], [2, 6]

All impact forces that happen as unpredictable events can be dangerous for the articulations, because friction occurs and can cause members of the body to break off. To address this problem, we first analyze the forces and how they act, and then propose a new approach: the use of a VEM. We illustrate the approach with ANSYS software results and finite element analysis.

I.1 Joints and their structures

Fig. 1 shows the joints and how they work in the normal position [7]. In the static state, all the forces and torques (related to the dynamic and static positions) can be described by

five equations that clearly describe the work of the articulations as they rotate; these equations hold only for the static analysis of the joints.

Figure.2: Rotation with respect to the origin [1, 6]

1. ∑_{i=1}^{n} F_xi = 0 ,  Eq.1: the sum of the forces in the x direction
2. ∑_{i=1}^{n} ∑_{j=1}^{n} ∑_{k=1}^{n} M_xi M_yj M_zk = 0 ,  Eq.2: the torques in free space
3. P_xyz = p_x i_x + p_y j_y + p_z k_z ,  Eq.3: the position of each point in space
4. P_uvw = p_u i_u + p_v j_v + p_w k_w ,  Eq.4: the same as Eq.3 in a different coordinate frame
5. P_xyz = R P_uvw ,  Eq.5: the transformation between the two coordinate frames
6. θ = (θ_1, θ_2, θ_3, θ_4, θ_5, θ_6, …, θ_n)
7. Y = (x, y, z, O, A, T)

Figure.3: End-effector position and orientation
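Eq. 5 can be sketched numerically. The rotation matrix below, a rotation by θ about the z axis, is an illustrative choice and not one prescribed by the paper:

```python
import numpy as np

def rot_z(theta):
    # Rotation matrix R for a rotation of theta radians about the z axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Eq. 5: P_xyz = R * P_uvw maps a point expressed in the rotated
# (u, v, w) frame into the fixed (x, y, z) frame.
P_uvw = np.array([1.0, 0.0, 0.0])
P_xyz = rot_z(np.pi / 2) @ P_uvw   # 90-degree rotation about z
print(np.round(P_xyz, 6))          # [0. 1. 0.]
```

The same pattern applies to rotations about the other axes, and composing such matrices gives the joint-by-joint transformations used for the joint variables θ of item 6.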


Figure.4: Rotation and translation of the coordinates with respect to the hand frame [7, 4]

The rotation of the hands or legs of the robot with respect to the origin can be represented as in Fig. 3. [3]

Cartesian: PPP
Cylindrical: RPP
Spherical: RRP
Articulated: RRR
SCARA: RRP (Selective Compliance Assembly Robot Arm)

Hand coordinates — N: normal vector; s: sliding vector; A: approach vector, normal to the tool mounting plate

Figure.5: Robot configurations with the positions of the coordinates [4]

We can now examine the distance between the articulations and the main wall of the body, which is important for studying the abrasion phenomenon. The spaces between two fingers or two joints are illustrated in Fig. 4. In every rotation or sudden impact, large excess forces occur between them, and this must be prevented [7]. This occurrence strongly affects the articulations of the robot, so we focus on the abrasion phenomenon. To reduce the problem, we proposed designing a material to fill the gap; after trying various materials, we chose a viscoelastic material for several reasons. We now show why a VEM is a good material to use between the joints. After designing a sample joint in the ANSYS software, we obtained Fig. 6, which shows the maximum principal elastic strain under brunt or excess impact forces when a VEM is used. This approach provides more favorable conditions for rotation and decreases the friction and excess torques that would otherwise cause wear.


Picture 1. An articulated robot, shown to illustrate the situation of the joints [6].

Figure.6: Finite element analysis. The figure shows the simulated strain field (brunt field) for the shoulder roll.

To illustrate further advantages of this material, we designed a sample arm (shoulder roll) containing such a joint. The four positions of the arm (shoulder roll) are shown in the sections of Fig. 7. [5, 6]

In these pictures a VEM is used between the joints. From the four sections (a–d) we conclude that rotation is smoother with this material; without it, friction or breakage could occur in the joint (although this problem would appear only after many movements — for example, a break in this joint might occur after about two years, or 20,000 movements through all orientations). [6]

Figure.7: Sample positions of the shoulder roll joint in a free field of hit: (a), (d) rotation of the joint about the x–y plane; (b) rotation of the joint about the –x–y plane; (c) rotation of the joint about the y–z plane

I.2 RESULTS

Figure.8: Effect of the viscosity of the main material used in the spaces of the joints [8]

Finally, Fig. 8 is plotted to show how the viscosity of the material affects the joints and can decrease the abrasion phenomenon by increasing the friction when brunt forces act on them. We should also mention the critical point (RV = 0): once the viscosity exceeds this level it becomes less effective, and a better material should be chosen according to the required response in each situation. [9]

II. ANALYSIS OF RESULTS AND DISCUSSION

As a brief conclusion of our work, our main results are summarized in a table that clearly demonstrates the application of the proposed material (VEM). As an example, we introduce four models that show how the best material can be chosen to obtain low shear rates and shear stresses during articulation rotation, since there is no way to check this exactly. The table compares the rheological constants of the Newtonian, Bingham plastic, power law, and Herschel–Bulkley models, any of which can be chosen as the best model.
Table.1: Effects of the rheological constants on the shear rates and shear stresses that occur when the articulations rotate. [9]

Newton: µ = 0.022 Pa·s
Bingham plastic: τ_y = 3.57 Pa, µ_p = 0.0167 Pa·s
Power law: k = 0.671
Herschel–Bulkley: τ_y = 1.15 Pa, k = 0.362
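The four constitutive models in Table 1 have the standard forms τ = µγ̇ (Newton), τ = τ_y + µ_p γ̇ (Bingham plastic), τ = k γ̇ⁿ (power law) and τ = τ_y + k γ̇ⁿ (Herschel–Bulkley). The sketch below evaluates them with the tabulated constants; the flow index n is not listed in Table 1, so the value n = 0.5 is purely illustrative:

```python
def newton(gamma_dot, mu):
    # Newtonian fluid: tau = mu * gamma_dot
    return mu * gamma_dot

def bingham(gamma_dot, tau_y, mu_p):
    # Bingham plastic (flowing regime): tau = tau_y + mu_p * gamma_dot
    return tau_y + mu_p * gamma_dot

def power_law(gamma_dot, k, n):
    # Power-law fluid: tau = k * gamma_dot**n
    return k * gamma_dot ** n

def herschel_bulkley(gamma_dot, tau_y, k, n):
    # Herschel-Bulkley fluid: tau = tau_y + k * gamma_dot**n
    return tau_y + k * gamma_dot ** n

gd = 100.0  # shear rate [1/s], illustrative
print(newton(gd, mu=0.022))                              # 2.2 Pa
print(bingham(gd, tau_y=3.57, mu_p=0.0167))              # 5.24 Pa
print(power_law(gd, k=0.671, n=0.5))                     # 6.71 Pa
print(herschel_bulkley(gd, tau_y=1.15, k=0.362, n=0.5))  # 4.77 Pa
```

Evaluating all four models at the same shear rate in this way is one simple means of comparing candidate filler materials before running a full ANSYS simulation.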

Many models can be used to choose the best material for our purpose. As Table 1 indicates, each joint of a robot can easily be simulated with the ANSYS software, and it is convenient to use one of the above models in that software to analyze the shear rates and shear stresses that occur when brunt or sudden forces act on the joints [10]. In this way a suitable material can be chosen to decrease the abrasion phenomenon between the joints. [11]

III. CONCLUSIONS AND FUTURE SCOPE

In any rotation of an articulation in a robot joint there is considerable wear, and the problem grows with every movement; however, it can easily be reduced, in many situations almost to zero, by placing a viscoelastic material in the spaces that exist between the joints and the bulbs. To obtain a good outcome, an appropriate gap must be designed between the main body and the joints, or reasonable materials must be found to fit in it, and this must be examined carefully to minimize abrasion. Finally, in this paper we have attempted to show that viscoelastic materials can be used for this purpose, because they make rotation more convenient and are the most reasonable choice.

ACKNOWLEDGEMENT
We wish to thank Dr. N Ashrafi for helpful discussion.

REFERENCES
[1]. S. C. Jacobsen, E. K. Iversen, D. F. Knutti, R. T. Johnson, and K. B. Biggers, "Design of the Utah/M.I.T. Dextrous Hand," Proc. IEEE International Conference on Robotics and Automation, pp. 1520-1532, 1986.
[2]. J. Zhang, G. Guo and W. A. Gruver, "Optimal Design of a Six-Bar Linkage for an Anthropomorphic Three-Jointed Finger Mechanism," Proceedings of the ASME Mechanisms Conference, Phoenix, vol. DE-45, pp. 299–304, 1992.

[3]. N. Dechev, W. L. Cleghorn and S. Naumann, "Multiple Finger, Passive Adaptive Grasp Prosthetic Hand," Mechanism and Machine Theory, Vol. 36, No. 10, pp. 1157–1173, 2001.
[4]. A. Bicchi and G. Tonietti, "Fast and 'soft arm' tactics [robot arm design]," IEEE Robotics & Automation Magazine, vol. 11, no. 2, pp. 22-33, June 2004. [Online]. Available: http://dx.doi.org/10.1109/MRA.2004.1310939
[5]. Machine Intelligence & Robotic Control, Vol. 4, No. 3, pp. 1-11, 2002.
[6]. Ashrafi, N., "Statistical Analysis of the Articulating Surfaces of the Elbow," IASTED, Aug. 1997, Singapore.
[7]. Ashrafi, N., "Analysis of Articulation of the Natural Joints by Contact Kinematics," Second International Conference on Mechanical Engineering, May 1996, Shiraz, Iran.
[8]. Ashrafi, N., "Analytical Investigation of Articulation Incorporating Elasticity and Contact Kinematics," International Conference of Mechanics in Biology, June 30 - July 4, 1996, Slovenia.
[9]. N. Ashrafi and H. Karimi Haghighi, "Stability of Shear Thickening and Shear Thinning Fluids in Narrow Gap between Rotating Cylinders," MAJLESI Journal of Mechanical Engineering, Vol. 4, No. 2 (Serial No. 14), pp. 65-76, 2011.
[10]. Siciliano, Khatib (Eds.), Springer Handbook of Robotics, ISBN: 978-3-540-23957-4, Springer-Verlag Berlin Heidelberg, 2008, pp. 371-378.
[11]. Chowdhury, D., "Prediction of Standpipe Pressure Using Real-Time Data," MSc Thesis Report (for the course TPG 4920), IPT, NTNU, Trondheim, 2009.

Authors
Khashayar Teimoori was born in Tehran, Iran in 1992. He is currently in the final year of a B.Tech degree in mechanical engineering at the Islamic Azad University, Science and Research Branch of Tehran, Iran. He is the manager of the Backstretch team (in the field of robotics) at www.backstretch-team.info, and manager of web design in the Rheosociety research group at www.rheosociety.com. He is a member of technical societies including ASME, ISME (Iranian Society of Mechanical Engineering), and IMS (Iranian Mathematical Society). His special interests are computational mechanics, rheology, viscoelasticity, and robotics.

Mahdi Pirhayati was born in Tehran, Iran in 1985. He received his Bachelor degree in Mechanical Engineering from Shahid Chamran University, Ahvaz, Iran in 2009, and his MEng in Mechanical Engineering from the Science and Research Branch, Islamic Azad University, Tehran, Iran in 2012. His research interests include hybrid vehicles, green vehicles, and robots. He is working on several hybrid vehicle projects, such as energy generation and suspension systems.


KNOWLEDGE MANAGEMENT IN SUCCESS OF ERP SYSTEMS
Usman Musa Zakari Usman, Mohammad Nazir Ahmad Department of Information Systems, Faculty of Computer Science and Information Systems Universiti Teknologi Malaysia (UTM), Skudai, Johor Bahru 81310, Malaysia

ABSTRACT
Special attention to critical success factors in the implementation of Enterprise Resource Planning systems is evident from the bulk of literature on this issue. In order to implement these systems, which are aimed at improving the sharing of enterprise-wide information and knowledge, organizations must have the capability to effectively share knowledge to start with. Based on a review of the literature on knowledge management in enterprise system implementation, this paper identifies two major areas of concern regarding the management of knowledge in this specific type of project: managing tacit knowledge, and issues regarding the process-based nature of organizational knowledge viewed through the lens of organizational memory. The more capable an organization is in handling these issues, the more likely it is that the implementation will result in competitive advantage for the organization. The competitive advantage arises from the organization's capabilities in internalizing and integrating the adopted processes with the existing knowledge paradigms and harmonizing the new system and the organizational culture towards getting the most out of the implementation effort.

KEYWORDS: Knowledge, Knowledge Management, Enterprise Resource Planning, Project Management

I. INTRODUCTION

Enterprise resource planning (ERP) systems are popular among enterprises, and many organizations want to implement an ERP system, yet the rate of failure is quite high. Enterprise resource planning software provides a framework that helps an organization improve its business processes. It consists of a wide range of software products supporting daily organizational business operations and decision making. ERP systems automate operations in supply chain management, inventory control, manufacturing scheduling, sales support, customer relationship management, financial and cost accounting, human resources, and other business functional areas within an organization. Knowledge management (KM) is playing an important role in society and is becoming a compelling issue within enterprises. In this article, we report on a systematic review of empirical studies of knowledge management in enterprise resource planning projects. Our main goal is to provide a clear overview of empirical studies within the ERP research field, identifying the concepts that have been explored in ERP projects, the main findings, and the research methods that have been used within this area. The target readership of the review comprises four groups which we expect will be interested in an overview of empirical research on knowledge management in ERP projects: (1) academic and enterprise researchers on knowledge management in general, who would be interested in making comparisons with ERP projects; (2) practitioners within enterprises, who will be interested in learning about knowledge management initiatives in ERP project implementation; (3) knowledge management researchers who are interested in designing studies to address important research gaps in this field; and (4) researchers who are interested in identifying the relevant studies, their major findings, and their implications within the field. The structure of our research paper is as follows.
Section 2 presents the background and general theories on knowledge management for ERP projects. Section 3 describes the research method that we use to select and review the data material for our research, and presents our chosen framework for

analysis. Section 4 presents the results of the systematic review according to our chosen framework. In Section 5, we discuss and conclude the findings and their implications. For the implications for research, we identify what we believe are the most important research gaps. For practitioners, we provide advice on how to use the results in practice.

II. BACKGROUND

In this section, a brief introduction to enterprise resource planning and our research focus together with our research question are presented. The remainder of the article presents the overview of the current work on knowledge management in ERP projects.

2.1 Knowledge Management
Knowledge is derived from data and information [20]. Knowledge management is the management of information and knowledge and their usage in organizational business processes within the organization. The main focus of knowledge management is steering strategy and identifying and communicating the various types of knowledge that reside in processes, people, products and services, in order to support integration and improve productivity and efficiency. Based on the knowledge management literature, it can be concluded that knowledge resides in organizational resources, employees and external partnerships. Knowledge is categorized to pursue different research interests [17], namely the tacit and explicit dimensions of personal knowledge and the processes required for managing the creation of organizational knowledge. Three knowledge types are identified by Petrash's framework [18]. Based on the knowledge management literature, knowledge management processes are studied in accordance with distinctive and various types of knowledge and organizational objectives. It has traditionally been assumed that there are three broad types of knowledge processing: generation, transfer, and utilization. For example, Probst et al. [20] identified six knowledge processes required for managing organizational knowledge. Knowledge integration is viewed as an important process for innovation and building organizational capability [10], [23]. Coombs and Hull [5] identified ten distinctive processes, namely identification, transfer, utilization, creation, acquisition, retention, codification, validation, development, and integration of knowledge.

2.2 Knowledge Management from the ERP Project Perspective
An ERP system allows an organization to have a convergent and integrated view of organizational information by means of centralized databases and integrated business processes across the lines of different divisions and departments [9, 23, 19]. It could be said that as a result of enterprise system implementation, organizational information and knowledge converge across different divisions and departments on an organization-wide scope. IT experts need to know more about the business processes, and business process experts need to leverage their knowledge about the IT systems in place in their organization. Eventually, the overlap between the knowledge of different divisions increases and knowledge on the organizational scale follows a converging pattern. However, this convergence on the organizational level tends to turn into divergence as we move down to the individual level [2, 17]. A broader knowledge of the organization is required for end users of enterprise systems compared to the traditional legacy systems that were adapted to each island of automation. As the view changes from the task focus to the process focus through the implementation of enterprise systems, employees need to know how their tasks fit into the overall process and how that process contributes to the achievement of organizational objectives. For example, an employee working in the customer billing section will need to know more about the IT systems as well as other business areas such as production and accounting. Similarly, the IT experts need to know more about different subject areas to adapt the new system to those areas' requirements and configure the enterprise system to operate optimally. Therefore, as the organizational view of knowledge regarding the tasks and processes conducted in the organization tends to converge through the use of the enterprise system, individual knowledge must diverge to accommodate the changes posed by enterprise system implementation [2, 50, 55].
One major implication of such a view of enterprise system projects is that knowledge sharing needs to be significant across organizational boundaries to allow for the maximum sharing of observations and experiences among employees from different divisions with different mindsets about how the business is done along the lines of process. Knowledge sharing in enterprise system projects exists along different lines of interaction among organizational members, the ERP team, and external consultants, which echoes the need for improved knowledge sharing along different organizational dimensions and at different levels of engagement with the implementation project. The next section reviews the different lines of ERP-specific knowledge sharing in more detail.

2.3 Knowledge Management in ERP Projects
The simultaneous implementation of enterprise resource planning and knowledge management systems in organizations implies some sort of contradiction by its nature. Enterprise systems are meant to increase the organizational efficiency by enhancing the information processing capability of the enterprise [15,19, 60, 62]. This capability enhancement is enabled by the systematization and centralization of information management and the adoption of standard approaches to the codification and processing of information. On the other hand, knowledge management initiatives aim at mobilizing the knowledge through organized knowledge repositories of explicit knowledge and communities of practice as a means of sharing and creating tacit knowledge, having their overall focus on improving innovation capabilities by increasing flexibility [4,7, 22, 24]. While it is traditionally believed that it is impossible for an organization to focus on both efficiency and flexibility, Newell et al. [19] show, by analyzing a case, that enterprise system and knowledge management initiatives are complementary rather than contradictory. Assuming enterprise systems are integrated databases of organizational information and explicit knowledge, as opposed to knowledge management initiatives being methods of managing tacit knowledge, their findings suggest that a balanced perspective of ERP and KM systems can assist in exploiting explicit knowledge as well as exploring and sharing tacit knowledge simultaneously. In other words, utilizing the respective strength of the enterprise system and KM in tandem enables the alignment of organizational capabilities in information processing, knowledge exploration and exploitation [19]. Knowledge management techniques are used over the course of enterprise system implementation and during different steps of implementation projects to facilitate this knowledge sharing [8]. 
A detailed view of how the knowledge of ERP project members evolves during these different stages is discussed next.

III. METHOD

In this research, we use the systematic review approach based on [61, 62]. Guided by the research question, we identify the relevant research, carry out a selection process, and then appraise, synthesize and draw inferences from the selected studies. Each of these steps is addressed below.

3.1. Review Planning
We begin by providing a protocol for the systematic review, specifying the process and methods that are used. We use the protocol to specify the research questions, search strategy and method of synthesis.

3.2. Research Identification
This systematic review started with the identification of keywords and search terms. We used general keywords to search for many and various relevant papers regarding knowledge management and ERP projects. The search strategy for the review was directed towards finding published papers in archival journals, conference proceedings and technical reports from the contents of four electronic databases, namely, ACM portal, Elsevier's Science Direct, IEEE Xplore and Springer-Verlag's Link. The search terms used were: software selection criteria, software evaluation techniques, software selection methodologies, evaluating and selecting software packages, method for evaluating and selecting software packages, criteria for evaluating and selecting software package, software evaluation criteria, systems/tools for evaluation and selection of software packages, knowledge-based systems for software selection, framework for evaluating and selecting software packages, and software selection process. Other relevant journals found while searching for articles on this topic are Information and Management, Information and Software Technology, and the European Journal of Operational Research. Articles published in proceedings of the IEEE on Software Engineering, Springer-Verlag, and the International Conference on COTS-Based Software Systems were also found relevant to this topic. The series of articles on evaluating software engineering methods and tools (part 5 to part 8), ACM SIGSOFT, is one of the major contributions to this topic.

3.3. Paper Selection
Our selection process had two parts: (i) an initial selection from the search results, based on reading the abstract of the papers; and (ii) final selection from the initially selected list of papers, based on reading the entire paper. The initial list consisted of 130 papers which we found relevant to the topic and potential candidates for inclusion in our review. Initial selection of the paper was done jointly by both the authors on the basis of reading the title and abstract of the paper. The first author then read all 130 papers in detail and considered 62 papers to be included in the final list for review. In the second phase of selection, we eliminated 4 papers that did not give any useful information on evaluation criteria, evaluation technique, selection methodology, and systems/tools for software selection. The second author cross-checked whether the papers in the final list considered for review addressed the research question and contributed to the basic purpose of the review. A random sample of 25 papers was selected for the cross-checking. There was no disagreement on the final selection of papers. The search began in early 2006 and was completed in early 2007.

3.4. Data Extraction
In the data extraction phase, the first author read every selected paper and extracted information about the attributes as set out in Table 1. The extracted data were then cross-checked by the second author by random selection of 20 papers, that is, about 30% of the total. During the data extraction phase we found that four papers did not give any useful information on software selection methodology, evaluation criteria, evaluation technique, and systems/tools for software selection. Therefore, those papers were not considered when presenting the results of the review.

3.5. Synthesis
For the synthesis, we chose to only use the papers classified as empirical studies in our framework, in order to avoid problems associated with lessons learned reports stemming from their lack of scientific rigor. We extracted concepts covered, main findings and the research method for each article. One researcher (the first author) focused on the studies in the technical schools, while the other researcher (the second author) focused on the behavioural schools.

IV. RESULTS

This section describes the analysis of the data extracted from our selected studies. The contribution of the reviewed literature in the field of knowledge management in ERP projects is presented, which focuses on the dimensions that should be considered when implementing an ERP project. It shows clearly that various areas of knowledge have been acquired from the literature review. There are similarities between the areas of knowledge, and the consistent expression of the need for this knowledge in the case studies emphasizes that this knowledge should be made explicit. These areas of knowledge are organized into a more manageable form in the following section. From the literature reviewed, three dimensions of knowledge are clearly identified for the successful implementation of an ERP system: i. Project management knowledge; ii. Business and management knowledge; iii. Technical knowledge. Project management knowledge refers to the knowledge required to manage the entire implementation process as a single project. Business and management knowledge refers to knowledge about issues, and the knowledge to deal with these issues, during and after implementation; these issues are often people-related and occur at a higher management level. Technical knowledge refers to the knowledge required to install and implement the ERP system [58].

Table 1. Knowledge management for ERP project success

Stage of ERP                     | KM Types                                     | Role of KM                                         | Successful Output of KM
Selecting ERP for implementation | Business and management / project management | Identifying the right package                      | Business/organizational satisfaction
Implementation process           | Technical                                    | Creating good integration b/w business and system  | Business/organizational impact
Using ERP after implementation   | Business and management                      | Using, transferring and storing data               | Quality information flow
Changing ERP when needed         | Business and management / project management | Re/using                                           | System quality / Business
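The stage-to-knowledge mapping of Table 1 can be read as a simple lookup from ERP lifecycle stage to the dominant KM type, role, and success output. A minimal sketch follows; the stage keys and field names are illustrative, not terms from the source:

```python
# Table 1 rendered as a lookup: ERP lifecycle stage -> (KM type, role of KM, success output).
# Stage keys ("selection", "implementation", "use", "change") are illustrative labels.
ERP_KM_MAP = {
    "selection": ("business and management / project management",
                  "identifying the right package",
                  "business/organizational satisfaction"),
    "implementation": ("technical",
                       "creating good integration b/w business and system",
                       "business/organizational impact"),
    "use": ("business and management",
            "using, transferring and storing data",
            "quality information flow"),
    "change": ("business and management / project management",
               "re/using",
               "system quality / business"),
}

def km_profile(stage):
    """Return the KM profile for a given ERP lifecycle stage as a dict."""
    km_type, role, output = ERP_KM_MAP[stage]
    return {"km_type": km_type, "role": role, "output": output}
```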

V. DISCUSSION AND CONCLUSION

Through a comprehensive review of the literature on enterprise system knowledge management, this paper investigated the major concerns of the different lines of research. The first area concerns the effects and implications of the tacit category of ERP-specific knowledge. The subject of tacit knowledge management is addressed extensively in the literature, and different issues along with their respective mitigating solutions are provided in various research works [16, 5, 10]. These solutions include the presence of tacit knowledge sharing facilitators during enterprise system implementation [28, 20, 10], and paying attention to the structure of team interactions and the atmosphere of the team. Proper utilization of each method can assist the adopting organization in overcoming the difficulties of tacit knowledge sharing. Organizing communities of practice composed of the different groups involved in different stages of the enterprise system lifecycle is another way to overcome the difficulties of transferring such knowledge from where it resides to where it is needed. In the case of running the enterprise system project across distant locations [18], virtual communities centered on company intranets or the internet act as the facilitating bridge among separate bodies of knowledge across the entire enterprise. The process-based nature of organizational knowledge is the second area of concern in enterprise system knowledge management, which was examined through the lens of organizational memory [29, 30]. Organizational processes embed substantial knowledge of the organization's history and can be regarded as the organizational memory. Viewing ERP knowledge through the lens of organizational memory sheds light on some interesting issues of concern in ERP implementation projects [31, 33].
Arranging powerful core enterprise system implementation teams and the effective utilization of external consulting were identified as among the most preferred methods of dealing with the knowledge barriers connected with enterprise system configuration caused by difficulties associated with organizational memory. The standardization that results from many organizations adopting the same best practices of enterprise system packages might give rise to concerns about losing competitive advantage. In particular, the two subjects reviewed here are very illustrative. Finally, managing ERP-related knowledge across its lifecycle (pre-implementation, implementation and post-implementation) is also an interesting area. For example, exploiting the contribution of disciplines such as ontology engineering in this area would give benefits within the context of ontology-based applications for enterprise systems. This may enhance the whole performance of ERP lifecycle knowledge management activities. An initial insight in this direction is systematically presented in [56] and an example is available from previous work such as [57].

REFERENCES
[1] S.A. Ajila & Z. Sun, "Knowledge management: impact of knowledge delivery factors on software product development efficiency", in: Proceedings of the IEEE International Conference on Information Reuse and Integration, Las Vegas, NV, United States, 2004, pp. 320–325.

[2] A.J. Al-Shehab, R.T. Hughes & G. Winstanley, "Facilitating organisational learning through causal mapping", in: Proceedings of the Seventh International Workshop on Learning Software Organizations, Springer Verlag, Kaiserslautern, Germany, 2005, pp. 145–154.
[3] M. Alavi & D.E. Leidner, "Review: knowledge management and knowledge management systems: conceptual foundations and research issues", MIS Quarterly 25 (1) (2001) 107–136.
[4] N. Angkasaputra, D. Pfahl, E. Ras & S. Trapp, "The collaborative learning methodology CORONET-train: implementation and guidance", in: Proceedings of the Fourth International Workshop on Learning Software Organizations, Springer Verlag, Chicago, IL, USA, 2002, pp. 13–24.
[5] J. Arent & J. Nørbjerg, "Software process improvement as organizational knowledge creation: a multiple case analysis", in: Proceedings of the Hawaii International Conference on System Sciences, Maui, USA, 2000, p. 105.
[6] J. Arent, J. Nørbjerg & M.H. Pedersen, "Creating organizational knowledge in software process improvement", in: Proceedings of the 2nd Workshop on Learning Software Organizations, Oulu, Finland, 2000, pp. 81–92.
[7] L. Argote, B. McEvily & R. Reagans, "Managing knowledge in organizations: an integrative framework and review of emerging themes", Management Science 49 (4) (2003) 571–582.
[8] C. Argyris, "Overcoming Organizational Defences: Facilitating Organizational Learning", Prentice Hall, Boston, 1990.
[9] C. Argyris & D.A. Schön, "Organizational Learning II: Theory, Method and Practice", Organization Development Series, Addison Wesley, Reading, MA, USA, 1996.
[10] A. Aurum, R. Jeffrey, C. Wohlin & M. Handzic, "Managing Software Engineering Knowledge", Springer Verlag, Berlin, 2003.
[11] M.D.O. Barros, C.M.L. Werner & G.H. Travassos, "Supporting risks in software project management", Journal of Systems and Software 70 (1–2) (2004) 21–35.
[12] V.R. Basili, G. Caldiera, F. McGarry, R. Pajerski & G. Page, "The software engineering laboratory – an operational software experience factory", in: Proceedings of the 14th International Conference on Software Engineering, 1992, pp. 370–381.
[13] V.R. Basili, G. Caldiera & H.D. Rombach, "The experience factory", in: J.J. Marciniak (Ed.), Encyclopedia of Software Engineering, 1, John Wiley, New York, 1994, pp. 469–476.
[14] R. Baskerville & J. Pries-Heje, "Knowledge capability and maturity in software management", 1999.
[15] A. Birk, "A Knowledge Management Infrastructure for Systematic Improvement in Software Engineering", Dr. Ing thesis, University of Kaiserslautern, Department of Informatics, 2000.
[16] F.O. Bjørnson & T. Dingsøyr, "A study of a mentoring program for knowledge transfer in a small software consultancy company", in: Lecture Notes in Computer Science 3547, Springer Verlag, Heidelberg, 2005, pp. 245–256.
[17] F.O. Bjørnson, "Knowledge Management in Software Process Improvement", PhD thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007.
[18] F.O. Bjørnson & T. Stålhane, "Harvesting knowledge through a method framework in an electronic process guide", in: Proceedings of the Seventh International Workshop on Learning Software Organizations, Springer Verlag, Kaiserslautern, Germany, 2005, pp. 86–90.
[19] P. Brössler, "Knowledge management at a software engineering company – an experience report", in: Proceedings of the 1st Workshop on Learning Software Organizations, Kaiserslautern, Germany, 1999, pp. 77–86.
[20] A.F. Buono & F. Poulfelt, "Challenges and Issues in Knowledge Management", Information Age Publishing, Greenwich, CT, USA, 2005.
[21] B. Chatters, "Implementing an experience factory: maintenance and evolution of the software and systems development process", in: Proceedings of IEEE
[22] M. Alavi & D.E. Leidner, "Review: knowledge management and knowledge management systems: conceptual foundations and research issues", MIS Quarterly 25 (1) (2001) 107–136.
[23] R. Baskerville, S. Pawlowski & E. McLean, "Enterprise resource planning and organizational knowledge: patterns of convergence and divergence", in: Proceedings of the 21st ICIS Conference, 2000.
[24] M. Beer & N. Nohria, "Cracking the code of change", Harvard Business Review 78 (3) (2000) 133–141.
[25] M. Earl, "Knowledge management strategies: toward a taxonomy", Journal of Management Information Systems 18 (1) (2001) 215–233.
[26] T.L. Griffith, J.E. Sawyer & M.A. Neale, "Virtualness and knowledge in teams: managing the love triangle of organizations, individuals, and information technology", MIS Quarterly 27 (2) (2003) 265–287.
[27] V. Grover & T.H. Davenport, "General perspectives on knowledge management: fostering a research agenda", Journal of Management Information Systems 18 (1) (2001) 5–21.
[28] C. Holland & B. Light, "A stage maturity model for enterprise resource planning systems use", The DATA BASE for Advances in Information Systems 32 (2) (2001).
[29] J. Huang, S. Newell & S. Pan, "Knowledge integration processes within the context of enterprise resources planning (ERP) systems implementation", in: Proceedings of the 9th ECIS Conference, 2001.
[30] M. Jones, "Tacit knowledge sharing during ERP implementation: a multi-site case study", Information Resource Management Journal 18 (2) (2005) 1–23.
[31] M. Jones & R. Price, "Organizational knowledge sharing in ERP implementation: lessons from industry", Journal of Organizational and End User Computing 16 (1) (2004).
[32] M.C. Jones, M. Cline & S. Ryan, "Exploring knowledge sharing in ERP implementation: an organizational culture framework", Decision Support Systems 41 (2) (2006) 411–434.
[33] J. Kallinikos, "Deconstructing information packages", Information Technology and People 17 (1) (2004).
[34] A. Kwang-Tat, T. James & Y. Chee-Sing, "IT implementation through the lens of organizational learning: a case study of insuror", in: Proceedings of the Eighteenth International Conference on Information Systems, 1997.
[35] J. Lee, K. Siau & S. Hong, "Enterprise integration with ERP and EAI", Communications of the ACM 46 (2) (2003) 54–60.
[36] Z. Lee & J.Y. Lee, "An ERP implementation case study from a knowledge transfer perspective", Journal of Information Technology 15 (4) (2000) 281–288.
[37] Y. Malhotra, "Integrating knowledge management technologies in organizational business processes: getting real time enterprise to deliver real business performance", Journal of Knowledge Management 9 (1) (2005) 7–28.
[38] M.L. Markus, C. Tanis & P.C. van Fenema, "Multisite ERP implementations", Communications of the ACM 43 (4) (2000) 42–46.
[39] S. Newell, J. Huang & R. Galliers, "Implementing enterprise resource planning and knowledge management systems in tandem: fostering efficiency and innovation complementarity", Information and Organization 13 (2003).
[40] S. Newell, C. Tansley & J. Huang, "Social capital and knowledge creation in an ERP project team", in: Proceedings of the 7th AMCIS, 2001.
[41] I. Nonaka & H. Takeuchi, "The Knowledge-Creating Company", Oxford University Press, 1995.
[42] S. Newell, J. Huang & R. Galliers, "Implementing enterprise resource planning and knowledge management systems in tandem: fostering efficiency and innovation complementarity", Information and Organization 13 (2003).
[43] D. Robey, J.W. Ross & M.C. Boudreau, "Learning to implement enterprise systems: an exploratory study of the dialectics of change", Journal of Management Information Systems 19 (1) (2002) 17–46.
[44] U. Schultze & D.E. Leidner, "Studying knowledge management in information systems research: discourses and theoretical assumptions", MIS Quarterly 26 (3) (2002) 213–242.
[45] J.E. Scott, "Post implementation usability of ERP training manuals: the user's perspective", Information Systems Management 22 (2) (2005) 67–77.
[46] J.E. Scott & I. Vessey, "Managing risks in enterprise systems implementations", Communications of the ACM 45 (4) (2002) 74–81.
[47] C. Soh, S.S. Kien & J. Tay-Yap, "Cultural fits and misfits: is ERP a universal solution?", Communications of the ACM 43 (4) (2000) 47–51.
[48] D. Stenmark, "Leveraging tacit organizational knowledge", Journal of Management Information Systems 17 (3) (2000) 9–24.
[49] E. Stijin & A. Wensley, "Organizational memory and completeness of process modeling in ERP systems", Business Process Management Journal 7 (3) (2001).
[50] M. Sumner, "Risk factors in enterprise-wide/ERP projects", Journal of Information Technology 15 (4) (2000) 317–327.
[51] S.W. Sussman & W.S. Siegal, "Informational influence in organizations: an integrated approach to knowledge adoption", Information Systems Research 14 (1) (2003) 47–65.
[52] P. Weill & R. Woodham, "Don't Just Lead, Govern: Implementing Effective IT Governance", MIT Sloan School of Management Working Paper, 2002.
[53] L. Willcocks & R. Sykes, "The role of the CIO and IT function in ERP", Communications of the ACM 43 (4) (2000) 32–38.
[54] J. Worley, K. Chatha & R. Weston, "Implementation and optimization of ERP systems: a better integration of processes, roles, knowledge and user competencies", Computers in Industry 56 (2005).
[55] L. Pries-Heje & Y. Dittrich, "ERP implementation as design: looking at participatory design for means to facilitate knowledge integration", Scandinavian Journal of Information Systems 21 (2), Article 4, 2009.

[56] M.N. Ahmad, N.H. Zakaria & D. Sedera, "Ontology-based knowledge management for enterprise systems", International Journal of Enterprise Information Systems (IJEIS) 7 (4) (2011) 64–90.
[57] G.C. Peng & M.P. Nunes, "Surfacing ERP exploitation risks through a risk ontology", Industrial Management & Data Systems 109 (7) (2009) 926–942.
[58] R. Chan, J. Esteves, J. Pastor & M. Rosemann, "An exploratory study of knowledge types relevance along enterprise systems implementation phases", in: Proceedings of the 4th European Conference on Organizational Knowledge and Learning Capabilities (OKLC), Barcelona, 2003, pp. 1–14.
[59] D. Sedera, G. Gable & T. Chan, "Measuring enterprise systems success: a preliminary model", in: Proceedings of the 2003 Americas Conference on Information Systems (AMCIS 2003), Tampa, Florida, USA, August 4–6, 2003.
[60] M. Easterby-Smith & M.A. Lyles, "The Blackwell Handbook of Organizational Learning and Knowledge Management", Blackwell Publishing, Malden, MA, USA, 2003.
[61] B. Kitchenham, "Procedures for Performing Systematic Reviews", Technical Report TR/SE-0401, Keele University, 2004.
[62] B. Kitchenham, R. Pretorius, D. Budgen, P. Brereton, M. Turner & M. Niazi, "Systematic literature reviews in software engineering – a tertiary study", Information and Software Technology 52 (2010) 792–805.

Authors
Usman Musa Zakari Usman is an M.Sc. research student at the Faculty of Computer Science and Information Systems of Universiti Teknologi Malaysia (UTM), Skudai, Johor, Malaysia. He holds a B.Sc. in Business Information Technology from Limkokwing University of Creative Technology (LUCT), Cyberjaya, Malaysia. His research interests are innovative solutions for "knowledge-based" information systems that span several areas, applying ontology and knowledge management to interoperating information systems, software engineering and enterprise systems.

Mohammad Nazir Ahmad is currently working in the Faculty of Computer Science and Information Systems at the Universiti Teknologi Malaysia (UTM), Skudai, Johor, Malaysia. Nazir holds a PhD in Information Systems from the University of Queensland and a Master's degree in Information Systems from the Universiti Teknologi Malaysia. He holds a Bachelor's degree in Industrial Computing from the Universiti Kebangsaan Malaysia (UKM). His main research interests are innovative solutions for "knowledge-based" information systems that span several areas, applying ontology and knowledge management to interoperating information systems, software engineering and enterprise systems. Currently, he is an Editorial Board/Review Member of the International Journal of Knowledge Management Practice (JKMP), the International Journal of Computer Science and Emerging Technologies (IJCSET) and the International Journal of Information, Knowledge and Management (IJIKM). Moreover, he is a member of the Association for Information Systems (AIS) and the International Association for Ontology and its Applications (IAOA).


Vol. 3, Issue 1, pp. 21-28

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

A NEW PROPOSAL ERICA+ SWITCH ALGORITHM FOR TRAFFIC MANAGEMENT
Ehab Aziz Khalil
Department of Computer Science & Engineering, Faculty of Electronics Engineering, Menoufiya University, Menouf-32952, EGYPT

ABSTRACT
A new proposal for the Explicit Rate Indication for Congestion Avoidance+ (ERICA+) switch algorithm for traffic management is presented. The new proposal can be used to enhance the quality of service of multimedia; the use of a non-zero MCR is very useful for carrying multimedia over an ATM network. We have adopted a continuous event-driven simulation methodology to evaluate the performance of integrated video and data traffic on the ATM network when using the ABR service. The study confirms that the system parameters (e.g., dynamic/constant queue control functions, ICRs for sources, number of video sources and data traffic intensity) have sensitive effects on the performance characteristics of the network. The method we have used depends on a separate queue for each traffic type to isolate them from overlapping, so the delay is reduced, especially for video traffic; moreover, the new proposal algorithm gives better performance than the original algorithm, which is promising.

KEYWORDS: ERICA+ switch algorithm, ATM ABR service, performance of video and data traffic.

I. INTRODUCTION

It is well known that ATM (Asynchronous Transfer Mode) has emerged as one of the most promising technologies for providing high-speed networks with the capability of carrying all types of traffic, including video and data, and provides high-speed communications for different types of data [1]. ATM supports multiple Quality of Service (QoS) categories, which include Constant Bit Rate (CBR), Variable Bit Rate (VBR), Available Bit Rate (ABR), and Unspecified Bit Rate (UBR). These services share a common link, and thus not all of them can get the bandwidth they require. In ATM networks, the ABR service and UBR service are used to support non-delay-sensitive data applications. ABR normally uses the available bandwidth, which is often the left-over of the higher-priority services, CBR and VBR. Though the current standards for the ABR service do not require the cell transfer delay and cell loss ratio to be guaranteed, it is desirable for switches to minimize delay and loss as much as possible. The ABR service requires network switches to constantly monitor their load and feed this information back to the sources, which in turn dynamically adjust their input into the network. This is mainly done by inserting Resource Management (RM) cells into the traffic periodically and getting the network congestion state feedback from the returned RM cells, which may contain congestion information reported by the switches and destinations. Depending upon the feedback, the source is required to adjust its transmission rate. Obviously, congestion control mechanisms are essential for the support of the ABR service to provide efficient and fair bandwidth allocation among ABR applications [2-19]. Figure 1 shows an ABR traffic management model. The RM cell contains an Explicit Rate (ER) field. The switches along the path put in some information to indicate the rate that the source should use after the receipt of the RM cell.
ABR users are allowed to declare a Minimum Cell Rate (MCR), which is guaranteed to the Virtual Connection (VC) by the network. Most VCs use zero as the default MCR value. However, for an ABR connection with a higher MCR, the connection may be denied if sufficient bandwidth is not available. Both the ABR data traffic and the available bandwidth for ABR are variable.
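The closed-loop behaviour described above can be sketched as a source updating its Allowed Cell Rate (ACR) from the ER field of a returned RM cell, clamped between the declared MCR and the Peak Cell Rate (PCR). This is a simplified illustration, not the full ATM Forum source behaviour (the additive-increase rules driven by the RIF parameter and the CI/NI bits are omitted):

```python
def update_acr(acr, er, mcr, pcr):
    """Update a source's Allowed Cell Rate (ACR) on receipt of a backward RM cell.

    Simplified sketch of the ABR source rule: obey the network's Explicit Rate
    (ER), never fall below the guaranteed MCR, never exceed the Peak Cell Rate
    (PCR). Rate-increase handling (RIF, CI/NI bits) is omitted.
    """
    new_acr = min(acr, er)        # throttle down to the explicit rate granted
    new_acr = max(new_acr, mcr)   # the VC is guaranteed at least its MCR
    return min(new_acr, pcr)      # and may never exceed its PCR
```

For instance, a source sending at 50 Mbps that receives ER = 30 Mbps drops to 30 Mbps, while feedback below its declared MCR is clipped up to the guaranteed minimum.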


Vol. 3, Issue 1, pp. 29-40


Figure 1 Traffic Model in ABR

If there is not enough buffer space for bursty traffic (e.g., from VBR, which requires more bandwidth), too many losses will result in low performance. This paper presents a new proposal algorithm for the switch, based on the General Weighted Fair ERICA+ (GWFairERICA+) switch algorithm described in [20, 21]. As shown in Figure 2, we have considered two types of traffic (video and data); each traffic type has n sources (Source 1, ..., Source n) accommodated in one queue, and the service of these queues occurs at different levels of priority, in order to enhance throughput guarantees to support multimedia applications.

Figure 2

The output link bandwidth is divided dynamically between these queues according to the level of priority. It is worth mentioning here that the use of separate queues reduces the delay of cell transmission. Also, we assume that the Averaging Interval (AI) period for each traffic type is different and depends on the RM cells; for example, if the RM cell of a data source is sent every X period and the RM cell of a video source is sent every 50X period, the AI for video sources is 50 times longer than that of data sources, i.e., while one feedback arrives at the video sources, several feedbacks arrive at the data sources. The remainder of the paper is organized as follows: Section 2 gives a brief overview of the ERICA+ switch algorithm, and Section 3 discusses the new proposal algorithm. In Section 4 the configuration parameters and simulation are discussed. The results are presented in Section 5. Section 6 gives a discussion, and finally Section 7 presents the conclusion.
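The priority-weighted division of the output link bandwidth between the two queues can be sketched as follows. The weighting rule, the 3:1 weights and the 155 Mbps link rate are illustrative assumptions; the paper does not specify an exact scheme:

```python
def split_bandwidth(link_rate, queues):
    """Divide the output link bandwidth among per-traffic queues in proportion
    to their priority weights, counting only queues with active sources.

    `queues` maps a queue name to (priority_weight, active_source_count).
    Illustrative rule only; the paper's actual dynamic division may differ.
    """
    active = {name: w for name, (w, n) in queues.items() if n > 0}
    total = sum(active.values())
    return {name: link_rate * w / total for name, w in active.items()}

# Two queues as in Figure 2, video given higher priority than data
# (weights 3:1 on an assumed 155 Mbps link).
shares = split_bandwidth(155.0, {"video": (3, 10), "data": (1, 10)})
```

When one queue goes idle (e.g., the data queue is empty), its share is reclaimed by the remaining active queue, which matches the dynamic character of the division described above.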

II. OVERVIEW OF THE ORIGINAL ERICA+ SWITCH ALGORITHM

At the beginning it is well known that the main advantages of ERICA are its low complexity, fast transient response, high efficiency, and small queuing delay [22-25], also, in ERICA, the time is divided into consecutive equal-sized slots called “switched averaging intervals” [26]. The ERICA+ algorithm is concerned with the fair and efficient allocation of the available bandwidth to all contending sources. Like any dynamic resource algorithm, it requires monitoring the available capacity and the current demand on the resources. There, the key “resource” is the available bandwidth at a queuing point. In most switches, output buffering is used, which means that most of the queuing happens at the output ports. Thus, the ERICA+ algorithm is applied to each output port. Assuming that measurements do not suffer from high variance, the above algorithm is sufficient to converge to efficient operation in all cases and to the max-min fair allocations in most cases [20, 27-32]. As mentioned above that the ERICA+ operates at the output port of a switch. It periodically monitors the load, active number of VCs and provides feedback in the backward RM (BRM) cells. The measurement period is called the “Averaging Interval”. The measurements are done
in the forward direction, and feedback is given in the reverse direction. A complete description of the ERICA+ algorithm and its performance can be found in [20, 33-35], and related research in [36-44].

III. THE NEW PROPOSED SWITCH ALGORITHM [45, 46]

In the newly proposed switch algorithm, two types of traffic and two queues (as shown in Figure 2) are used, instead of the one traffic type and one queue that several switch algorithms have used to compute the feedback indicated to ABR sources in RM cells [31-33, 48, 49]. The service of these queues depends on different priority levels, which enhances the throughput guarantees needed to support multimedia applications, and the output link bandwidth is divided between the queues dynamically depending on the traffic's priority level. The separate queues also protect each traffic type from interfering with the other, so the delay of both traffic types is reduced. When more than one queue is used, as shown in Figure 2, the treatment is different, because the single-queue case does not consider the status of each traffic type within the network; for example, the video queue may be full while, at the same time, the data queue is empty. There are also two queue length functions, one for each traffic type: Fqv for video traffic and FqD for data traffic. Each queue function defines the feedback for the sources using that queue, and the queue functions are independent of each other. We assume that the AI (Averaging Interval) period for each traffic type is different and depends on the Resource Management (RM) cells. Instead of one queue function there are two queues and two functions, which may be dynamic or static. These functions operate independently, and the total bandwidth divided among the active sources depends on the two applied functions. A simple choice is to use a Constant Queue control Function (CQF), where the queue factor is set to a value less than one; the remaining (1 - Factor) is used for queue draining. Another choice is to use a Dynamic Queue control Function (DQF) [40]. In the case of a DQF, the factor's value equals one for short queue lengths and drops sharply as the queue length increases.
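The two queue control functions just described can be sketched as follows. This is an illustrative sketch only: the piecewise hyperbolic form and its clipping behaviour are assumptions drawn from the ERICA+ literature, with the curvature parameters a = 1.15, b = 1.05 and QDLF = 0.5 taken from the configuration in Section IV.

```python
def cqf(queue_length, factor=0.9):
    """Constant Queue control Function: a fixed fraction < 1.

    The remaining (1 - factor) of the capacity is left for queue draining.
    The value 0.9 is illustrative, not from the paper.
    """
    return factor


def dqf(queue_length, target_queue, a=1.15, b=1.05, qdlf=0.5):
    """Dynamic Queue control Function (assumed hyperbolic form).

    Returns a factor close to a for short queues, exactly 1.0 at the
    target queue length, and falls toward the Queue Drain Limit Factor
    (QDLF) as the queue grows, so the queue can drain.
    """
    if queue_length <= target_queue:
        # Mild over-allocation (up to a) keeps the link fully utilized.
        f = (a * target_queue) / ((a - 1.0) * queue_length + target_queue)
    else:
        # Fraction shrinks below 1 so the excess capacity drains the queue.
        f = (b * target_queue) / ((b - 1.0) * queue_length + target_queue)
    # Clip between the drain limit and the maximum over-allocation.
    return max(qdlf, min(f, a))
```

For an empty queue the factor is a (1.15), at the target queue length it is exactly 1, and for very long queues it is floored at QDLF (0.5), matching the qualitative behaviour described above.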
The ERICA+ switch algorithm uses a hyperbolic or inverse hyperbolic function to calculate the value of the DQF factor [49]. The GWFairERICA+ and ERICA+ switch algorithms were described in [20, 21, 26, 50-52] using a target ABR capacity, which is obtained by multiplying the total available ABR capacity by a fraction term; the remaining fraction of the link capacity is used to drain the queue [26]. The fraction can be either a constant less than one or a dynamic function of the switch queue length (Fq). When one queue with a dynamic function Fq is used for all traffic, the queue length becomes the key quantity defining the feedback for each active source, and the status of the network depends on that queue: when the queue grows beyond the threshold2 value (the transition point from steady state to overload) the network is congested; when the queue lies between the threshold1 value (the transition point from underload to steady state) and threshold2, the network is in steady state; and when it is below threshold1, the network is underloaded. The GWFairERICA+ switch algorithm can operate with the new proposal because it uses a weight function to distribute the excess bandwidth among sources according to their weights. In this paper we follow the same general weight function as in [48]:

Gi = Ui + Wi (A - U) / ∑ Wj,  where j = 1 to N    ... (1)

where:
Gi = GW fair allocation for connection i.
Ui = MCR of connection i.
Wi = pre-assigned weight associated with connection i.
U = sum of the MCRs of the active connections bottlenecked at this link.
A = ABR capacity available to the connections bottlenecked on this link (so A - U is the excess bandwidth to be shared).
N = total number of bottlenecked sources.

Pseudocode of the Newly Proposed Algorithm

At the end of the averaging interval for data:
    For data sources:
        target ABR data capacity = data factor × total ABR data capacity
        data input rate = sum of all inputs to the data queue
        data overload factor = data input rate ÷ target ABR data capacity
    (Repeat for the video sources at the end of the AI for video.)
    For each source:
        weight = cost + MCR
    End
    For each source:
        excess fair share = target capacity × source weight ÷ sum of weights for this traffic type
    End
    For each queue:
        queue fair share = target capacity × sum of weights of sources using this queue ÷ sum of weights for all traffic types
    End
When a BRM cell is received:
    For each source:
        virtual channel share (VCshare) = max(0, source rate - MCR) ÷ overload factor
        explicit rate (ER) = MCR + max(excess fair share, VCshare)
        ER in RM cell = min(ER in RM cell, ER, target rate)
    End

The main difference between the original GWFairERICA+ algorithm and the newly proposed algorithm is that the variables of each traffic type are calculated independently, and the available output link bandwidth is divided dynamically among the queues depending on the sum of all weights of each traffic type. Dividing the output link bandwidth among the queues happens at the end of the Averaging Interval period. We assume that only one feedback is given to the sources in each averaging interval; this avoids unnecessary conflicting feedbacks to the sources.
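The per-interval and per-BRM steps above can be sketched in executable form for a single traffic type. The variable names (`rate`, `mcr`, `cost`) and the dictionary layout are illustrative, not taken from the paper's simulator; the feedback is designed so that source rates move toward the GW fair shares over successive averaging intervals.

```python
def end_of_averaging_interval(sources, total_capacity, factor):
    """Per-traffic-type computation at the end of that type's AI.

    `sources` is a list of dicts with 'rate', 'mcr' and 'cost' entries.
    Mutates each source in place and returns the overload factor.
    """
    target_capacity = factor * total_capacity
    input_rate = sum(s['rate'] for s in sources)
    overload = input_rate / target_capacity
    # Weight = Cost + MCR, as used throughout the paper.
    for s in sources:
        s['weight'] = s['cost'] + s['mcr']
    total_weight = sum(s['weight'] for s in sources)
    for s in sources:
        s['excess_fair_share'] = target_capacity * s['weight'] / total_weight
    return overload


def on_brm_received(source, overload, target_rate, er_in_rm):
    """Explicit-rate feedback written into a backward RM cell."""
    vcshare = max(0.0, source['rate'] - source['mcr']) / overload
    er = source['mcr'] + max(source['excess_fair_share'], vcshare)
    return min(er_in_rm, er, target_rate)
```

In a two-queue setup, each queue would run these steps with its own source list, its own averaging interval, and its own target capacity (the queue fair share computed from the weight sums).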
It may be very difficult for all sources to reach the steady-state region at the same time because of the independence among the queues, but each source will obtain what it needs from the available bandwidth. This realizes our main goals: maximizing link utilization, minimizing queuing delays, achieving fair allocation, reducing the transient response time, and achieving stable and robust operation.

IV. SIMULATION CONFIGURATION AND PARAMETERS

In this section, the simulation configuration and parameters are discussed. We use the common original configuration shown in Figure 3 to test the performance of the newly proposed switch algorithm. We assume that the sources are greedy, i.e., they have infinite cells to send at their Allowed Cell Rates (ACRs). In this configuration the traffic is unidirectional, from source to destination. If bi-directional traffic were used, similar results would be achieved, except that the convergence time would be longer, since the RM cells in the backward direction would travel along with cells from destination to source. In this configuration, cells travel from the sources to the destinations through the two switches (SW1 and SW2) and the bottleneck link. We assume that only one feedback is given to the sources in each averaging interval, to avoid unnecessary conflicting feedbacks. The common original configuration is used to confirm that the newly proposed switch algorithm can achieve general fairness for different sets of weight functions.

Figure 3: N Sources - N Destinations Configuration

Definition of the Parameters within the Configuration
• N infinite sources send to N destinations.
• The traffic is unidirectional.
• The Initial Cell Rate (ICR) values of all the sources are chosen randomly in the range (0, link rate).
• All links are 1000 km long, corresponding to a propagation delay of 5 ms.
• All links have a bandwidth of 149.76 Mbps (155.52 Mbps less the Synchronous Optical NETwork (SONET) overhead).
• The sources start at random times in the range (0, RTT), where RTT is the Round Trip Time; RTT = 30 ms for the above configuration.
• Hyperbolic function parameters for the dynamic queues: a = 1.15 and b = 1.05, where a and b control the degree of curvature of the hyperbolic function.
• QDLF (the Queue Drain Limit Factor) = 0.5.
• TCP Maximum Segment Size (MSS) of 512 bytes.
• Weight = Cost + MCR.
• Motion Picture Experts Group (MPEG-2) [53] traffic is used to generate the video frames, and a Leaky Bucket shaper [54] smooths out the traffic at the sources.
(See the simulator flow chart at the end of the paper.)
In the next section, we explore the simulation results of the newly proposed algorithm. Performance studies of the different rates, queue lengths and utilization are presented. All the performance studies are done at switch SW1; our future research will examine both switches (SW1 and SW2) within the configuration.

V. SIMULATION RESULTS

At the beginning, the video and data queues at the switch grow depending on the Initial Cell Rates (ICRs). The maximum queue length therefore depends on the Initial Cell Rates (ICRs) and the Round Trip Time (RTT), and is independent of the queue control function used. The influence of the ICRs appears only during the first Round Trip Time; the feedback information then reaches the sources, and the sources adjust their rates accordingly.

Figure 4: Video and Data ACRs Vs Time

Figure 4 shows the variation in rates for video and data traffic during 350 milliseconds, where the initial cell rates are 70 and 50 Mbps for the video and data sources respectively. In this case the sum of the initial rates (70 + 50 = 120 Mbps) is less than the ABR capacity. Moreover, the two sources achieve the General Weighted (GW) fairness rates. The weight function used in this case is Cost + MCR (25 + 30 for the video traffic and 5 + 10 for the data traffic). The leftover capacity (149.76 - 30 - 10 = 109.76 Mbps) is divided in proportion to (55, 15). Hence the GW fair shares are (30 + 55/70 × 109.76, 10 + 15/70 × 109.76) = (116.24, 33.52) Mbps. All sources enter the steady-state region within the first 100 milliseconds of the simulation period.
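The arithmetic of this example can be checked directly against the GW fairness formula of Section III: each source receives its MCR plus a weight-proportional share of the leftover capacity. A minimal sketch:

```python
def gw_fair_shares(mcrs, weights, capacity):
    """General weighted fair allocation: MCR plus a weight-proportional
    share of the capacity left over after all MCRs are granted."""
    leftover = capacity - sum(mcrs)
    total_w = sum(weights)
    return [m + w / total_w * leftover for m, w in zip(mcrs, weights)]

# Reproduces the worked example: MCRs (30, 10) Mbps, weights (55, 15),
# link capacity 149.76 Mbps.
shares = gw_fair_shares([30, 10], [55, 15], 149.76)
print([round(s, 2) for s in shares])  # → [116.24, 33.52]
```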

Figure 5: Video and Data Queues vs. Time using a CQF

Figure 6: Video and Data Queues vs. Time using a DQF

Figures 5 and 6 show the video and data queue lengths during 350 milliseconds for two cases: using a Dynamic Queue Control Function (DQF) and using a Constant Queue Control Function (CQF). The GW fair shares for the video and data sources are 116.24 and 33.52 Mbps respectively when using a DQF, while with a CQF they are 104.47 and 30.31 Mbps respectively. All the values and parameters, such as costs, weights, and ICRs, are the same as in Figure 4. Comparing the two cases, the queue behaviour is better with a CQF, while the link utilization with a DQF is better than with a CQF (see Figure 7).

Figure 7: Link Utilization Vs Time

Figure 8: Video and Data Queues Vs Time using a CQF

Figure 9: Video and Data Queues Vs Time using a DQF

Figure 10: Link Utilization Vs Time

Figures 8, 9, and 10 show the queue lengths and utilization when using a CQF and a DQF with four active sources (two video sources and two data sources) during 350 milliseconds, where the initial cell rates are (35, 40) Mbps for the video sources and (15, 20) Mbps for the data sources. The weight function used in this case is Cost + MCR (25 + 30 and 25 + 35 for the video traffic, and 10 + 10 and 10 + 15 for the data traffic). When the number of active video and data sources is increased, as in this case, the performance remains acceptable, as seen in Figures 8, 9, and 10. This also confirms that our proposed algorithm and simulator work efficiently.

VI. DISCUSSION

This section presents a comparative discussion between our proposal and previous algorithms. Most prior studies used the ERICA+ switch algorithm with one queue carrying multimedia traffic over the ATM-ABR service [49, 53, 54], whereas our proposal uses separate queues. The main differences between the prior results and ours can be summarized in the following points:
1- In almost all studies, each source obtains its fair share rate.
2- In our study, the separate queues protect each traffic type from interfering with the other, so the delay is reduced. With one queue for more than one traffic type, interference may occur, so the delay will be longer.
3- In the case of three sources sending to an ABR switch, our results when using a Constant Queue Function (CQF) are identical to those obtained with the GWFairERICA+ switch algorithm with one queue of multimedia traffic studied in [49]. It should be mentioned, however, that our results when using a Dynamic Queue Function (DQF) are steadier than the results with one queue of multimedia traffic studied in [49].
4- The weakness is the cost, which increases when several queues are used in the ABR switch hardware.

VII. CONCLUSION

The paper has discussed a newly proposed algorithm that can be used to enhance the quality of service of multimedia applications when using the ABR service in an ATM network. Using a non-zero MCR is very useful for carrying multimedia over an ATM network. The proposed algorithm depends on the use of an independent queue for each traffic type to protect the traffic types from interfering with each other, which reduces the delay, particularly for delay-sensitive traffic such as video and audio. This is very beneficial for video cells, which are highly delay-sensitive, and results in a steady increase of the switch throughput. The proposed algorithm also divides the output link bandwidth dynamically among the different queues. This method sums the weights of the sources using a specific queue and divides that sum by the total weight of all active sources; the result is the ratio of the bandwidth that the queue will use to transmit its cells. Dividing the output bandwidth among the queues happens at the end of the averaging interval period, which is different for each queue type. The simulation results indicate that using the general weighted fair ERICA+ switch algorithm (GWFairERICA+) with separate queues maximizes link utilization, minimizes queuing delays, achieves stable and robust operation, achieves fair allocation, and reduces the transient response. Clearly, an ATM network using the ABR service can effectively handle multimedia traffic in real-world network environments.

REFERENCES
[1] Ning L and Yue Xu (Debbie), "TCP over ATM," http://www.cs.ubc.ca/spides/dux/course_project/527.html.
[2] V. Singhal and A. K. Vatsa, "A Novel Congestion Control Mechanism With Accelerating Effect," International Journal of Computer Applications (0975-8887), Vol. 22, No. 5, May 2011.
[3] Yuxing Wang, "TCP-FIT: An Improved TCP Congestion Control Algorithm and its Performance," Proceedings of IEEE INFOCOM'11, pp. 2894-2902, 10-15 April 2011.
[4] Zhang Mu, "Research on FAST TCP Congestion Control Algorithm," Proceedings of Future Information Technology and Management Engineering (FITME'10), pp. 464-466, 9-10 Oct. 2010.
[5] I. Sahin and M. A. Simaan, "Competitive Flow Control in General Multi-Node Multi-Link Communication Networks," International Journal of Communication Systems, Vol. 21, No. 2, pp. 167-184, Feb. 2008.
[6] P. Ignaciuk and A. Bartoszewicz, "Congestion Control Protocol for Connection Oriented Networks With a Periodic Feedback and Non-Persistent Sources," Theoretical and Applied Informatics, Vol. 19, No. 3, pp. 217-233, 2007.
[7] Arjan Durresi, Leonard Barolli, Raj Jain, and Makoto Takizawa, "Congestion Control Using Multi Level Explicit Congestion Notification," IPSJ Digital Courier, Vol. 3, pp. 42-54, 2007.
[8] Ijaz Haider Naqvi and Tanguy Perennou, "A DCCP Congestion Control Mechanism for Wired-Cum-Wireless Environments," IEEE Communications Society, in WCN 2007 Proceedings.
[9] R. S. Deshpande and P. D. Vyavahare, "Recent Advances and a Survey of Congestion Control Mechanisms in ATM Networks," IE(I) Journal, Vol. 88, pp. 47-54, 2007.
[10] W. Li, Z. Che, and Y. Li, "Research on the Congestion Control of Broadband Integrated Service Digital Network Based on ATM," Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, pp. 2510-2512, 2006.
[11] S. Floyd and E. Kohler, "Profile for Datagram Congestion Control Protocol (DCCP) Congestion Control ID 2: TCP-Like Congestion Control," RFC 4341, March 2006.
[12] P. Ignaciuk and A. Bartoszewicz, "Congestion Control in Connection-Oriented Communication Networks With Unisochronic Feedback," Proceedings of the International Conference on Signals and Electronic Systems, Lodz, Poland, pp. 445-448, Sept. 2006.
[13] C. Bruni, F. Delli Priscoli, G. Kock, and S. Vergari, "Traffic Management in a Band Limited Communication Network: An Optimal Control Approach," International Journal of Control, Vol. 78, No. 16, pp. 1249-1264, Nov. 2005.
[14] T. Alpcan and T. Basar, "A Globally Stable Adaptive Congestion Control Scheme for Internet-Style Networks With Delay," IEEE/ACM Transactions on Networking, Vol. 13, No. 6, pp. 1261-1274, Dec. 2005.
[15] Minseok Kwon and Sonia Fahmy, "On TCP Reaction to Explicit Congestion Notification," Journal of High Speed Networks, Vol. 13, No. 2, pp. 123-138, 2004.
[16] E. Al-Hammadi and M. M. Shasavari, "Engineering ATM Networks for Congestion Avoidance," Mobile Networks and Applications, Vol. 5, pp. 157-163, 2000.
[17] A. Hac and H. Lin, "Congestion Control for ABR Traffic in an ATM Network," International Journal of Network Management, Vol. 9, pp. 249-264, 1999.
[18] C. F. Su, G. de Veciana, and J. Walrand, "Explicit Rate Flow Control for ABR Services in ATM Networks," IEEE/ACM Transactions on Networking, Vol. 8, No. 3, pp. 350-361, June 2000.
[19] Y. Zhao, S. Q. Li, and S. Sigarto, "A Linear Dynamic Model for Design of Stable Explicit-Rate ABR Control Schemes," Proceedings of IEEE INFOCOM'97, Kobe, Japan, Vol. 1, pp. 283-292, April 1997.
[20] B. Vandalore, S. Fahmy, R. Jain, R. Goyal, and M. Goyal, "A Definition of General Weighted Fairness and its Support in Explicit Rate Switch Algorithms," Proceedings of the 6th International Conference on Network Protocols (ICNP'98), Austin, Texas, USA, pp. 22-30, Oct. 13-16, 1998.
[21] B. Vandalore, S. Fahmy, R. Jain, R. Goyal, and M. Goyal, "General Weighted Fairness and its Support in Explicit Rate Switch Algorithms," Computer Communications, Vol. 23, Issue 2, pp. 149-161, January 2000.
[22] M. Sreenivasulu, E. V. Prasad, and G. S. S. Raju, "Performance Evaluation of EFCI and ERICA Schemes for ATM Networks," IJCTA, Vol. 2(4), pp. 981-986, July-August 2011.
[23] M. Sreenivasulu, E. V. Prasad, and G. S. S. Raju, "Performance Evaluation of Rate Based Congestion Control Schemes for ATM Networks," IJCSNS, Vol. 11, No. 6, pp. 190-196, June 2011.
[24] M. Sreenivasulu, E. V. Prasad, and G. S. S. Raju, "Enhanced ERICA Congestion Control Scheme for ATM Networks," IJCSNS, Vol. 11, No. 5, pp. 133-140, May 2011.
[25] Sonia Fahmy, Raj Jain, Rohit Goyal, and Bobby Vandalore, "On Determining the Fair Bandwidth Share for ABR Connections in ATM Networks," Journal of High Speed Networks, Vol. 11, Issue 2, pp. 121-135, 2002.
[26] Shivkumar Kalyanaraman, Raj Jain, Sonia Fahmy, Rohit Goyal, and Bobby Vandalore, "The ERICA Switch Algorithm for ABR Traffic Management in ATM Networks," IEEE/ACM Transactions on Networking, Vol. 8, No. 1, pp. 87-98, February 2000.
[27] Y. T. Hou, H. H. Y. Tzeng, and S. S. Panwar, "A Generalized Max-Min Rate Allocation Policy and Its Distributed Implementation Using the ABR Flow Control Mechanism," Proceedings of INFOCOM'98, April 1998.
[28] S. P. Abraham and A. Kumar, "A Stochastic Approximation Approach for a Max-Min Fair Adaptive Rate Control of ABR Sessions with MCRs," Proceedings of INFOCOM'98, April 1998.
[29] Y. T. Hou, H. Tzeng, and S. S. Panwar, "A Simple ABR Switch Algorithm for the Weighted Max-Min Fairness Policy," Proceedings of the IEEE ATM'97 Workshop, Lisbon, Portugal, pp. 329-338, May 25-28, 1997.
[30] Y. Yin, "Max-Min Fairness vs. MCR Guarantee on Bandwidth Allocation for ABR," Proceedings of the IEEE ATM'96 Workshop, San Francisco, CA, August 25-27, 1996.
[31] D. H. K. Tsang and W. K. F. Wong, "A New Rate Based Switch Algorithm for ABR Traffic to Achieve Max-Min Fairness with Analytical Approximation and Delay Adjustment," Proceedings of IEEE INFOCOM'96, pp. 1174-1181, March 1996.
[32] L. Kalampoukas, A. Varma, and K. K. Ramakrishnan, "An Efficient Rate Allocation Algorithm for ATM Networks Providing Max-Min Fairness," Proceedings of the 6th IFIP International Conference on High Performance Networking, Sept. 1995.
[33] R. Jain, L. Kalampoukas, R. Goyal, S. Fahmy, and R. Viswanathan, "ERICA Switch Algorithm: A Complete Description," ATM Forum/96-1172, August 1996.
[34] R. Jain, L. Kalampoukas, S. Fahmy, S. Kalyanaraman, and R. Goyal, "ABR Switch Algorithm Testing: A Case Study With ERICA," ATM Forum/96-1267, October 1996.
[35] Sonia Fahmy, Raj Jain, Rohit Goyal, and Bobby Vandalore, "Design and Simulation of ATM-ABR End System Congestion Control," Transactions of the Society for Computer Simulation, Vol. 78, Issue 3, March/April 2002.
[36] A. Subramani and A. Krishnan, "Doubly Finite Queues (DFQ) Supporting for ABR Traffic Load in ATM Networks Using MSVDR Algorithm," IEEE International Advance Computing Conference (IACC), pp. 13-19, 2009.
[37] Su Bing, Yu Haiyang, Lu Jieru, and Ma Zhenghua, "Traffic Optimization on the Dynamic Switching of ABR for OSPF Networks," 2009 International Conference on Information Technology and Computer Science, pp. 429-432, 2009.
[38] X. Li, Y. Zhou, G. M. Dimirovski, and Y. Jing, "Simulated Annealing Q-learning Algorithm for ABR Traffic Control of ATM Networks," 2008 American Control Conference, pp. 4462-4467, 2008.
[39] K. G. Shin, D. Saha, and D. D. Kandlur, "Scalable Flow Control for Multicast ABR Services in ATM Networks," IEEE/ACM Transactions on Networking, pp. 67-85, 2002.
[40] N. Ghani and J. W. Mark, "Enhanced Distributed Explicit Rate Allocation for ABR Services in ATM Networks," IEEE/ACM Transactions on Networking, pp. 71-86, 2000.
[41] B. K. Kim and C. Thompson, "ABR Traffic Control in ATM Networks Using Optimal Control Theory," Proceedings of the 1st IEEE International Conference on ATM (ICATM'98), pp. 327-333, 1998.
[42] Hiroyuki Ohsaki, Masayuki Murata, and Hideo Miyahara, "Designing Efficient Explicit-Rate Switch Algorithm with Max-Min Fairness for ABR Service Class in ATM Networks," Proceedings of IEEE ICC'97, pp. 182-186, 1997.
[43] A. Pitsillides and P. Ioannou, "An Integrated Switching Strategy for ABR Traffic Control in ATM Networks," Proceedings of the Second IEEE Symposium on Computers and Communications, IEEE Computer Society, pp. 501-506, July 1997.
[44] Nasir Ghani and John W. Mark, "Dynamic Rate-Based Control Algorithm for ABR Service in ATM Networks," Proceedings of IEEE GLOBECOM'96, pp. 1074-1079, 1996.
[45] E. A. Khalil, I. Z. Morsi, and M. Mashem, "New Proposal to the ERICA+ Switch Algorithm," accepted for publication in the 3rd International Conference on Networking (ICN'04), March 1-4, 2004, Pointe-a-Pitre, Guadeloupe, French Caribbean.
[46] E. A. Khalil, I. Z. Morsi, and M. Mashem, "Achieving QoS for TCP Multimedia Traffic Over ATM-ABR Services," accepted for publication in the 3rd International Conference on Networking (ICN'04), March 1-4, 2004, Pointe-a-Pitre, Guadeloupe, French Caribbean.
[47] Y. Afek, Y. Mansour, and Z. Ostfeld, "Phantom: A Simple and Effective Flow Control Scheme," Proceedings of ACM SIGCOMM'96, August 1996.
[48] K. Siu and T. Tzeng, "Intelligent Congestion Control for ABR Service in ATM Networks," Computer Communication Review, Vol. 24, No. 5, pp. 81-106, October 1995.
[49] K.-Y. Siu and H.-Y. Tzeng, "Performance of TCP Over ATM with Time-Varying Available Bandwidth," Computer Communications, Vol. 19, pp. 927-936, 1996.
[50] Bobby Vandalore, "Traffic Management to Enhance Quality of Service (QoS) of Multimedia Over Available Bit Rate (ABR) Service in Asynchronous Transfer Mode (ATM) Networks," Ph.D. Dissertation, The Ohio State University, June 2000.
[51] Bobby Vandalore, Raj Jain, Rohit Goyal, and Sonia Fahmy, "Dynamic Queue Control Functions for ATM ABR Switch Scheme: Design and Analysis," Computer Networks, Vol. 31, Issue 18, pp. 1935-1949, August 1999.
[52] B. Vandalore, R. Jain, R. Goyal, and S. Fahmy, "Design and Analysis of Queue Control Functions for Explicit Rate Switch Schemes," Proceedings of ICCCN'98, pp. 780-786, October 1998.
[53] Christos Tryfonas, "MPEG-2 Transport Over ATM Networks," M.S. Thesis, UC Santa Cruz, Sept. 1996.
[54] M. Graf, "VBR Video over ATM: Reducing Network Resource Requirements through End System Traffic Shaping," Proceedings of IEEE INFOCOM'97, Kobe, Japan, pp. 48-57, Apr. 7-11, 1997.

Flow charts

Author
Ehab Aziz Khalil (B.Sc. '78, M.Sc. '83, Ph.D. '94) received the Ph.D. in Computer Networks and Multimedia from the Dept. of Computer Science & Engineering, Indian Institute of Technology (IIT) Bombay-400076, India, in July 1994, where he was a Research Scholar from 1988 to 1994. He received the M.Sc. in Systems and Automatic Control from the Faculty of Electronic Engineering, Minufiya University, Menouf-32952, Egypt, in Oct. 1983, and the B.Sc. from the Dept. of Industrial Electronics of the same faculty in May 1978. Since July 1994 he has been working as a Lecturer with the Dept. of Computer Science & Engineering, Faculty of Electronic Engineering, Minufiya University, Menouf-32952, Egypt. He participated in the TPC of the IASTED Conference, Jordan, in March 1998, and in the TPC of IEEE IC3N, USA, from 2000 to 2002; was a Consulting Editor with "Who's Who?" in 2003-2004; and has been a member of the IEC since 1999 and a member of the Internet2 group. He is Manager of the Information and Link Network of Minufiya University and of the Information and Communication Technology Project (ICTP), currently being implemented in the Arab Republic of Egypt by the Ministry of Higher Education and the World Bank. He has published more than 85 research papers and review articles in international conferences, journals and local newsletters.


AUTOMATED TEST JIG FOR UNIFORMITY EVALUATION OF LUMINARIES
Deepa Ramane1, Jayashri Bangali2, Arvind Shaligram3
1 Department of Electronic Science, Dr. D. Y. Patil Arts, Commerce and Science College, Pimpri, Pune, Maharashtra, India
2 Department of Electronics, Kaveri College of Science and Commerce, Erandwane, Pune, Maharashtra, India
3 Department of Electronics Science, University of Pune, Pune, Maharashtra, India

ABSTRACT
Uniformity of illumination is of prime importance in the design of lighting systems. It is often evaluated by measuring illuminance values on the target plane at predefined locations using a luxmeter. To speed up the experimentation and to automate the measuring procedure, a portable test-jig has been developed. It acquires illuminance data from sixteen predefined locations on the target surface simultaneously, and the data is used for computing the uniformity of illumination. The test-jig consists of photo sensors, a Data Acquisition System (DAS) and a control system. The photo sensors are fixed on a flex sheet which can be rolled; the sheet is spread over the surface whose uniformity is to be measured. The portable, automated test-jig gives the uniformity of illumination quickly. The test-jig is used to evaluate the performance of three luminaires, viz. incandescent bulb, CFL and LED. The results confirm that LED bulbs are more energy efficient than the others, giving more illuminance without hampering illumination uniformity. A 6 W LED luminaire illuminates a 1 m × 1 m target surface with uniformity in the range 0.4-0.73 for target-source distances of 50 cm to 170 cm. The dependence of uniformity on source wattage and view angle is also reported.

KEYWORDS: lighting system, uniformity of illumination, photo sensor, DAS (Data Acquisition System)

I. INTRODUCTION

Light Emitting Diodes (LEDs) have been the subject of growing interest among lighting designers in recent years. LEDs are replacing conventional light sources such as the incandescent bulb and the compact fluorescent tube in almost all illumination applications [1, 2]. The wide range of illumination applications demands different specifications such as recommended lux level, uniformity of illumination, minimum glare, low cost, and low energy consumption. The recommended values of these specifications are provided in the IESNA handbook and are achieved by proper luminaire design. Uniformity is one of the major specifications in many applications. Before installing a luminaire, the lighting designer has to verify whether the proposed illumination system design fulfils the specified value of uniformity along with the recommended illuminance level. Computation of uniformity needs illuminance data from a number of locations on the target surface; the data provides the maximum, minimum and average illuminance values. Acquiring illuminance data from a number of locations simultaneously and accurately is a challenging task. The proposed automated, portable test-jig measures the illuminance in lux at different points on the target surface and computes the uniformity of illumination, which can be used as a metric to evaluate the performance of the luminaire. The illumination uniformity evaluation methods reported in the literature are reviewed first. The test-jig developed for automatic acquisition of illuminance data is presented in the subsequent section, followed by a description of the experimental setup. Using the test-jig, the performance of three different

luminaires, viz. incandescent bulb, CFL and LED, is evaluated. The results of this testing and a discussion of the obtained results are given at the end.

II. LITERATURE REVIEW

A lot of research is being done on designing luminaires with improved uniformity. The majority of efforts claim to provide an optimal solution for uniform illumination over a planar surface by optimizing a number of parameters such as the number of source elements, their geometrical placement, the optical characteristics of the sources, and the source-to-target-plane distance [3-8]. Papers have reported the use of secondary optics such as diffusers, lenses, and reflectors for further improvement in uniformity [9]. To verify the feasibility of an LED luminaire design, optical simulation programs based either on ray tracing or on analytical equations are used [10-13]. Experimentally, the performance of a luminaire can be evaluated by three different methods: illuminance, luminance, and small target visibility [14]. Among these, the illuminance measurement method is most often used to evaluate luminaire performance. Zeljko et al. have measured illuminance with automated inspection of ceramic tiles using the "dot-method" [15]. Often the uniformity is computed by capturing an image of the illuminated surface with a CCD camera; the image is used to plot iso-contours in MATLAB, which serve as a metric of uniformity [7]. A simple method to measure illuminance levels is to use a handheld luxmeter: the illuminance data is acquired from different points on the target surface by manually positioning the luxmeter at the desired locations, and is used to determine the maximum, minimum and average illuminance values, which predict the uniformity. This manual collection of data is sometimes troublesome and labour-intensive. In applications like the evaluation of roadway lighting systems, the safety of the operators is a key issue; Zhou et al. have reported a measurement system for roadway lighting consisting of a light meter, a distance measurement system, a computer and software [14]. Considering the need for a portable, handy and speedy automated test-jig for uniformity evaluation, an attempt has been made to develop one.

III. TEST-JIG DEVELOPMENT

The block diagram of the test-jig developed for this task is shown in figure 1. It consists of the luminaire under test, a control system, a 16-channel Data Acquisition System (DAS), and a personal computer to store, analyze, and display the captured data. The idea is to sense the illuminance on the target surface at a number of predetermined points and compute the uniformity as per the IESNA guideline. Further illumination analysis is carried out in MATLAB version 7.8. The development of the test-jig involves the following tasks:
• hardware development for automatic collection of illuminance data;
• software development to collect, store, and analyze the illuminance data.

3.1 Hardware Development
Hardware development involves designing the control system and the 16-channel DAS. The control system comprises photo sensors, signal conditioners, an analog multiplexer, a counter, and a pulse generator. The photo sensors are fixed at predetermined locations on a flex sheet of 1 m × 1 m, which is spread over the target plane whose uniformity is to be evaluated. The sheet can be folded or rolled, making it handy to carry and store. The photo sensors used here are light-dependent resistors (LDRs) whose output resistances are calibrated against intensity values using a luxmeter. The sensor locations depend on the number of luminaires; the guideline is given in the IESNA handbook [16]. The photo sensors capture the illuminance values, and their outputs are connected to the input channels of the analog multiplexer, which sequentially passes the channel illuminance values to the output of the DAS. To automate channel selection, a 4-bit counter generates the binary sequence 0000 to 1111; the rate of channel selection is controlled by the frequency of the pulse generator. The multiplexer output is connected to a Rishabh data acquisition system.
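The calibration step above maps each LDR resistance reading to an illuminance value. As a rough sketch, assuming a power-law resistance-illuminance response (the coefficients below are illustrative placeholders, not from the paper; in the jig they would be fitted from the luxmeter calibration readings):

```python
# Hypothetical power-law model of an LDR: R = A * E^(-gamma), where R is the
# sensor resistance (ohms) and E is the illuminance (lux). A and GAMMA are
# illustrative placeholders, not measured values.
A = 500_000.0   # assumed resistance at 1 lux
GAMMA = 0.7     # assumed slope of the log-log response

def resistance_to_lux(resistance_ohms):
    """Invert the power law: E = (A / R)^(1/gamma)."""
    return (A / resistance_ohms) ** (1.0 / GAMMA)

# Convert one multiplexed DAS reading (10 kOhm) to an illuminance estimate:
lux = resistance_to_lux(10_000.0)
```

A lower resistance maps to a higher illuminance, matching the usual LDR behaviour; the actual fit would come from the luxmeter calibration described above.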

42

Vol. 3, Issue 1, pp. 41-47

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

Figure 1. Block Diagram of Test-jig for Uniformity Evaluation of Illumination System

3.2 Software Development
The resistance data of the photo sensors are transferred one after the other to the Rishabh system software through the USB port. The equivalent illuminance values are retrieved in an Excel sheet as a function of the position of the target point. The sampling rate of data acquisition can be adjusted in the software; for the results in the next section one sample is taken per second. Five data samples are taken from each target point, and their average is taken as the illuminance of that target point. The uniformity is then computed using the formula:

Uniformity of illumination = Average illuminance / Maximum illuminance …… (1)

In this case average illuminance on target plane is the average of illuminance values at ‘n’ target points.
Average illuminance = ( ∑ illuminance at target point i, for i = 1 to n ) / n …… (2)

Perfectly uniform illumination is achieved when the maximum and average illuminance levels match, in which case the uniformity ratio becomes one. The designer therefore tries to achieve a uniformity value as close to one as possible. For general illumination applications a uniformity ratio of 0.6 is tolerable, while task illumination needs a uniformity ratio greater than 0.9. The illuminance data at the 'n' target points are imported into MATLAB 7.8, where 3-D plots of the illuminance distribution help in further analysis.
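Equations (1) and (2), together with the five-sample averaging described above, can be sketched in code as follows (the lux readings are made up for demonstration, not measured data):

```python
def average_illuminance(samples_per_point):
    """Eq. (2): each target point contributes the mean of its samples;
    the plane average is the mean over all n target points."""
    point_values = [sum(s) / len(s) for s in samples_per_point]
    return sum(point_values) / len(point_values), point_values

def uniformity(samples_per_point):
    """Eq. (1): uniformity ratio = average illuminance / maximum illuminance."""
    avg, point_values = average_illuminance(samples_per_point)
    return avg / max(point_values)

# Illustrative lux readings for four target points, five DAS samples each:
readings = [
    [60, 62, 61, 59, 60],   # centre of the target plane
    [40, 41, 39, 40, 40],
    [38, 40, 39, 41, 38],
    [30, 31, 29, 30, 30],
]
u = uniformity(readings)   # 1.0 would mean perfectly uniform illumination
```

For these sample readings the ratio falls between the 0.6 general-lighting threshold and the ideal value of one.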


IV. EXPERIMENTAL SETUP

The experimental setup used for uniformity measurement is shown in figure 2. It consists of a source assembly, a pulley arrangement, and a luxmeter. The luxmeter is used to record illuminance readings to validate the results of the developed test-jig. The pulley arrangement serves for source mounting and source-to-target height adjustment: the source assembly, along with its driving circuitry, is mounted on a wooden board, and the source height is adjusted by moving the board up and down with the rope. The target surface is at a distance of 'h' metres from the source, and the flex sheet is spread over it. Thirteen photo sensors are placed on the flex sheet as shown in figure 3; their placement follows the guidelines given in reference [16] for average illuminance measurement over a regular area with a symmetrically located single luminaire. Illuminance values are measured both with the test-jig and with the luxmeter.


Figure 2 Experimental setup for uniformity measurement

Figure 3 Placement of photo sensors on the flex sheet


V. RESULTS AND DISCUSSIONS

The test-jig is used to evaluate the uniformity of illumination when a target surface of 1 m × 1 m is illuminated by incandescent, CFL, and multiple-LED luminaires at different heights. Considering LEDs as the future of illumination, additional experiments are performed for the multiple-LED luminaire with varying source geometry and optical characteristics.

Uniformity values are computed from illuminance results obtained with the test-jig as well as from actual measurements with the luxmeter. The results are tabulated in Tables 1 and 2. Table 1 compares the performance of the light sources at different heights, and figure 4 shows 3-dimensional plots of the illuminance distribution over the flex sheet, plotted in MATLAB.
Table 1 Uniformity results for source comparison ( T = test-jig results; E = luxmeter results )

Light source         | 50 cm T / E | 80 cm T / E | 110 cm T / E | 140 cm T / E | 170 cm T / E
LED, 6 W             | 0.41 / 0.44 | 0.52 / 0.57 | 0.61 / 0.64  | 0.65 / 0.67  | 0.70 / 0.73
CFL, 11 W            | 0.42 / 0.46 | 0.51 / 0.53 | 0.62 / 0.65  | 0.63 / 0.67  | 0.72 / 0.76
Incandescent, 60 W   | 0.30 / 0.32 | 0.41 / 0.45 | 0.51 / 0.53  | 0.61 / 0.64  | 0.80 / 0.84

(a) LED luminaire : 6 W

(b) CFL luminaire : 11 W

(c) Incandescent luminaire : 60 W

Figure 4 : 3-dimensional view of Illuminance distribution over flex sheet

The 3-D graphs show the greatest peak illuminance at the centre of the target plane for all luminaires. The peak values are 61.8, 29.5, and 41.6 lux for the LED, CFL, and incandescent luminaires respectively. The illuminance is greatest for the LED luminaire in spite of its lower wattage, and the Table 1 results show that the uniformity of the LED bulb is on par with the other two sources. One can therefore say that LED bulbs are energy efficient, giving more illuminance without hampering uniformity. The LED luminaire performance is further evaluated for increasing wattage and increasing view angles at different heights; Table 2 summarizes the uniformity results.
Table 2 Uniformity results for LED luminaire ( T = test-jig results; E = luxmeter results )

LED wattage, view angle | 50 cm T / E | 80 cm T / E | 110 cm T / E | 140 cm T / E | 170 cm T / E
6 W, 135⁰               | 0.41 / 0.44 | 0.52 / 0.57 | 0.61 / 0.64  | 0.65 / 0.67  | 0.70 / 0.73
6 W, 60⁰                | 0.34 / 0.38 | 0.41 / 0.44 | 0.54 / 0.58  | 0.60 / 0.62  | 0.63 / 0.65
12 W, 60⁰               | 0.41 / 0.44 | 0.63 / 0.67 | 0.65 / 0.66  | 0.70 / 0.72  | 0.81 / 0.83

The readings show that as the height increases, the uniformity of illumination improves for all three luminaires. For the same height, the wider-angle LED luminaire gives better uniformity, and increasing the source flux also improves the uniformity markedly.

VI. CONCLUSION

This paper reports an automated test-jig using a 16-channel DAS for illumination uniformity measurement. The photo sensors are mounted on a flex sheet of size 1 m × 1 m, and the sheet is spread over the illuminated surface to be analyzed. Using the developed test-jig, illuminance values on the target plane are measured at predefined locations; data are acquired simultaneously from sixteen predefined locations. The portable, automated test-jig speeds up the illuminance measurement procedure and is hence useful for computing the uniformity of illumination on the target plane. The jig was tested on three types of conventional sources and on LED luminaires with varying wattage and view angle, and its results agree with those obtained using a luxmeter. The developed system is useful for evaluating the uniformity of illumination for all types of luminaires and thus helps the designer optimize the luminaire design.

REFERENCES
[1]. Evans D.L., “High Luminance LEDs Replace Incandescent Lamps in New Applications,” SPIE, Vol. 3002, pp 142-153.
[2]. Peon R., Doluweera G., Platonova, Halliday D., (2005) “Solid state lighting for the developing world - the only solution,” SPIE, 5th International Conference on Solid State Lighting, Vol. 5941, pp 59410N-1 to 59410N-15.
[3]. Moreno I., (2004) “Configuration of LED arrays for uniform illumination,” Proc. SPIE, Vol. 5622, pp 713-718.
[4]. Kopparapu S.K., (2006) “Lighting design for machine vision application,” Image and Vision Computing, Vol. 24, pp 720-726.
[5]. Moreno I., (2006) “Design of LED spherical lamps for uniform far-field illumination,” Proc. SPIE, Vol. 6046, pp 60462E-1 to 60462E-7.
[6]. Wittels N., Gennert M.A., (1994) “Optimal lighting design to maximize illumination uniformity,” Proc. SPIE, Vol. 2348, pp 46-56.
[7]. Yang H., Bergmans J.W.M., Schenk T.C.W., Linnartz J.-P.M.G., Rietman R., (2009) “Uniform Illumination Rendering Using an Array of LEDs: A Signal Processing Perspective,” IEEE Transactions on Signal Processing, Vol. 57, No. 3, pp 1044-1057.
[8]. Gennert M.A., Wittels N., Leatherman G.L., (1993) “Uniform frontal illumination of planar surfaces: where to place the lamps,” Optical Engineering, Vol. 32 (06), pp 1261-1271.
[9]. Chen F., Liu S., Wang K., Liu Z., Luo X., (2009) “Free-form lenses for high illumination quality light-emitting diode MR16 lamps,” Optical Engineering, Vol. 48 (12), 123002.
[10]. Qin Z., Wang K., Chen F., Luo X., Liu S., (2010) “Analysis of condition for uniform lighting generated by array of light emitting diodes with large view angle,” Optics Express, Vol. 18 (16), pp 17460-17476.
[11]. Whang A., Chen Y.-Y., Teng Y.-T., (2009) “Designing Uniform Illumination Systems by Surface-Tailored Lens and Configurations of LED Arrays,” Journal of Display Technology, Vol. 5 (3), pp 94-103.
[12]. Chen J.-J., Huang K.-L., Wang T.-Y., Wang Y.-C., Wang C.-C., Guo T.-Y., (2010) “LED lighting module design based on a prescribed candle-power distribution for uniform illumination,” Proc. SPIE, Vol. 7849, pp 78490X-1 to 78490X-8.
[13]. Ramane D.V., Shaligram A.D., (2011) “Optimization of multielement LED source for uniform illumination of plane surface,” Optics Express, Vol. 19, No. S4, pp A639-A648.

[14]. Zhou H., Pirinccioglu F., Hsu P., (2009) “A new roadway lighting measurement system,” Transportation Research Part C, Vol. 17, pp 274-284.
[15]. Hocenski Z., Dizar A., Hocenski V., (2008) “Illumination design of a control system for visual inspection of ceramic tiles,” IEEE, pp 1093-1097.
[16]. “Measurement of Light and other Radiant Energy,” IES Lighting Handbook, 1981: reference volume, section 4.

Authors Biographies
D.V. Ramane, born in 1970, is an Asst. Prof. in the Department of Electronics, Dr. D.Y. Patil Arts, Commerce and Science College, Pimpri, Pune, India. She received her M.Sc. in 1992 and is pursuing her Ph.D. at the University of Pune. She has more than 10 publications in national and international journals and conferences and has authored 11 books on electronics subjects. Her research interests include solid-state lighting, mathematical modeling and simulation, and PC/microcontroller-based instrumentation.

J.A. Bangali, born in 1974, is an Asst. Prof. in the Department of Electronics, Kaveri College of Science and Commerce, Erandwane, Kothrud, Pune, India. She received her M.Sc. in 1996 and M.Phil. in 2009 from the University of Pune. She has around 10 publications in national and international journals and conferences. Her research interests include smart-space automation, lighting research, PC/microcontroller-based instrumentation, and sensors.

A.D. Shaligram, born in 1960, is Professor and Head of the Department of Electronic Science, University of Pune, Pune, India. He received his M.Sc. in 1981 and Ph.D. in 1986 from the same university. He has more than 70 publications in national and international journals. His research interests include fiber-optic and optical waveguide sensors, PC/microcontroller-based instrumentation, and biomedical instrumentation and sensors.


TRACKING OF INTRUDERS ON GEOGRAPHICAL MAP USING IDS ALERT
Manish Kumar1, M. Hanumanthappa2, T.V. Suresh Kumar3
1 Asst. Professor, Dept. of Master of Computer Applications, M. S. Ramaiah Institute of Technology, Bangalore-560 054, India
2 Dept. of Computer Science and Applications, Jnana Bharathi Campus, Bangalore University, Bangalore-560 056, India
3 Professor & Head, Dept. of Master of Computer Applications, M. S. Ramaiah Institute of Technology, Bangalore-560 054, India

ABSTRACT
An Intrusion Detection System (IDS) handles intrusions in computing environments by triggering alerts that let analysts take action to stop them. Knowing from where an intrusion originated, or where a spammer or suspected intruder is located, is key to identifying security threats to a system and its confidential information. In this paper we discuss a method that tracks intruders on a geographical map, based on the source IP address of the intruder. In today's Internet, the validity of the source address of an IP packet is an important issue, and the problem of IP spoofing alarms legitimate users. IP spoofing is a technique used to gain unauthorized access to computers, whereby the intruder sends messages with a source IP address indicating that the message comes from a trusted host. The paper also discusses some techniques to defend against IP spoofing.

KEYWORDS: Intrusion Detection, Geolocation, IP Spoofing.

I. INTRODUCTION

A computer intrusion is "an intentional event where an intruder gains access that compromises the confidentiality, integrity, or availability of computers, networks, or the data residing on them." The amount of damage done by an intruder can vary greatly: some intruders are malicious in nature, while others are merely curious and want to explore what is on a local network. Computer users must protect themselves from intrusion; while there is no method that eliminates intruders completely, some measures must be used to reduce intrusions. In the event that an intrusion has taken place, the last line of defense is an intrusion detection system, which can alert the system administrator that the system has been breached. Once the intrusion detection system has detected an event, an intrusion investigation should be conducted to assess the extent of the intrusion and any damage that may have occurred, and to locate the source of the attack. While intrusion prevention, detection, and tolerance all play an important role in addressing today's network-based intrusions, they are all passive and not adequate to solve the intrusion problem [22]. One fundamental problem is that they do not effectively eliminate or deter network-based intrusions; the best they can do is to avoid being victims of network-based intrusions temporarily. Because they do not address the root cause of intrusions, namely the intruders, those intruders can always explore new system vulnerabilities, find new accomplices (potentially insiders), and launch new attacks from different places. And because intrusion prevention, detection, and tolerance do not effectively address the problem of compromised
system recovery, intruders can use compromised systems as new bases for further intrusions. What is needed is an effective way to hold network-based intruders accountable for their intrusions. So far, most computer security research on network-based intrusions has focused on prevention and detection. Intrusion response has been an afterthought and is generally limited to logging, notification, and disconnection at the local host; any further response usually involves manual interaction such as off-line analysis, reporting incidents to CERT, and installing fixes. Given today's high-speed networks, many network-based intrusions can be very fast and short across wide areas of the network. The current ad hoc, manual intrusion response process neither provides the needed real-time response nor scales with the network. Furthermore, because current automated intrusion responses are passive and lack network-wide reach, they do not eliminate or even effectively deter network-based intrusions. In this paper, we propose tracing intruders on a geographical map using the IP addresses in the log report to address the problem of network-based intrusions. The approach differs from existing intrusion defense approaches in that it targets the root cause of the problem: the intruders. It collaborates with the IDS and is triggered automatically when intrusions are detected. By actively tracing intrusions in real time, it helps to apprehend intruders on the spot and hold them accountable, and is therefore likely a more effective deterrent against further intrusion attempts.

II. INTRUSION DETECTION SYSTEM IN PRACTICE

IDS have historically been categorized as network-, host-, anomaly-, or misuse (signature)-based. This simple categorization is, however, no longer adequate: IDS can also be distributed or centralized, passive or reactive, application-specific or general-purpose, and can focus on real-time or after-the-event analysis [9] [14]. The five IDS described later are not intended as exemplars of these various categories, but are presented as an indication of how major trends have developed. Almost all IDS output a small summary line about each detected attack [15]. This summary line typically contains the following information fields:
1. Time/date;
2. Sensor IP address;
3. Vendor-specific attack name;
4. Standard attack name (if one exists);
5. Source and destination IP addresses;
6. Source and destination port numbers;
7. Network protocol used by the attack.
Other, more general information is also often provided, such as a textual description of the attack, identification of the software attacked, information identifying the patches required to fix the vulnerability, and advisories regarding the attack. Attacks on systems and networks have increased as rapidly changing technology, systems integration, global networks, information warfare, and hacker boredom have become prevalent. For a long time, enterprises have relied on solutions like intrusion detection to alert them to potential attacks; as the Internet and e-business have evolved, so has the need for a more proactive solution. In this paper we discuss techniques for detecting an intrusion, tracking the intruder's IP address, and showing the intruder's location on a geographical map. The most challenging part is tracking the intruder's IP address: intruders often use spoofed IP addresses to hide their identity, so we first need to determine the correct IP address of the intruder. The following section discusses various techniques for doing so.
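The summary fields listed above can be extracted mechanically from an alert line. A minimal sketch for a Snort-style "fast" alert line follows; the line layout and the sample values are assumptions for illustration, and a real sensor's format may differ:

```python
import re

# Regex for a Snort-style "fast" alert line (layout assumed; adjust to
# your sensor's actual output format).
ALERT_RE = re.compile(
    r"(?P<timestamp>\S+)\s+\[\*\*\]\s+\[(?P<sig>[\d:]+)\]\s+(?P<name>.*?)\s+\[\*\*\].*?"
    r"\{(?P<proto>\w+)\}\s+"
    r"(?P<src_ip>[\d.]+):(?P<src_port>\d+)\s+->\s+(?P<dst_ip>[\d.]+):(?P<dst_port>\d+)"
)

def parse_alert(line):
    """Return the summary fields as a dict, or None if the line doesn't match."""
    m = ALERT_RE.search(line)
    return m.groupdict() if m else None

# Illustrative alert line (documentation IP addresses, invented signature):
sample = ('03/12-09:15:02.123456 [**] [1:1000001:1] Possible port scan [**] '
          '[Classification: Attempted Recon] [Priority: 2] {TCP} '
          '203.0.113.5:4444 -> 192.168.1.10:80')
fields = parse_alert(sample)
```

Each parsed dict then carries the attack name, protocol, and source/destination addresses and ports described above.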

III. METHODS OF IP TRACEBACK

The purpose of IP traceback is to identify the true IP address of a host originating attack packets [21]. Normally, we could do this by checking the source IP address field of an IP packet; because a sender can easily forge this information, however, it can hide its identity. If we can identify the true IP address of the attack host, we can also obtain information about the organization from which the attack originated, such as its name and the network administrator's e-mail address. With IP traceback technology, which traces an IP packet's path through the network, we can find the true IP address of
the host originating the packet. To implement IP traceback, a network administrator updates the firmware of the existing routers to a traceback-capable version, or deploys special tracing equipment at some points in the network. Existing IP traceback methods can be categorized as proactive or reactive tracing.

3.1 Proactive Tracing
Proactive tracing prepares information for tracing while packets are in transit. If packet tracing is required, the attack victim (or target) can refer to this information to identify the attack source. Two proactive tracing methods, packet marking and messaging, are explained below.

Packet marking: In packet marking, packets store information about each router they pass as they travel through the network. The recipient of a marked packet can use this router information to follow the packet's path back to its source. Routers must be able to mark packets, however, without disturbing normal packet processing. With IP's record-route option, for example, the IP packet can store router addresses in its option field. In another proposed approach, the router probabilistically writes its identifier into the packet's IP header identification field, so each marked packet carries information about only one or two routers on the attack path. In a flooding-style attack, however, the target network receives many attack packets and can collect enough information to identify the attack path. The identification field is normally used to reassemble fragmented packets, but because few fragments are created on the Internet, modifying it rarely affects normal packet processing.

Messaging: In messaging approaches, routers create and send messages containing information about the forwarding nodes a packet travels through. An example is the Internet Engineering Task Force's proposed Internet Control Message Protocol (ICMP) traceback message: a router creates an ICMP traceback message containing part of a traversing IP packet and sends it to the packet's destination. We can identify the traversed router by looking for the corresponding ICMP traceback message and checking its source IP address. Because creating an ICMP traceback message for every packet would increase network traffic, each router creates ICMP traceback messages only for the packets it forwards with a probability of 1/20,000. If an attacker sends many packets (for example, in a flooding-style attack), the target network can collect enough ICMP traceback messages to identify the attack path.
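The probabilistic marking idea can be illustrated with a small simulation. This is a node-sampling sketch with a single mark field; the marking probability, router names, and packet count are illustrative, not taken from any particular proposal:

```python
import random

random.seed(7)   # fixed seed so the illustration is repeatable

def send_packet(path, p=0.25):
    """Node-sampling packet marking: each router on the path overwrites the
    packet's single mark field with its own ID with probability p."""
    mark = None
    for router in path:          # path runs attacker -> victim
        if random.random() < p:
            mark = router
    return mark

def rank_marks(path, n_packets=20000, p=0.25):
    """The victim counts how often each router ID appears as a mark. Under
    node sampling the router nearest the victim is marked most often, so
    sorting by frequency recovers the path order."""
    counts = {}
    for _ in range(n_packets):
        m = send_packet(path, p)
        if m is not None:
            counts[m] = counts.get(m, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

attack_path = ["R1", "R2", "R3", "R4"]   # hypothetical routers; R4 is next to the victim
inferred = rank_marks(attack_path)        # nearest router ranks first
```

This mirrors the flooding-attack observation in the text: a single marked packet reveals little, but many packets together let the victim reconstruct the path.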

3.2 Reactive Tracing
Reactive tracing starts after an attack is detected. Most methods trace the attack path from the target back to its origin; the challenges are to develop effective traceback algorithms and packet-matching techniques. Various proposals attempt to solve these problems.

3.3 Hop-by-hop tracing.
In hop-by-hop tracing, a tracing program such as MCI's DoS Tracker logs into the router closest to the attacked host and monitors incoming packets. If the program detects a spoofed packet (by comparing the packet's source IP address with the router's routing-table information), it logs into the upstream routers and monitors packets there. If the spoofed flooding attack is still in progress, the program detects the spoofed packet again on one of the upstream routers. This procedure is repeated recursively on the upstream routers until the program reaches the attack's actual source.

3.4 Hop-by-hop tracing with an overlay network.
In hop-by-hop tracing, the more hops there are, the more tracing processes are likely to be required. As a result, a packet takes longer to trace, and the necessary tracing information might be lost before the process is complete. To decrease the number of hops required for tracing, one approach builds an overlay network by establishing IP tunnels between edge routers and special tracking routers and then rerouting IP packets to the tracking routers via the tunnels. Hop-by-hop tracing is then performed over the overlay network.

3.5 IPsec authentication.
Another proposed reactive tracing technique is based on existing IP security protocols. With this method, when the IDS detects an attack, the Internet Key Exchange (IKE) protocol establishes IPsec
security associations (SAs) between the target host and some routers in the administrative domain (for example, autonomous-system boundary routers). Routers at the SA ends add an IPsec header and a tunnel IP header containing the router's IP address to traversing packets. If the attack continues and one of the established SAs authenticates a subsequent attack packet, the attack must come from a network beyond the corresponding router. The receiver checks the source IP address of the tunnel IP header to find out which routers the attack packet traversed; repeating this process recursively, the receiver finally reaches the attack source. Because this technique uses the existing IPsec and IKE protocols, no new protocol is needed for tracing within an administrative domain. To trace beyond the administrative domain, however, a special collaboration protocol is needed; the IETF Intrusion Detection Working Group (IDWG) is discussing such a protocol.

3.6 Traffic pattern matching.
A fourth proposed reactive technique traces an attack path by comparing traffic patterns observed at the entry and exit points of the network against the network map.

IV. CURRENT ACTIVE RESEARCH ON IP TRACEBACK

Two network tracing problems are currently being studied: IP traceback and traceback across stepping-stones (or a connection chain). IP traceback aims to identify the origins of sequences of IP packets (for example, the origin of DDoS packets) when the source IP addresses of those packets are spoofed; it is usually performed at the network layer, with the help of routers and gateways. Traceback across stepping-stones aims to identify the origin of an attacker through a chain of connections (for example, connections established with telnet, rlogin, or ssh), which an attacker may use to hide his or her true origin when interacting with a victim host. Traceback across stepping-stones is beyond the network layer, since at each intermediate host the data is passed to the application layer in one connection and then resent to the network in the next connection. Research on IP traceback has been rather active since the late-1999 DDoS attacks [17,24,18], and several approaches have been proposed to trace IP packets to their origins. IP marking approaches enable routers to probabilistically mark packets with partial path information and try to reconstruct the complete path from the packets that contain the markings [10,16,4]. DECIDUOUS uses IPsec security associations and authentication headers to deploy secure authentication tunnels dynamically and trace back to the attacks' origins [19,1]. ICMP traceback (iTrace) proposes a new message type, the ICMP traceback (or iTrace) message, so that routers can generate iTrace messages to help the victim or its upstream ISP identify the source of spoofed IP packets [2]; an intention-driven variant reduces unnecessary iTrace messages and thus improves the performance of iTrace systems [3]. An algebraic approach transforms the IP traceback problem into a polynomial reconstruction problem and uses techniques from algebraic coding theory to recover the true origin of spoofed IP packets [13]. An IP overlay network named CenterTrack selectively reroutes interesting IP packets directly from edge routers to special tracing routers, which can easily determine the ingress edge router by observing from which tunnel the packets arrive [12]. A Source Path Isolation Engine (SPIE) has been developed that stores the message digests of recently received IP packets and can reconstruct the attack paths of given spoofed IP packets [11,7]. There are other techniques and issues related to IP traceback (e.g., approximate traceback [8], legal and societal issues [23], vendors' solutions [5]); an archive of related papers can be found at [20]. Though necessary to make attackers accountable (especially for DDoS attacks, which involve large numbers of packets with spoofed source IP addresses), IP traceback has its limitations. In particular, IP traceback cannot go beyond the hosts that send the spoofed IP packets, and a typical attacker will use a fair number of stepping-stones before finally launching, for example, a DDoS attack. Thus, identifying the source of IP packets alone is not sufficient to hold attackers responsible for their actions. Similar to IP traceback, there have been active research efforts on tracing intruders across stepping-stones. In general, approaches for traceback across a connection chain can be divided, based on the source of tracing information, into two categories: host-based and network-based. In addition, depending on how the traffic is traced, traceback approaches can be further classified
into either active or passive. Passive approaches monitor and compare all the traffic all the time and do not select the traffic to be traced. Active approaches, on the other hand, dynamically control when, where, what, and how traffic is correlated through customized processing; they trace only selected traffic, and only when needed.
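The SPIE idea mentioned above, storing digests of recently forwarded packets so a victim can later ask a router "did this packet pass through you?", can be sketched as follows. A plain hash set stands in for SPIE's space-efficient Bloom filters, and the choice of invariant fields is illustrative:

```python
import hashlib

class DigestTable:
    """Minimal sketch of a SPIE-style digest store at one router. Real SPIE
    hashes invariant packet fields into Bloom filters; a Python set and
    SHA-256 stand in here for illustration."""
    def __init__(self):
        self.digests = set()

    @staticmethod
    def digest(src, dst, proto, payload):
        # Hash fields that do not change hop-to-hop (illustrative choice).
        invariant = f"{src}|{dst}|{proto}|{payload[:8]}".encode()
        return hashlib.sha256(invariant).hexdigest()[:16]

    def record(self, src, dst, proto, payload):
        self.digests.add(self.digest(src, dst, proto, payload))

    def saw(self, src, dst, proto, payload):
        return self.digest(src, dst, proto, payload) in self.digests

router = DigestTable()
router.record("203.0.113.5", "192.168.1.10", "TCP", "GET / HTTP/1.0")
seen = router.saw("203.0.113.5", "192.168.1.10", "TCP", "GET / HTTP/1.0")  # True
```

Querying every router this way lets the victim reconstruct which routers a given attack packet traversed, even when its source address was spoofed.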

V. GEOLOCATION TRACKING OF INTRUDER’S IP ADDRESS

After tracking the IP address of the intruder, our next objective is to find the intruder's geolocation. IP geolocation is the technique of determining a user's geographic latitude and longitude, and by inference the city, region, and nation, by comparing the user's public Internet IP address with the known locations of other, electronically neighbouring servers and routers. An IDS can detect the intrusion and we can find the IP address of the intruder, but the IP address alone does not tell us from which place the attack was generated.

5.1 Advantage of Geolocation Tracking
Tracking the intruders' IP addresses and plotting the traces on a geographical map (Figure 1) gives a clear picture of whether the attack is distributed and initiated from multiple countries, or initiated from one specific country or region. This can be vital information for the organization when deciding on further action or precautionary measures.

Figure 1: Intruder’s IP address tracking on Map

5.2 System Architecture
The overall system (Figure 2) works on IDS alert analysis. Each alert generated by the IDS is passed to the IDS alert log report. All the alerts in the log report are further analyzed to track the intruder's source IP address. Once the correct source IP address of the intruder is confirmed, it is passed to the API which maps the source IP address on the geographical map.
Intrusion Detection System → IDS Alerts Log Report → Intruder's IP Address Validator → Mapping the Intruder's IP Address on Geographical Map

Figure 2: Architecture of Intruders Geographical Location Mapping System

5.3 Implementation Detail
We have implemented the system using Snort, with the Google API for geolocation mapping of intruders. Snort is a well-known open-source IDS that detects intrusion events and logs them to an alert file. The intruder's IP address is analyzed and traced back, and the traced IP address is passed to the Google Geolocation API, which enables a web application to:
• Obtain the user's current position, using the getCurrentPosition method
• Watch the user's position as it changes over time, using the watchPosition method
• Quickly and cheaply obtain the user's last known position, using the lastPosition property
The Geolocation API provides the best estimate of the user's position using a number of sources, called location providers. These providers may be onboard (GPS, for example) or server-based (a
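As an illustration of the alert-analysis step described above, the intruder's source IP address can be pulled out of Snort "fast"-format alert lines with a small parser. This is only a sketch: the sample alert line, rule text and addresses below are hypothetical.

```python
import re

# Matches the address pair at the end of a Snort "fast" alert line, e.g.
#   ... {TCP} 203.0.113.7:4444 -> 192.0.2.10:80
ADDR_RE = re.compile(
    r"\{\w+\}\s+(\d{1,3}(?:\.\d{1,3}){3})(?::\d+)?\s+->\s+(\d{1,3}(?:\.\d{1,3}){3})(?::\d+)?"
)

def source_ips(alert_lines):
    """Extract the intruder (source) IP address from each alert line."""
    ips = []
    for line in alert_lines:
        m = ADDR_RE.search(line)
        if m:
            ips.append(m.group(1))
    return ips

sample = [
    '01/28-10:12:31.000000 [**] [1:1000001:1] Hypothetical scan rule [**] '
    '[Priority: 2] {TCP} 203.0.113.7:4444 -> 192.0.2.10:80',
]
print(source_ips(sample))  # each extracted IP would then go to the geolocation API
```

In the full pipeline of Figure 2, the returned list would be validated and handed to the mapping API.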

network location provider). The getCurrentPosition and watchPosition methods support an optional parameter of type PositionOptions which lets you specify which location providers to use.

VI. EVALUATION

The geolocation of intruders is obtained by tracking their IP addresses using databases that map Internet IP addresses to geographic locations. Google uses MaxMind's database for mapping IP addresses to a geographical location [6]. They claim it is 99% accurate. The fine print is that it is 99% accurate in determining the country; pinpointing the exact position is still a challenging issue which needs to be addressed.
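Such databases store IP address ranges with an associated location, so a lookup reduces to a binary search over the range start addresses. A minimal sketch with made-up ranges (the real MaxMind database format and contents differ):

```python
import bisect
import ipaddress

# Hypothetical (start, end, country) ranges, sorted by start address.
RANGES = [
    (int(ipaddress.ip_address("192.0.2.0")),    int(ipaddress.ip_address("192.0.2.255")),    "AU"),
    (int(ipaddress.ip_address("198.51.100.0")), int(ipaddress.ip_address("198.51.100.255")), "IN"),
    (int(ipaddress.ip_address("203.0.113.0")),  int(ipaddress.ip_address("203.0.113.255")),  "US"),
]
STARTS = [r[0] for r in RANGES]

def country_of(ip):
    """Return the country code for an IP, or None if no range covers it."""
    n = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(STARTS, n) - 1   # rightmost range starting at or before n
    if i >= 0 and RANGES[i][0] <= n <= RANGES[i][1]:
        return RANGES[i][2]
    return None
```

This country-level resolution is exactly what the 99%-accuracy claim refers to; finer positions require additional data sources.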

VII. CONCLUSIONS

Our system is able to trace intruders on a geographical map, but the whole system depends on IP traceback. Traceback has several limitations, such as the problem of tracing beyond corporate firewalls: to accomplish IP traceback, we need to reach the host where the attack originated. Another limitation relates to the deployment of traceback systems. Most traceback techniques require altering the network, including adding router functions and changing packets. To promote traceback approaches, we need to remove any drawbacks to implementing them. Moreover, even if IP traceback reveals an attack's source, the source itself might have been used as a stepping stone in the attack. IP traceback methods cannot identify the ultimate source behind the stepping stone; however, techniques to trace attacks exploiting stepping stones are under study. Some operational issues must also be solved before IP traceback can be widely deployed. To trace an attack packet through different networks, for example, there must be a common policy for traceback. We also need guidelines for dealing with traceback results to avoid infringing on privacy. Furthermore, we need to consider how to use information about an attack source identified by IP traceback. In the future, we will likely need to focus on the authenticity of results from IDSs and IP traceback systems.

REFERENCES
[1] Computer Emergency Response Team. (2000) CERT Advisory CA-2000-01 Denial-of-Service Developments. http://www.cert.org/advisories/CA-2000-01.html.
[2] Computer Emergency Response Team. (1999) Results of the Distributed-Systems Intruder Tools Workshop. http://www.cert.org/reports/dsit_workshop.pdf.
[3] Computer Security Institute. (2001) Annual CSI/FBI Computer Crime and Security Survey. http://www.gocsi.com/prelea_000321.htm.
[4] H. Y. Chang, R. Narayan, S. F. Wu, B. M. Vetter, X. Y. Wang et al. (1999) DecIdUouS: Decentralized Source Identification for Network-Based Intrusions. In Proceedings of the 6th IFIP/IEEE International Symposium on Integrated Network Management.
[5] H. Jung et al. (1993) Caller Identification System in the Internet Environment. In Proceedings of the 4th USENIX Security Symposium.
[6] http://netsolutions.net.au/web-design/geo-targeting-by-ip-address/ (Accessed on 28/01/2012)
[7] J. D. Howard. (1997) An Analysis of Security Incidents on The Internet 1989-1995. PhD Thesis. http://www.cert.org/research/JHThesis/Start.html.
[8] J. Ioannidis and M. Blaze. (1993) The Architecture and Implementation of Network-Layer Security under Unix. In Proceedings of the 4th USENIX Security Symposium.
[9] K2. ADMmutate README. ADMmutate source code distribution, Version 0.8.4. http://www.ktwo.ca/c/ADMmutate-0.8.4.tar.gz (Jan 2002)
[10] K. L. Calvert, S. Bhattacharjee and E. Zegura. (1998) Directions in Active Networks. IEEE Communications Magazine.
[11] L. T. Heberlein, K. Levitt and B. Mukherjee. (1992) Internetwork Security Monitor: An Intrusion-Detection System for Large-Scale Networks. In Proceedings of the 15th National Computer Security Conference.
[12] M. B. Greenwald, S. K. Singhal, J. R. Stone and D. R. Cheriton. (1996) Designing an Academic Firewall: Policy, Practice and Experience with SURF. In Internet Society Symposium on Network and Distributed System Security (NDSS '96).
[13] N. G. Duffield and M. Grossglauser.
(2000) Trajectory Sampling for Direct Traffic Observation. In Proceedings of ACM SIGCOMM 2000.
[14] P. Stephenson. (2000) "The Application of Intrusion Detection Systems in a Forensic Environment". In Proceedings of the RAID 2000 Conference, Toulouse, France.

[15] Peter Sommer. "Intrusion Detection Systems as Evidence". Presented at the RAID 98 Conference, Louvain-la-Neuve, Belgium.
[16] R. H. Campbell, Z. Liu, M. D. Mickunas, P. Naldurg and S. Yi. (2000) Seraphim: Dynamic Interoperable Security Architecture for Active Networks. In Proceedings of IEEE OPENARCH 2000.
[17] S. M. Bellovin. (2000) ICMP Traceback Messages. Internet Draft: draft-bellovin-itrace-00.txt.
[18] S. Bhattacharjee, K. L. Calvert and E. W. Zegura. (1997) An Architecture for Active Networking. In High Performance Networking (HPN '97), White Plains, NY.
[19] S. Staniford-Chen and L. T. Heberlein. (1995) Holding Intruders Accountable on the Internet. In Proceedings of the IEEE Symposium on Security and Privacy.
[20] S. Kent and R. Atkinson. (1998) Security Architecture for the Internet Protocol. IETF RFC 2401.
[21] Tatsuya Baba and Shigeyuki Matsuda. (2002) "Tracing Network Attacks to Their Sources". IEEE Internet Computing, March-April 2002.
[22] Wang, X., Reeves, D., & Wu, S. F. (n.d.). Tracing Based Active Intrusion Response. http://arqos.csc.ncsu.edu/papers/2001-09-sleepytracing-jiw.pdf
[23] W. Jansen, P. Mell, T. Karygiannis and D. Marks. (1999) Applying Mobile Agents to Intrusion Detection and Response. NIST Interim Report (IR) 6416.
[24] W. Bender, D. Gruhl, N. Morimoto and A. Lu. (1996) Techniques for Data Hiding. IBM Systems Journal, Vol. 35, Nos. 3&4.

AUTHORS
Manish Kumar is working as an Asst. Professor in the Department of Master of Computer Applications, M. S. Ramaiah Institute of Technology, Bangalore, India. His areas of interest are Cryptography and Network Security, Computer Forensics, Mobile Computing and e-Governance. His specialization is in Network and Information Security. He has also worked on R&D projects related to theoretical and practical issues in a conceptual framework for e-mail, website and cell phone tracking, which could assist in curbing misuse of Information Technology and cyber crime. He has published several papers in international and national conferences and journals, and has delivered expert lectures at various academic institutions.

M. Hanumanthappa is currently working as an Associate Professor in the Department of Computer Science and Applications, Bangalore University, Bangalore, India. He has over 15 years of teaching (postgraduate) as well as industry experience. His areas of interest include Data Structures, Database Management Systems, Data Mining and Programming Languages. Besides, he has conducted a number of training programs and workshops for computer science students and faculty. He is also a member of the Board of Studies / Board of Examiners for various universities in Karnataka, India, and guides research scholars in the fields of Data Mining and Network Security.

T. V. Suresh Kumar is working as Professor and Head, Department of Master of Computer Applications, M. S. Ramaiah Institute of Technology, Bangalore. He has delivered lectures at various organizations such as Honeywell, SAP Labs, Wipro Technologies, DRDO, Mphasis, Indian Institute of Science (Proficience), HCL Technologies, L&T Infotech and Nokia, and at various universities and academic institutions. His areas of interest are Software Performance Engineering, Object Technology and Distributed Systems. He has published several research papers in various national and international conferences and journals.
He has carried out software projects for various organizations. He is a life member of ISTE and a member of IEEE.


DESIGN AND ANALYSIS OF MULTIDIELECTRIC LAYER MICROSTRIP ANTENNA WITH VARYING SUPERSTRATE LAYER CHARACTERISTICS
Samir Dev Gupta1, Amit Singh2
1 Department of Electronics and Communication Engineering, JIIT Noida, U.P., India
2 Agilent Technologies, Manesar, Haryana, India

ABSTRACT
The multidielectric layer microstrip antenna structure involves the addition of a superstrate layer over the substrate. It is important that the superstrate layer acts as a part of the antenna. The design of the multidielectric layer microstrip patch antenna, based on different thicknesses and permittivities of the superstrate layer, has a significant effect on gain and antenna efficiency. The designer must, however, ensure that the superstrate layer does not adversely affect the performance of the antenna. With a proper choice of substrate and superstrate layer thickness, a significant increase in gain can be achieved for practical applications. This paper discusses a set of closed-form expressions for the resonant frequency for the general case of multidielectric layers. The multidielectric layer microstrip antenna is analyzed for applications where the physical properties of the antenna, viz. permittivity, patch dimensions, substrate height and superstrate layer parameters, significantly affect the accuracy of the resonant frequency. Considering the effect of the superstrate layer, a method for accurately determining the resonant frequency of such structures has been obtained using variation of the patch dimensions. The antenna performance has been evaluated for a variety of cases of superstrate layer permittivity and thickness.

KEYWORDS: Microstrip Antenna, Multidielectric Layer, Resonant Frequency, Permittivity, Superstrate Layer

I. INTRODUCTION

Microstrip antennas have an inherent limitation of narrow bandwidth. When a microstrip antenna is covered with a superstrate (cover) dielectric layer, properties such as resonance frequency, gain and bandwidth change, which may seriously degrade system performance [1-4]. By choosing the thicknesses of the substrate layers and the superstrate layer, a very large gain can be realized [5-9]. Shun-Shi Zhong, Gang Liu and Ghulam Qasim [1] have described the significance of accurately determining the resonant frequency in the design of a microstrip antenna with multidielectric layers. In view of the inherent narrow bandwidth of microstrip patch antennas, the antenna with a multidielectric layered structure must therefore be designed to ensure that there is a minimum drift in the resonant frequency [10]. Theoretical methods for calculating the resonant frequency of such structures have been reported using the variational technique, the multiport network approach, the spectral domain analysis and other full-wave analysis methods. Numerical methods are highly accurate but too laborious and time-consuming for direct use in CAD programs. A generalization of the transmission line model treats a rectangular microstrip antenna with several dielectric layers as a multilayer microstrip line. With a quasi-TEM wave propagating in the microstrip line, a quasistatic value of the effective permittivity εeff is derived by means of the conformal mapping technique. Relatively simple expressions based on the conformal mapping technique and the transmission line model are therefore used even for more complicated multidielectric structures. The conformal transformation used by Wheeler [11] and by Svacina [4] has been adopted. For the general case of multidielectric layers, it has been suggested to obtain a set of closed-form expressions for the resonant frequency. The frequency

dependence of εeff and the open-end extension of the patch results in the determination of the resonant frequency of such antenna structures with good accuracy. Samir Dev Gupta et al. [12] have shown that the dimensions of the substrate and the patch of the microstrip antenna, and the corresponding calculated resonant frequency, are such that a very small variation in the antenna dimensions results in a very significant change in the actual resonant frequency. Since these calculations are carried out repetitively, the errors are cumulative at every step, resulting in a notable change in the frequency. To minimize the compounding errors, an algorithm has been used [12]; the algorithm minimizes errors at each step, thereby providing a highly accurate result. In the following sections the discussion and analysis are devoted to: (i) the design of a multidielectric layer microstrip antenna at 10 GHz, where the multidielectric layer design considers the effect of the cover layer on antenna performance; (ii) performance analysis of the multidielectric antenna based on parameters involving combinations of superstrate layer permittivity and thickness; (iii) analysis of the characteristics of the designed antenna with and without the superstrate layer; and (iv) analysis of the axial ratio and of the improvement in bandwidth as the superstrate thickness changes.

II. EFFECT OF CHANGING SUPERSTRATE LAYER THICKNESS ON THE ANTENNA PARAMETERS

The microstrip antenna under consideration is designed to operate at a frequency of 10 GHz. The design is based on various selection criteria such as the thickness of the substrate and the superstrate, width and length of the element. Effects on antenna parameters with respect to the change in thickness of the superstrate layer have also been analyzed in the following subsections.

2.1 Substrate selection in the design of the patch antenna
A suitable dielectric substrate of appropriate thickness and loss tangent is chosen. A thicker substrate is mechanically strong, with improved impedance bandwidth [13]; however, it increases weight and surface wave losses. The substrate's dielectric constant εr plays a role as important as the substrate thickness. A low value of εr for the substrate increases the fringing field of the patch and thus the radiated power. A high loss tangent increases the dielectric loss and therefore reduces the antenna efficiency. The substrate parameters chosen are as follows: the top layer is RT/Duroid 5880 with a thickness of 0.787 mm, permittivity εr = 2.2 and loss tangent tan δ = 0.0009; the bottom layer dielectric is RT/Duroid 5870 with a thickness of 0.787 mm, permittivity εr = 2.33 and loss tangent tan δ = 0.0012.

2.2 Element width and length
The selection criteria for an efficient radiator with a patch size that is not too large are: (i) a low value of the patch width W, and (ii) a ratio between the width and the length of the patch that leads to an antenna with good radiation efficiency. For the antenna to be an excellent radiator, the ratio should satisfy 1 < W/L < 2 [14], [15]. The patch dimensions determine the

resonant frequency. Various parameters in the design of a microstrip antenna are critical because of the inherent narrow bandwidth of the patch. Using the algorithm [12], we first calculate the antenna design parameters. The calculated length and width of the patch are 8.69638 mm and 9.6 mm respectively. The effective permittivity obtained is 2.15644, for a net substrate height of 1.574 mm. Accordingly, the calculated Rin (the value of the patch resistance at the input slot) comes out to be 326.8508 ohm. Taking the condition h < λ0/10 into account, we obtain the values of the self-conductance G1 and susceptance B1 [16] using equations (1) and (2) respectively.

G1 = (W / (120 λ0)) × [1 − (1/24) (k0 h)²]    … (1)

B1 = (W / (120 λ0)) × [1 − 0.636 ln(k0 h)]    … (2)

Treating the element as two narrow slots, one at each end of the line resonator, the interaction between the two slots is accounted for by defining a mutual conductance. Using the far-field expressions, the directivity of a patch and the mutual conductance between the slots are calculated [17].

G3 = (1 / (120 π²)) ∫₀^π [ sin((k0 W / 2) cos θ) / cos θ ]² sin³θ dθ    … (3)

G12 = (1 / (120 π²)) ∫₀^π [ sin((k0 W / 2) cos θ) / cos θ ]² J0(k0 L sin θ) sin³θ dθ    … (4)

Therefore the calculated input resistance of the patch is

Rin = 1 / (2 (G3 + G12))    … (5)
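As a sanity check on equations (3)–(5), the integrals can be evaluated numerically. The sketch below uses a simple midpoint rule and computes J0 from its integral form; these numerical choices are ours, not the paper's, and the dimensions are the design values quoted above (f = 10 GHz, W = 9.6 mm, L = 8.69638 mm).

```python
import math

def j0(x, n=200):
    """Bessel J0 via its integral form: J0(x) = (1/pi) * int_0^pi cos(x sin t) dt."""
    h = math.pi / n
    return h / math.pi * sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n))

def slot_conductances(f, W, L, n=1000):
    """Self and mutual conductance of the two radiating slots, equations (3) and (4)."""
    lam = 3e8 / f
    k0 = 2.0 * math.pi / lam
    h = math.pi / n
    g3 = g12 = 0.0
    for k in range(n):
        th = (k + 0.5) * h
        c = math.cos(th)
        # sin((k0 W/2) cos(theta)) / cos(theta), with its limit k0 W/2 near theta = pi/2
        core = (math.sin(0.5 * k0 * W * c) / c if abs(c) > 1e-9 else 0.5 * k0 * W) ** 2
        g3 += core * math.sin(th) ** 3 * h
        g12 += core * j0(k0 * L * math.sin(th)) * math.sin(th) ** 3 * h
    return g3 / (120.0 * math.pi ** 2), g12 / (120.0 * math.pi ** 2)

G3, G12 = slot_conductances(10e9, 9.6e-3, 8.69638e-3)
R_in = 1.0 / (2.0 * (G3 + G12))   # equation (5); close to the ~327 ohm quoted above
```

The numerical result lands within a few ohms of the paper's 326.8508 ohm figure, which supports the reconstruction of equations (3)–(5).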

Z = [120 π / (2 √2 π √(εr + 1))] × ln[1 + (4h / We)(A + B)]    … (6)

where We = W + ∇We and

∇We = ∇W × (1 + 1/εeff) / 2    … (7)

We obtain ∇W from the following equation:

∇W = (t / π) × ln[ 4e / √( (t/h)² + (1 / (π (W/t + 1.1)))² ) ]    … (8)

Also the parameters A and B are obtained using the following equations.


   14 +  8 ε   eff A= 11   

where ε eff

0. 5   1         1 +   ε eff   × 4 h  and B =  A2 +  2 ×π     2  We                   2  ε r + 1 ε r − 1  1  + 0.041 −  W       = + 1   h  2 2 2      1 + 12 h      W      

... (9)

      

... (10)
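Equations (6)–(10) can be collected into one routine. The sketch below assumes the feed-line case quoted in the text (W = 1.5 mm, h = 1.574 mm, εr ≈ 2.2); the conductor thickness t ≈ 17 µm is an assumption, as it is not stated in the paper.

```python
import math

def microstrip_z0(W, h, t, er):
    """Characteristic impedance per equations (6)-(10) (Wheeler/Hammerstad forms)."""
    # Equation (10): effective permittivity
    e_eff = (er + 1) / 2 + (er - 1) / 2 * (
        (1 + 12 * h / W) ** -0.5 + 0.04 * (1 - W / h) ** 2
    )
    # Equation (8): conductor-thickness correction to the width
    dW = (t / math.pi) * math.log(
        4 * math.e / math.sqrt((t / h) ** 2 + (1 / (math.pi * (W / t + 1.1))) ** 2)
    )
    # Equation (7): effective width
    We = W + dW * (1 + 1 / e_eff) / 2
    # Equation (9)
    A = (14 + 8 / e_eff) / 11 * (4 * h / We)
    B = math.sqrt(A ** 2 + (1 + 1 / e_eff) / 2 * math.pi ** 2)
    # Equation (6)
    return (120 * math.pi / (2 * math.sqrt(2) * math.pi * math.sqrt(er + 1))
            * math.log(1 + (4 * h / We) * (A + B)))

Z = microstrip_z0(1.5e-3, 1.574e-3, 17e-6, 2.2)  # close to the ~98 ohm feed line quoted in the text
```

With these inputs the routine returns roughly 97 ohm, close to the 98.3545 ohm feed-line impedance quoted in the text, which supports the reconstruction of the equations above.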

Thus, applying the formula for impedance matching, the matching line impedance is (326.8508 × 90)^0.5 = 171.512 ohm. The calculation of the strip impedance is based on the microstrip patch antenna parameters: the width W, the height h of the substrate having relative permittivity εr, and the thickness t of the patch. Considering the condition W/h > 1, the characteristic impedance of the strip is obtained from equation (6). The ADS-based calculator determines the impedance-matching feed line width, which works out to be around 0.31 mm. Design changes were incorporated based on the calculated patch length and width of 8.69638 mm and 9.6 mm respectively.

2.3 Impedance Calculation
The matching transformer impedance is 178.813 ohm with a width of 0.26 mm. Therefore the calculated patch impedance is

Patch impedance = (impedance matching transformer impedance)² / feed line impedance = (178.813)² / 98.3545 = 325.09025 ohm

The input impedance is obtained from the ADS design. The patch with width 9.6 mm and length 8.58 mm corresponds to an input impedance of 324.0673 ohm. Corresponding to the effective permittivity εeff = 2.15644, the input resistance of the patch is 326.8508 ohm. The impedance of the port used is 90 ohm. Hence the impedance line parameters are: width = 0.26 mm and height = 7.25 mm. The feed line impedance is 98.3545 ohm with a width of 1.5 mm, which agrees fairly well with the calculated value obtained through the design.
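The matching arithmetic above is the quarter-wave transformer relation Zt = √(Rin × Z0); a quick numerical check using the values quoted in the text:

```python
import math

R_in = 326.8508          # patch input resistance (ohm)
Z_port = 90.0            # port impedance (ohm)
Z_t = math.sqrt(R_in * Z_port)          # matching line impedance, ~171.51 ohm

Z_transformer = 178.813  # transformer impedance realised in the ADS design (ohm)
Z_feed = 98.3545         # feed line impedance (ohm)
patch_z = Z_transformer ** 2 / Z_feed   # ~325.09 ohm, close to the 326.85 ohm calculated
```

The small gap between 325.09 ohm and 326.85 ohm reflects the realised transformer and feed dimensions rather than the ideal values.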
The multidielectric microstrip antenna designed is seen to radiate 1.034 mW of power, with a directivity of 6.77 dB and a gain of 5.95 dB, thereby achieving an antenna efficiency of 87.88%. The antenna resonates at the designed frequency of 10 GHz at a return loss of -31.797 dB, as depicted in Figure 1. Resonance at the designed frequency, along with the significantly low return loss shown in Figure 1, confirms the accuracy of the feed design in achieving an impedance match, along with the clean radiation patterns Eφ and Eθ shown in Figure 2 and Figure 3 respectively. In addition, the accuracy of the algorithm used [12] ensures that the multidielectric layer microstrip antenna design overcomes anomalies likely to occur during fabrication, along with measurement errors. Importantly, operation in the X-band at 10 GHz for defence applications demands design accuracy for operational efficacy.


Figure 1. Return Loss of the Multidielectric Antenna without Superstrate layer

Figure 2. Radiation Pattern Eφ both front and back

Figure 3. Radiation Pattern Eθ both front and back

III. EFFECT OF CHANGING SUPERSTRATE LAYER THICKNESS ON THE ANTENNA PARAMETERS

3.1 Superstrates Selection

The addition of a cover layer over the substrate can also result in structural resonance, referred to as the resonance gain method [18]. Superstrates are selected to compare the effects of permittivity and thickness on various antenna parameters. The two superstrates, selected from Rogers Corporation data sheets, are: High permittivity: RT/Duroid 6010LM, with a relative permittivity of 10.2 and loss tangent = 0.0023. Low permittivity: RT/Duroid 5880LZ, with a relative permittivity of 1.96 and loss tangent = 0.0019. The low and high superstrate thicknesses under consideration are 0.254 mm and 2.54 mm respectively. Analysis of the antenna structure is based on the method of moments, using the Momentum tool in the Advanced Design System (ADS) of Agilent Technologies. The Momentum-based optimization process varies geometry parameters automatically to help achieve the desired antenna structure.

3.2 Analysis based on Superstrate Layer Properties
The effect of the superstrate layer on antenna parameters, including the radiation pattern, involves selecting combinations of superstrate layer properties, viz. high/low permittivity and thick/thin superstrates.
Table 1. Comparative Chart Depicting Effect of Superstrate Layer on Antenna Parameters

Parameter       | High Permittivity  | Low Permittivity   | High Permittivity | Low Permittivity
                | Thick Superstrate  | Thick Superstrate  | Thin Superstrate  | Thin Superstrate
Cover thickness | 2.54 mm            | 2.54 mm            | 0.254 mm          | 0.254 mm
Frequency       | 8.537 GHz          | 9.392 GHz          | 8.607 GHz         | 9.721 GHz
Return Loss     | -3.574 dB          | -14.405 dB         | -21.331 dB        | -35.041 dB
Power Radiated  | 0.1929 mW          | 0.8413 mW          | 0.9895 mW         | 1.015 mW
Directivity     | 8.842 dB           | 7.443 dB           | 7.1255 dB         | 6.876 dB
Gain            | 3.329 dB           | 5.883 dB           | 6.1425 dB         | 5.977 dB
Efficiency      | 37.65 %            | 79.04 %            | 86.20 %           | 86.92 %

The resonance gain method for practical applications has been studied using the moment method [19]. This resonance gain method involves a limited structural geometry and resonant frequency drift [18]. As described in section 3.1, the superstrate relative permittivity chosen is either 10.2 or 1.96, corresponding to high or low relative permittivity respectively. The superstrate thickness considered, 2.54 mm or 0.254 mm, corresponds to a thick or thin superstrate respectively. Table 1 shows the effect on antenna parameters due to changes in permittivity and thickness of the superstrate layer.

3.2.1 Case 1
The high permittivity thick superstrate has a marked effect on the antenna parameters: poor gain accompanied by very low antenna efficiency, of the order of 37.65%. In addition, for Case 1 the return loss is also very poor; at a frequency of 8.537 GHz it is -3.574 dB, as shown in Figure 4.

Figure 4. Return Loss for High Permittivity Thick Superstrate Antenna


Figure 5. Radiation Pattern Eφ both front and back

Figure 6. Radiation Pattern Eθ both front and back

The lossy nature of the antenna combined with the poor return loss is substantiated by the radiation pattern in both the φ and θ planes, showing minor lobes and a distorted pattern, as can be seen in Figure 5 and Figure 6. It is therefore concluded that the combination of a thick superstrate of high relative permittivity results in antenna behaviour not conforming to the design.

3.2.2 Case 2
Antenna parameters in the case of the low permittivity thick superstrate show improvement in antenna gain and efficiency. The return loss of -14.405 dB at 9.392 GHz shows marginal improvement, as seen in Figure 7. The radiation patterns in Figure 8 and Figure 9 show significant improvement, as does the radiated power output, compared to Case 1.


Figure 7. Return Loss for Low Permittivity Thick Superstrate Antenna

Figure 8. Radiation Pattern Eφ both front and back

Figure 9. Radiation Pattern Eθ both front and back

3.2.3 Case 3

When the high permittivity thin superstrate is used, it shows a significant improvement in the antenna parameters, viz. antenna gain and efficiency. The return loss of -21.331 dB at 8.607 GHz shows good improvement, as seen in Figure 10. The radiation patterns in Figure 11 and Figure 12 are similar to those of Case 2. A marginal increase in radiated power output is seen compared to Case 2.

Figure 10. Return Loss for High Permittivity Thin Superstrate Antenna.

Figure 11. Radiation Pattern Eφ both front and back

Figure 12. Radiation Pattern Eθ both front and back

3.2.4 Case 4
The low permittivity thin superstrate, as shown in Table 1, gives a drop in antenna gain and a marginal increase in efficiency. The return loss of -35.041 dB at 9.721 GHz shows significant improvement, as seen in Figure 13. Figure 14 and Figure 15 show the radiation plots. The radiation pattern has a perfect null in both planes. The radiated power output of 1.01 mW is the best among the four cases discussed above. Hence, for the antenna to resonate close to the desired frequency with a return loss better than -30 dB, a radiated power of around 1 mW, and a pattern with no sidelobes and a perfect null, Case 4, viz. the low permittivity thin superstrate, is the best choice for the multidielectric antenna design.

Figure 13. Return Loss for Low Permittivity Thin Superstrate Antenna

Figure 14. Radiation Pattern Eφ both front and back

Plots shown in Figures 4, 7, 10 and 13 indicate the changes in resonant frequency and the effect on return loss. There are variations in antenna directivity, gain, efficiency and, finally, the radiated power due to changes in superstrate characteristics. To minimize losses and resonate close to the desired design frequency, a thin, low permittivity superstrate with sufficient mechanical strength to withstand stress and weather vagaries is recommended for aerospace applications.


Figure 15. Radiation Pattern Eθ both front and back

3.2.5 Combined Result

Table 2 gives a bird's-eye view of the multidielectric antenna and the effect of relative permittivity and thickness of the superstrate (cover) layer on the antenna parameters and the radiation pattern in both planes.
Table 2. Multidielectric Layer Antenna Parameters with and without Superstrate Layer

Superstrate Type                    | Resonant Freq. (GHz) | S11 (dB)/(Norm.) | Power Radiated (mW) | Gain (dB)/(Norm.) | Directivity (dB)/(Norm.) | Efficiency (%)/(Norm.)
Without Superstrate                 | 10                   | -31.8/(0.91)     | 1.03                | 5.95/(0.97)       | 6.77/(0.77)              | 87.89/(1.0)
Low Permittivity Thin Superstrate   | 9.721                | -35/(1.0)        | 1.02                | 5.98/(0.97)       | 6.88/(0.78)              | 86.92/(0.99)
High Permittivity Thin Superstrate  | 8.607                | -21/(0.61)       | 1.0                 | 6.14/(1.0)        | 7.13/(0.81)              | 86.20/(0.98)
Low Permittivity Thick Superstrate  | 9.392                | -14.4/(0.4)      | 0.84                | 5.88/(0.96)       | 7.44/(0.84)              | 79.04/(0.90)
High Permittivity Thick Superstrate | 8.537                | -3.6/(0.1)       | 0.2                 | 3.24/(0.53)       | 8.84/(1.0)               | 36.63/(0.42)

Though the results are reasonably attractive for a low permittivity superstrate layer with a thickness of the order of 0.254 mm, this choice may lead to a fragile structure. Hence it is desirable to go for designs with a low permittivity superstrate layer of 2.54 mm thickness, which still offers good gain, antenna efficiency and radiated power output, implying low losses.

IV. EFFECT ON AXIAL RATIO DUE TO SUPERSTRATE THICKNESS & IMPROVEMENT IN BANDWIDTH

A very important parameter is the polarization of an antenna, which the axial ratio helps to quantify. The axial ratio is the ratio between the major and minor axes of an elliptically polarized wave, and it varies between one and infinity. For a linearly polarized antenna the axial ratio tends to infinity, while for a circularly polarized antenna it tends to 1. The normalized axial ratio observed for the multidielectric antenna, both without and with the superstrate layer, is of the order of 1; it is a linearly polarized multidielectric antenna. Similarly, with the superstrate layer incorporated in the multidielectric layer microstrip antenna, a bandwidth enhancement of the order of 25% to 43.8% is observed, with a

exception related to Case 1 (viz. the thick superstrate with high permittivity). Figure 16 shows a combined plot of all antenna parameters, normalized. The multidielectric layer antenna with the low permittivity thin superstrate achieves the best results vis-à-vis all other combinations of permittivity and superstrate thickness. However, as discussed in the previous sections, the combination with the next best results that suits practical applications is the multidielectric layer microstrip antenna with a low permittivity, thick superstrate.
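The axial ratio discussed above can be computed from the field components via the standard polarization-ellipse relations (textbook formulas, not taken from this paper): with tan γ = Ey/Ex and phase difference δ between the components, the ellipticity angle χ satisfies sin 2χ = sin 2γ · sin δ, and AR = 1/|tan χ|.

```python
import math

def axial_ratio(Ex, Ey, delta_deg):
    """Axial ratio (major/minor axis) of the polarization ellipse."""
    gamma = math.atan2(Ey, Ex)                          # amplitude ratio angle
    s = math.sin(2 * gamma) * math.sin(math.radians(delta_deg))
    chi = 0.5 * math.asin(s)                            # ellipticity angle
    t = abs(math.tan(chi))
    return float("inf") if t < 1e-12 else 1.0 / t

# Equal amplitudes with 90 deg phase difference -> circular polarization, AR = 1
# Any in-phase combination (delta = 0) -> linear polarization, AR -> infinity
```

This makes the two limiting cases in the text concrete: AR = 1 for circular polarization and AR → ∞ for linear polarization.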
Table 3. Aspect Ratio and Bandwidth Variations in Multidielectric Layer Antenna with and without Superstrate Layer

Superstrate Type                    | Aspect Ratio (dB), Normalized | Bandwidth (MHz)/(Normalized)
Without Superstrate                 | 1                             | 280/(0.7)
Low Permittivity Thin Superstrate   | 0.988949735                   | 400/(1.0)
High Permittivity Thin Superstrate  | 0.980851123                   | 350/(0.88)
Low Permittivity Thick Superstrate  | 0.899375385                   | 400/(1.0)
High Permittivity Thick Superstrate | 0.416799696                   | 0/(0)

Figure 16. Antenna Parameters plot for Multidielectric Antenna without and with Superstrate Layers.

V. CONCLUSION

Parameters of the microstrip antenna that inherently limit the gain, directivity, return loss and radiated power are improved upon. Considering the effect of the superstrate layer, a method for accurately determining the resonant frequency of such structures has been reported using the variation of patch dimensions. To overcome accurate but time-consuming and laborious numerical methods, an algorithm suitable for direct use in the design of the antenna is suggested. Data obtained from simulation with variation of the height of the transformed antenna, and its effect, can be used to predict the antenna parameters, including resonant frequency, return loss, power radiated, directivity and gain, for a multilayer microstrip antenna subject to the limits on the thickness of the superstrate layer (0.254 mm - 2.54 mm). The gain of a multilayered structure increases as the height of the cover layer is decreased. For a thin cover layer dielectric, conductor losses are dominant, while for a thicker cover layer, surface wave losses

are significant. It is found that the choice of a low-permittivity dielectric, both thin and thick, as the cover layer is suitable for applications requiring high antenna efficiency.

REFERENCES
[1] Shun-Shi Zhong, Gang Liu & Ghulam Qasim, (1994) "Closed Form Expressions for Resonant Frequency of Rectangular Patch Antennas With Multi-dielectric Layers", IEEE Transactions on Antennas and Propagation, Vol. 42, No. 9, pp. 1360-1363.
[2] H. A. Wheeler, (1964) "Transmission line properties of parallel wide strips by a conformal mapping approximation", IEEE Trans. Microwave Theory Tech., Vol. MTT-12, pp. 280-287.
[3] M. Kirschning & R. H. Jansen, (1982) "Accurate model for effective dielectric constant of microstrip with validity up to millimeter-wave frequencies", Electronics Letters, Vol. 18, pp. 272-273.
[4] J. Svacina, (1992) "Analysis of multilayer microstrip lines by a conformal mapping method", IEEE Trans. Microwave Theory Tech., Vol. 40, No. 4, pp. 769-772.
[5] N. G. Alexopoulos & D. R. Jackson, (1984) "Fundamental superstrate (cover) effects on printed circuit antennas", IEEE Trans. Antennas Propagation, Vol. AP-32, pp. 807-816.
[6] N. G. Alexopoulos & D. R. Jackson, (1985) "Gain enhancement methods for printed circuit antennas", IEEE Trans. Antennas Propagation, Vol. AP-33, pp. 976-987.
[7] H. Y. Yang & N. G. Alexopoulos, (1987) "Gain enhancement methods for printed circuit antennas through multiple substrates", IEEE Trans. Antennas Propagation, Vol. AP-35, pp. 860-863.
[8] X. Shen, G. Vandenbosch & A. Van de Capelle, (1995) "Study of gain enhancement method for microstrip antennas using moment method", IEEE Trans. Antennas Propagation, Vol. 43, pp. 227-231.
[9] X. Shen, P. Delmotte & G. Vandenbosch, (2001) "Effect of superstrate on radiated field of probe fed microstrip patch antenna", Proc. Inst. Elect. Eng.-Microwave Antennas Propagation, Vol. 148, pp. 141-146.
[10] S. Z. Zhong, G. Liu & G. Qasim, (1994) "Closed Form Expressions for Resonant Frequency of Rectangular Patch Antennas with Multidielectric Layers", IEEE Transactions on Antennas and Propagation, Vol. 42, No. 9, pp. 1360-1363.
[11] H. A. Wheeler, (1964) "Transmission line properties of parallel wide strips by a conformal mapping approximation", IEEE Trans. Microwave Theory Tech., Vol. MTT-12, pp. 280-287.
[12] Samir Dev Gupta, Anvesh Garg & Anurag P. Saran, (2008) "Improvement in Accuracy for Design of Multidielectric Layers Microstrip Patch Antenna", International Journal of Microwave and Optical Technology (IJMOT), Vol. 3, No. 5, pp. 498-504.
[13] U. K. Revankar & K. S. Beenamole, (2003) "Low Sidelobe Light Weight Microstrip Antenna Array For Battlefield Surveillance Radars", IEEE Radar Conference, pp. 97-101.
[14] W. F. Richards, Y. T. Lo & D. D. Harrison, (1981) "An Improved Theory for Microstrip Antennas and Applications", IEEE Trans. on Antennas and Propagation, Vol. AP-29, pp. 38-46.
[15] Y. T. Lo, D. Solomon & W. F. Richards, (1979) "Theory and Experiment on Microstrip Antennas", IEEE Trans. on Antennas and Propagation, Vol. AP-27, pp. 137-145.
[16] C. A. Balanis, (1997) Antenna Theory: Analysis and Design, John Wiley & Sons, 2nd Edition, Chapter 14, pp. 730-734.
[17] Anders G. Derneryd, (1978) "A Theoretical Investigation of the Rectangular Microstrip Antenna Element", IEEE Transactions on Antennas and Propagation, Vol. AP-26, No. 4, pp. 532-535.
[18] Chisang You & Manos M. Tentzeris, (2007) "Multilayer Effects on Microstrip Antennas for Their Integration With Mechanical Structures", IEEE Transactions on Antennas and Propagation, Vol. 55, No. 4, pp. 1051-1058.
[19] X. Shen, G. Vandenbosch & A. Van de Capelle, (1995) "Study of gain enhancement method for microstrip antennas using moment method", IEEE Transactions on Antennas and Propagation, Vol. 43, pp. 227-231.

Authors

Samir Dev Gupta received his B.E. (Electronics) from U.V.C.E. Bangalore, M.Tech. (Electrical Engg.) from I.I.T. Madras and M.Sc. (Defence Studies) from Madras University. His current area of research is conformal microstrip antenna design. He has 19 years of experience in the teaching profession at postgraduate and graduate levels, including three years at the Institute of Armament Technology, Pune (Defence Research and Development Organisation), then affiliated to Pune University and now the Defence Institute of Advanced Technology (Deemed University). His areas of specialization include antennas, microwave communication and radar systems. He was a recognized postgraduate teacher in Microwave Communication at Pune University.

He also has about 15 years of work experience in the maintenance, operation and modification of radar, microwave communication, aircraft simulator and avionics-related systems.

Amit Singh received the B.Tech. degree in Electronics and Communication Engineering from the Jaypee Institute of Information Technology, Noida, India, in 2010. He is a Research and Development Engineer at EEsof, Agilent Technologies, Gurgaon, India. His main research interests include electromagnetics and its applications, in particular conformal microstrip antennas.


LOW COST INTELLIGENT ARM ACTUATOR USING PNEUMATIC ARTIFICIAL MUSCLE WITH BIOFEEDBACK CONTROL
Salman Afghani, Yasir Raza, Bilal Haider
Department of Research & Development Electronics Engg., APCOMS, Rawalpindi Pakistan

ABSTRACT
Hard exoskeletons and arm actuators have already been designed and are available in industry, but they are expensive, heavy and cumbersome. To reduce these drawbacks, a low-cost, lightweight intelligent arm actuator has been designed, constructed and evaluated. It consists of a pneumatic artificial muscle, constructed from a football bladder and covered with parachute material to limit its expansion. The pneumatic artificial muscle is under the control of a microcontroller that continuously monitors and controls the air pressure of the artificial muscle using a biofeedback signal provided to the microcontroller by a cuff bladder, which detects muscle tension.

KEYWORDS
Intelligent arm actuator, low-cost pneumatic artificial muscle, biofeedback control, low-cost arm actuator.

I. INTRODUCTION

A pneumatic artificial muscle is a nonlinear, contractile type of actuator; such actuators are not commonly used in robotic or man-machine systems. At present, electromagnetic motors such as AC motors, DC motors, stepper motors and linear motors are widely used, but they are not as lightweight as a pneumatic artificial muscle; a pneumatic artificial muscle also has a direct transmission capability, whereas electromagnetic motors do not. Electric motors have a power-to-weight ratio of about 100 W/kg, and their torque-to-weight ratios lie within 1-10 Nm/kg [1], whereas the power-to-weight ratio of PAM actuators is reported to be in excess of 1000 W/kg [2]. Pneumatic artificial muscles are not widely used because they are difficult to control accurately: despite their high power-to-weight ratio, their controllability is poor compared with electric motors. In the next section, after describing the pneumatic artificial muscle, we explain our system design, covering the feedback control system and a new design for the detection of muscle tension.

II. PNEUMATIC ARTIFICIAL MUSCLE

To develop the PAM, we initially used motorcycle and bicycle tyre tubes, but the displacement produced by the tubes was not enough to lift a load. For more displacement we used a football bladder, which gave the desired displacement. Lightweight parachute material was used to make a cover for the football bladder, limiting the expansion of the bladder to a specific extent. The parachute material was sewn in folds to further strengthen the muscle. A special input nozzle for the bladder was designed to admit and release air more easily. Hooks were installed on both sides of the muscle so that it can easily be attached to the body. The following formula can be used to calculate the force generated by the PAM if energy losses and the energy needed to contract the PAM are neglected.

F = −p dV/dl          (1)

In the above formula, p represents the gauge pressure, V the volume of the PAM, and l the length of the PAM after contraction, as shown in Fig. 1; dV/dl is the muscle's effective area [3].
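Equation (1) can be evaluated numerically; a minimal sketch follows, where dV/dl is estimated by finite differences over length-volume pairs. The dimensions and pressure below are illustrative assumptions, not measurements from the actual bladder muscle.

```python
# Estimate PAM pulling force F = -p * dV/dl by finite differences.
# The (length, volume) pairs and the pressure are illustrative
# placeholders, not measured data from the prototype.
lengths = [0.30, 0.28, 0.26, 0.24]            # muscle length l in metres
volumes = [8.0e-4, 9.5e-4, 10.6e-4, 11.3e-4]  # enclosed volume V in m^3
p = 200e3                                     # gauge pressure in Pa (2 bar)

forces = []
for i in range(len(lengths) - 1):
    dV = volumes[i + 1] - volumes[i]
    dl = lengths[i + 1] - lengths[i]
    forces.append(-p * dV / dl)               # F = -p dV/dl, losses neglected

print(forces)  # contraction force over each interval, in newtons
```

As the muscle contracts, dV/dl shrinks, so the available force falls with contraction, which is the characteristic PAM behaviour equation (1) captures.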

Figure 1. PAM’s concept of operation

Figure 2. Testing of PAM

III. FEEDBACK CONTROL SYSTEM

The feedback from the PAM is amplified and provided to the microcontroller, which reads and processes the feedback signal and controls the PAM accordingly. Two types of system were designed and tested: a three-state system and a two-state system.

3.1 Three State System

Figure 3 System Block diagram

Here Vref1 is the nominal voltage level of the biofeedback amplifier and Vref2 is the voltage at which the PAM's expansion limit is reached. E1 and E2 give the differences between Vref1 and the biofeedback voltage, and between Vref2 and the PAM feedback voltage, respectively. The fuzzy controller reads the value of E1 and produces a three-state output using trapezoidal membership logic.
Table 1. Three-state values

State # | Biofeedback Voltage Range (V) | E1       | Biofeedback Fuzzy Representation | PAM Status
State 1 | 0.67 - 1.50                   | Positive | Positive Feedback                | Stretched
State 2 | 1.35 - 3.35                   | 0        | Nominal Value                    | Hold position
State 3 | 2.00 - 4.20                   | Negative | Negative Feedback                | Fully inflated
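The thresholding in Table 1 can be sketched as a simple state classifier. The voltage bands are taken from the table; because adjacent bands overlap (they are fuzzy ranges), this crisp version cuts at the lower edge of each overlap, which is an assumption here, whereas the actual controller resolves the overlap with trapezoidal membership functions.

```python
# Map the biofeedback voltage to one of the three PAM control states.
# Band edges follow Table 1; this crisp approximation replaces the
# trapezoidal fuzzy membership of the real controller.
def pam_state(v_bio):
    """Return (state number, action) for a biofeedback voltage in volts."""
    if v_bio < 1.35:            # inside the 0.67-1.50 band only
        return 1, "stretch"     # positive feedback: muscle relaxed
    elif v_bio < 2.00:          # nominal band
        return 2, "hold"
    else:                       # 2.00-4.20 band: muscle tensed
        return 3, "inflate"

print(pam_state(1.0))   # relaxed muscle
print(pam_state(3.9))   # tensed muscle
```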

Figure 4 Fuzzy logic diagram

3.1.1 Three State Diagram

Figure 5 Three State Diagram

3.1.2. Flow Diagram for Three State system

Figure 6 Flow Diagram for Three State system

3.2 Muscle Tension Sensing Unit / Biofeedback Amplifier

Automatic control of the PAM needs a system that automatically detects muscle tension and supplies it to the microcontroller. A new idea for muscle tension detection is used: a cuff bladder is attached to the muscle of a person, and tension is detected by sensing, with an air pressure sensor, the change in the air pressure of the cuff bladder caused by the expansion or contraction of the person's muscle [4]. To achieve this, we reverse-engineered a digital blood pressure monitor, which measures blood pressure by detecting pressure changes with a pressure sensor; we used its bladder, pump and pressure sensor and controlled them all from the microcontroller.

Cuff Bladder → Air Pressure Sensor → Instrumentation Amplifier with signal conditioning → Vout

Figure 7 Block diagram of sensing unit

Muscle tension can be sensed at two different locations on the arm, i.e. the bicep and the wrist. Experiments were done on both, but results are shown for the wrist configuration.

3.2.1 Instrumentation Amplifier

We performed many experiments on the instrumentation amplifier to obtain an optimal gain, as the output of the sensor was a very low voltage. We tried different resistor values in the circuit to maximize the output. We found that setting the resistors to 10 kΩ gave a much better output; although the output was still low, it was considerably better than with the other values tried, so we fixed 10 kΩ for all resistors except the feedback resistors R1 and Rgain. For R1, further experiments to increase the gain showed that a value of 670 kΩ gave the maximum output, and with Rgain set to 570 Ω we achieved the best and highest output voltage compared with any other resistor values. Our final voltage range was from 0.64 V to 4.20 V.

Figure 8 Instrumentation Amplifier Schematic
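For reference, if the circuit follows the classic three-op-amp instrumentation-amplifier topology (an assumption here; the paper's exact schematic is in Figure 8), the differential gain is G = 1 + 2·R1/Rgain, so the chosen resistor values can be checked quickly:

```python
# Differential gain of a classic three-op-amp instrumentation amplifier:
#   G = 1 + 2 * R1 / Rgain
# The topology is assumed; the resistor values are those reported in the
# text (R1 = 670 kOhm, Rgain = 570 Ohm).
R1 = 670e3
Rgain = 570.0
G = 1 + 2 * R1 / Rgain
print(round(G))  # approximate differential gain
```

A gain of this order is consistent with amplifying a millivolt-level pressure-sensor signal into the reported 0.64-4.20 V range.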

3.3 Two State System

With an operational amplifier there is a trade-off between linearity and amplification. Since the output voltage of the pressure sensor was very small, we also tried another type of system with increased amplification and reduced linearity, which finally gave an almost digital output from the instrumentation amplifier: 0.67 V with the muscle relaxed and 4.5 V with the muscle tensed. This reduced our system to only two states, as shown in Figure 9.

3.3.1 Two State Diagram

Figure 9 Two State Diagram

3.3.2 Flow Diagram for Two State System

Figure 10 Flow Diagram

IV. DISCUSSION AND RESULTS

We evaluated both the two-state and the three-state system for this arm actuator, and both performed well. The two-state system removes the complexity of the fuzzy logic control, making the system simple; Table 2 contains the values and results obtained from it. The three-state system adds complexity but gives smoother and more intelligent behaviour; Table 1 contains its values and results. As mentioned above, muscle tension can be sensed at two different locations on the body, but when we used the bicep location for sensing we observed disturbance in the muscle tension sensor due to the expansion of the PAM, whereas the wrist configuration works fine.
Table 2. Two-state values

State # | Muscle tension detected? | Inst. Amp Voltage (V) | Biofeedback binary representation | PAM Status
State 1 | No                       | 0.67                  | 0                                 | Stretched
State 2 | Yes                      | 4.5                   | 1                                 | Inflated
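The two-state behaviour in Table 2 reduces to a single threshold comparison; a minimal sketch follows, with the threshold placed midway between the two reported voltage levels (the exact threshold is an assumption, not stated in the paper).

```python
# Two-state PAM control: the near-digital amplifier output (0.67 V relaxed,
# 4.5 V tensed) is compared against one threshold, assumed midway between
# the two reported levels.
THRESHOLD = (0.67 + 4.5) / 2    # about 2.6 V

def pam_command(v_inst_amp):
    """Return 'inflate' when muscle tension is detected, else 'stretch'."""
    tension = v_inst_amp > THRESHOLD    # the binary biofeedback bit
    return "inflate" if tension else "stretch"

print(pam_command(0.67), pam_command(4.5))
```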

V. CONCLUSION AND EXPERIMENT

A low-cost arm actuator with a pneumatic artificial muscle can be built from a simple football bladder and parachute material, and it can easily be controlled and made intelligent using biofeedback control. Biofeedback can be taken from two locations on the arm, i.e. the bicep or the wrist. The biofeedback amplifier can be configured for two types of system: a three-state system and a two-state system. The three-state system is more intelligent owing to its fuzzy logic control, but it is more complex than the two-state system. The design can also be extended to the whole body of a person to

make a complete soft exoskeleton in lieu of the hard exoskeletons that have already been developed throughout the world.

(Figure: prototype of the arm actuator, showing the PAM, the sensing unit and the lightweight harness.)

REFERENCES
[1]. F. Daerden, "Conception and Realization of Pleated Pneumatic Artificial Muscles and Their Use as Compliant Actuation Elements", PhD Dissertation, Vrije Universiteit Brussel, July 1999.
[2]. F. Daerden & D. Lefeber, "The Concept and Design of Pleated Pneumatic Artificial Muscles", Vrije Universiteit Brussel.
[3]. H. M. Paynter, "Hyperboloid of revolution fluid-driven tension actuators and methods of making", US Patent No. 4,721,030, 1988.
[4]. "Digital Blood Pressure Monitor", Journal of Applied Research and Technology, Vol. 2, No. 3, Universidad Nacional Autónoma de México, Mexico City, pp. 224-229.

Authors

Salman Afghani was born in Pakistan in 1958. He is a Professor with a PhD in advanced man-machine systems, an M.Phil. in industrial automation and an M.S. in Mechanical Engineering. His research interests include man-machine systems; robotics; intelligent all-terrain vehicles and submersibles; zoological and botanical computers; and non-intrusive botanical genetics using a computer-simulated time-acceleration technique.

Yasir Raza was born in Rawalpindi, Pakistan in 1989. He received the BSEE in Electrical Engineering from the University of Engineering and Technology, Taxila, Pakistan.

Bilal Haider was born in Rawalpindi, Pakistan in 1987. He received the BSEE in Electrical Engineering from the University of Engineering and Technology, Taxila, Pakistan.


TWO STAGE DISCRETE TIME EXTENDED KALMAN FILTER SCHEME FOR MICRO AIR VEHICLE
Sadia Riaz and Ali Usman
Department of Mechanical Engineering, NUST College of EME, Rawalpindi, Pakistan

ABSTRACT
Navigation of the Micro Air Vehicle (MAV) is one of the most challenging research areas of the twenty-first century. A Micro Air Vehicle is a miniaturized aircraft configuration, about six inches in length and under one hundred grams in weight, including twenty grams of payload. Because of its small size, a MAV is strongly affected by wind gusts, so its navigation is very important: precise navigation is the basic prerequisite for controlling the vehicle. This paper presents a two-stage cascaded discrete-time Extended Kalman Filter for INS/GPS-based navigation. The first stage of this scheme estimates the Euler angles of the MAV, while the second stage estimates its position in terms of height, longitude and latitude. As the system is nonlinear, the Extended Kalman Filter is used. The on-board sensors in the first stage include a MEMS gyro, a MEMS accelerometer and a MEMS magnetometer, while the second stage uses GPS.

KEYWORDS: Micro Electro Mechanical System (MEMS), Micro Air Vehicle (MAV), Measurement Covariance Matrix, Process Covariance Matrix

NOMENCLATURE
PN      Inertial North Position of MAV
PE      Inertial East Position of MAV
WN      Wind from North
WE      Wind from East
Vair    Total Airspeed
p       Angular Rate about x-axis
q       Angular Rate about y-axis
r       Angular Rate about z-axis
φ       Roll Angle
θ       Pitch Angle
ψ       Yaw Angle
˙       As superscript, denotes the rate of change
mox     Northern Magnetic Field Component
moy     Eastern Magnetic Field Component
moz     Vertical Magnetic Field Component
Q       Process Covariance Noise Matrix
R       Measurement Covariance Noise Matrix

I. INTRODUCTION

A Micro Air Vehicle (MAV) is a type of aircraft much smaller than conventional aircraft. According to DARPA (Defense Advanced Research Projects Agency), Micro Air Vehicles are small aircraft usually defined to be less than fifteen centimetres (six inches) in length and under one hundred grams in weight, including twenty grams of payload [1]. In developing these airborne spies, the primary concerns are size, weight, sensors, communication and computational power. They are remote-controlled, semi-autonomous or autonomous aircraft that can be used in military roles such as surveillance and reconnaissance, and in civilian roles such as search and rescue [2].

Micro Air Vehicles mainly come in three types: fixed-wing MAVs, flapping-wing MAVs (FMAVs) and rotary-wing MAVs. Fixed-wing MAVs are the most efficient because they have higher loitering time [3], but they cannot hover. Flapping-wing MAVs can hover but have very short flight times [4]. Rotary-wing MAVs can hover and move in any direction, but at the cost of flight duration [5]. Figure 1 shows different types of Micro Air Vehicles, including fixed-wing, flapping-wing, wing-morphing and rotary-wing MAVs.

Figure 1: Types of Micro Air Vehicles

Applying Micro Electromechanical Systems (MEMS) inertial sensors to the navigation of an autonomous Micro Air Vehicle is an extremely challenging area [6]. The Global Positioning System (GPS) can provide long-term stability with high accuracy and worldwide coverage. Since the performance of a low-cost micro GPS receiver can easily degrade in high-manoeuvring environments, fusing the navigation data with other sensors such as a magnetometer or barometer is necessary [7]. A two-stage discrete-time Extended Kalman Filter is used in this paper to estimate the Euler angles in the first stage and the position in the second stage. Combining the MEMS gyro, MEMS accelerometer and MEMS magnetometer in one stage is beneficial because they support each other in estimating the precise states of the MAV.

II. DISCRETE TIME EXTENDED KALMAN FILTER FORMULATION
The system and measurement equations are given as follows:

x_k = f_{k-1}(x_{k-1}, u_{k-1}, w_{k-1})
y_k = h_k(x_k, v_k)
w_k ~ (0, Q_k)
v_k ~ (0, R_k)

Initialize the filter as follows:

x̂_0⁺ = E(x_0)
P_0⁺ = E[(x_0 − x̂_0⁺)(x_0 − x̂_0⁺)ᵀ]
For k = 1, 2, 3, . . ., compute the following partial derivative matrices [8]:

F_{k-1} = ∂f_{k-1}/∂x evaluated at x̂_{k-1}⁺
L_{k-1} = ∂f_{k-1}/∂w evaluated at x̂_{k-1}⁺

Perform the time update of the state estimate [9] and estimation error covariance [10] as follows:

P_k⁻ = F_{k-1} P_{k-1}⁺ F_{k-1}ᵀ + L_{k-1} Q_{k-1} L_{k-1}ᵀ
x̂_k⁻ = f_{k-1}(x̂_{k-1}⁺, u_{k-1}, 0)

Compute the following partial derivative matrices:

H_k = ∂h_k/∂x evaluated at x̂_k⁻
M_k = ∂h_k/∂v evaluated at x̂_k⁻

Perform the measurement update [11] of the state estimate and estimation error covariance as follows:

K_k = P_k⁻ H_kᵀ (H_k P_k⁻ H_kᵀ + M_k R_k M_kᵀ)⁻¹
x̂_k⁺ = x̂_k⁻ + K_k [y_k − h_k(x̂_k⁻, 0)]
P_k⁺ = (I − K_k H_k) P_k⁻
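The update equations above translate directly into code; the following is a minimal one-iteration sketch in NumPy. The function name, argument layout and the scalar sanity-check system in the usage note are illustrative assumptions, not from the paper.

```python
import numpy as np

def ekf_step(x_prev, P_prev, u, y, f, h, F, L, H, M, Q, R):
    """One discrete-time EKF iteration, following the equations above.

    f, h : process and measurement models
    F, L : Jacobians of f w.r.t. state and process noise, at the prior estimate
    H, M : Jacobians of h w.r.t. state and measurement noise, at the prediction
    """
    # Time update (prediction)
    Fk, Lk = F(x_prev, u), L(x_prev, u)
    P_minus = Fk @ P_prev @ Fk.T + Lk @ Q @ Lk.T
    x_minus = f(x_prev, u, np.zeros(Q.shape[0]))
    # Measurement update (correction)
    Hk, Mk = H(x_minus), M(x_minus)
    S = Hk @ P_minus @ Hk.T + Mk @ R @ Mk.T
    K = P_minus @ Hk.T @ np.linalg.inv(S)
    x_plus = x_minus + K @ (y - h(x_minus, np.zeros(R.shape[0])))
    P_plus = (np.eye(len(x_prev)) - K @ Hk) @ P_minus
    return x_plus, P_plus
```

For a linear system the step reduces to the ordinary Kalman filter, which gives a quick sanity check: the corrected estimate lies between the prediction and the measurement, and the covariance shrinks.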

III. TWO STAGE EXTENDED KALMAN FILTER ESTIMATION SCHEME

Because the heading update equation uses the same inputs as the pitch and roll equations, it is natural to lump them together in the same estimation block. The exclusiveness of the heading state lies in the output equations: there is no sensor output equation relating heading to accelerometer readings, which is why it was convenient to split heading estimation into its own stage, as mentioned earlier. One merit of estimating heading and attitude at the same time is that magnetometer information may help in the estimation of pitch and roll, since no manoeuvre will upset the earth's magnetic field the way it may upset the accelerometer readings. Also, depending on the attitude and heading of the MAV, projecting pitch and roll onto the magnetic field vector may refine the pitch and roll estimates. Figure 2 shows a schematic of the two-stage discrete-time Extended Kalman Filter scheme.

Figure 2: Two stage Cascaded Discrete time Extended Kalman Filter

The first-stage state variables, inputs and outputs are:

x = [φ, θ, ψ]ᵀ,  u = [p, q, r, Vair]ᵀ,  y = [acc_x, acc_y, acc_z]ᵀ

The second-stage state variables, inputs and outputs are:

x = [PN, PE, WN, WE]ᵀ,  u = [ψ, Vair]ᵀ,  y = [GPS_N, GPS_E, GPS_Velocity, GPS_Heading]ᵀ
In the two-stage discrete-time Extended Kalman Filter state estimation scheme, the states are related to the inputs in the following two steps. Stage 1 consists of the following:

[φ̇  θ̇  ψ̇]ᵀ = f(x, u) = [ p + q sinφ tanθ + r cosφ tanθ
                            q cosφ + r sinφ
                            (q sinφ + r cosφ)/cosθ ]

Linearization through the Jacobian:

A = ∂f(x̂, u)/∂x = [ q cosφ tanθ − r sinφ tanθ    (q sinφ + r cosφ)/cos²θ        0
                     −q sinφ + r cosφ             0                              0
                     (q cosφ − r sinφ) secθ       (q sinφ + r cosφ) secθ tanθ    0 ]

Process Covariance Matrix is given as follows:

Q = E(wwᵀ)

where w = [w_φ  w_θ  w_ψ]ᵀ, so

Q = E [ w_φ²      w_φ w_θ   w_φ w_ψ
        w_θ w_φ   w_θ²      w_θ w_ψ
        w_ψ w_φ   w_ψ w_θ   w_ψ²   ]

Since the noise is uncorrelated:

Q = E [ w_φ²  0     0
        0     w_θ²  0
        0     0     w_ψ² ]

The output equations for the attitude estimator are as follows:

h(x̂, u) = [ (Vair q sinθ)/g + sinθ
            (Vair (r cosθ − p sinθ))/g − cosθ sinφ
            −(Vair q cosθ)/g − cosθ cosφ ]

Linearization through the Jacobian:

H_k = ∂h(x̂, u)/∂x = [ 0             (Vair q cosθ)/g + cosθ                      0
                       −cosθ cosφ    −(Vair (r sinθ + p cosθ))/g + sinθ sinφ    0
                       cosθ sinφ     (Vair q sinθ)/g + sinθ cosφ                0 ]

Measurement Covariance Matrix is given as follows:

R = E(vvᵀ)

where v = [v_accx  v_accy  v_accz]ᵀ, so

R = E [ v_accx²         v_accx v_accy   v_accx v_accz
        v_accy v_accx   v_accy²         v_accy v_accz
        v_accz v_accx   v_accz v_accy   v_accz²       ]

Since the noise is uncorrelated:

R = E [ v_accx²  0        0
        0        v_accy²  0
        0        0        v_accz² ]
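The stage-1 process and output models above can be transcribed directly; a sketch follows, in which the gravitational constant g = 9.81 m/s² and the tuple ordering of states and inputs are assumptions made for illustration.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2 (assumed value)

def f_attitude(x, u):
    """Stage-1 Euler-angle kinematics: returns (phi_dot, theta_dot, psi_dot)."""
    phi, theta, psi = x
    p, q, r, v_air = u
    phi_dot = (p + q * math.sin(phi) * math.tan(theta)
                 + r * math.cos(phi) * math.tan(theta))
    theta_dot = q * math.cos(phi) + r * math.sin(phi)
    psi_dot = (q * math.sin(phi) + r * math.cos(phi)) / math.cos(theta)
    return phi_dot, theta_dot, psi_dot

def h_attitude(x, u):
    """Predicted accelerometer outputs (in units of g) for the attitude stage."""
    phi, theta, psi = x
    p, q, r, v_air = u
    return (v_air * q * math.sin(theta) / G + math.sin(theta),
            v_air * (r * math.cos(theta) - p * math.sin(theta)) / G
            - math.cos(theta) * math.sin(phi),
            -v_air * q * math.cos(theta) / G
            - math.cos(theta) * math.cos(phi))
```

In level, unaccelerated flight the model predicts zero attitude rates and an accelerometer reading of (0, 0, −1) g, which is the usual sanity check for these equations.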

Stage 2 consists of the following:

 •   PN  Vair cosψ − WN  •  P   Vair sinψ − WE    •E  =    0 WN   •   0   W E   
Process Covariance Matrix is given as follows:

Q = E(wwᵀ)

where w = [w_PN  w_PE]ᵀ. Since the noise is uncorrelated:

Q = E [ w_PN²  0
        0      w_PE² ]

Output equations of the inertial estimator:

h(x̂, u) = [GPS_N  GPS_E]ᵀ = [P_N  P_E]ᵀ

Linearization through the Jacobian:

∂h(x̂, u)/∂x = [ 1  0
                0  1 ]
Measurement Covariance Matrix is given as follows:

R = E(vvᵀ)

where v = [GPS_N  GPS_E  GPS_Velocity  GPS_Heading]ᵀ, so R is the 4 × 4 matrix of expected products of these noise terms. Since the noise is uncorrelated:

R = E [ GPS_N²  0       0               0
        0       GPS_E²  0               0
        0       0       GPS_Velocity²   0
        0       0       0               GPS_Heading² ]

IV. CONCLUSIONS

MAVs are playing a significant role in military surveillance and reconnaissance and in civilian search and rescue. To control a MAV it must be navigated properly, and the Kalman Filter is one technique that can be used for MAV navigation. The two-stage discrete-time Extended Kalman Filter scheme presented above addresses this need. Work is in progress on simulations based on this mathematical model.

ACKNOWLEDGEMENTS
The authors are indebted to the College of Electrical and Mechanical Engineering (CEME), National University of Sciences and Technology (NUST), Higher Education Commission (HEC) and Pakistan Science Foundation (PSF) for having made this research possible.

REFERENCES
[1]. James M. McMichael (Program Manager, Defense Advanced Research Projects Agency) and Col. Michael S. Francis, USAF (Ret.) (Defense Airborne Reconnaissance Office), "Micro air vehicles: toward a new dimension in flight", 8/7/97.
[2]. S. Winkler, M. Buschmann, T. Kordes, H. W. Schulz & P. Vörsmann, "Application of Micro-Technology to Integrated Navigation of Autonomous Micro Air Vehicles", Institute of Aerospace Systems, TU Braunschweig, D-38108 Braunschweig, Germany. CANEUS 2004 Conference on Micro-Nano-Technologies, Monterey, California, 2004.
[3]. Huaiyu Wu, Dong Sun & Zhaoying Zhou, "Micro Air Vehicle: Configuration, Analysis, Fabrication, and Test", IEEE/ASME Transactions on Mechatronics, Vol. 9, No. 1, March 2004.
[4]. http://www.ornithopter.net/history_e.html
[5]. William E. Green & Paul Y. Oh, "A Fixed-Wing Aircraft for Hovering in Caves, Tunnels, and Buildings", Drexel Autonomous Systems Lab (DASL), Drexel University, Philadelphia, PA.
[6]. Sungsu Park & Roberto Horowitz, "Discrete time adaptive control for a MEMS gyroscope", International Journal of Adaptive Control and Signal Processing, Vol. 19, pp. 485-503, 2005. Published online 1 April 2005 in Wiley InterScience. DOI: 10.1002/acs.868.
[7]. Yaakov Bar-Shalom, X.-Rong Li & Thiagalingam Kirubarajan, "Estimation with Applications to Tracking and Navigation", Wiley-Interscience, John Wiley & Sons, Inc.
[8]. Randal W. Beard, "State estimation for micro air vehicles", Department of Electrical and Computer Engineering, Brigham Young University, Provo, Utah. Studies in Computational Intelligence (SCI) 70, pp. 173-199, Springer-Verlag Berlin Heidelberg, 2007.
[9]. A. Gelb, "Applied Optimal Estimation", MIT Press, Cambridge, MA, 1974.
[10]. M. Grewal & A. Andrews, "Kalman Filtering: Theory and Practice", Prentice-Hall, Englewood Cliffs, NJ, 1993.

Authors
Sadia Riaz is a Mechanical Engineer from the National University of Sciences and Technology (NUST). She received her MS in Mechanical Engineering from NUST, where she is now a postgraduate student. Her area of research is state estimation with application to Micro Air Vehicles (MAVs).

Ali Usman is a Mechatronics Engineer from Air University. He received his MS in Mechanical Engineering from the National University of Sciences and Technology, where he is now a postgraduate student. His areas of research are the classification of wing design and the navigation of Micro Air Vehicles (MAVs).


IDENTIFICATION OF SUSPICIOUS REGIONS TO DETECT ORAL CANCERS AT AN EARLIER STAGE– A LITERATURE SURVEY
K. Anuradha¹ and K. Sankaranarayanan²

¹Research Scholar, Karpagam University, Coimbatore
²Dean, Electrical Sciences, Easa College of Engg. and Technology, Coimbatore

ABSTRACT
Oral cancer is a significant health problem throughout the world. Most oral cancers are identified at a later stage, when treatment is less successful, so it is very important to detect this type of cancer at an earlier stage. Early detection helps surgeons provide the necessary therapeutic measures, which also benefits patients. Dental radiographs are the X-rays that help to identify problems with the teeth, mouth and jaw. In this paper, a detailed survey is made of the various methods researchers have analysed for the detection of oral cancers at an earlier stage. A comparison is made between various methods for the identification and classification of cancers, and an overview of the algorithms in each step of cancer detection is given.

KEYWORDS: Oral cancer, Dental Radiographs, X-rays.

I. INTRODUCTION

Oral cancer ranks as the fourth most frequent cancer among men and eighth for women worldwide and may affect the tongue, cheeks, peridontium or any other part of the oral pharynx. Oral cancer is a significant health problem throughout the world. Tumors that arise from odontogenic (tooth forming) tissues are referred to as odontogenic tumors. Tumors are either benign or malignant. Malignant tumors are cancerous. Oral cancer can affect any area of the oral cavity including the lips, gum tissues, tongue, cheek lining and the hard and soft palate. This paper focuses on identifying oral cancers at an earlier stage. Oral cancer detection at an earlier stage saves lives [1]. Beyond oral cancer, many problems can occur within the bones of the mouth. Systemic problems those that affect the entire body many times appear in the mouth first. In general, the mouth is a good indicator of what’s going on in the body, which is why the physicians for generations have asked patients to open their mouth. X – Rays are an essential part of dental care. Although X – rays are effective diagnostic tools, some dental practices particularly those that handle a large number of dental implant cases, are using more advanced imaging techniques to ensure an even higher degree of accuracy. Oral cancers are often located on the tongue, MR images may become blurry because of moving artifacts induced by the moving tongue and jaw. So an efficient image processing algorithm is needed to identify the suspicious region in the cancer area with high accuracy. Dental radiographs are used for screening oral pathologies continuously and it is often a difficult task to detect early stage cancer tissues in a dental radiograph. Unlike other types of cancers, oral cancers are visibly seen with the naked eye, some cancers are located internally in the mouth, making their detection difficult. And also some non cancerous tissues are not harmful. 
The organization of the paper is as follows. In Section 2, a literature study of cancer detection is presented; some recent methods for cancer detection are covered in Sub-section 2.1, and Sub-section 2.2 is devoted to feature extraction and cancer classification methods. A comparison of the methods is made in Section 3 and discussed in Section 4. Finally, Section 5 summarizes and concludes the paper.

Vol. 3, Issue 1, pp. 84-91

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

II. LITERATURE SURVEY

If oral cancer is detected at an earlier stage, it is curable. The exact cause of cancer is not known; however, there are certain risk factors which may trigger this disease in individuals. Among the common factors are chemicals, which may be in the form of tobacco or chemicals present in food, air, water, etc. [2]. These chemicals are known as carcinogens. Detection and diagnosis of cancer has become one of the most significant areas of research in the medical imaging and image processing communities.

The symptoms of an oral cancer at an earlier stage [3] are:
1) Patches inside the mouth or on the lips that are white, red, or a mixture of white and red
2) Any sore or discolored area in the mouth which does not heal within 14 days
3) Bleeding in the mouth
4) Difficulty or pain when swallowing
5) A lump in the neck.

These symptoms identify a suspected cancer; the system then confirms that a cancer has occurred using image processing techniques. A surgeon who suspects the presence of cancer in a patient has a few options: X-ray studies to find the cancer's exact location, proper medication to aid recovery, excision of a portion of the unhealthy tissue for biopsy, removal of the cancer, or exploration of the surrounding tissues to determine whether the cancer has spread. Over the last few decades, refinements in imaging technology have substantially widened the range of medical options:
• The tests now provide much clearer and more detailed pictures of organs and tissues.
• New imaging technology allows us to do more than simply view anatomical structures such as bones, organs and tumors.
• Functional imaging, the visualization of physiological, cellular, or molecular processes in living tissues, enables us to observe activity such as blood flow, oxygen consumption or glucose metabolism in real time.

Imaging technology has already had lifesaving effects on our ability to detect cancer early and to diagnose the disease more accurately.
Image processing algorithms [4] have been continuously applied to get better results. The literature survey carried out reveals that a fair amount of research has been devoted to the area of cancer imaging.
2.1 Cancer Detection Techniques
[Banumathi.A et al 2009] [5] have proposed cyst detection and severity measurement using image processing techniques and neural network methods. The suspicious cyst regions are diagnosed using a Radial Basis Function network; the severity of the cysts is calculated using circularity values, and the results show the extracted part of the cysts. [Woonggyu Jung et al] [6] have proposed a technique for oral cancer detection using Optical Coherence Tomography (OCT). With an imaging depth of 2-3 mm, OCT is suitable for the oral mucosa; they also detected oral cancer in 3-D volume images of normal and precancerous lesions. [Simon Kent] [7] has proposed a diagnosis method for oral cancer using Genetic Programming, a technique that has solved many complex problems. A comparison between a Genetic Programming system and a Neural Network model was provided, and the Genetic Programming system showed a major advantage in diagnosing the tumor. [Ranjan Rashmi Paul et al] [8] have proposed a detection methodology for oral cancer using wavelet neural networks. The wavelet coefficients of TEM images of collagen fibers from normal oral sub-mucosa and OSF tissues were used to choose the feature vector which, in turn, was used to train the artificial neural network; the trained network could distinguish the normal and precancerous stages. [Ireaneus Anna Rejani.Y, Dr. S. Thamarai Selvi 2009] [9] have proposed a technique to detect breast cancer at an earlier stage using an SVM classifier. The proposed system focused on two problems: one is to detect tumors as suspicious regions with a very weak contrast to their background, and the other is to extract features which categorize tumors. They used a Gabor filtering technique for image enhancement and a top-hat transform operation for extraction. [Saheb Basha.S, Dr. K. Satya Prasad 2009] [10] have proposed automatic detection of breast cancer masses in mammograms using morphological operations and Fuzzy C-means clustering. They

developed an algorithm to distinguish masses and microcalcifications from background tissue using morphological operators, and the Fuzzy C-means (FCM) clustering algorithm was implemented for intensity-based segmentation. The proposed technique showed better results. [Man Kin Derek Ho 2007] [11] has proposed an algorithm to segment medical confocal images; the results provide an accurate count (95%, with 6.2% standard deviation) of cells or droplets. [Ghassan Hamarneh, Artur Chodorowski, Tomas Gustavsson, 2000] [12] have proposed the application of active contour models for the segmentation of oral lesions in medical color images acquired from the visual part of the light spectrum. The proposed work also classifies cancerous and non-cancerous lesions; the automatic segmentation algorithm simplifies the analysis of oral lesions and can be used in clinical practice to detect potentially cancerous lesions. [S. Murugavalli, V. Rajamani, 2007] [13] proposed an improved implementation of brain tumor detection using segmentation based on a neuro-fuzzy technique. Various tissues like white matter, gray matter, CSF and tumor were detected. The Fuzzy C-means algorithm is used to classify the image layer by layer, and the neuro-fuzzy technique shows that MRI brain tumor segmentation using HSOM-FCM is also more accurate. [Mohammed.M.M et al 2003] [14] have proposed prostate cancer diagnosis based on Gabor filter texture segmentation of ultrasound images. Multichannel filtering is an excellent method for prostate texture investigation. Using the human visual system (HVS) as a guide, medical doctors use three features for texture analysis, mainly repetition, directionality and complexity. A set of Gabor filters that is well distributed to cover the entire frequency plane is designed to mimic the HVS and is therefore an excellent tool for prostate texture segmentation.
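As an illustration of the Gabor-filter texture approach used in works such as [9] and [14], the sketch below (not the authors' code; the kernel size, sigma and wavelength are illustrative assumptions) builds a small real-valued Gabor filter bank and measures per-pixel response energy at four orientations:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize, sigma, theta, wavelength):
    """Real-valued Gabor kernel: a sinusoid at angle `theta` under a Gaussian envelope."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid runs along `theta`.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_energy(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), wavelength=8.0):
    """Per-pixel squared response of a small filter bank covering four orientations."""
    responses = []
    for theta in thetas:
        k = gabor_kernel(21, sigma=4.0, theta=theta, wavelength=wavelength)
        r = convolve2d(image, k, mode="same", boundary="symm")
        responses.append(r**2)
    return np.stack(responses)  # shape: (n_orientations, H, W)
```

A texture-segmentation step would then cluster or threshold these per-orientation energy maps; the strongest orientation channel at each pixel indicates the dominant local texture direction.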
[Seshadri.H.S, Kandasamy.A] [15] have proposed detection of breast cancer at an early stage by digital mammogram image analysis. The gradient of the preprocessed image is calculated and finally the segmentation algorithm is applied to the image. They tested the proposed method on digital mammograms taken from the mini-MIAS database and found that the lesion segmentation algorithm closely matches the radiologists' outlines of these lesions. Varsha.H. Patil et al [16] proposed an automated system for detecting breast tumors at an earlier stage. The system uses a super-resolution technique to display the necessary information for boosting the physician's diagnosis, and CAD tools are used to design the system. [Poulami Das, et al, 2009] [17] have proposed a method to identify abnormal growth cells in breast tissue and suggest further pathological tests. They compared normal breast tissue with malignant invasive breast tissue through a series of image processing steps; features of cancerous breast tissue were extracted and compared against normal breast tissue. [Shekar Singh et al 2011] [18] have proposed the detection of breast cancer and classification of histopathological images. They used a feed-forward back-propagation neural network to classify benign and malignant breast cancers into type 1, type 2 and type 3, extracting eight features after cancer detection; the feed-forward neural network gives fast and accurate classification of breast cell nuclei. Computer-aided diagnosis systems for detecting malignant texture in biological studies have been investigated using several techniques. Vijay Kumar.G et al [19] proposed an approach in computer-aided diagnosis for early prediction of brain cancer using texture features and neuro-classification logic. Nine distinct invariant features, with calculation of a minimum distance, are used for the prediction of a tumor in a given MRI image, and a neuro-fuzzy approach is used for the recognition of the extracted region.
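Several of the surveyed works ([10], [13]) rely on Fuzzy C-means clustering for intensity-based segmentation. A minimal NumPy sketch of the standard FCM update equations follows; this is an illustrative reimplementation, not the authors' code, and the cluster count, fuzzifier m and iteration budget are assumptions:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means: each sample receives a membership degree in every cluster.
    X: (n_samples, n_features). Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    # Random initial memberships, rows normalised to sum to 1.
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        # Cluster centers are membership-weighted means of the samples.
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Squared distance of every sample to every center (small epsilon avoids 0-division).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # Standard membership update for fuzzifier m.
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

For image segmentation, X would be the pixel intensities (or feature vectors) flattened to one sample per pixel; hardening the memberships with an argmax yields the final region labels.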
Yan Zhu and Hong [20] suggested a Hopfield neural network for the detection of brain tumor boundaries based on an active contour model, which is more suitable for real-time applications. Automated detection of tumors in different medical images is motivated by the need for high accuracy when dealing with human life. L. Jeba Sheela et al [21] proposed a system using imaging techniques to categorize images as normal or abnormal and then classify the tissues of the abnormal brain MRI to identify brain-related diseases.

2.2 Feature Extraction and Cancer Classification Techniques

[DSVGK Kaladhar, B. Chandana, P. Bharath Kumar, 2011] [22] have predicted oral cancer survivability using classification algorithms. The classification algorithms used are CART, Random

Forest, LMT, and Naïve Bayes. The algorithms classify cancer survival using 10-fold cross validation on the training data set. Among these techniques, the Random Forest classifier correctly classified the cancer survival data set, and its absolute relative error is lower than that of the other methods. [Xiaowei Chen et al 2006] [23] have proposed automated segmentation, classification and tracking of cancer cell nuclei in time-lapse microscopy. Existing computational imaging methods are rather limited in analyzing and tracking such time-lapse datasets, and manual analysis is unreasonably time-consuming and subject to observer variance, so they built an automated system that integrates a series of advanced analysis methods to fill this gap. The cellular image analysis methods can be used to segment, classify and track individual cells in a living cell population over a few days, and experimental results show that the proposed method is efficient and effective in cell tracking and phase identification. [Yung-nien Sun et al 2010] [24] have proposed an automatic color-based feature extraction system for parameter estimation of oral cancer from optical microscopic images. Parameter comparisons between four cancer stages are conducted, and only the mean parameters between early and late cancer stages are statistically different. The proposed system provides a useful and convenient tool for automatic segmentation and evaluation of stained biopsy samples of oral cancer. [Yung-nien Sun et al 2007] [25] have proposed a new color-based approach for automated segmentation and classification of tumor tissues from microscopic images. The algorithm is evaluated by comparing the performance of the proposed fully-automated method against semi-automated procedures, and the experimental results show consistent agreement between the two methods. The proposed algorithm provides an effective tool for evaluating oral cancer images.
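A classifier comparison of the kind reported in [22] (CART, Random Forest and Naïve Bayes under 10-fold cross validation) can be sketched with scikit-learn; the synthetic dataset below is a stand-in assumption, since the real features are clinical attributes from the survival data set:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a survival dataset (illustrative only).
X, y = make_classification(n_samples=300, n_features=10, n_informative=5, random_state=0)

models = {
    "CART": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross validation
    print(f"{name}: mean accuracy {scores.mean():.3f} (std {scores.std():.3f})")
```

Each model is scored on the same 10 folds, so the per-model mean accuracies are directly comparable, mirroring the way [22] ranks its classifiers.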
It can also be applied to other microscopic images prepared with the same type of tissue staining. [Neha Sharma, Nigdi Pradhikaran, Akurdi 2011] [26] have compared the performance of data mining techniques for oral cancer prediction. The two data mining techniques used are a Multilayer Perceptron neural network model and a TreeBoost model. For the training data as well as the validation data, the Multilayer Perceptron and TreeBoost models show the same specificity and sensitivity, and misclassification is not seen in either the training or the validation data for either model. The most important variable for the prediction of malignancy is "Presence of Lymph Node" as seen on USG. As per the study, both the TreeBoost classification model and the Multilayer Perceptron neural network model are optimal for predicting malignancy in patients. [M. Muthu Rama Krishnan, Chandran Chakraborthy, Ajoy Kumar Ray, 2010] [27] have proposed a wavelet-based texture classification for oral histopathological sections. As the conventional method involves stain intensity and inter- and intra-observer variations, leading to higher misclassification error, a new method is proposed involving feature extraction using the wavelet transform, feature selection using Kullback-Leibler (KL) divergence, and diagnostic classification using a Bayesian approach and Support Vector Machines. [A. Chodorowski et al, 1999] [28] have proposed a method for oral lesion classification using true color images. Five different color representations were studied and their use for color image analysis of mucosal images evaluated. Four common classifiers (Fisher's linear discriminant, Gaussian quadratic, k-nearest neighbor and multilayer perceptron) were chosen for the evaluation of classification performance, and classification accuracy was estimated using resubstitution and 5-fold cross validation methods.
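The wavelet-texture pipeline of [27] (feature extraction by wavelet transform, feature selection by KL divergence) can be sketched as follows. This is an illustrative reimplementation with a plain Haar-style decomposition, not the authors' code; the number of levels and the energy features are assumptions:

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar-style transform: approximation + 3 detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def subband_energies(img, levels=2):
    """Texture feature vector: mean energy of each detail subband per level."""
    feats, approx = [], img.astype(float)
    for _ in range(levels):
        approx, details = haar2d(approx)
        feats.extend(float((d**2).mean()) for d in details)
    return np.array(feats)

def kl_divergence(p, q, eps=1e-10):
    """Discrete KL divergence between two normalised histograms; used to rank
    how well a feature's histogram separates the two tissue classes."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float((p * np.log(p / q)).sum())
```

Features whose per-class histograms have a high KL divergence are the most discriminative and would be kept for the downstream Bayesian or SVM classifier.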
The best classification results were achieved with the HSI color system and a linear discriminant function. [Ji Wan Han et al 2008] [29] have investigated the classification of radicular cysts and odontogenic keratocysts using cascaded Haar classifiers. Three separate classifiers were trained, one for each type of cyst, to process unseen histological images in turn and return a statistical count of the number of each corresponding cyst nuclei type present. The experimental results show the success of these classifiers in locating individual cell nuclei and in classifying the cyst types. [Laine. A.F et al] [30] have proposed mammographic feature enhancement by multiscale analysis. Methods of contrast enhancement are described based on three overcomplete multiscale representations: 1) the dyadic wavelet transform (separable), 2) the φ-transform (non-separable,

nonorthogonal), and 3) the hexagonal wavelet transform (nonseparable). Multiscale edges identified within distinct levels of transform space provide local support for image enhancement. [Sebastian Steger, Marius Erdt, Gianfranco Chiari and Georgios Sakas] [31] have proposed a novel image feature extraction approach that is used to predict oral cancer reoccurrence, along with several numeric image features that characterize tumors and lymph nodes. To extract those features automatically, registration and supervised segmentation of CT/MR images form the base of automated extraction of geometric and texture features of the tumor and lymph nodes; higher accuracy and robustness are achieved compared to today's clinical practice.

The literature survey reveals that cancer imaging is one of the active areas of research today. According to researchers, it is important to detect, segment and classify cancers at an earlier stage, and those working in this area have contributed towards the development of algorithms for cancer detection, segmentation and classification.

III. COMPARISON OF METHODS
Table 1. Comparisons of various cancer detection methods

The research works referred to above are classified into cancer detection methods and cancer classification methods. A comparative study is made between the detection methods and the classification methods separately.

3.1 Comparison of Cancer Detection Techniques
In [13], a neuro-fuzzy model was used to detect brain tumors and achieve a higher count of tumor pixels. The algorithm classifies the image layer by layer; the performance on MRI images was measured in terms of weight vector, execution time and tumor pixels detected and compared with existing results, achieving a higher number of detected tumor pixels than any other method. Ghassan Hamarneh et al [12] applied snakes for semi-automatic segmentation of oral lesions in color images of the human oral cavity. Snakes reduced the need for edge linking compared to traditional edge-based segmentation and led to small segmentation errors, but operator interaction was needed due to the large variability of the objects and images in this application. In [10], the first procedure is

determining the seed regions. The Fuzzy C-means clustering algorithm is used as a segmentation strategy to function as a better classifier and aims to classify data into separable groups according to their characteristics. As the number of clusters increases, more and more information is obtained about the tissue that cannot otherwise be identified by pathologists. Varsha.H. Patil et al [16] proposed an automated system for detecting breast tumors at an earlier stage. The system was online and interactive, hence faster and more accurate than the manual process; it uses a super-resolution technique to display the necessary information for boosting the physician's diagnosis. In [15], the proposed approach provided promising segmentation results; however, several control parameters are not automatically defined and the identification of lesions needs further development. In the future, a new methodology is proposed to extract various parameters characterizing each basin; these parameters will be used to automatically identify suspected lesions. Woonggyu Jung et al [6] used 2D and 3D OCT for early detection and diagnosis of oral premalignancy and malignancy. 3D images provide detailed structural information at any location and may be viewed from any angle desired by the clinician; OCT has the potential to become a powerful method for early oral cancer detection.

3.2 Comparison of Cancer Classification Methods

Muthu Rama Krishnan et al [27] classified oral tumors using Bayesian classification and Support Vector Machines. Several wavelet families were used as inputs to the classifier to determine the significance of each measurement: 48 Gabor wavelet features and 9 wavelet features of the epithelium were extracted, and the significance of each feature was tested using KL divergence. Classification accuracy with wavelet and Gabor-wavelet-based texture features is also compared.
The wavelet family with Gabor texture features leads to 92% average overall classification accuracy for the Support Vector Machine and 76.83% for the Bayesian classifier. Ji Wan Han et al [29] used a Haar cascade classifier for classification. The classifiers were able to find the individual cell nuclei, but there were many false positive detections, which have a negative influence on the overall classification results of the technique; however, the comparison of this technique against that of [30] is based on less information. Landini [32] analysed the epithelial lining architecture in radicular cysts and odontogenic keratocysts, applying image processing algorithms following a traditional cell-isolation-based approach. Ireaneus Anna Rejani.Y et al [9] used a thresholding method for segmentation; the classification of breast cancer is done by an SVM classifier. The method was tested on 75 mammographic images from the mini-MIAS database and achieved a sensitivity of 75%. A. Chodorowski et al [28] proposed a method for oral lesion classification using true color images; classification accuracy was estimated using resubstitution and 5-fold cross validation methods. The best classification results were achieved in the HSI color system, where an accuracy of 94% was achieved using a linear discriminant function. The comparisons of the various techniques are tabulated in Table 1.
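The sensitivity and accuracy figures quoted above come from the binary confusion matrix. A small self-contained helper (illustrative, not from any surveyed paper) that computes them:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from a binary confusion matrix
    (convention assumed here: 1 = malignant, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0  # true negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```

In a screening setting, sensitivity is the figure that matters most, since a false negative (a missed cancer) is far costlier than a false positive that triggers a biopsy.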

IV. DISCUSSION
There are many techniques for detecting cancers. Some researchers have suggested neuro-fuzzy models for classifying cancers, and many methods aim for high accuracy, more features and better enhancement. In [19], at the cost of considerable iteration time, the recognition accuracy is found to be about 50-60% better than that of the existing neuro classifier. In [20], the desired detection strongly depends on the active contour model, hence an adaptive active contour model was used in that work; the accuracy and speed of detection can be further improved by modifying the model and the neural network training approach. These papers focus mainly on accuracy. However, the approaches applied to breast or brain cancers cannot be applied directly to oral cancers, because of the motion artifacts induced by the moving tongue and jaw.

V. CONCLUSIONS

In this paper, various methods to detect cancers are analyzed. The proposed work will identify oral cancer at an earlier stage which helps surgeons to provide medications and other treatments necessary

for the particular cancer type. The proposed work will explore different enhancement techniques to improve the quality of images from capturing devices such as Ultrasonography (US), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), Optical Imaging (OI), Computed Tomography (CT), X-ray and MRI. This will benefit patients suffering from oral cancer.

ACKNOWLEDGMENT
The authors would like to thank Dr. T.P. Swamy, MDS (Orthodontist, Annai Orthodontist Centre, Coimbatore) for his valuable suggestions about the occurrence and diagnosis of oral cancer. The authors extend their sincere thanks to the Management and Staff of Karpagam College of Engineering for their constant support and motivation.

REFERENCES
[1] Arlene Guagliano, "Dental Tribune - Middle East and African Edition", June-August 2011.
[2] National Cancer Institute website, www.cancer.gov
[3] Crispian Scully, Jose V. Bagan, Colin Hopper, Joel B. Epstein, "Oral Cancer: Current and future diagnostic techniques - A review article", American Journal of Dentistry, Vol. 21, No. 4, pp 199-209, August 2008.
[4] R.C. Gonzalez and R.E. Woods, "Digital Image Processing", Addison-Wesley, 1992.
[5] A. Banumathi, J. Praylin Mallika, S. Raju, V. Abhai Kumar, "Automated Diagnosis and Severity Measurement of Cyst in Dental X-ray Images using Neural Network", International Journal of Biomedical Soft Computing and Human Sciences, Vol. 14, No. 2, pp 103-107, 2009.
[6] Woonggyu Jung, Jun Zhang, Jungrae Chung, Petra Wilder-Smith, Matt Brenner, J. Stuart Nelson and Zhongping Chen, "Advances in Oral Cancer Detection using Optical Coherence Tomography", IEEE Journal of Selected Topics in Quantum Electronics, Vol. 11, No. 4, pp 811-817, 2005.
[7] Simon Kent, "Diagnosis of Oral Cancer using Genetic Programming - A Technical Report", CSTR-96-14.
[8] Ranjan Rashmi Paul, Anirban Mukherjee, Pranab K. Dutta, Swapna Banerjee, Mousumi Pal, Jyotirmoy Chatterjee and Keya Chaudhuri, "A novel wavelet neural network based pathological stage detection technique for an oral precancerous condition", Journal of Clinical Pathology, Vol. 58, Issue 9, pp 932-938, 2005.
[9] Y. Ireaneus Anna Rejani, S. Thamarai Selvi, "Early Detection of Breast Cancer using SVM Classifier Technique", International Journal of Computer Science and Engineering, Vol. 1(3), pp 127-130, 2009.
[10] S. Saheb Basha, K. Satya Prasad, "Automatic Detection of Breast Cancer Mass in Mammograms using Morphological Operators and Fuzzy C-means Clustering", Journal of Theoretical and Applied Information Technology, pp 704-708.
[11] Man Kin Derek Ho, "Watershed Segmentation for Medical Confocal Image Analysis Towards in Vivo Early Cancer Detection", Journal of Biological Applications, pp 14-15, 2007.
[12] Ghassan Hamarneh, Artur Chodorowski, Tomas Gustavsson, "Active Contour Models: Application to Oral Lesion Detection in Color Images", IEEE Conference on Systems, Man, and Cybernetics, Nashville, TN, USA, October 2000, pp 2458-2463.
[13] S. Murugavalli, V. Rajamani, "An Improved Implementation of Brain Tumor Detection using Segmentation based on Neuro Fuzzy Technique", Journal of Computer Science, Vol. 3, Issue 11, pp 841-846, 2007.
[14] M.M. Mohamed, T.K. Abdel-Galil, M.M.A. Salama, A. Fenster, K. Rizkalla, D.B. Downey, "Prostate Cancer Diagnosis Based on Gabor Filter Texture Segmentation of Ultrasound Image", IEEE Canadian Conference on Electrical and Computer Engineering (CCECE'03), Montreal, Canada, Vol. 3, pp 1485-1488, May 2003.
[15] H.S. Seshadri, A. Kandasamy, "Detection of Breast Cancer Tumor based on Morphological Watershed Algorithm", International Journal on Graphics, Vision and Image Processing, Vol. 5, Issue 5, 2005.
[16] Varsha H. Patil, Dattatraya S. Bormane, Vaishali S. Pawar, "An Automated Computer Aided Breast Cancer Detection System", International Journal on Graphics, Vision and Image Processing, Vol. 6, Issue 1, pp 69-72, 2006.
[17] Poulami Das, Debnath Bhattacharya, Samir K. Bandyopadhyay, Tai-hoon Kim, "Analysis and Diagnosis of Breast Cancer", International Journal of u- and e- Service, Science and Technology, Vol. 2, September 2009.
[18] Shekar Singh, P.R. Gupta, Manish Kumar Sharma, "Breast Cancer Detection and Classification of Histopathological Images", International Journal of Engineering Science and Technology, Vol. 3, No. 5, pp 4228-4232, May 2011.
[19] G. Vijay Kumar, G.V. Raju, "Biological Early Brain Cancer Detection using Artificial Neural Network", International Journal of Computer Science and Engineering, Vol. 2, No. 8, pp 2721-2725, 2010.
[20] Yan Zhu, Hong, "Computerized Tumor Boundary Detection using Hopfield Network", IEEE Transactions on Medical Imaging, 1997.
[21] L. Jeba Sheela, V. Shanthi, "Image Mining Techniques for Classification and Segmentation of Brain MRI Data", JATIT, 2005-2007.
[22] DSVGK Kaladhar, B. Chandana and P. Bharath Kumar, "Predicting Cancer Survivability using Classification Algorithms", International Journal of Research and Reviews in Computer Science (IJRRCS), Vol. 2, No. 2, pp 340-343, April 2011.
[23] Xiaowei Chen, Xiaobo Zhou and S.T.C. Wong, "Automated segmentation, classification, and tracking of cancer cell nuclei in time-lapse microscopy", IEEE Transactions on Biomedical Engineering, Vol. 53, Issue 4, pp 762-766, 2006.
[24] Yung-nien Sun, Yi-ying Wang, Shao-chien Chang, Li-wha Wu and Sen-tien Tsai, "Color-based tumor segmentation for the automated estimation of oral cancer parameters", Microscopy Research and Technique, Vol. 73, Issue 1, pp 5-13, 2010.
[25] Yung-nien Sun, Yi-ying Wang, Shao-chien Chang, Li-wha Wu and Sen-tien Tsai, "A color-based approach for automated segmentation in tumor tissue classification", Proceedings of the 29th Annual International Conference of the IEEE EMBS, 2007.
[26] Neha Sharma, Nigdi Pradhikaran, Akurdi, "Comparing the performance of data mining techniques for oral cancer prediction", Proceedings of the 2011 International Conference on Communication, Computing & Security (ICCCS'11), ISBN 978-1-4503-0464-1, New York, USA, 2011.
[27] M. Muthu Rama Krishnan, Chandran Chakraborthy, Ajoy Kumar Ray, "Wavelet based texture classification of oral histopathological sections", Microscopy: Science, Technology, Applications and Education, pp 897-906.
[28] A. Chodorowski, U. Mattsson, T. Gustavsson, "Oral Lesion Classification using True Color Images", Proceedings of SPIE, Vol. 3661, pp 1127-1138, 1999.
[29] Ji Wan Han, Toby Breckon, David Randell, Gabriel Landini, "Radicular cysts and odontogenic keratocysts epithelia classification using cascaded Haar classifiers", review article, University of Birmingham.
[30] A.F. Laine, S. Schuler, Jian Fan, W. Huda, "Mammographic feature enhancement by multiscale analysis", IEEE Transactions on Medical Imaging, Vol. 13, Issue 4, pp 725-740, December 1994.
[31] Sebastian Steger, Marius Erdt, Gianfranco Chiari and Georgios Sakas, "Feature Extraction from Medical Images for an Oral Cancer Reoccurrence Prediction Environment", World Congress on Medical Physics and Biomedical Engineering, September 7-12, 2009, Munich, Germany.
[32] G. Landini, "Quantitative analysis of the epithelial lining architecture in radicular cysts and odontogenic keratocysts", Head and Face Medicine, Vol. 2, February 2006.

Authors

K. Anuradha completed her B.Sc (Applied Science) at PSG College of Technology, Coimbatore in 2000 and her MCA at Bharathiar University in 2003. She is currently pursuing her Ph.D (Computer Science) at Karpagam University, Coimbatore. She has published a paper in an international journal and has presented 5 papers at national and international conferences. She has 8 years of teaching experience and 1 year of industrial experience. Her areas of interest include Medical Image Processing, Computer Graphics and Distributed Computer Systems.

K. Sankaranarayanan, born on 15.06.1952, completed his B.E. (Electronics and Communication Engineering) in 1975 and M.E. (Applied Electronics) in 1978 at P.S.G. College of Technology, Coimbatore under the University of Madras. He received his Ph.D. (Biomedical Digital Signal Processing and Medical Expert System) in 1996 from P.S.G. College of Technology, Coimbatore under Bharathiar University. He has more than 32 years of teaching experience and 10 years of research experience; he has guided 10 Ph.D candidates and is presently guiding 18 more. His areas of interest include Digital Signal Processing, Computer Networking, Network Security, Biomedical Electronics, Neural Networks and their applications, and Optoelectronics. He has published more than 50 papers in national and international journals.


WAVEFORM ANALYSIS OF PULSE WAVE DETECTED IN THE FINGERTIP WITH PPG
Subhash Bharati & Girmallappa Gidveer
Jawaharlal Nehru Engineering College, Aurangabad, M.S., India

ABSTRACT
Photoplethysmography (PPG) is a non-invasive biomedical instrumentation technique that measures relative blood volume changes in the blood vessels close to the skin, and it becomes information-rich when combined with pulse wave analysis. The instrument described here was specially designed for the analysis of healthy subjects and of patients with diabetes and arthritis. Many problems arose while designing the PPG instrument, such as the choice of light source, the skin thickness of the fingertip, signal amplification, storage of signals, and measurement of heart rate and respiratory rate. We present the results of an analysis of the PPG signal for healthy subjects and for patients with cardiovascular disorders (diabetes and arthritis). PPG signals of 21 subjects were recorded from the fingertip, and the analysis indicates that the content of the PPG signal differs between healthy subjects and cardiovascular patients. We also investigated heart rate and respiratory rate using the PPG signal. The aim of this study is to analyze the waveform in relation to diabetes, arthritis and health. A steep rise and a notch on the falling slope of the peripheral pulse were observed in some subjects, while a more gradual rise and fall with a very small dicrotic notch were observed in others; the analysis of the falling slope indicates the type of disease.

KEYWORDS: Pulse Wave, Photoplethysmograph, Diabetes, Arthritis.

I. INTRODUCTION

Human skin plays an important role in various physiological processes, including thermoregulation, neural reception, and mechanical and biochemical protection. Heart-generated blood-pressure waves propagate along the skin arteries, locally increasing and decreasing the tissue blood volume with the periodicity of the heartbeat. These dynamic blood volume changes depend on the features of the heart function, the size and elasticity of the blood vessels, and specific neural processes. Direct monitoring of skin blood pulsations may therefore provide useful diagnostic information, especially if realized non-invasively. Optical technologies are well suited for non-invasive monitoring of skin blood pulsation: radiation in the red to near-infrared spectral region penetrates several millimetres under the skin surface. Skin blood pumping and transport dynamics can be monitored at different body locations (e.g. fingertip, earlobe and forehead) with relatively simple and convenient PPG contact probes. With simultaneous data flow from several body locations, the multi-channel PPG technique increases the reliability of clinical measurements; it also allows heartbeat pulse wave propagation to be studied in real time and the vascular blood flow resistance, an important physiological parameter for vascular diagnostics, to be evaluated. In general, each recorded PPG pulse contains useful information that can be obtained by analysis of the recorded PPG signal sequence [1].

The word plethysmograph combines the Greek plethysmos (increase) and graph (to write) [2]. A plethysmograph is an instrument used mainly to determine and register variations in blood volume or blood flow in the body. We used a photoelectric plethysmograph, hence the name photoplethysmograph [3]. Pulse wave analysis helps in the study of diabetes and arthritis, and because the pulse wave is unique to each individual it can also serve as a biometric identifier [4]. Pulse wave analysis also helps in studying large artery damage, an abnormality in cardiovascular disease that is one of the common causes of high mortality; PPG analysis emphasizes the importance of early evaluation of these diseases [5]. Several studies conducted on various groups of the population showed that in PPG the reflectance of light from in vivo tissue is described as a function of wavelength in the range from 420 to 940 nm [6]. The electrical signal from PPG is related to blood volume changes in tissue and provides a means of determining diseases related to the cardiac cycle and changes due to arthritis and diabetes. The suggested PPG method is reliable, simple, low cost and non-invasive, and could become an effective new screening tool for the early detection of the diabetic neuropathic foot [7].

II. LITERATURE SURVEY

2.1. Methods of Photoplethysmography (PPG)

2.1.1. Reflected PPG
The reflection PPG method uses the back-scattered optical signal for analysis of skin blood volume pulsation [8].

2.1.2. Transmitted PPG
In the transmission method, the optical signal changes according to the absorption of blood during pulsation: oxygenated blood transmits red wavelengths more readily, while deoxygenated blood transmits infrared wavelengths. The method employs the principle that oxygenated blood is bright red, whereas reduced (deoxygenated) blood is dark red, so a combination of red and near-infrared LEDs and photosensors can be used to monitor the colour of blood [8].

2.2. Issues Regarding PPG

2.2.1. First Issue
The first issue concerns contact versus non-contact PPG. Both have nearly the same potential; the differences lie in the amplitude and clarity of the received signal. Non-contact PPG signals are not as clear as those from contact-type PPG [2].

2.2.2. Second Issue
The second issue concerns the dynamic range of the detected signal. The detected pulsatile (AC) signal is very small compared to the non-pulsatile (DC) signal, as shown in Figure 1 [2].

2.2.3. Third Issue
The third issue is the ambient light artifact. The detector receives increased ambient light when the probe separates from the tissue bed; close packaging of the finger bed with the detector can reduce this effect [2].

Figure 1. Breakdown of the components of the detected PPG signal

III. MEASUREMENT SYSTEM

3.1. Block Diagram

We used a red LED as the light source. An LED gives brighter light at low power compared with other sources, and using an LED avoids the problem of localized heating [9]. The PPG device incorporates a light source and a detector, which are placed against the surface of the skin [10]. Incident light passes through the tissue and blood and diffuses in the tissue bed. Variation in the intensity of the light arises from changes in the fractional blood volume of arteries and tissues, which alters light absorption; this variation occurs with arterial pulsation. The absorption coefficient of the tissue bed for pulsating blood has its maximum output in the wavelength range 420 to 900 nm [6].

For the detector we used an LDR whose dark resistance (the resistance when the cell is not illuminated) is 10 kΩ. The spectral response of the LDR is good in the visible region, matching our red source. The LDR output is a change in resistance that is directly proportional to the light incident on its surface. A coupling capacitor blocks the DC component present in the signal: as noted above, the detected signal consists of AC and DC components (PPG signal = DC component from blood and tissue + AC component from blood modulation), so the DC component can be filtered out [2].

The waveform of different subjects (males/females) depends on skin thickness; to overcome this we used a variable-gain potentiometer to set the gain as required. Finally, we used a comparator to shift the waveform as per the subject: because of skin thickness and artifacts such as movement, the waveform may be shifted, and the comparator allows the DC level to be adjusted. The final output is the PPG signal. Once the PPG waveforms are obtained, our main aim is to show the effect of diabetes or arthritis on the waveforms, which depends on the dicrotic notch.

Figure 2. Block Diagram

3.2. Physiological Measurement
Photoplethysmographic signal measurements were obtained from 21 subjects (13 male, 8 female), aged 22 to 70. Measurements were performed in a laboratory. Each subject was asked to relax, sit on a chair and rest the forearm on the lab table to help keep the entire hand steady. An operator then attached the finger sensor to the fourth fingertip of the left hand. A comfortable arm position is important in order to keep the finger relatively motionless for a stable and repeatable recording. The length of the recorded signal was 10 seconds.

3.3. Analysis
The analyses of the healthy, diabetic and arthritic subjects are shown in Figures 3, 5 and 7. From Figure 3, we observed the number of peaks; from this peak count we calculated the heart rate and respiration rate:

Heart rate = number of peaks × 12 = 6 × 12 = 72
Respiratory rate = heart rate ÷ 2 = 72 ÷ 2 = 36

3.3.1. Analysis of Healthy Person


Figure 3. Healthy subject status

The analysis of a healthy subject is shown in Figure 3. Figure 4 shows the SPPPG (single-period photoplethysmograph) and animated signal of a healthy person.

Figure 4. SPPPG and animated signal of healthy person

3.3.2. Analysis of Diabetes Person


Figure 5. Diabetes subject status

The analysis of a diabetic subject is shown in Figure 5. Figure 6 shows the SPPPG and animated signal of a diabetic person.

Figure 6. SPPPG and animated signal of diabetes person

3.3.3. Analysis of Arthritis Person

Figure 7. Arthritis subject status

The analysis of an arthritic subject is shown in Figure 7. Figure 8 shows the SPPPG and animated signal of an arthritic person.

Figure 8. SPPPG and animated signal of arthritis person

IV. DISCUSSION AND CONCLUSIONS

The main differences in the PPG between healthy, diabetic and arthritic subjects were observed in the presence or absence of the dicrotic notch and in the trailing-edge slope. As vessels stiffen during the aging process, the reflected wave returns faster, and the summation of waves changes the resultant pulse wave [11]. After the patient data are filled in, the AC component of his/her PPG signal is detected and stored. The analyses of healthy, diabetic and arthritic subjects are illustrated in Figures 3, 5 and 7, which show the health status, heart rate and respiratory rate; the SPPPG and animated signals are illustrated in Figures 4, 6 and 8. The shape of the single PPG pulse detected at the periphery (e.g. the fingertip) can differ significantly from that at the magistral arteries; it depends primarily on the resistance of the vascular system. If the vessel resistance is abnormally high due to diabetes, arthritis or another vascular pathology that narrows the vessels, the velocity of blood flow from the big arteries to the small capillaries decreases. In summary, comparison of the PPG waveforms shows that PPG can provide a simple non-invasive means of studying diabetic and arthritic patients [13].

V. RESULTS

The main objective of this work was to show the relation of the trailing-edge slope to disease: diabetic and arthritic patients show a change in slope compared with healthy subjects, i.e. the pulse shape changes as a function of disease, and this can be observed visually. In diabetic patients the PPG waveform shows a very small dicrotic notch and a reduced slope, whereas in arthritic patients the PPG waveform shows a very sharp slope and no dicrotic notch. A single-period PPG signal comprises a fast rising part, the anacrota, and a subsequent falling part, the catacrota. The anacrota reflects the stretching of the blood vessel walls under the increased blood pressure after each heartbeat, and the catacrota reflects the relaxation of the blood vessel walls between two heartbeats. The catacrota can be variously shaped depending on the vascular condition; it normally contains a so-called predicrotic dip and a secondary peak (notch) caused by elastic reflection in the arterial system. A typical healthy person's SPPPG signal shape is presented in Figure 4. The propagating blood-pressure pulse wave becomes broadened and delayed, and may completely lose its secondary (dicrotic) peak by the time the periphery is reached; in such cases SPPPG signals were bell-shaped without any secondary peak in the catacrota. A typical diabetic person's SPPPG signal shape is shown in Figure 6, and a typical arthritic person's in Figure 8; the sharp peak in the latter is clear evidence of increased blood flow via the damaged vessels.

ACKNOWLEDGEMENT
The authors would like to thank Dr. Mali for pathological support and patient co-ordination. They would also like to thank Dr. S. D. Deshmukh, Principal, and Prof. J. J. Rana, H.O.D., Jawaharlal Nehru Engineering College, Aurangabad (M.S.), India, as well as the reviewers for their valuable suggestions. This work was supported by grants from the Department of Electronics and Telecommunication, Jawaharlal Nehru Engineering College, Aurangabad (M.S.), India.

REFERENCES
[1] Janis Spigulis, "Optical Non-Invasive Monitoring of Skin Blood Pulsations", Applied Optics, Vol. 44, No. 10, April 2005, pp. 1850-1857.
[2] Peck Y. S. Cheng and P. R. Smith, "An Overview of Non-Contact Photoplethysmography", Dept. of Electronic & Electrical Engineering, Loughborough University, LE11 3TU, UK, pp. 57-59.
[3] Leslie Cromwell, Fred J. Weibell, Erich A. Pfeiffer, "Biomedical Instrumentation and Measurement", Second Edition, Prentice-Hall of India Private Limited, New Delhi, pp. 150-163.
[4] M. H. Sherebin, R. Z. Sherebin, "Frequency Analysis of Peripheral Pulse Wave Detected in the Finger with Photoplethysmograph", IEEE Transactions on Biomedical Engineering, Vol. 37, No. 3, March 1999.
[5] K. Meigas, R. Kattai, M. Nigul, "Comparisons of Signal of Pulse Profile as Skin Surface Vibration, PPG and Doppler Spectrogram for Continuous Blood Pressure Monitoring", Proceedings of the International Federation for Medical and Biological Engineering, Vol. 3, 2002, pp. 510-511.
[6] Weijia Cui, Lee E. Ostrander & Bok Y. Lee, "In Vivo Reflectance of Blood and Tissue as a Function of Light Wavelength", IEEE Transactions on Biomedical Engineering, Vol. 37, No. 6, June 1990.
[7] Deon Won Kim, Sung Woo Kim, "Detection of Diabetic Neuropathy Using Blood Volume Ratio of Finger and Toe by PPG", Engineering in Medicine and Biology Society, EMBS 2007, 29th Annual International Conference of the IEEE, pp. 2211-2214.
[8] Vincent P. Crabtree, "Prospective Venox Feasibility Study", Dept. of EEE, Loughborough University.
[9] Joydeep Bhattacharya, Partha Pratim Kangilal, "Analysis & Characterization of Photoplethysmography Signal", IEEE Transactions on Biomedical Engineering, Vol. 48, No. 1, Jan. 2001.
[10] Stephen A. M., H. H. Asada, "Photo-Plethysmograph Finger Nail Sensors for Measuring Finger Forces without Haptic Obstruction", IEEE Transactions on Robotics and Automation, Vol. 17, No. 5, Oct. 2001.
[11] Irina Hlimonenko, Kalju Meigas, Rein Vahisalu, "Waveform Analysis of Peripheral Pulse Wave Detected in the Fingertip with Photoplethysmograph", Tallinn Technical University, Biomedical Engineering Center, Tallinn, Estonia, Measurement Science Review, Vol. 3, Section 2, 2003, pp. 49-52.
[12] Sijung H., P. R. Smith, "Comparison of Pulse Interval in Contact & Noncontact Photoplethysmography", Dept. of EEE, Loughborough University, pp. 39-42.
[13] Jonis Spigulis, Indulis Kukulis, "Potentials of Advanced Photo-Plethysmography Sensing for Non-Invasive Vascular Diagnostics and Early Screening", University of Latvia, Dept. of Physics, Riga, LV-1586, Latvia, pp. 1-5.

APPENDIX

#include <REG51.H>

#define adc_port P1                 /* ADC data port */

sbit rd   = P3^7;                   /* ADC read signal (RD) */
sbit wr   = P3^6;                   /* ADC write signal (WR) */
sbit cs   = P3^4;                   /* ADC chip select (CS) */
sbit intr = P3^5;                   /* ADC end-of-conversion (INTR) */

void conv(void);                    /* Start-of-conversion function */
void read(void);                    /* Read-ADC function */
void delay(unsigned int);
unsigned char serial_read(void);
void serial_send(unsigned char);
void serial_init(void);

unsigned char adc_val;

void main()
{
    intr = 1;
    serial_init();
    while (1) {                     /* Forever loop */
        conv();                     /* Start conversion */
        read();                     /* Read ADC */
        serial_send(adc_val);       /* Send sample over the serial port */
        delay(10);
    }
}

void conv()
{
    cs = 0;                         /* Make CS low */
    wr = 0;                         /* Make WR low */
    wr = 1;                         /* Make WR high */
    cs = 1;                         /* Make CS high */
    while (intr == 1);              /* Wait for INTR to go low */
}

void read()
{
    cs = 0;                         /* Make CS low */
    rd = 0;                         /* Make RD low */
    adc_val = adc_port;             /* Read ADC port */
    rd = 1;                         /* Make RD high */
    cs = 1;                         /* Make CS high */
}

void delay(unsigned int count)
{
    unsigned int i;
    while (count) {
        i = 115;
        while (i > 0)
            i--;
        count--;
    }
}

void serial_init()
{
    TMOD = 0x20;                    /* Timer 1 in mode 2 (8-bit auto-reload) */
    SCON = 0x50;                    /* UART mode 1, receiver enabled */
    TH1  = 0xFD;                    /* 9600 baud with an 11.0592 MHz crystal */
    TL1  = 0xFD;
    TR1  = 1;                       /* Start timer 1 */
}

void serial_send(unsigned char dat)
{
    SBUF = dat;                     /* Load the byte into the UART buffer */
    while (TI == 0);                /* Wait until transmission completes */
    TI = 0;
}

unsigned char serial_read()
{
    while (!RI);                    /* Wait for a received byte */
    RI = 0;
    return SBUF;
}

Authors:
Subhash Bharati has 10 years of academic experience. He worked as a Lecturer at Gangamai College of Engineering, Dhule, M.S., India. At present he is an M.E. scholar at Jawaharlal Nehru Engineering College, Aurangabad, M.S., India. He has published national and international research papers.

Girmallappa Gidveer is a professor in the Electronics and Telecommunication Engineering Department at Jawaharlal Nehru Engineering College, Aurangabad, M.S., India, and has 35 years of academic experience. He has published many research papers in various journals and conferences.


SLOTTED PIFA WITH EDGE FEED FOR WIRELESS APPLICATIONS
T. Anita Jones Mary1, T. Joyce Selva Hephzibah2, C. S. Ravichandran3
1,2 Department of Electronics and Communication, Karunya University, Coimbatore, Tamil Nadu, India
3 S.S.K. Engineering College, Coimbatore, Tamil Nadu, India

ABSTRACT
In order to overcome the narrow bandwidth of the conventional PIFA, a bandwidth-enhancement approach is proposed in this paper. A handset antenna technique that combines a parallel excitation of PIFA and slot radiators is presented. To improve the bandwidth of handset antennas in both the low and high bands, slots can be introduced into radiating patches of different shapes, such as rectangular and H-shaped; comparing the PIFA with a U-slot on each of these patch shapes shows which shape gives the better antenna performance. The presented antenna can cover handheld satellite phones, WLAN, Bluetooth and WiMAX frequency bands. From the simulation results obtained using FEKO, the relative bandwidth of the PIFA with U-slot is 10.59% on the rectangular patch and 24.3% on the H-shaped patch, compared with 6.16% and 16.5% respectively for the conventional PIFA.

KEYWORDS: Handset antenna, Planar Inverted-F Antenna (PIFA), Slot, Bandwidth, Patch

I. INTRODUCTION

Modern wireless communication technologies are in a process of rapid development, and multi-system applications have grown explosively. Owing to this, it has become necessary to design antennas with multiband and wideband characteristics for mobile terminals [8]. In modern mobile handsets, PIFAs are generally used as built-in antennas. The PIFA is being adopted extensively as a handset antenna because of its compact structure, low profile, easy fabrication, low manufacturing cost and easy integration with portable devices; it has reduced backward radiation and enhanced antenna performance. However, a major disadvantage of the PIFA is its narrow impedance bandwidth, so it is desirable to find methods that can enhance it. The antenna should also be as small as possible, to fit in the handset, while retaining acceptable performance, since phones are increasingly adding components and features such as large colour screens, digital cameras, digital music players, digital and analog radio, and multimedia broadcast receivers [1].

For the conventional PIFA, only a single band of operation is possible. To obtain dual-band operation, some part of the top plate has to be removed and another PIFA inserted, which leads to more complexity. Alternatively, a rectangular slot can be introduced into the ground plane of the mobile chassis. A compact design of a practical wideband PIFA with a trapezoidal feed has a bandwidth for S11 < -10 dB ranging from 1.67 GHz to 4.05 GHz, a relative bandwidth of 83%. The antenna height mainly influences the bandwidth, resonant frequencies and return loss [3]. Changes in the width of the planar element can also affect the resonant frequency [7]. A central slot gives good matching in the low bands (17.4%), but at higher bands the antenna still behaves like the reference antenna (8.3%), since the slot is in the cut-off state; this means that the slot does not operate as a parasitic element [5]. The resonant frequency decreases with decreasing short-circuit plate width. Unlike microstrip antennas, which are conventionally made with half-wavelength dimensions, PIFAs are made with just quarter-wavelength dimensions. The resonant frequency and bandwidth characteristics of the antenna can be analyzed easily by determining the site of the feed point at which the minimum reflection coefficient is obtained [6]. Good bandwidth results are also obtained when the ground plane is tuned to the same frequency as the handset antenna; an analysis of how to minimize the length of the feeding line while maintaining as much as possible the bandwidth benefits of open slots is given in [12]. A PIFA-based internal multiband antenna can support the following eight frequency bands: GSM (860-980 MHz), DCS (1710-1880 MHz), PCS (personal communication services, 1880-1990 MHz), UMTS (1.9-2.17 GHz), WiBro (2300-2390 MHz), Bluetooth (2400-2480 MHz), S-DMB (2630-2655 MHz) and WLAN (5000-6000 MHz) [17]. A reduced-height multiband internal antenna has been proposed for wireless personal communication handsets operating in the GSM-900, DCS, PCS, UMTS, WiBro, Bluetooth, S-DMB and WLAN frequency bands; its radiation and return-loss performance is reasonable in all these bands [15]. A prototype that covers the GSM850, GSM900, DCS1800 and PCS1900 bands simultaneously has been fabricated and measured; its measured impedance bandwidth (defined at the -6 dB level) ranges from 0.7 GHz to 0.98 GHz in the low band and from 1.65 GHz to 2.02 GHz in the high band, completely covering the required quad-band operational bandwidth [2]. Four methods are available to enhance the bandwidth of the conventional PIFA: introducing slots, capacitive loading, loading with a high-permittivity dielectric, and adding a chip resistor.
This paper describes a modified U-slot-coupled H-shaped antenna and a U-slot-coupled rectangular patch that reduce the antenna area and terminate the higher-order harmonic components; the design also exhibits excellent efficiency, bandwidth enhancement and linearity. The paper is organized as follows. Section II surveys slots on the ground plane as well as on the radiating patch, and notes the applications in which slotted antennas are used. Section III provides the design parameters for the conventional and slotted PIFA. Section IV presents the low-profile design covering handheld satellite phones, WLAN, Bluetooth and WiMAX frequency bands, and briefly discusses the simulation results obtained with FEKO. Finally, Section V summarizes the work.

II. RELATED WORK

When the rectangular slot is on the ground plane, the enhancement in bandwidth depends mainly on the size of the ground plane [8]. The H-shaped antenna has many advantages over the conventional rectangular microstrip antenna, such as harmonic-mode suppression, bandwidth enhancement and an increase in overall transmitter efficiency; the U-slot on the H-shaped patch is discussed in [1]. The bandwidth and the size of an antenna are generally mutually conflicting properties, that is, improvement of one normally results in degradation of the other [9]; hence compact antennas with improved bandwidth are needed for mobile handsets. By varying the location and size of the slot, the equivalent length of the ground plane can be adjusted to the optimal lengths of the low and high bands, which enhances the bandwidth of PIFA antennas in both bands [18]. The resonant frequency depends inversely on the slot length and feed point, while it increases with increasing slot width and coaxial-probe feed radius [16]. Transmission-line feeding is provided in order to excite the antenna and the RF module; the feeding line follows a long meander-shaped path to reach the antenna feeding pad and the RF module [14]. The operational bandwidth is increased because the electrical length of the ground plane increases when meandering or open-end slots are used on the ground plane, although its physical size is fixed [4].

III. ANTENNA DESIGN

Antenna designs for mobile handsets can be of two types: internal and external. One of the main disadvantages of an external antenna is that it is very close to the user's head, and the radiation is directly incident on the head, making the absorption rate high. An internal antenna can be installed on the side of the PCB opposite the human head, thus avoiding human interference. One of the techniques to obtain multiband behaviour for handset antennas is to create several resonant paths. The specification of an internal antenna depends strongly on the design of the mobile phone, and changes have to be made for each design. Moreover, the internal antenna is more difficult to design than its external counterpart, because the designer must consider characteristics such as feed point, ground position, radiation pattern, etc. At present, PIFAs attract much interest due to their small size and appreciable electrical characteristics compatible with existing specifications, making them a promising candidate for internal antennas. The general PIFA structure is shown in Figure 1.

Figure 1. PIFA structure

Bandwidth enhancement and size reduction are not achieved at the same time, especially for resonant structures such as the PIFA. Dual resonance is obtained by introducing slots parallel to the radiating edge of the patch. The resonant frequency of the antenna can be reduced by decreasing the stub width and also by introducing open slots. To reduce the PIFA size it is necessary to shorten the antenna, but this affects the impedance of the antenna and the radiation resistance becomes reactive; this can be compensated using a capacitive load. The shape and size of the slot play a significant role in improving the bandwidth. In the case of the PIFA, the loop structure formed by the shorting pin and the feeding line is modelled as a shunt inductance, and impedance matching of the PIFA can be achieved by controlling the amount of shunt inductance generated by the loop structure. In the case of a slot antenna on the ground plane, the same loop structure is modelled as a series inductance, in contrast with the shunt inductance of the PIFA.

The design parameters for the PIFA without slot on the rectangular patch and on the H-shaped patch are given in Tables 1 and 2 respectively, and the corresponding simulated antennas are shown in Figures 2 and 3. The feeding voltage is applied at one edge. The resonant frequency of the PIFA can be approximated using equation (1):

L1 + L2 = λ/4     (1)

In Table 1, Lg and Wg are the length and width of the ground plane; Lp and Wp are the length and width of the radiating patch; Lf and Wf are the length and width of the feed; and Ls and Ws are the length and width of the shorting pin. The H-shaped patch is made of three rectangles: Lp1, Wp1 are the length and width of the first rectangle; Lp2, Wp2 of the second; and Lp3, Wp3 of the third. The U-slot dimensions are given in Table 3. The resonant frequency of the PIFA depends mainly on the length of the ground plane and the radiating patch. The simulated results for the PIFA with U-slot on the rectangular and H-shaped patches are shown in Figures 4 and 5 respectively.
Table 1. Values of the design parameters (PIFA on rectangular patch)

Parameter   Value (mm)
Lg          100
Wg          40
Lp          16
Wp          30
Lf          2
Wf          3
Ls          4
Ws          3

Table 2. Values of the design parameters (PIFA on H-shaped patch)

Parameter   Value (mm)
Lg          100
Wg          40
Lp1         16
Wp1         5
Lp2         3
Wp2         5
Lp3         16
Wp3         5
Lf          2
Wf          3
Ls          4
Ws          3

Table 3. Slot dimensions (U-slot, mm)

L1    10
L2    6
W     2

Slot antennas with half-wavelength structures are generally used to operate at the fundamental resonant mode. In the case of a slot antenna on the ground plane, the loop structure mentioned previously can be modelled as a series inductance, in contrast with the shunt inductance of the PIFA. The electrical length of the proposed slot is shorter than half a guided wavelength, lying between a quarter and a half of the guided wavelength.

Figure 2. Conventional PIFA on rectangular patch in CADFEKO

Figure 3. Conventional PIFA on H-shaped patch in CADFEKO


Figure 4. PIFA with U-slot on rectangular patch in CADFEKO

Figure 5. PIFA with U-slot on H-shaped patch in CADFEKO

Its half-wavelength resonance at the desired resonant frequency can be achieved because the impedance of the slot antenna is transformed to the series resonance, which is generated by the loop structure of the feed line. Thus, the amount of shunt and series inductance is dependent on the loop size formed by the feed line and the shorting pin. Hence, the resonant frequencies of the proposed hybrid antenna can be controlled by adjusting the distance between the feed line and the shorting pin. The distance between feed and shorting pin for the desired resonant frequency is chosen to be 1.5 mm.

IV. RESULTS & DISCUSSIONS

The return loss and field distribution of the PIFA without and with slots were computed using FEKO. From the simulated results of Figures 6 and 7, it can be seen that the PIFA with U-slot on the rectangular patch has three resonant modes, at 1.6 GHz, 4.147 GHz and 6.56 GHz. Bandwidth can be calculated using equation (2), where fH, fL and fC are the high, low and centre frequencies respectively:

Bandwidth (%) = (fH - fL) / fC × 100     (2)

The calculated bandwidth for each of these antennas is shown in Table 4. Bandwidth can be improved by varying the slot width and also by varying the distance between the feed and the stub. A slot on the ground plane is more efficient than a slot on the patch. The optimal performance of the low and high bands depends mainly on the slot location and size; moreover, the optimal slot location and size for the low band differ from those for the high band, so to broaden the bandwidths of both bands simultaneously a compromise slot location and size should be selected.

A U-slot adds a capacitive component to the input impedance that compensates for the inductive component of the coaxial probe. A single half-wavelength slot resembles the half-wave dipole in terms of gain and radiation, except for a difference in polarization. The electric field across the slot is maximum at the centre and tapers off towards the edges. Rectangular and circular slots are in general easy to analyze; U-shaped slots are analyzed in this paper. The introduction of an open slot adjacent to the U-slot reduces the frequency, particularly of the higher band, drastically. This is due to the currents flowing at the edge of the U-shaped slot: a capacitively loaded slot reduces the frequency and thus the antenna dimensions drastically. The impedance matching of the dual band can be obtained by positioning the single feed and the shorting pin within the U-shaped slot, and by optimizing the space between the feed and shorting pins. A shorting wall placed at the edge of this PIFA controls the isolation/separation between the two bands. The current follows a larger path due to the slot on the ground plane. Slots are useful not only for antenna design but also for damping undesired modes for EMC purposes. The maximum current distribution of the PIFA is obtained close to the shorting pin and decreases away from it. The slot is weakly excited at 900 MHz compared with the excitation at 1720 and 2000 MHz. The antenna performance is good when the feeding port and slot are arranged on the same side. Fringing fields are the radiating sources of the PIFA. A dual-band PIFA antenna has been used to analyze the bandwidth and efficiency improvement when a slot in the ground plane is introduced: the original dual-band antenna has been enhanced to introduce more frequency bands. Therefore, a multiband antenna with good bandwidths and efficiency response can be obtained simply by adding a slotted ground plane.

PIFA on rectangular patch    PIFA on H-shaped patch

Figure 6. Simulated return loss against frequency for PIFA without slot on rectangular and H-shaped patches

PIFA with U-slot on rectangular patch PIFA with U-slot on H-shaped patch

Figure 7. Simulated return loss against frequency for PIFA with U-slot on rectangular and H-shaped patches

Introducing slots in the ground plane is thus a simple way to achieve a multiband antenna without modifying the PIFA geometry or increasing the handset volume. A dual-band PIFA has been used to analyze the bandwidth and efficiency improvement obtained when a slot is introduced in the ground plane. From Fig. 7, the PIFA with a U-slot provides three resonances, and the separation between the resonances is sensitive to the slot dimensions. The original dual-band antenna has thereby been enhanced to cover more frequency bands. The present technique is also useful because it shortens the feeding transmission line. A multiband antenna with good bandwidth and efficiency can therefore be obtained simply by adding a slotted ground plane.
Table 4. Performances of the proposed broadband antennas

Antenna type                 Bandwidth (%),       Bandwidth (%),
                             rectangular patch    H-shaped patch
PIFA without slot            6.16                 16.5
PIFA with U-slot on patch    10.59                24.3

An S11 below -6 dB is taken here to indicate acceptable impedance matching. From Fig. 6, the PIFA without a slot has a return loss of -5.8 dB on the rectangular patch and -13 dB on the H-shaped patch. From Fig. 7, the PIFA with a U-slot has a return loss of -8 dB on the rectangular patch and -16 dB on the H-shaped patch. Hence the impedance matching of the PIFA with a U-slot on the patch is better than that of the conventional PIFA, and the PIFA with a U-slot on the H-shaped patch has a wider bandwidth than the PIFA with a U-slot on the rectangular patch. Good impedance matching can also be achieved by adjusting the shorting-plate width. More energy is therefore radiated from the PIFA with a U-slot on the patch to the load. Adding open slots in certain locations of the ground plane improves the bandwidth of the antenna, but open slots require a longer feeding line; to minimize the feeding-line length, open slots are avoided in this paper. To generate a slot resonating at the second band, the slot length should be increased. The slot width was varied in the model to achieve a feed-point input impedance of approximately 50 Ω, so that most of the power is transmitted from one end to the other. The slotted PIFA is therefore suitable for wireless communication applications. By feeding the slot at its edge, a triple-resonance response is obtained with the U-slot on the H-shaped patch, whereas a double-resonance response is obtained with the U-slot on the rectangular patch. By instead using a slot in the ground plane as the radiating element, the antenna performance becomes independent of the available height. To provide a very broadband response, the slot should be placed in the centre of the ground plane, which gives the antenna a better quality factor. The slot is located under the antenna projection in order to obtain good coupling between the PIFA and the slot. It can also be used as a parasitic antenna.
This means that the slot can serve as part of a directional antenna while having no direct connection to the receiver or transmitter; it reflects or reradiates the energy that reaches it. When the slot length is close to 0.5λ, the slot acts as an effective parasitic element and a coupling effect between the PIFA and the slot may be obtained. The slot used in Figs. 4 and 5 does not resonate at the higher frequencies (1800-2000 MHz), which is why it only improves bandwidth at the low bands. Ground surface waves produce spurious radiation or couple energy at discontinuities, leading to distortion of the main pattern or unwanted loss of power. All the antennas have the same radiation pattern at low frequencies; adding slots in the ground plane therefore does not modify the radiating structure. It is also observed that the first resonant frequency shifts upwards as the slot width increases, while decreasing the slot width gives a better second resonance at the expense of reducing the upper edge frequency, and hence the bandwidth. The slot is selected in an attempt to bring the two orthogonal resonant modes down to lower frequencies. Independent control of the two operating frequencies is possible with a U-shaped slot.
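The 0.5λ parasitic condition mentioned above can be sketched numerically. The free-space figures below ignore the substrate's effective permittivity, which in practice shortens the physical slot, so they are only a rough upper-bound sketch.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def half_wave_slot_length_mm(f_hz):
    """Physical length of a 0.5-lambda slot in free space, in millimetres."""
    return 0.5 * C / f_hz * 1000.0

# A slot resonant near 900 MHz must be twice as long as one resonant near 1800 MHz,
# so a single fixed-length slot cannot serve both ends of this range.
print(round(half_wave_slot_length_mm(900e6), 1))   # 166.6
print(round(half_wave_slot_length_mm(1800e6), 1))  # 83.3
```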

The proposed antenna shows an interesting dual-band resonant behaviour, with a wide range of achievable ratios between the two resonant frequencies, without changing the external dimensions of the combined slot structure. The slot can be either resonant or non-resonant. If it is resonant, the current along the edges of the slot introduces an additional resonance; if this additional resonance is near the patch resonance, the two together enhance the impedance bandwidth. This enhancement of impedance bandwidth and gain does not affect the broadside radiation characteristics. It is found that both the lower and upper resonance frequencies shift downwards as the slot length increases, and the bandwidth at both frequencies decreases. As the slot width varies, the higher resonance frequency shifts upwards while the lower resonance frequency shifts downwards. It is also found that at the lower resonance frequency the bandwidth depends inversely on the slot width, whereas at the upper resonance frequency it depends directly on the slot width. The resonance frequency decreases as the feed position increases, and is highly dependent on the slot dimensions as well as the feed location. From these discussions, it is noted that the PIFA with a U-slot on the H-shaped patch enhances the bandwidth, whereas the PIFA with a U-slot on the rectangular patch reduces the quality factor.

V. CONCLUSION

From the simulation results, it can be observed that the relative bandwidth of the slotted PIFA is larger than that of the conventional PIFA. The PIFA-slot concept has been shown to be useful for designing multiband handset antennas in which the number of frequency bands is the sum of the bands given by each radiator; moreover, these bands can be controlled independently, which adds an extra degree of design freedom. A novel hybrid PIFA with a U-slot on rectangular and H-shaped patches is proposed for handheld satellite phones, WLAN, Bluetooth and WiMAX applications. Its resonant frequency can be controlled by adjusting the inductance of the loop structure formed by the feed line and shorting pin. Thus, the PIFA for the lower band (1.6 GHz) and the slot for the higher band (11 GHz) operate from the same feeding voltage source. The relative bandwidth of the PIFA with a U-slot on the H-shaped patch is greater than that of the conventional PIFA and of the PIFA with a U-slot on the rectangular patch. Small volume and good electrical characteristics make the slotted PIFA a promising candidate for wireless applications. The main design considerations are:
- Dimensions of the conducting patch: they depend on the design frequency; the conducting patch should be of λ/4 dimension.
- Size of the ground plane: the ground plane strongly affects the bandwidth and should be optimized for the design frequency; the optimized values are 45% for the length and 25% for the width.
- Position of the feed: it plays a major role in impedance matching; the feed should be placed as close to the short as possible for good matching.
- Height of the PIFA above the ground plane: it determines the bandwidth; the greater the height, the greater the bandwidth.
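As a rough sketch of the λ/4 sizing rule listed above, the free-space quarter-wavelength at the 1.6 GHz lower band can be computed as below. This ignores the substrate's effective permittivity and the shorting-pin loading, both of which shrink the physical PIFA length in practice.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_length_mm(f_hz):
    """Quarter of the free-space wavelength at f_hz, in millimetres."""
    return 0.25 * C / f_hz * 1000.0

# Quarter-wave dimension at the 1.6 GHz lower band:
print(round(quarter_wave_length_mm(1.6e9), 1))  # 46.8
```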

REFERENCES
[1]. Amit A. Deshmukh, Priyanka Thakkar, Sneha Lakhani, Mayank Joshi, K. P. Ray, "Formulation of Resonance Frequency for Dual band Slotted H-shaped Microstrip Antenna", International Journal of Computer Applications, Proceedings of ICTSM-2011, Mumbai, India, Feb. 25-27, 2011, pp. 18-26.
[2]. Jawad K. Ali, "A New Dual Band E-shaped Slot Antenna Design for Wireless Applications", Progress In Electromagnetics Research, PIERS Proceedings, Suzhou, China, September 12-16, 2011.
[3]. Jaume Anguera, Iván Sanz, Josep Mumbrú, and Carles Puente, "Multiband Handset Antenna With a Parallel Excitation of PIFA and Slot Radiators", IEEE Transactions on Antennas and Propagation, vol. 58, no. 2, Feb. 2010.
[4]. Kekun Chang, Guan-Yu Chen, Jwo-Shiun Sun, and Y. D. Chen, "PIFA Antenna with Coupling Effect for Bandwidth Enhanced Design and Measurement", Progress In Electromagnetics Research Symposium Proceedings, Xi'an, China, March 22-26, 2010.

[5]. Vinod K. Singh, Zakir Ali, "Design and Comparison of a Rectangular-Slot-Loaded and C-Slot-Loaded Microstrip Patch Antenna", IJCSNS International Journal of Computer Science and Network Security, vol. 10, no. 4, April 2010.
[6]. C. I. Lee, W. C. Lin, Y. T. Lin, and Y. T. Lee, "A Novel H-shaped Slot-coupled Antenna for the Integration of Power Amplifier", Progress In Electromagnetics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010.
[7]. Hussein Attia, Mohammed M. Bait-Suwailam, and O. M. Ramahi, "Enhanced Gain Planar Inverted-F Antenna with Metamaterial Superstrate for UMTS Applications", Progress In Electromagnetics Research, PIERS Proceedings, vol. 6, no. 6, 2010, pp. 585-588.
[8]. Xingyu Zhang and Antti Salo, "Design of novel wideband PIFA for mobile applications", Progress In Electromagnetics Research Symposium, Beijing, China, March 23-27, 2009.
[9]. Sinhyung Jeon, Hyengcheul Choi, and Hyeongdong Kim, "Hybrid planar inverted-F antenna with a T-shaped slot on the ground plane", ETRI Journal, vol. 31, no. 5, October 2009.
[10]. Xingyu Zhang and Anping Zhao, "Enhanced bandwidth PIFA antenna with slot on the ground plane", PIERS Proceedings, Beijing, China, March 23-27, 2009.
[11]. C. Picher, J. Anguera, A. Cabedo, C. Puente, S. Kahng, "Multiband handset antenna using slots on the ground plane", Progress In Electromagnetics Research Proceedings, vol. 7, pp. 95-109, 2009.
[12]. C. Lin and K. L. Wong, "Internal hybrid antenna for multiband operation in the mobile phone," Microw. Opt. Tech. Lett., vol. 50, no. 1, pp. 38-42, Jan. 2008.
[13]. J. A. Ansari, Satya Kesh Dubey, Prabhakar Singh, R. U. Khan, Babau R. Vishvakarma, "Analysis of U-slot loaded patch for dual band operation", International Journal of Microwave and Optical Technology, vol. 3, no. 2, April 2008.
[14]. S. Hong, W. Kim, H. Park, S. Kahng, and J. Choi, "Design of an internal multiresonant monopole antenna for GSM900/DCS1800/USPCS/S-DMB operation," IEEE Trans. Antennas Propag., vol. 56, no. 5, pp. 1437-1443, May 2008.
[15]. S. Kahng, "The rectangular power-bus with slits GA-optimized to damp resonances," IEEE Trans. Antennas Propag., vol. 55, no. 6, pp. 1892-1895, Jun. 2007.
[16]. M. Cabedo, E. Antonino, A. Valero, and M. Ferrando, "The theory of characteristic modes revisited: A contribution to the design of antennas for modern applications," IEEE Antennas Propag. Mag., vol. 49, no. 5, pp. 52-68, Oct. 2007.
[17]. Zhu, X. W., X. X. He, and J. Liu, "Design of planar inverted-F antenna for mobile 3G applications," Mobile Communications, vol. 31, pp. 79-81, 2007.
[18]. B. N. Kim, S. O. Park, Y. S. Yoon, J. K. Oh, K. J. Lee, and G. Y. Koo, "Hexaband planar inverted-F antenna with novel feed structure for wireless terminals," IEEE Antennas Wireless Propag. Lett., vol. 6, pp. 66-68, 2007.
[19]. J. Anguera, I. Sanz, A. Sanz, A. Condes, D. Gala, C. Puente, and J. Soler, "Enhancing the performance of handset antennas by means of groundplane design," presented at the IEEE Int. Workshop on Antenna Technology: Small Antennas and Novel Metamaterials (IWAT), New York, Mar. 2006.
[20]. E. Antonino, C. A. Suárez, M. Cabedo, and M. Ferrando, "Wideband antenna for mobile terminals based on the handset PCB resonance," Microw. Opt. Technol. Lett., vol. 48, no. 7, pp. 1408-1411, Jul. 2006.
[21]. H. Guodong, G. Changqing, "Study on Size Reduction of Quasi-Sierpinski Carpet Microstrip Antenna," Antennas, Propagation & EM Theory, ISAPE '06, 7th International Symposium on, Oct. 2006, pp. 1-4.
[22]. M. A. Saed, "Broadband CPW-Fed Planar Slot Antennas with various tuning stubs", Progress In Electromagnetics Research, PIER 66, pp. 199-212, 2006.

[23]. Dimitrios Peroulis, Kamal Sarabandi, Linda P. B. Katehi, "Design of Reconfigurable Slot Antennas," IEEE Transactions on Antennas and Propagation, vol. 53, no. 2, February 2005.

[24]. C. Di Nallo and A. Faraone, "Multiband internal antenna for mobile phones," Electron. Lett., vol. 41, pp. 514-515, 2005.
[25]. S. Kumar, L. Shafai, and N. Jacob, "Investigation of wide-band microstrip slot antenna," IEEE Trans. Antennas Propag., vol. 52, no. 3, pp. 865-872, Mar. 2004.
[26]. Hossa, R., A. Byndas, and M. E. Bialkowski, "Improvement of compact internal antenna performance by incorporating open-end slots in ground plane," IEEE Microwave and Wireless Components Letters, vol. 14, pp. 283-285, 2004.
[27]. Wang, F., Z. Du, Q. Wang, and K. Gong, "Enhanced-bandwidth PIFA with T-shaped ground plane," Electronics Letters, vol. 40, pp. 1504-1505, 2004.
[28]. Azadegan, R., Sarabandi, K., "A novel approach for miniaturization of slot antennas," IEEE Transactions on Antennas and Propagation, vol. 51, no. 3, March 2003, pp. 421-429.
[29]. Salonen, P., "Effect of groundplane size on radiation efficiency and bandwidth of dual-band U-PIFA," IEEE Antennas and Propagation Society International Symposium, vol. 3, pp. 70-73, June 2003.
[30]. M. F. Abedin and M. Ali, "Modifying the ground plane and its effect on planar inverted-F antennas (PIFAs) for mobile phone handsets," IEEE Antennas Wireless Propag. Lett., vol. 2, pp. 226-229, 2003.
[31]. K. L. Wong, Planar Antennas for Wireless Communications. New York: John Wiley & Sons Inc., 2003.
[32]. Vainikainen, P., J. Ollikainen, O. Kivekäs, and I. Kelander, "Resonator-based analysis of the combination of mobile handset antenna and chassis," IEEE Transactions on Antennas and Propagation, vol. 50, no. 10, pp. 1433-1444, October 2002.
[33]. Pekka Salonen, Mikko Keskilammi, and Markku Kivikoski, "Single-Feed Dual-Band Planar Inverted-F Antenna with U-Shaped Slot," IEEE Transactions on Antennas and Propagation, vol. 48, no. 8, August 2000.
[34]. C. R. Rowell and R. D. Murch, "A compact PIFA suitable for dual frequency 900/1800-MHz operation," IEEE Trans. Antennas Propag., vol. 46, no. 4, pp. 596-598, Apr. 1998.
[35]. Corbett R. Rowell and R. D. Murch, "A Capacitively Loaded PIFA for Compact Mobile Telephone Handsets", IEEE Transactions on Antennas and Propagation, 1996.

Authors
T. Anita Jones Mary was born in India on 8th May 1976. She received the B.E. degree in Electronics and Communication Engineering from Madurai Kamaraj University in 1998 and the M.E. degree in Communication Systems from Madurai Kamaraj University in 2000. She is currently pursuing the Ph.D. degree on the design of MIMO antennas for wireless applications.

T. Joyce Selva Hephzibah was born in Tamilnadu, India on 21st July 1987. She received the B.E. degree in Electronics and Communication from Anna University, Chennai in 2008. Since July 2011, she has been an M.Tech student in Communication Systems at Karunya University, Coimbatore. Her research interests include the design of PIFAs with slots for wireless applications.

C. S. Ravichandran was born in India on 16th March 1967. He received the B.E. degree in Electrical and Electronics Engineering from Pondicherry University, Pondicherry in 1989, the M.E. degree in Power Systems from Bharathiar University, Coimbatore in 1993, and the Ph.D. degree in Control Systems from Bharathiar University. He has published many technical papers in international and national journals and at national and international conferences. He has been approved as a research guide by Anna University, Coimbatore, and is currently guiding 12 Ph.D. scholars.


A NOVEL DESIGN OF MULTIBAND SQUARE PATCH ANTENNA EMBEDDED WITH GASKET FRACTAL SLOT FOR WLAN & WIMAX COMMUNICATION
Amit K. Panda1 and Asit K. Panda2

1Department of ECE, Guru Ghashi Das Central University, Bilashpur, India
2Department of ECE, National Institute of Science & Technology, Berhampur, India

ABSTRACT
A compact multiband patch antenna embedded with gasket fractal slots is proposed in this paper. The structure consists of a square patch element with modified gasket slots on both radiating edge sides. The antenna is fed by a 50 Ω co-planar waveguide (CPW) to make the structure purely planar. The investigation covers the 1-7.5 GHz range using the CST MWS electromagnetic simulator. Three resonant frequencies appear, at 2.45 GHz, 3.6 GHz and 5.6 GHz. From the return loss plot it is seen that the antenna covers the IEEE Bluetooth/WLAN (2.4-2.484 GHz), WiMAX (3.4-3.69 GHz) and WiFi (5.1-5.825 GHz) frequency bands at the -10 dB return loss level, and nearly omni-directional radiation patterns are also achieved. The peak realized antenna gain is around 5 dB in all distinct bands.

KEYWORDS: Self-similar, fractal, CPW, Sierpinski gasket, WiMAX, WLAN.

I. INTRODUCTION

Due to the rapid growth of wireless communication technology, the need to cover multiple applications with a single antenna element has driven extensive research. Traditionally, each antenna operates in a single or dual frequency band, so a different antenna is needed for each application, which creates a space and placement problem. To overcome this, a multiband antenna can be used, in which a single antenna operates in many frequency bands. This has initiated antenna research in various directions, one of which is the use of fractal-shaped antenna elements [1]. Fractal antennas [2] have attractive features such as small size and multiband characteristics. Most fractal objects are self-similar, with different scaling and space-filling geometrical properties [3-5]. The fractal shape is obtained by applying an infinite number of iterations of the multiple reduction copy machine (MRCM) algorithm [6]. Many different configurations have been used to design multiband antennas, such as the Sierpinski gasket [7], multiple rings [8], hexagonal fractals [9] and circular fractal slot antennas [10]. Electric circuits are of three types: 1D (e.g. transmission line), 2D (e.g. microstrip line) and 3D (e.g. waveguides). Here the patch and the ground are placed in the same plane (co-planar), which makes the design a purely planar structure. In particular, great interest in coplanar waveguide (CPW) fed antennas has been seen in the literature because of their many attractive features, such as the simple structure of a single metallic layer, the absence of soldering points, and easy integration with active devices or MMICs [11]. In this paper we present a square patch antenna embedded with a modified Sierpinski gasket slot, which exhibits a large reduction in size along with multiband operation. The antenna is fed by a CPW-like matching section and suits WLAN, WiMAX and WiFi applications.
Vol. 3, Issue 1, pp. 111-116

The design procedure of the proposed geometry is depicted in Fig. 1. The proposed fractal geometry is constructed from a square patch element and a gasket fractal slot. The initial geometry of the antenna, at the 0th iteration, is a plain square patch element. At the 1st iteration the square patch is etched with a gasket fractal slot on both radiating edge sides of the patch. Similarly, subsequent iterations are obtained at reduced scale, with the overlapping portion subtracted from the inner square metallization. This process of removing the central portion continues for the nth iteration to generate the proposed fractal structure. The iteration order was limited to the 2nd iteration because of fabrication tolerance and complexity. The scale factor determines the height of each sub-gasket and is given as:

δ = hn / hn+1 ≈ 2    (1)

(a)

(b)

(c)

Figure 1. Design procedure of the proposed fractal structure geometry up to the 2nd iteration

II. ANTENNA CONFIGURATION

The proposed multiband antenna prototype is illustrated in Fig. 2. The overall dimensions of the antenna are 52 mm × 62 mm. A 50 Ω SMA connector feeds the antenna at the CPW line. The design starts from a single-element basic square microstrip patch antenna operating at 1.8 GHz, modelled in an electromagnetic solver (CST MWS); the dimensions were determined from the standard patch antenna design equations. The antenna was designed up to the 2nd iteration. It was designed on an FR-4 substrate with thickness 1.59 mm (1/16"), εr = 4.21 and tan δ = 0.019, with copper clad as the radiating element. The structure is fed using a CPW feed. The width (W) of the ground plane on either side of the CPW central strip is 19.5 mm and its length (Lg) is 28.5 mm. The spacing (g) between the ground plane and the central conductor is 0.5 mm, and the separation (h) between the ground plane and the patch is 1 mm.

Figure 2. Square patch antenna embedded with gasket slots

The proposed multiband antenna structure was constructed using square and gasket slots on both radiating edges of the square patch. The gasket slot is generated starting from a gasket patch element: the central inverted gasket, with vertices located at the midpoints of the sides of the original gasket, is removed. This process is repeated for the three remaining sections until the 2nd iteration in this particular case, so three scaled versions of the Sierpinski gasket appear on the antenna. The scale factor among the three gaskets is δ = 2: the first sub-gasket is a 3rd-order Sierpinski of height 13 mm, the second a 2nd-order Sierpinski of height 26 mm, and the third a 1st-order Sierpinski of height 52 mm.
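The scale-factor relation of equation (1), δ = hn/hn+1 ≈ 2, reproduces the three sub-gasket heights quoted above (52 mm, 26 mm, 13 mm); a minimal sketch:

```python
def sub_gasket_heights(h0_mm, n_levels, delta=2.0):
    """Heights of successive sub-gaskets, each smaller than the last by the scale factor delta."""
    return [h0_mm / delta**n for n in range(n_levels)]

print(sub_gasket_heights(52.0, 3))  # [52.0, 26.0, 13.0]
```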


III. RESULTS AND DISCUSSION

3.1 Return Loss Characteristics
The proposed antenna was simulated and analyzed using CST Microwave Studio (CST MWS) between 1 and 10 GHz. From the return loss plot in Fig. 3, the antenna is matched at 3 resonant frequencies, 2.4 GHz, 3.61 GHz and 5.58 GHz, for the 2nd iteration, and the return losses in all 3 bands are quite good. The 1st impedance bandwidth, at 2.4 GHz, is 105 MHz, covering the IEEE Bluetooth/WLAN band (2.2-2.56 GHz) with a return loss of -35 dB; the 2nd impedance bandwidth, at 3.6 GHz, is 112 MHz, covering the WiMAX band (3.35-3.50 GHz) with a return loss of -22 dB; and the 3rd impedance bandwidth is 150 MHz, covering the WiFi band (5.1-5.825 GHz).

Figure 3. Simulated return loss
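A simple interval check confirms how the three simulated resonances sit inside the application bands named in this paper (band limits and resonance values are those quoted above):

```python
# Target bands in GHz as quoted in the paper, and the 2nd-iteration resonances.
bands = {
    "Bluetooth/WLAN": (2.4, 2.484),
    "WiMAX": (3.4, 3.69),
    "WiFi": (5.1, 5.825),
}
resonances = [2.4, 3.61, 5.58]

for f in resonances:
    hits = [name for name, (lo, hi) in bands.items() if lo <= f <= hi]
    print(f, "GHz ->", hits)  # each resonance lands in exactly one target band
```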

3.2 Effect of Width of the CPW Feed (W1)
A parametric study was carried out by changing the width of the CPW feed to find a better matching section. Fig. 4 depicts the simulated return loss curves for different CPW feed widths (W1 = 2.7, 2.925, 3.15, 3.375 and 3.6 mm). As W1 increases, the impedance matching of the antenna improves, indicating that better matching can be obtained by optimizing the width of the central conductor; the optimum is W1 = 3.6 mm. The resonant frequencies shift significantly for the five different values of W1: when W1 is narrowed, the resonant frequency decreases dramatically, changing the operating bandwidth range of the antenna, and when W1 is increased, the resonant frequency as well as S11 increase significantly. The return losses in all five cases are acceptable and the bandwidths are wide.

Figure 4. Simulated return loss curves for different CPW feed widths

From the return loss plot it was seen that the optimized result for the antenna suited for IEEE Bluetooth/WLAN (2.4-2.484 GHz) & WIMAX (3.4-3.69 GHz) applications was attained at W1=3.6mm.

3.3 Gain vs. Frequency
The simulated peak gain of the proposed antenna is plotted in Fig. 5. A useful gain level is maintained throughout the band: the gain is around 4.2 dB in the lower frequency band and around 6 dB in the higher band.


Figure 5. Gain versus frequency

3.4 Current Distribution & Radiation Pattern
The current density and radiation patterns were analyzed using CST Microwave Studio. A series of simulations showed that the magnetic current at the central gap and the electric current on the patch region around the gap are crucial for the resonance and radiation characteristics of such an antenna. The simulated current density on the surface of the antenna at 2.4 GHz, 3.61 GHz and 5.6 GHz is shown in Fig. 6. It was observed that as the number of iterations increases, multiple resonant frequencies are obtained, but the radiated power from the antenna deteriorates.

(a)

(b)

(c)
Figure 6. Current density distribution on the antenna surface at (a) 2.4 GHz, (b) 3.61 GHz, (c) 5.6 GHz

The simulated normalized radiation patterns at all distinct frequencies for φ = 0° and 90° are shown in Fig. 7. The H-plane patterns are reasonable over the entire operating bandwidth, and the radiation patterns are consistent across the different resonant frequencies for the 2nd iteration.

(a) E-θ for φ = 90° at 2.4 GHz

(b) E-θ for φ = 90° at 3.61 GHz

(c) E-θ for φ = 90° at 5.6 GHz

Figure 7. Simulated normalized radiation patterns of the proposed patch antenna for φ = 90° at (a) 2.4 GHz, (b) 3.61 GHz, (c) 5.6 GHz

An interesting phenomenon is that the patterns show more undulations as the resonant frequency increases. The patterns for the lower bands are more omni-directional and the cross-polar component is very small, but side lobes appear at the higher frequencies.

IV. CONCLUSION

A novel compact CPW-fed square patch antenna embedded with a gasket fractal slot was designed and simulated for multiband operation. The simulated results indicate that the antenna exhibits good return loss, and the antenna gain is above 5 dB at the designed frequencies, making it suitable for IEEE Bluetooth/WLAN (2.4-2.484 GHz), WiMAX (3.4-3.69 GHz) and WiFi (5.1-5.825 GHz) wireless communication applications. The design was implemented using CST MWS electromagnetic simulation tools. The self-similarity of the structure at the 2nd iteration leads to multiband operation of the antenna. The key parameters that influence antenna performance have been analyzed to gain insight into antenna operation. Hence good antenna performance over the operating frequencies across the whole band was obtained.


ACKNOWLEDGMENT
The authors would like to thank, CST Company, India for their support in CST EM tool. The authors are grateful to the anonymous reviewers for their constructive & helpful comments & suggestions.

REFERENCES
[1] B. B. Mandelbrot, The Fractal Geometry of Nature, Freeman, 1983.
[2] H. Jones, et al., "Fractals and chaos," A. J. Crilly, R. A. Earnshaw, and H. Jones, Eds., Springer-Verlag, New York, 1990.
[3] C. Puente, J. Romeu, R. Pous, A. Cardama, "On the behavior of the Sierpinski multiband antenna," IEEE Trans. Antennas Propagat., vol. 46, pp. 517-524, Apr. 1998.
[4] Werner, D. H., Mittra, R., "Frontiers in Electromagnetics," Wiley-IEEE Press, New York, 1999.
[5] D. H. Werner, S. Ganguly, "An Overview of Fractal Antenna Engineering Research," IEEE Antennas and Propagation Magazine, vol. 45, pp. 38-57, 2003.
[6] H. O. Peitgen, et al., "Chaos and Fractals," A. J. Crilly, R. A. Earnshaw, and H. Jones, Eds., Springer-Verlag, New York, 1990.
[7] C. Borja and J. Romeu, "Multiband Sierpinski fractal patch antenna," Antennas and Propagation Society International Symposium, IEEE, vol. 3, pp. 1708-1711, July 2000.
[8] C. T. P. Song, Peter S. Hall and H. Ghafouri-Shiraz, "Multiband multiple ring monopole antennas," IEEE Transactions on Antennas and Propagation, vol. 51, pp. 722-729, Apr. 2003.
[9] Kan Philip Tang and Parveen Wahid, "Hexagonal Fractal Multiband Antenna," Antennas and Propagation Society International Symposium, IEEE, vol. 4, pp. 554-557, June 2002.
[10] Ji-Chyun Liu, Der-Chyuan Lou, Chin-Yen Liu, Ching-Yang Wu and Tai-Wei Soong, "Precise Determinations of the CPW-Fed Circular Fractal Slot Antenna," Microwave and Optical Technology Letters, vol. 48, pp. 1586-1592, Aug. 2006.
[11] Ip, K. H. Y., Kan, T. M. Y., and Eleftheriades, G. V., "A single-layer CPW-fed active patch antenna," IEEE Microw. Guided Wave Lett., 2000, 10, pp. 64-66.

Authors Biography
Amit K. Panda received his M.Sc. in Electronics from Berhampur University and the Master of Technology degree in Electronics Design Technology from Tezpur Central University. He is currently working as Assistant Professor in the Department of Electronics and Communication Engineering, Guru Ghasidas Central University, Bilaspur. His main research interests are FPGA-based system design, VLSI digital design, network implementation on FPGA, RF and microwave control devices, and semiconductor components.

Asit K. Panda is a faculty member in the ECE department at the National Institute of Science & Technology (NIST), Berhampur, India. He obtained his M.Tech in ECE from NIST under Biju Patnaik Technical University (BPUT) in 2009 and completed his B.Tech in ECE in 2003 from NIST. He is currently pursuing his Ph.D. work on metamaterials. His current areas of interest are DNG materials, metamaterial antennas, invisible cloaking, multiband and wideband patch antennas, and fractal antennas. He has presented 8 international conference papers.


IMPLEMENTATION OF THE TRIPLE-DES BLOCK CIPHER USING VHDL
Sai Praveen Venigalla, M. Nagesh Babu, Srinivas Boddu, G. Santhi Swaroop Vemana
Department of Electronics & Communications Engineering, KL University, Vijayawada, A.P., India

ABSTRACT
This paper presents FPGA implementations of DES and Triple-DES with improved security against power analysis attacks. The proposed designs use Boolean masking, a previously introduced technique for protecting smart card implementations from these attacks. Triple-DES was the answer to many of the shortcomings of DES: since it is based on the DES algorithm, existing software is very easy to modify to use Triple-DES, and it has the advantages of proven reliability and a longer key length that eliminates many of the shortcut attacks that can be used to reduce the time it takes to break DES. However, even this more powerful version of DES may not be strong enough to protect data for very much longer; the DES algorithm itself has become obsolete and is in need of replacement. DES is a symmetric algorithm that encrypts data in 64-bit blocks with a 56-bit key.

KEYWORDS: DES, Encryption, Decryption, Cryptography, Simulation, Synthesis, TDES, Cipher.

I. INTRODUCTION

DES (the Data Encryption Standard) is a symmetric block cipher developed by IBM. The algorithm uses a 56-bit key to encipher/decipher a 64-bit block of data. The key is always presented as a 64-bit block, every 8th bit of which is ignored. However, it is usual to set each 8th bit so that each group of 8 bits has an odd number of bits set to 1 [1]. The algorithm is best suited to implementation in hardware, probably to discourage implementations in software, which tend to be slow by comparison. However, modern computers are so fast that satisfactory software implementations are readily available. DES is the most widely used symmetric algorithm in the world, despite claims that its key length is too short. Ever since DES was first announced, controversy has raged about whether 56 bits is long enough to guarantee security. The key-length argument goes like this: assuming that the only feasible attack on DES is to try each key in turn until the right one is found, then 1,000,000 machines each capable of testing 1,000,000 keys per second would find (on average) one key every 12 hours. Most reasonable people might find this rather comforting and a good measure of the strength of the algorithm [9]. Those who consider the exhaustive key-search attack to be a real possibility (and, to be fair, the technology to perform such a search is becoming a reality) can overcome the problem by using double- or triple-length keys. In fact, double-length keys have been recommended for the financial industry for many years. Section II discusses Triple DES, Section III presents the Triple DES algorithm, Section IV presents the simulation results, Section V gives the conclusions, and Section VI describes the scope and future development.
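As a quick sanity check, the exhaustive key-search arithmetic quoted above can be reproduced in a few lines of Python (a sketch; the 12-hour figure in the text is an order-of-magnitude estimate, and the computed values come out slightly lower):

```python
# Exhaustive key search over DES's 56-bit keyspace, as described above:
# 1,000,000 machines each testing 1,000,000 keys per second.
KEYSPACE = 2 ** 56
RATE = 1_000_000 * 1_000_000            # total keys tested per second

worst_case_h = KEYSPACE / RATE / 3600   # time to sweep the entire keyspace
average_h = worst_case_h / 2            # on average the key is found halfway

print(f"worst case: {worst_case_h:.1f} h, average: {average_h:.1f} h")
```

This gives roughly 20 hours worst case and 10 hours on average, i.e. the same order of magnitude as the 12-hour figure quoted in the text.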

II. TRIPLE DES

Use of multiple-length keys leads us to the Triple-DES algorithm, in which DES is applied three times. Triple DES is simply another mode of DES operation. It takes three 64-bit keys, for an overall key length of 192 bits. In Private Encryption, you simply type in the entire 192-bit (24-character) key rather than entering each of the three keys individually [4]. The Triple DES DLL then breaks the user-provided key into three subkeys, padding the keys if necessary so they are each 64 bits long. The procedure for encryption is exactly the same as regular DES, but it is repeated three times; hence the name Triple DES. The data is encrypted with the first key, decrypted with the second key, and finally encrypted again with the third key. Triple DES is also known as 3DES. Consequently, Triple DES runs three times slower than standard DES, but is much more secure if used properly. The procedure for decryption is the same as the procedure for encryption, except that it is executed in reverse [2]. Like DES, data is encrypted and decrypted in 64-bit chunks. Unfortunately, there are some weak keys that one should be aware of: if all three keys, the first and second keys, or the second and third keys are the same, then the encryption procedure is essentially the same as standard DES. This situation is to be avoided because it is the same as using a really slow version of regular DES [4]. Note that although the input key for DES is 64 bits long, the actual key used by DES is only 56 bits in length. The least significant (right-most) bit in each byte is a parity bit, and should be set so that there is always an odd number of 1s in every byte. These parity bits are ignored, so only the seven most significant bits of each byte are used, resulting in a key length of 56 bits. This means that the effective key strength of Triple DES is actually 168 bits, because each of the three keys contains 8 parity bits that are not used during the encryption process.
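The parity convention described above is easy to check in code. The sketch below (not from the paper; the helper names are illustrative) forces odd parity on each byte of a 64-bit key and confirms that only 56 bits remain as actual key material:

```python
def has_odd_parity(byte: int) -> bool:
    """True if the byte contains an odd number of 1 bits."""
    return bin(byte).count("1") % 2 == 1

def set_odd_parity(byte: int) -> int:
    """Force the least significant (parity) bit so the byte has odd parity."""
    cleared = byte & 0xFE               # clear the parity (LSB) position
    return cleared | (0 if has_odd_parity(cleared) else 1)

key = bytes(set_odd_parity(b) for b in b"\x12\x34\x56\x78\x9a\xbc\xde\xf0")
assert all(has_odd_parity(b) for b in key)
# 8 bytes x 7 key bits each = 56 effective key bits
print(8 * len(key) - len(key))          # 56
```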

Figure 1. Triple DES-Block Diagram

If we consider a triple-length key to consist of three 56-bit keys K1, K2, K3, then encryption is as follows:
• Encrypt with K1
• Decrypt with K2
• Encrypt with K3
Decryption is the reverse process:
• Decrypt with K3
• Encrypt with K2
• Decrypt with K1
Setting K3 equal to K1 in these processes gives us a double-length key K1, K2. Setting K1, K2 and K3 all equal to K has the same effect as using a single-length (56-bit) key. Thus it is possible for a system using triple-DES to be compatible with a system using single-DES. DES operates on a 64-bit block of plaintext [3]. After an initial permutation, the block is broken into a right half and a left half, each 32 bits long. Then there are 16 rounds of identical operations, called Function f, in which the data are combined with the key. After the sixteenth round, the right and left halves are joined, and a final permutation (the inverse of the initial permutation) finishes off the algorithm. In each round the key bits are shifted, and then 48 bits are selected from the 56 bits of the key. The right half of the data is expanded to 48 bits via an expansion permutation, combined with 48 bits of a shifted and permuted key via an XOR, sent through 8 S-boxes producing 32 new bits, and permuted again.
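The encrypt-decrypt-encrypt (EDE) composition above can be sketched in Python. A trivial XOR stands in for the DES block function here (a placeholder only, NOT real DES), which is enough to show the EDE structure and the single-DES fallback when all three keys are equal:

```python
def toy_encrypt(block: int, key: int) -> int:
    """Placeholder block cipher (NOT DES): XOR is its own inverse."""
    return block ^ key

def toy_decrypt(block: int, key: int) -> int:
    return block ^ key

def ede_encrypt(block: int, k1: int, k2: int, k3: int) -> int:
    """Encrypt with K1, decrypt with K2, encrypt with K3."""
    return toy_encrypt(toy_decrypt(toy_encrypt(block, k1), k2), k3)

def ede_decrypt(block: int, k1: int, k2: int, k3: int) -> int:
    """Decrypt with K3, encrypt with K2, decrypt with K1 (the reverse process)."""
    return toy_decrypt(toy_encrypt(toy_decrypt(block, k3), k2), k1)

m, k1, k2, k3 = 0xDEADBEEF, 0x1111, 0x2222, 0x3333
assert ede_decrypt(ede_encrypt(m, k1, k2, k3), k1, k2, k3) == m
# Setting K1 = K2 = K3 collapses Triple DES to single DES, as the text notes:
assert ede_encrypt(m, k1, k1, k1) == toy_encrypt(m, k1)
```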

These four operations make up Function f. The output of Function f is then combined with the left half via another XOR. The results of these operations become the new right half; the old right half becomes the new left half. These operations are repeated sixteen times, making 16 rounds of DES.

Figure 2. Enciphering computation


III. ALGORITHM FOR TDES

3.1. Encryption
Step 1: K1, K2, K3 are the keys supplied to the key expander along with the selection function.
Step 2: If the selection function is active, i.e. '1', then the encryption process is activated with key K1, and this encryption output is given to the input of the decryption stage, i.e. selection function '0', with key K2.
Step 3: The decryption output is given to the input of the encryption stage, i.e. selection function '1', with key K3.

3.2. Decryption
Step 4: Decryption is the reverse process of encryption.


Figure 3. TDES Algorithm

Figure 4. Single Round of DES

3.3. Initial permutation (IP)

Figure 5. Initial permutation

Table 1. Initial permutation IP
58 50 42 34 26 18 10 2
60 52 44 36 28 20 12 4
62 54 46 38 30 22 14 6
64 56 48 40 32 24 16 8
57 49 41 33 25 17  9 1
59 51 43 35 27 19 11 3
61 53 45 37 29 21 13 5
63 55 47 39 31 23 15 7

Table 1 specifies the input permutation on a 64-bit block. The meaning is as follows: the first bit of the output is taken from the 58th bit of the input; the second bit from the 50th bit, and so on, with the last bit of the output taken from the 7th bit of the input. The initial permutation occurs before round one; it transposes the input block as described in Table 1. This table, like all the other tables in this paper, should be read left to right, top to bottom. For example, the initial permutation moves bit 58 of the plaintext to bit position 1, bit 50 to bit position 2, and so forth. The initial permutation and the corresponding final permutation do not affect DES's security.

3.4. Final permutation (IP-1)

Figure 6. Final permutation

Table 2. Final permutation IP-1
40 8 48 16 56 24 64 32
39 7 47 15 55 23 63 31
38 6 46 14 54 22 62 30
37 5 45 13 53 21 61 29
36 4 44 12 52 20 60 28
35 3 43 11 51 19 59 27
34 2 42 10 50 18 58 26
33 1 41  9 49 17 57 25

The final permutation is the inverse of the initial permutation; the table is interpreted similarly. This is shown in Table 2.
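Both permutations can be applied with a single table-driven helper. The sketch below uses the published IP and IP⁻¹ tables (the `permute` helper name is illustrative) and confirms that the final permutation undoes the initial one:

```python
# Standard DES initial (IP) and final (IP^-1) permutation tables, 1-indexed.
IP = [58, 50, 42, 34, 26, 18, 10, 2,  60, 52, 44, 36, 28, 20, 12, 4,
      62, 54, 46, 38, 30, 22, 14, 6,  64, 56, 48, 40, 32, 24, 16, 8,
      57, 49, 41, 33, 25, 17,  9, 1,  59, 51, 43, 35, 27, 19, 11, 3,
      61, 53, 45, 37, 29, 21, 13, 5,  63, 55, 47, 39, 31, 23, 15, 7]

FP = [40, 8, 48, 16, 56, 24, 64, 32,  39, 7, 47, 15, 55, 23, 63, 31,
      38, 6, 46, 14, 54, 22, 62, 30,  37, 5, 45, 13, 53, 21, 61, 29,
      36, 4, 44, 12, 52, 20, 60, 28,  35, 3, 43, 11, 51, 19, 59, 27,
      34, 2, 42, 10, 50, 18, 58, 26,  33, 1, 41,  9, 49, 17, 57, 25]

def permute(bits, table):
    """Output bit i is taken from input bit table[i] (1-indexed, as in Table 1)."""
    return [bits[pos - 1] for pos in table]

block = [i % 2 for i in range(64)]               # arbitrary 64-bit test pattern
assert permute(block, IP)[0] == block[57]        # output bit 1 = input bit 58
assert permute(permute(block, IP), FP) == block  # IP followed by IP^-1 is identity
```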

3.5. Expansion permutation (E)

Figure 7. Expansion permutation

Table 3. Expansion permutation E
32  1  2  3  4  5
 4  5  6  7  8  9
 8  9 10 11 12 13
12 13 14 15 16 17
16 17 18 19 20 21
20 21 22 23 24 25
24 25 26 27 28 29
28 29 30 31 32  1

The expansion permutation is interpreted as for the initial and final permutations. Note that some bits from the input are duplicated at the output; e.g. the fifth bit of the input is duplicated into both the sixth and eighth bits of the output. Thus, the 32-bit half-block is expanded to 48 bits. This operation expands the right half of the data, Ri, from 32 bits to 48 bits. Because it changes the order of the bits as well as repeating certain bits, it is known as an expansion permutation. This operation has two purposes: it makes the right half the same size as the key for the XOR operation, and it provides a longer result that can be compressed during the substitution operation. However, neither of those is its main cryptographic purpose. By allowing one bit to affect two substitutions, the dependency of the output bits on the input bits spreads faster; this is called an avalanche effect. This is shown in Table 3.
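The expansion step can be sketched with the same table-driven approach (the `expand` helper name is illustrative; the E table is the standard published one), including a check of the bit-5 duplication noted above:

```python
# Standard DES expansion table E, 1-indexed.
E = [32,  1,  2,  3,  4,  5,   4,  5,  6,  7,  8,  9,
      8,  9, 10, 11, 12, 13,  12, 13, 14, 15, 16, 17,
     16, 17, 18, 19, 20, 21,  20, 21, 22, 23, 24, 25,
     24, 25, 26, 27, 28, 29,  28, 29, 30, 31, 32,  1]

def expand(half):
    """Expand a 32-bit half-block to 48 bits using the E table (Table 3)."""
    return [half[pos - 1] for pos in E]

half = [i % 2 for i in range(32)]        # arbitrary 32-bit half-block
out = expand(half)
assert len(out) == 48
# Input bit 5 is duplicated into output positions 6 and 8, as noted above:
assert [i + 1 for i, src in enumerate(E) if src == 5] == [6, 8]
```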

3.6. Permutation (P)

Figure 8. Permutation

Table 4. Permutation P
16  7 20 21
29 12 28 17
 1 15 23 26
 5 18 31 10
 2  8 24 14
32 27  3  9
19 13 30  6
22 11  4 25

The 32-bit output of the S-box substitution is permuted according to a P-box. This permutation maps each input bit to an output position; no bits are used twice and no bits are ignored. This is called a straight permutation, or simply a permutation. This is shown in Table 4.

3.7. Substitution boxes (S-boxes)

Figure 10. Calculation of f(R, k)

After the compressed key is XORed with the expanded block, the 48-bit result moves to a substitution operation. The substitutions are performed by eight substitution boxes, or S-boxes. Each S-box has a 6-bit input and a 4-bit output, and there are eight different S-boxes. The total memory requirement for the eight DES S-boxes is 256 bytes. The 48 bits are divided into eight 6-bit sub-blocks. Each separate block is operated on by a separate S-box: the first block is operated on by S-box 1, the second block by S-box 2, and so on.
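The S-box indexing rule can be sketched for one box (S-box 1 values from the published DES standard): the outer two bits of the 6-bit input select the row and the middle four bits select the column:

```python
# S-box 1 from the DES standard: 4 rows x 16 columns of 4-bit outputs.
S1 = [
    [14,  4, 13,  1,  2, 15, 11,  8,  3, 10,  6, 12,  5,  9,  0,  7],
    [ 0, 15,  7,  4, 14,  2, 13,  1, 10,  6, 12, 11,  9,  5,  3,  8],
    [ 4,  1, 14,  8, 13,  6,  2, 11, 15, 12,  9,  7,  3, 10,  5,  0],
    [15, 12,  8,  2,  4,  9,  1,  7,  5, 11,  3, 14, 10,  0,  6, 13],
]

def sbox1(six_bits: int) -> int:
    """6-bit in, 4-bit out: row from bits 1 and 6, column from bits 2-5."""
    row = ((six_bits >> 5) & 1) << 1 | (six_bits & 1)
    col = (six_bits >> 1) & 0xF
    return S1[row][col]

assert sbox1(0b000000) == 14     # row 0, column 0
assert sbox1(0b111111) == 13     # row 3, column 15
```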

3.8. Rotations in the key-schedule
Before the round subkey is selected, each half of the key-schedule state is rotated left by a number of places; the schedule of shifts is given in Section 3.9. Triple DES has two attractions that assure its widespread use over the next few years [6]. First, with its 168-bit key length, it overcomes the vulnerability of DES to brute-force attack. Second, the underlying encryption algorithm in Triple DES is the same as in DES. This algorithm has been subjected to more scrutiny than any other encryption algorithm over a longer period of time, and no effective cryptanalytic attack based on the algorithm rather than on brute force has been found [5]. Accordingly, there is a high level of confidence that 3DES is very resistant to cryptanalysis. If security were the only consideration, then 3DES would be an appropriate choice for a standardized encryption algorithm for decades to come.


Figure 11. Key schedule calculation

Figure 12. Feistel Decryption Algorithm

3.9. DES Decryption
i) Use same function

ii) The keys are used in reverse order (K1, …, K16 becomes K16, …, K1), with a right circular shift of 0-2 bits per round:

Round:                 1 1 2 2 2 2 2 2 1 2  2  2  2  2  2  1 → (encrypt: left shifts)
                       1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Right shift (decrypt): 0 1 2 2 2 2 2 2 1 2  2  2  2  2  2  1

With DES it is possible to use the same function to encrypt or decrypt a block. The only difference is that the keys must be used in the reverse order. That is, if the encryption keys for each round are K1, K2, K3, …, K16, then the decryption keys are K16, K15, K14, …, K1. The algorithm that generates the key used for each round is circular as well; the key shifts are shown above.
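The relationship between the two shift schedules follows from the fact that the left shifts sum to a full 28-bit revolution; a short sketch (variable names illustrative) derives the decryption schedule quoted above from the encryption one:

```python
# Per-round left-shift schedule of the DES key registers (rounds 1..16):
LEFT_SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

# The shifts total 28, a full revolution of each 28-bit register, so the key
# state returns to its starting position after round 16:
assert sum(LEFT_SHIFTS) == 28

# The decryption schedule (right shifts of 0-2 bits) follows by starting at
# the round-16 state (= the initial state) and undoing the shifts in reverse:
RIGHT_SHIFTS = [0] + LEFT_SHIFTS[:0:-1]
assert RIGHT_SHIFTS == [0, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]
```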

3.10. Applications
The DES3 core can be utilized for a variety of encryption applications including:
• Secure file/data transfer
• Electronic funds transfer
• Encrypted data storage
• Secure communications

3.11. Features
• FIPS 46-3 standard compliant
• Encryption/decryption performed in 48 cycles (ECB mode)
• Up to 168 bits of security
• For use in FPGA or ASIC designs
• Verilog IP core

3.11.1. Non Pipelined Version
• Small gate count shared DES

3.11.2. Pipelined Version
• Pipelined for maximum performance
• Encryption/decryption performed in 1 cycle (ECB mode) after an initial latency of 48 cycles

IV. SIMULATED RESULTS

Figure 13. Waveform of DES Block


Figure 14. Waveform of Add Key

Figure 15. Waveform of Add left

Figure 16. Waveform of Expansion Table

Figure 17. Waveform of Pbox


Figure 18. Waveform of S1box

Figure 19. Waveform of Sbox

V. CONCLUSIONS

DES runs through 16 iterations to achieve its desired ciphertext (final output). With Triple DES, the block is encrypted, decrypted and encrypted again, and a completely different output is generated by the final combination. It is sometimes said that the security corresponds to 192-bit encryption, but it is also argued that, regardless of how the keys are counted, the security is only 168 bits; that debate is beyond the scope of this paper. It is a safe bet that Triple DES is exponentially stronger than the previous DES. AES may supplant Triple DES as the default algorithm on most systems if it lives up to its expectations, but Triple DES will be kept around for compatibility reasons for many years after that. So the useful lifetime of Triple DES is far from over, even with AES near completion.

VI. SCOPE AND FUTURE DEVELOPMENT

For the foreseeable future, Triple DES is an excellent and reliable choice for the security needs of highly sensitive information. AES will be at least as strong as Triple DES and probably much faster. An industry mandate from Visa and MasterCard is requiring ATM deployers to upgrade and/or replace their legacy terminals. In a nutshell, it is all about three passes of encryption, and it is designed to make ATM transactions more secure.

REFERENCES
[1]. Data Encryption Standard, Federal Information Processing Standard (FIPS) 46, National Bureau of Standards, 1977.
[2]. Federal Information Processing Standards Publication 140-1, “Security Requirements for Cryptographic Modules”, U.S. Department of Commerce/NIST, Springfield, VA: NIST, 1994.

[3]. B. Schneier, “Applied Cryptography: Protocols, Algorithms, and Source Code in C”, John Wiley & Sons, 1994.
[4]. D. C. Feldmeier, P. R. Karn, “UNIX Password Security – Ten Years Later”, CRYPTO ’89, Santa Barbara, California, USA, pp. 44-63, 1989.
[5]. Xilinx, San Jose, California, USA, “Virtex 2.5 V Field Programmable Gate Arrays”, 2001, www.xilinx.com.
[6]. NIST Special Publication 800-20, “Modes of Operation Validation System for the Triple Data Encryption Algorithm”, National Institute of Standards and Technology, 2000.
[7]. Ohjun Kwon, Hidenori Seike, Hirotsugu Kajisaki and Takakazu Kurokawa, “Implementation of AES and Triple-DES cryptography using a PCI-based FPGA board”, Proc. of the International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC 2002), Phuket, Thailand, July 16-19, 2002.
[8]. Pawel Chodowiec, Kris Gaj, Peter Bellows, and Brian Schott, “Experimental Testing of the Gigabit IPSec-Compliant Implementations of Rijndael and Triple DES Using SLAAC-1V FPGA Accelerator Board”, Proc. Information Security Conference, Malaga, Spain, October 1-3, 2001, pp. 220-234.
[9]. Herbert Leitold, Wolfgang Mayerwieser, Udo Payer, Karl Christian Posch, Reinhard Posch, and Johannes Wolkerstorfer, “A 155 Mbps Triple-DES Network Encryptor”, Proc. Cryptographic Hardware and Embedded Systems – CHES 2000, USA, August 17-18, 2000.

Authors

Sai Praveen Venigalla was born in A.P., India. He received the B.Tech degree in Electronics & Communications Engineering from Jawaharlal Nehru Technological University in 2009. Presently he is pursuing M.Tech VLSI Design at KL University. His research interests include FPGA implementation and low power design.

M. Nagesh Babu was born in Kurnool, Kurnool (Dist.), A.P., India. He received his B.Tech in Electronics & Communication Engineering from JNTU Anantapur, A.P., India, and his M.Tech from Hyderabad Institute of Technology and Management, R.R. (Dist.), A.P., India. He is working as Associate Professor in the Department of Electronics & Communication Engineering, KL University, Vijayawada, A.P., India. He has 9 years of industry experience and 9 years of teaching experience. He has presented 2 papers in national conferences.

Srinivas Boddu was born in A.P., India. He received the B.Tech degree in Electronics & Communications Engineering from Jawaharlal Nehru Technological University in 2009. Presently he is pursuing M.Tech VLSI Design at KL University. His research interests include FPGA implementation and low power design.

G. Santhi Swaroop Vemana was born in A.P., India. He received the B.Tech degree in Electronics and Communication Engineering from Jawaharlal Nehru Technological University in 2008. He worked as an OFC engineer at United Telecoms Ltd. in Goa during 2009-2010. Presently he is pursuing M.Tech VLSI Design at KL University. His research interests include FPGA implementation and low power design.


COMPUTATIONAL INVESTIGATION OF PERFORMANCE CHARACTERISTICS IN A C-SHAPE DIFFUSING DUCT
Prasanta. K. Sinha1, A. N. Mullick2, B. Halder3 and B. Majumdar4
1 Department of Mechanical Engineering, KATM, Durgapur-713212, India
2,3 Department of Mechanical Engineering, NIT, Durgapur-713209, India
4 Department of Power Engineering, Jadavpur University, Kolkata, India

ABSTRACT
In the present investigation the distributions of mean velocity, static pressure and total pressure are experimentally studied in a C-shape diffuser with a 45° angle of turn, an area ratio of 1.298, and a centerline length chosen as three times the inlet diameter. The experimental results were then numerically validated with the help of Fluent, and a series of parametric investigations was conducted with the same centerline length and inlet diameter but with different area ratios varying from 1.25 to 3.75. The measurements were taken at a Reynolds number of 2.45 × 10^5 based on the inlet diameter and the mass-averaged inlet velocity. Predicted results for the coefficient of mass-averaged static pressure recovery (52%) and the coefficient of mass-averaged total pressure loss (11%) are in good agreement with the corresponding experimental results (48% and 10% respectively). The standard k-ε model in the Fluent solver was chosen for the validation. From the parametric investigation it is observed that, as the area ratio increases, the static pressure recovery increases sharply up to an area ratio of 2, then increases with a smaller gradient up to 2.67, where it reaches its maximum. As the area ratio increases from 2.67 to 3.75, the pressure recovery decreases steadily. The coefficient of total pressure loss remains almost constant with changes in area ratio and angle of turn for similar inlet conditions.

KEYWORDS: C-shape diffuser, Five-hole probe, Fluent solver and k-ε model.

NOMENCLATURE
Ar   Area ratio
As   Aspect ratio
CC   Concave or inward wall
Cpr  Coefficient of pressure recovery
CV   Convex or outward wall
D    Inlet diameter of the diffuser
L    Centerline length of the diffuser
Re   Reynolds number
β    Angle of turn of the center line

I. INTRODUCTION

Diffusers are used in many engineering applications to decelerate the flow or to convert dynamic pressure into static pressure. Depending on the application, they have been designed in many different shapes and sizes. The C-shape diffuser is one such design and is an essential component in many fluid handling systems. C-shape diffusers are an integral component of the gas turbine engines of high-speed aircraft, facilitating effective operation of the combustor by reducing the total pressure loss. The performance characteristics of these diffusers depend on their geometry and the inlet conditions. Part-turn or curved diffusers are used in wind tunnels, compressor crossovers, air conditioning and ventilation ducting systems, plumes, draft tubes, etc.

The objective of the present study is to investigate the flow characteristics within a circular C-shaped diffuser. Research work on curved diffusers grew out of the study of curved ducts. The earliest experimental work on curved ducts was reported by Williams et al. [1] at the beginning of the last century; it was reported that the location of the maximum axial velocity shifts towards the outward wall of a curved pipe. The effect of the centrifugal force on the pressure gradient was studied by Dean [2, 3], who established a relation between the viscous force, the inertia force and the curvature through a non-dimensional number known as the Dean number. Experimental investigations on circular 90° and 180° curved ducts were carried out by Rowe et al. [4], who reported the generation of contra-rotating vortices within the bends. Enayet et al. [5] investigated the turbulent flow characteristics through a 90° circular curved duct of curvature ratio 2.8; it was observed that the thickness of the inlet boundary layer plays a significant role in the generation of secondary motion within the duct. Kim and Patel [6] investigated a 90° curved duct of rectangular cross section and reported the formation of vortices on the inner wall due to the pressure-driven secondary motion originating in the corner region of the curved duct. The earliest work on curved diffusers was reported by Stanitz [7]; the diffuser was designed based on a potential flow solution for two-dimensional, inviscid, incompressible and irrotational flow, and the performance of these diffusers was evaluated on the basis of wall static pressure only. The first systematic studies on 2-D curved subsonic diffusers were carried out by Fox & Kline [8]. The centerline of the diffuser was taken as circular, with a linearly varying area distribution normal to the centerline. They established a complete map of the flow over a range of the L/D ratio and at different values of β.
Parson and Hill [9] investigated three 2-D curved diffusers of As = 10 with various combinations of the ratio between the centerline length and the inlet width. They observed that streamline curvature substantially affects the flow development within a curved diffuser. A qualitative measurement of the mean flow quantities in a 40° curved diffuser of rectangular cross section with Ar = 1.32 and inlet As = 1.5 was reported by McMillan [10]. The results clearly showed the development of strong counter-rotating vortices between the two parallel walls, which dominate the flow and performance characteristics. Majumdar et al. [11] experimentally studied the flow characteristics in a large-area-ratio curved diffuser with splitter vanes installed at different angles to the flow at the inlet of the diffuser. It was observed that the splitter vanes deflect the flow towards the convex wall and that a pair of contra-rotating vortices is generated in the flow passage. Yaras [12] experimentally investigated the flow characteristics of a 90° curved diffuser with strong curvature, having Ar = 3.42, for different values of inlet boundary layer thickness and turbulence intensity. Measurements were taken with the help of a seven-hole pressure probe. He observed that the performance parameters were almost independent of the variations in the inlet boundary layer. Majumdar et al. [13] experimentally studied the turbulence characteristics in a curved diffuser. They observed that the streamwise bulk flow shifted towards the outward wall in the downstream part of the diffuser, mainly due to the influence of centrifugal force. Moreover, one pair of contra-rotating vortices was identified at the 30° turn in the flow passage. The overall static pressure recovery was observed to be 51%. Majumdar et al. [14] conducted an experiment on a 180° bend rectangular diffusing duct, measuring the wall pressure, velocity and turbulence intensity along the flow passage of the diffusing duct.
The observations clearly showed the formation of vortical motions between the two parallel walls, and the overall pressure recovery was found to be about 48%. Sinha et al. [15] conducted an experiment on a 30° curved annular diffuser. They measured the mean velocity, static pressure and total pressure along the flow passage of the diffuser, and also conducted a series of parametric investigations with the same centerline length and inlet diameter but with different area ratios. They observed that the high-velocity fluid shifted and accumulated at the concave wall of the exit section, and that with increasing area ratio the pressure recovery increases up to a certain point, beyond which it decreases. Sinha et al. [16] conducted an experiment on a 37.5° annular diffusing duct, again measuring the mean velocity, static pressure and total pressure along the flow passage and conducting a series of parametric investigations with the same centerline length and inlet diameter but with varying area ratio and different angles of turn. They observed that the high-velocity fluid shifted and accumulated at the concave wall of the exit section, and that the pressure recovery increases with area ratio but decreases with increasing angle of turn. Sinha et al. [17] conducted an experiment on a 42° C-shape diffusing duct, measuring the mean velocity, static pressure and total pressure along the flow passage of the diffuser.

They also conducted a series of parametric investigations with the same centerline length and inlet diameter but with varying area ratio at the 42° angle of turn. They observed that the high-velocity fluid shifted and accumulated at the concave wall of the exit section, and that with increasing area ratio the pressure recovery increases up to a certain point, beyond which further increase in area ratio causes the pressure recovery to decrease.

II. EXPERIMENTAL FACILITY

A test rig for the present investigation has been constructed at the Fluid Mechanics & Machinery Laboratory of the Power Engineering Department, Jadavpur University, to investigate the flow characteristics within a circular cross-sectioned C-shape diffuser. The geometry of the test diffuser is shown in Fig. 1 with the coordinate system and measurement locations. The entire set-up was fabricated from mild steel sheet except the test diffuser, which is made of fibre-glass reinforced plastic. The test diffuser was designed with an increase in area from inlet to exit, distributed normal to the centerline as suggested by Fox and Kline [8], based on an area ratio of 1.298 and a centerline length of 235 mm. The centerline was turned through 45° from inlet to exit, with an inlet diameter of 78 mm. In order to avoid pressure losses and flow distortion at the inlet and exit, two constant-area connectors were attached at the inlet and exit of the test diffuser. A pre-calibrated five-hole pressure probe was used to obtain detailed flow parameters such as the mean velocity and its components, the total and static pressure, and the secondary motions along the entire length of the diffuser. Ambient air was used as the working fluid. For measuring the mean velocity and its components and for static and total pressure surveys over the entire cross section of the curved diffuser, the test piece was divided into four planes: the Inlet section, one diameter upstream of the test diffuser; two planes, Section A and Section B, at the 15° and 30° turns along the length of the diffusing passage; and a fourth plane, Section C, at the mid-point of the exit duct. The details of the measurement planes are shown in Fig. 1. For measurement of the flow parameters, the five-hole pressure probe was inserted through an 8 mm drilled hole provided at four locations, namely at 0°, 45°, 90°, and 315°, as shown in Fig. 1.

Figure 1: Geometry of test diffuser and measuring locations

The pre-calibrated five-hole pressure probe was mounted in a traversing mechanism and inserted into the flow field through an 8 mm diameter drilled hole provided in the wall. The probe was placed within 1 mm of the solid surface for the first reading and was then moved radially and placed at the desired locations as shown in Fig. 1. Instrumentation for the present study was chosen so that the experimental errors are minimal and the response to the flow parameters is quick. A pre-calibrated hemispherical-tip five-hole pressure probe was used for the present study. The probe was calibrated, and a non-null technique was used to measure the flow parameters. All five sensing ports of the probe were connected to a variable-inclination multi-tube manometer, and the readings were recorded with respect to atmospheric pressure. The mean velocity and the components of the mean velocity distribution have been drawn with the help of the SURFER software. The assessment of errors resulting from the readings of the present five-hole pressure probe was made as a function of all incidence angles for all flow characteristics in all the probe sectors and is discussed in detail in [18], [19].
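The geometry quoted above can be cross-checked numerically. In the sketch below, the kinematic viscosity of ambient air (ν ≈ 1.5 × 10⁻⁵ m²/s) is an assumed value not stated in the paper, used only to back out the inlet velocity implied by the quoted Reynolds number:

```python
import math

D_in = 0.078          # inlet diameter, m (from the text)
Ar = 1.298            # exit area / inlet area (from the text)
L = 0.235             # centerline length, m (from the text)
Re = 2.45e5           # Reynolds number based on inlet diameter (from the text)

D_exit = D_in * math.sqrt(Ar)          # exit diameter implied by the area ratio
R_c = L / math.radians(45)             # mean centerline radius of the 45-degree turn
nu = 1.5e-5                            # ASSUMED kinematic viscosity of air, m^2/s
V_in = Re * nu / D_in                  # implied mass-averaged inlet velocity

print(f"D_exit ~ {D_exit * 1000:.1f} mm")   # ~88.9 mm
print(f"R_c    ~ {R_c * 1000:.0f} mm")
print(f"V_in   ~ {V_in:.0f} m/s")
```

Note also that the 235 mm centerline length is close to three times the 78 mm inlet diameter, consistent with the abstract.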


III. RESULTS AND DISCUSSIONS

The flow characteristics have been evaluated from the mass-averaged mean velocity between the curved walls and from the total pressure and static pressure of the flow at various cross sections. The measured flow quantities are presented in the form of 2-D profiles. All velocities and pressures were normalized with respect to the inlet mass-averaged velocity and the inlet dynamic pressure respectively.
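The two performance coefficients used throughout the paper can be written out explicitly. The function names below are illustrative (the paper does not define the formulas in this section), but both quantities are normalized by the inlet dynamic pressure as described above:

```python
def pressure_recovery(ps_exit: float, ps_inlet: float, q_inlet: float) -> float:
    """Coefficient of mass-averaged static pressure recovery, Cpr."""
    return (ps_exit - ps_inlet) / q_inlet

def total_pressure_loss(pt_inlet: float, pt_exit: float, q_inlet: float) -> float:
    """Coefficient of mass-averaged total pressure loss."""
    return (pt_inlet - pt_exit) / q_inlet

# Illustrative numbers chosen to reproduce the experimental figures quoted in
# the abstract (48% recovery, 10% loss) with a nominal inlet dynamic pressure:
q = 1000.0   # Pa, assumed nominal value
assert abs(pressure_recovery(480.0, 0.0, q) - 0.48) < 1e-12
assert abs(total_pressure_loss(q, 900.0, q) - 0.10) < 1e-12
```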

3.1 Mean Velocity Contour

The normalized mean velocity distributions, in the form of contour plots at the various sections of the curved diffuser, are discussed here and shown in Fig. 2. The mean velocity contour at the Inlet section, Fig. 2(a), indicates that the flow is nearly symmetrical over the entire cross-sectional area. The contour at Section A, Fig. 2(b), indicates overall diffusion of velocity, particularly near the concave wall, and reveals a shift of the high-velocity fluid towards the convex wall. These effects are mainly due to the combined action of the inertia force and the centrifugal force on the flow. The mean velocity distribution at Section B is shown in Fig. 2(c). Overall diffusion takes place at this section, and the flow pattern has changed compared with the preceding section; appreciable diffusion across the entire section is observed. The high-velocity core now occupies the major part of the central area of the cross section, mainly because of the tendency of the flow to move towards the concave wall under inertia. As a result, low-velocity fluid is pushed towards the convex wall, indicating an accumulation of low-momentum fluid at this section.

(a) Inlet section (b) Section A (c) Section B (d) Section C
Figure 2: Mean velocity contours

The mean velocity distribution at Section C is shown in Fig. 2(d). The high-velocity core has shifted towards the concave wall and occupies a substantial portion of the cross-sectional area, while the low-velocity fluid close to the convex wall occupies more area than it did at Section B. Comparing the velocity distribution about the mid-horizontal plane, the flow is seen to be symmetrical between the top and bottom halves, indicating the development of similar flow structures on either side. In a closed conduit this is possible only if the secondary flows in the two halves are opposite in sense (counter-rotating), which is a natural feature of flow through a curved duct of any cross-section. Over and above this, the increase in cross-sectional area further complicates the flow development.

3.2 Pressure Recovery & Loss Coefficient

The variations of the normalized coefficients of mass-averaged static pressure recovery and total pressure loss, based on the average static and total pressures at the different sections of the C-shape diffuser, are shown in Fig. 3. The coefficient of pressure recovery increases continuously up to Section A, then increases with a smaller gradient up to Section B, and thereafter rises again with a steeper gradient. The overall coefficient of mass-averaged static pressure recovery is nearly 48% for this test diffuser. The coefficient of mass-averaged total pressure loss increases rapidly up to Section A and then increases only steadily and marginally up to Section C; its overall mean value is nearly 11% for this test diffuser.
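A minimal sketch of how such mass-averaged coefficients can be computed from probe traverse data is given below. The discrete area-weighted averaging and all variable names are assumptions for illustration; the paper does not spell out its averaging procedure.

```python
# Hedged sketch of mass-averaged performance coefficients, assuming a
# discrete area-weighted average over measurement cells:
#   Cpr   = (ps_mean - ps_in) / q_in
#   Closs = (pt_in - pt_mean) / q_in

def mass_averaged(values, velocities, areas):
    """Mass-average a quantity over cells with local velocity and area."""
    mdot = sum(v * a for v, a in zip(velocities, areas))
    return sum(x * v * a for x, v, a in zip(values, velocities, areas)) / mdot

def cpr_closs(ps, pt, vel, areas, ps_in, pt_in, q_in):
    cpr = (mass_averaged(ps, vel, areas) - ps_in) / q_in
    closs = (pt_in - mass_averaged(pt, vel, areas)) / q_in
    return cpr, closs

# Example with two equal cells (all values illustrative).
cpr, closs = cpr_closs(ps=[300.0, 300.0], pt=[700.0, 740.0],
                       vel=[10.0, 10.0], areas=[1.0, 1.0],
                       ps_in=0.0, pt_in=750.0, q_in=600.0)
```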


Figure 3: Variation of mass average pressure recovery and loss coefficients

3.3 Numerical Validation

In the present study a preliminary investigation was carried out using the different turbulence models available in FLUENT. Based on this intensive investigation it was found that the standard k–ε turbulence model provides the best results, the computational results matching the experimental ones both qualitatively and quantitatively. It is to be noted that the inlet profiles obtained during the experiment were fed as the inlet condition during the validation with FLUENT. Some of the validation results are shown in Fig. 4(a), Fig. 4(b) and Fig. 4(c).

(a) Section A (b) Section B (c) Section C
Figure 4: Comparison of normalized velocity distributions at Section A, Section B and Section C obtained through computational and experimental investigation.

The mean velocity contours obtained by the computational and experimental investigations show a qualitative match with each other at Section A; the remaining discrepancies can be attributed to the complicated nature of the flow at that plane, which is not fully captured by the simulation. The mean velocity distributions at Section B and Section C, shown in Fig. 4(b) and Fig. 4(c), show reasonably good agreement between the computational and experimental results. These agreements confirm that a CFD code using the standard k–ε model can predict the flow and performance characteristics reasonably well for similar geometries with similar boundary conditions. Figure 5 shows the comparison of the performance parameters, namely the coefficient of static pressure recovery and the coefficient of total pressure loss, obtained through the experimental and computational investigations. The coefficient of pressure recovery Cpr from the computational investigation is 52%, compared with 48% from the experimental investigation. Similarly, the coefficient of pressure loss is 10% in the computational investigation, compared with 11% in the experimental study. This shows very good matching of the predicted results with the experimental ones; a similar observation is reported in the work of Dey et al. [20].
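The quoted level of agreement can be checked with a one-line relative-deviation calculation; the numbers below are the ones reported in the text.

```python
# Relative deviation of the CFD predictions from the measurements,
# using the values reported in the text (Cpr 52% vs 48%, loss 10% vs 11%).

def rel_dev(pred, meas):
    return abs(pred - meas) / meas

cpr_dev = rel_dev(0.52, 0.48)   # about 8% relative deviation in recovery
loss_dev = rel_dev(0.10, 0.11)  # about 9% relative deviation in loss
```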

Figure 5: Comparison of performance parameters obtained through computational and experimental investigation

3.4 Parametric Investigation

To obtain more insight into the performance parameters, an intensive parametric study of the pressure recovery and loss coefficients was carried out for diffusers of different area ratios with a 45° angle of turn. For this purpose, C-shape diffusers with area ratios of 1.25, 1.5, 1.75, 2, 2.25, 2.75, 3, 3.25 and 3.5 were chosen. It is observed from Fig. 6 that with increasing area ratio the static pressure recovery increases sharply up to an area ratio of 2, then increases with a smaller gradient up to 2.67, where it is maximum. As the area ratio increases beyond 2.67, the pressure recovery decreases steadily. The coefficient of total pressure loss remains almost constant with changes in area ratio for similar inlet conditions.

Figure 6: Variation of mass average pressure recovery and loss coefficients with area ratio.
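The parametric search itself reduces to picking the area ratio with the highest recovery. The sketch below illustrates this step; the (area ratio, Cpr) pairs are hypothetical placeholders, not the measured values of Fig. 6.

```python
# Illustrative-only sketch of the parametric sweep: select the area ratio
# with the highest static pressure recovery. The pairs below are
# hypothetical placeholders, NOT values read from Fig. 6.

sweep = [(1.25, 0.40), (1.5, 0.44), (2.0, 0.50), (2.67, 0.53), (3.5, 0.47)]

best_ar, best_cpr = max(sweep, key=lambda pair: pair[1])
```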

IV. CONCLUSION

Based on the present investigation, the following conclusions are drawn. The high-velocity fluid shifts towards and accumulates at the concave wall of the exit section. The mass-averaged static pressure recovery and total pressure loss for the curved test diffuser increase continuously from the Inlet section to Section C; their experimental values are 48% and 11% respectively. A comparison between the experimental and predicted results for the curved diffuser shows good qualitative agreement between the two: the coefficients of mass-averaged static pressure recovery and total pressure loss are obtained as 52% and 10% in the predicted results, against 48% and 11% in the experimental results, indicating a good match between experiment and prediction. From the parametric investigation it is observed that with increasing area ratio the static pressure recovery increases sharply up to an area ratio of 2.0, then increases with a smaller gradient up to 2.67, where it is maximum; beyond 2.67 the pressure recovery decreases steadily. Among the turbulence models available in the FLUENT solver, the standard k–ε model gives the best results and predicts the flow and performance characteristics well for C-shape diffusing ducts with uniform flow at the inlet.

REFERENCES
[1] G. S. Williams, C. W. Hubbell, and G. H. Fenkell, "Experiments at Detroit, Mich., on the effect of curvature upon the flow of water in pipes", Trans. ASCE, Vol. 47, 1902, pp. 1-196.
[2] W. R. Dean, "Note on the motion of fluid in a curved pipe", Philosophical Magazine, Vol. 20, 1927, pp. 208-223.
[3] W. R. Dean, "The streamline motion of fluid in a curved pipe", Philosophical Magazine, Vol. 30, 1928, pp. 673-693.
[4] M. Rowe, "Measurements and computations of flow in pipe bends", Journal of Fluid Mechanics, Vol. 43, 1970, pp. 771-783.
[5] M. M. Enayet, M. M. Gibson, A. M. K. P. Taylor, and M. Yianneskis, "Laser Doppler measurements of laminar and turbulent flow in a pipe bend", Int. Journal of Heat and Fluid Flow, Vol. 3, 1982, pp. 211-217.
[6] W. J. Kim and V. C. Patel, "Origin and decay of longitudinal vortices in the development of flow in a curved rectangular duct", Trans. ASME, Journal of Fluids Engineering, Vol. 116, 1994, pp. 45-52.
[7] J. D. Stanitz, "Design of two-dimensional channels with prescribed velocity distribution along the channel walls", NACA Report No. 115, 1953.
[8] R. W. Fox and S. J. Kline, "Flow regimes in curved subsonic diffusers", Trans. ASME, Journal of Basic Engineering, Vol. 84, 1962, pp. 303-316.
[9] D. J. Parson and P. G. Hill, "Effect of curvature on two-dimensional diffusing flow", Trans. ASME, Journal of Fluids Engineering, Vol. 95, 1973, pp. 1-12.
[10] O. J. McMillan, "Mean flow measurements of the flow diffusing", NASA Contractor Report 3634, 1982.
[11] B. Majumdar, S. N. Singh, and D. P. Agrawal, "Flow characteristics in a large area curved diffuser", Proc. IMechE, Journal of Aerospace Engineering, Vol. 210, 1996, pp. 65-75.
[12] M. I. Yaras, "Effects of inlet conditions on the flow in a fishtail diffuser with strong curvature", Trans. ASME, Journal of Fluids Engineering, Vol. 118, 1996, pp. 772-778.
[13] B. Majumdar, R. Mohan, S. N. Singh, and D. P. Agrawal, "Experimental study of flow in a high aspect ratio 90° curved diffuser", Trans. ASME, Journal of Fluids Engineering, Vol. 120, 1998, pp. 83-89.
[14] B. Majumdar, S. N. Singh, and D. P. Agrawal, "Flow structure in a 180° curved diffusing duct", The Arabian Journal for Science and Engineering, Vol. 24, 1999, pp. 79-87.
[15] P. K. Sinha, A. N. Mullick, B. Halder, and B. Majumdar, "A parametric investigation of flow through an annular curved diffuser", International Review of Aerospace Engineering, Vol. 3, No. 5, 2010, pp. 249-256.
[16] P. K. Sinha, A. K. Das, and B. Majumdar, "Numerical investigation of flow through an annular curved diffusing duct", International Journal of Engineering & Technology IJET-IJENS, Vol. 03, No. 03, 2011, pp. 190-196.
[17] P. K. Sinha, A. N. Mullick, B. Halder, and B. Majumdar, "Computational analysis of flow performance in a C-shape subsonic diffuser", International Journal of Engineering Research and Applications, Vol. 01, Issue 3, 2011, pp. 824-829.
[18] D. Chowdhoury, "Modelling and calibration of pressure probes", M.E. Thesis, Jadavpur University, 2007.
[19] S. Mukhopadhya, A. Dutta, A. N. Mullick, and B. Majumdar, "Effect of five-hole probe tip shape on its calibration", Journal of the Aeronautical Society of India, Vol. 53, No. 4, 2001, pp. 271-279.
[20] R. K. Dey, S. Bharani, S. N. Singh, and V. Seshadri, "Flow analysis in an S-shaped diffuser with circular cross-section", Arabian Journal for Science and Engineering, Vol. 27, 2002, pp. 197-206.

Authors Biographies
P. K. Sinha received the B.Tech. (Mechanical) and M.Tech. (Mechanical Design of Machines) degrees, both from N.I.T. Durgapur, India, in 1984 and 1991 respectively, and obtained his Ph.D. from Jadavpur University in 2011. His research interests are experimental fluid dynamics and aerodynamics, machine design and non-conventional energy. He has published thirty-seven papers. Presently he is working in the Department of Mechanical Engineering, Techno India Group Kanksa Academy of Technology & Management, Durgapur. He is a Fellow of the Institution of Engineers (India) and a Life Member of Fluid Mechanics and Fluid Power.

A. N. Mullick received the B.E. (Mechanical) and M.E. (Mechanical) degrees, both from Jadavpur University, India, in 1979 and 2000 respectively, and obtained his Ph.D. from Jadavpur University in 2007. His research interests are experimental fluid dynamics and aerodynamics. He has published thirty-eight papers. Presently he is working in the Department of Mechanical Engineering, N.I.T. Durgapur, India. He is a Life Member of ASME, IEEE, IE (India), AeSI, LMFMFP and LMAMM.

B. Halder received the B.Tech. (Mechanical) and M.Tech. (Mechanical) degrees, both from N.I.T. Durgapur, India, in 1982 and 1984 respectively, and obtained his Ph.D. from the Indian Institute of Technology, Kharagpur, in 1993. His research interests are experimental fluid dynamics and aerodynamics. He has published twenty-four papers. Presently he is working in the Department of Mechanical Engineering, N.I.T. Durgapur, India. He is a Fellow of IIM, a Fellow of IIE and a Life Member of ISTE.

B. Majumdar received the B.E. (Mechanical) and M.E. (Mechanical) degrees, both from Jadavpur University, India, in 1981 and 1983 respectively, and obtained his Ph.D. from the Indian Institute of Technology, Delhi, in 1995. His research interests are experimental fluid dynamics and aerodynamics, hydro power and wind power energy. He has published seventy-two papers. Presently he is working in the Department of Power Engineering, Jadavpur University, Kolkata, India. He is a Life Member of the Institution of Engineers (India).


GUIDELINES IN SELECTING A PROGRAMMING LANGUAGE AND A DATABASE MANAGEMENT SYSTEM
Onkar Dipak Joshi1, Virajit A. Gundale2, Sachin M. Jagdale3
1 Assistant Professor, CSE Department, Sharad Institute of Technology College of Engineering, Yadrav, Dist. Kolhapur, India
2 Professor, Mechanical Engineering Department, Sharad Institute of Technology College of Engineering, Yadrav, Dist. Kolhapur, India
3 Assistant Professor, CSE Department, Sharad Institute of Technology Polytechnic, Yadrav, Dist. Kolhapur, India

ABSTRACT
Software development is a very complex process. When a developer plans to build a project for any organization, an obvious question is how to select a suitable PL (Programming Language) and DBMS (Database Management System). This is a notable issue in project development, because if the developer fails to select a suitable programming language or database management system, project development is badly affected. This paper discusses the factors that play an important role for developers in selecting a programming language as well as a database management system. In many cases researchers address only the database management system or only the programming language, with no rule of thumb at all, and little published information is available on the subject. This paper gives working, practical guidelines for the developer in deciding upon the programming language and database management system as crucial factors for project development.

KEYWORDS: Software development, DBMS, PL, developer, project development

I. INTRODUCTION

Building any software project requires a perfect understanding of the user requirements, and the users and developers must plan the project accordingly. While planning a project, some basic matters need to be addressed: the user requirements, the programming language, the database management system, the tools, and so on. If the project developer wants to develop an excellent project, he has to concentrate on these items, because they play a pivotal role in project development. Gathering user requirements is a lengthy discussion process with the users about their needs; the deeper the interaction goes, the better the requirements obtained from the user. Software testing tools concern the testing part of the project, which comes after coding. The most critical concern in project development from the initial stage is selecting an appropriate programming language and database management system, because if the chosen programming language or database management system later becomes obsolete, the whole work is jeopardized. This paper introduces some guidelines for choosing a proper programming language as well as a database management system. Project development is a crucial factor in the IT service industry, because the industry's profit depends upon the success of its projects. Figure 1 shows the major causes of IT project failure: 29% of IT projects fail due to inadequate coordination of resources, and the programming language and database management system are among the major resources in a project. Some issues in project development need to be sorted out in the early stages, because an IT project's duration is very long, and issues that are not settled early in the development stage become unavoidable later.


Figure 1 Major causes of project failure

Such failures force the project to be reopened or restarted, wasting money, manpower and resources. If the developer can fix the technology in the initial stages of project development, better time savings, cost reduction and utilization of resources can be achieved. Without a proper decision on the programming language and database management system, building the project can take a lot of time. This paper gives guidance on how to select a programming language and database management system according to the applicability of the project in different scenarios. The IT industry has the capacity to provide services to all businesses; providing a service here means providing a software project which fulfils the requirements of the business or user. Project development is a very complex process in which thousands of employees may be involved in creating an excellent product. Software projects depend upon requirement analysis, design, coding and testing. The programming language is sometimes called the front end of the project, and the database management system its back end. A brief discussion of selecting a programming language for an IT project follows.

II. SELECTION OF PROGRAMMING LANGUAGE

The selection of a particular language should be made based on the overall goals of the project. Before starting an important project, it is worthwhile to create several small, independent test programs in various languages; this will show which one is best suited for the task and will save time compared with selecting a language blindly. The developer should also weigh the user requirements. For example, if the user needs a project within a short amount of time, the developer may choose a RAD (Rapid Application Development) supported language, which provides drag-and-drop facilities for controls and readily available built-in functions, saving the developer from writing controls and functions from scratch. If the developer is a novice in project development, he may go for simpler languages like VB, whereas if the project is large, languages such as Java, .NET or C++ are common. Another approach is to divide the project into separate modules or small projects, trying a different language for each module and selecting the one best suited to the project. Some programming languages need high-end machine configurations whereas others run on common configurations, so it is very important to consider the user's machine configuration while selecting a language. The developer also needs to consider platform dependency, the availability of employees who know the language, the popularity of the language, and the unanimity of the project team about it. The selected language must be easily understandable by the user as well as the developer, and should save time, money and effort while suiting the project.
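The guideline above can be made systematic with a simple weighted-criteria score, sketched below. The criteria, weights and ratings are illustrative assumptions, not values prescribed by this paper.

```python
# A sketch of the selection guideline as a weighted-criteria score.
# Criteria, weights and ratings (0-5 scale) are illustrative only.

def score(ratings, weights):
    """Weighted sum of per-criterion ratings."""
    return sum(ratings[c] * w for c, w in weights.items())

weights = {"team_familiarity": 0.4, "rad_support": 0.3, "platform_fit": 0.3}

candidates = {
    "VB":   {"team_familiarity": 5, "rad_support": 4, "platform_fit": 2},
    "Java": {"team_familiarity": 3, "rad_support": 3, "platform_fit": 5},
}

# Rank candidate languages from best to worst score.
ranked = sorted(candidates, key=lambda name: score(candidates[name], weights),
                reverse=True)
```

The point of the sketch is the process, not the numbers: making the criteria and weights explicit forces the team to agree on what matters before a language is chosen.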

III. SELECTION OF DATABASE MANAGEMENT SYSTEM

A database management system (DBMS) is a set of software that controls the creation, maintenance and organization of a user's database. A DBMS allows organizations to easily develop databases for various applications. A database is mainly a collection of data records, files and other objects. A DBMS gives different user application programs concurrent access to the database at the same time. It provides a variety of database models, such as the network model, hierarchical model, relational model and object model, to conveniently support different applications, and it generally supports query languages, which also work as high-level programming languages. A database management system provides facilities for controlling data access, enforcing data integrity, managing concurrency control, and recovering the database from failures with the help of various backup techniques, as well as maintaining database security. An IT project requires a database management system to store, retrieve and organize data. There are several different types of database management system, viz.
1. Hierarchical Database
2. Network Database
3. Relational Database
4. Object-oriented Database
Of the many database types available today, this paper describes some commonly known types which are frequently used.

3.1. Hierarchical Database
Hierarchical databases are mostly used on mainframe computers and represent a very traditional method of managing and storing data. This kind of database is organized in a pyramid fashion, where all the leaves extend from a root, as shown in Figure 2, where the library collection node is at the root position and all other nodes extend from it. The advantage of hierarchical databases is that data can be accessed and updated rapidly, because of the tree-like structure and because the relationships between records are defined in advance.

Figure 2 Hierarchical database

The main disadvantage of this type is that each child in the tree may have only one parent; relationships between children are not permitted, even when they make sense from a logical point of view. These databases are also so rigid in their design that adding a new field or record requires the entire database to be redefined.
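The one-parent-per-record structure described above can be sketched as a simple tree; the library data and the lookup helper below are illustrative, not part of the paper.

```python
# A hierarchical database modelled as a tree: every record has exactly
# one parent. Data is illustrative, echoing the library example of Fig. 2.

library = {
    "Library Collection": {
        "Books": {"Fiction": {}, "Non-fiction": {}},
        "Journals": {},
    }
}

def find(tree, key):
    """Depth-first lookup; fast because each node has a single path to it."""
    for name, children in tree.items():
        if name == key:
            return children
        hit = find(children, key)
        if hit is not None:
            return hit
    return None
```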

3.2. Network Database
There are considerable differences between hierarchical and network databases. In this database type, the children are called members and the parents are called owners. The most important difference is that each child or member can have more than one parent, as shown in Figure 3,


Figure 3 Network database

where edu, com and org are the parents and they have multiple children from different domains. Network databases are considered more flexible than hierarchical databases because a member may have more than one owner. Limitations that must be considered when using this kind of database are that the network structure must be defined in advance, and that there is a limit to the number of connections that can be made between records.
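The multiple-owner relationship can be sketched as a mapping from members to owner sets; the domain names follow Figure 3 and the helper is illustrative.

```python
# In a network database a member may have several owners. Minimal sketch:
# map each member to the set of its owners (names are illustrative).

owners = {
    "siteA": {"edu", "com"},  # one member with two owners -- not expressible
    "siteB": {"org"},         # in a strict hierarchy
}

def owned_by(member, owner):
    return owner in owners.get(member, set())
```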

3.3. Relational Databases
In a relational database, the relationship between data files is relational. Relational databases connect data in different files or tables by using common data elements, or a key field such as a primary or candidate key. Data is stored in different tables, each table having a key field that uniquely identifies each row, as shown in Figure 4.

Figure 4 Relational database

This database model is now the most popular, for two major reasons. There is no need for extensive training to use a relational database; introductory knowledge of databases is enough. In addition, database entries can be modified without redefining the entire structure.
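A minimal sketch of a relational key field, using Python's built-in sqlite3 module as a stand-in relational DBMS; the schema and data are illustrative.

```python
import sqlite3

# Two tables linked through a common key field (dept_id).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE emp (emp_id INTEGER PRIMARY KEY, name TEXT,"
            " dept_id INTEGER REFERENCES dept(dept_id))")
con.execute("INSERT INTO dept VALUES (1, 'Design')")
con.execute("INSERT INTO emp VALUES (10, 'Asha', 1)")

# The key field dept_id connects the two tables in a join.
row = con.execute("SELECT emp.name, dept.name FROM emp"
                  " JOIN dept ON emp.dept_id = dept.dept_id").fetchone()
```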

3.4. Object-oriented Databases
This type of database can handle many new data types such as graphics, photographs, audio and video. Object-oriented databases are built from small, reusable modules of software called objects; the instructions contained within an object are used to do something with the data in that object. Object-oriented databases also have some disadvantages, such as the very high cost of developing them. Their benefit is that the ability to mix and match reusable objects provides incredible multimedia capability.
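The idea of persisting whole objects, media included, can be sketched with a plain class and pickle standing in for a real object-oriented DBMS; the class and data are illustrative.

```python
import pickle

# An object bundles data (including raw media bytes) with behaviour.
class MediaRecord:
    def __init__(self, title, audio_bytes):
        self.title = title
        self.audio = audio_bytes  # raw media stored alongside the metadata

    def describe(self):
        return f"{self.title} ({len(self.audio)} bytes of audio)"

# Serialize the whole object and restore it, behaviour intact.
blob = pickle.dumps(MediaRecord("lecture-01", b"\x00\x01\x02"))
restored = pickle.loads(blob)
```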


Figure 5 Object oriented database

IV. DBMS

Various database management systems have their own advantages and disadvantages. After analyzing them, the developer has to decide which database type is appropriate for the project. This paper discusses some cases which provide useful information for the novice developer in selecting a programming language and database for a project.
a) Suppose there is a need to build a parametric design application for a mechanical engineering CAD product such as UG, CATIA or SolidWorks, where the design table, a simple database, is available as an Excel table, and the programmer will just access this database and update existing values. For such an application there is no need for knowledge of high-end databases like Oracle or Microsoft SQL; the same task can be performed with Microsoft Access, or the Excel file itself can function as a basic database.
b) Civil engineers mostly work with estimation, costing, measurement, etc. This kind of application does not need any special services like performance tuning or security; it only needs to store numbers and retrieve them wherever necessary. Here Microsoft Access is the best choice because it is easily available.
c) Consider a developer who is going to develop a banking application at novice level. Any banking application needs performance tuning, security, business intelligence, scalability and reliability; Microsoft Access is not going to satisfy these requirements, and the developer has to consider databases which do, e.g. Oracle or Microsoft SQL.
While selecting a database, the developer has to analyze and understand the current as well as the future requirements of the user. Nowadays it is not feasible for a business to hire a person solely to handle the project database; businesses need an employee who can handle the project operations as well as the assigned work. If the developer is going to develop a project for the first time, then Microsoft Access is the best solution, because it is easily available with Microsoft Office, easy to understand, and able to handle database transactions up to a certain level. Once the developer is trained on Microsoft Access and able to understand database transactions, he will, with experience, be able to use an advanced database management system efficiently.
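The three cases above can be condensed into a toy decision rule; the function, inputs and thresholds are illustrative assumptions, not rules laid down by the paper.

```python
# Toy decision rule summarising cases (a)-(c): simple, low-demand projects
# fit Microsoft Access; demanding ones call for Oracle / Microsoft SQL.

def suggest_dbms(needs_security, needs_scalability, project_size):
    if needs_security or needs_scalability or project_size == "large":
        return "Oracle / Microsoft SQL"
    return "Microsoft Access"

pick_bank = suggest_dbms(True, True, "large")    # case (c): banking
pick_cad = suggest_dbms(False, False, "small")   # cases (a), (b)
```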

V. RESULTS AND DISCUSSION

Proper selection of the programming language and database management system for a project is a crucial task. After studying different database management systems, this paper summarizes some basic differences between them in Table 1. Software project development is a life-cycle process which starts from requirement analysis with the user and continues through testing and deployment on the user's side.
Table 1: Difference between various Database Management Systems

Issues                    Microsoft Access   SQL         Oracle      Postgre SQL
Easily Available          EXCELLENT          GOOD        GOOD        EXCELLENT
Ease to Understand        EXCELLENT          GOOD        AVERAGE     AVERAGE
Security                  AVERAGE            GOOD        EXCELLENT   AVERAGE
Scalability               AVERAGE            VERY GOOD   EXCELLENT   AVERAGE
Portability               GOOD               GOOD        VERY GOOD   GOOD
Data Handling             AVERAGE            GOOD        EXCELLENT   GOOD
Database Administration   AVERAGE            GOOD        VERY GOOD   GOOD
Business Intelligence     AVERAGE            GOOD        GOOD        POOR
Basic Data Types          AVERAGE            AVERAGE     AVERAGE     VERY GOOD
Concurrent Access         LIMITED            YES         YES         YES
Costly Support            NO                 NO          YES         NO
Size of Project           SMALL-MEDIUM       ALL         ALL         ALL
Stored Procedures         NO                 YES         YES         NO

Without proper selection of the database management system and programming language, the goals of the project are not achieved, causing a loss for the organization and directly affecting the profit of the business. Table 1 shows the differences between the various database management systems in common use. From the table it is clear that Microsoft Access and PostgreSQL are the most accessible for the developer. PostgreSQL is an open-source database management system which is freely available, but due to some of its technical issues it is not as popular as Microsoft Access for novice-level application development. Microsoft Access is used for small or medium-sized projects where there is no great need for security, scalability or efficiency, and by beginners or developers who want to develop a module or small project in a short time. Other database management systems like Microsoft SQL, Oracle and PostgreSQL are useful for all projects where security, efficiency, scalability and concurrent access to the database are needed, but these databases are not affordable for small industries.

Figure 6 Popularity of the databases according to the cases

If the developer decides on the programming language and database management system in the initial stage of the project, risk management is achieved and resources are properly utilized. By referring to this paper, the developer will come to know the factors on which the programming language and database management system should be selected. Figure 6 shows the use of some popular database management systems. From the figure it is easy to see that for advanced operations like stored procedures, efficiency and security there is a need for database management systems such as Oracle, Microsoft SQL or PostgreSQL, while for novice-level database applications the developer can work with Microsoft Access.

VI. CONCLUSION

The primary goal of the developer is to develop a project which satisfies the user's needs and is completed within the deadline. If the selection of the programming language and database management system fails at the initial stage of the project, the development team needs to go back to basics and start the work again, which wastes the organization's resources. This paper provides guidelines for choosing a programming language and database management system for project development. If the developer works with an unsuitable programming language and/or database management system, the project will not function properly and the user will not be satisfied. For developing small and medium-sized projects with routine functionality and no high expectations, the developer may use Microsoft Access as the back end. If the project has strong needs for security, scalability and performance, it is better to use architectures like Microsoft SQL or Oracle. The cost of IT projects runs to millions or more; no company can afford to invest such large amounts again and again for project development alone. If the software developer considers the points discussed in this paper when selecting a programming language and database management system, the project will achieve the goals set by the developer as well as by the clients.

ACKNOWLEDGEMENT
I would like to express my utmost gratitude to my co-author Dr. Virajit A. Gundale for his continued support, encouragement and guidance in articulating this paper. I would also like to thank Mr. Babaso Ghag, who extended his help in the preparation of this paper.


Authors

Onkar Dipak Joshi is presently working as an Assistant Professor at Sharad Institute of Technology, College of Engineering, Yadrav, Dist. Kolhapur, India. He obtained his B.E. in Computer Science and Engineering from Shivaji University with distinction and is currently pursuing his M.S. by research in Computer Science and Engineering. His total teaching experience is more than 2 years.

Virajit A. Gundale is presently working as a Professor in the Mechanical Engineering Department at Sharad Institute of Technology, College of Engineering, Yadrav, Dist. Kolhapur, India. Apart from this, he is a well-known consultant in the design and development of submersible pump and motor components. He has worked on various international projects in Bangladesh, Indonesia, Egypt, etc. He obtained his B.E. in Mechanical Engineering from Shivaji University and M.S. in Manufacturing Management from BITS, Pilani. He obtained his Ph.D. in Manufacturing Technology from UNEM, Costa Rica in 2010. He also works on various software development projects related to the automation of CAD/CAM, ERP, etc. His total experience, including teaching and industry, spans more than 12 years. He is also a fellow of the International Institute of Mechanical Engineers, South Africa.

Sachin M. Jagdale is presently working as an Assistant Professor at Sharad Institute of Technology, Polytechnic, Yadrav, Dist. Kolhapur, India. He obtained his B.E. in Computer Science and Engineering from Shivaji University and is currently pursuing his M.Tech. in Computer Science and Engineering. His total teaching experience is more than 2 years.


AUDIO STEGANALYSIS OF LSB AUDIO USING MOMENTS AND MULTIPLE REGRESSION MODEL
Souvik Bhattacharyya¹ and Gautam Sanyal²

¹Department of CSE, University Institute of Technology, The University of Burdwan, Burdwan, India
²Department of CSE, National Institute of Technology, Durgapur, India

ABSTRACT
Steganography is the art and science of communicating in a way that hides the existence of the communication. Important information is first hidden in host data, such as a digital image, text, video or audio file, and then transmitted secretly to the receiver. Steganalysis is another important topic in information hiding: the art of detecting the presence of steganography. In this paper an effective steganalysis method based on the statistical moments as well as the invariant moments of audio signals is presented to detect the presence of hidden messages. Multiple regression analysis has been carried out to detect the presence of hidden messages as well as to estimate the relative length of the embedded messages. The design of the audio steganalyzer depends upon the choice of audio features and the design of a two-class classifier. Experimental results demonstrate the effectiveness and accuracy of the proposed technique.

KEYWORDS: Audio Steganalysis; Statistical Moments; Invariant Moments.

I. INTRODUCTION

Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages. As the goal of steganography is to hide the presence of a message, it can be seen as the complement of cryptography, whose goal is to hide the content of a message. Although steganography is an ancient subject, its modern formulation comes from the prisoner's problem proposed by Simmons [1]. Based on this model, if both the sender and receiver share some common secret information, the corresponding protocol is known as secret key steganography, whereas pure steganography means that no prior information is shared by sender and receiver. If the public key of the receiver is known to the sender, the steganographic protocol is called public key steganography [4, 8]. For a more thorough treatment of steganography methodology the reader is advised to see [1], [2]. Some steganographic models with high security features have been presented in [3-6]. Almost all digital file formats can be used for steganography, but image and audio files are more suitable because of their high degree of redundancy [2]. Figure 1 below shows the different categories of file formats that can be used for steganography techniques.

Figure 1: Types of Steganography

Among these, image steganography is the most popular. In this method the secret message is embedded into an image as noise, which is nearly impossible for the human eye to differentiate [11, 15, 17]. In video steganography, the same method may be used to embed a message [18, 24]. Audio steganography embeds the message into a cover audio file as noise at a frequency outside the human hearing range [19]. One major category, and perhaps the most difficult kind of steganography, is text or linguistic steganography [3]. Text steganography uses written natural language to conceal a secret message, as defined by Chapman et al. [16]. Steganalysis, from an opponent's perspective, is the art of deterring covert communications while avoiding affecting innocent ones. Its basic requirement is to determine accurately whether a secret message is hidden in the medium under test. Further requirements may include judging the type of the steganography, estimating the rough length of the message, or even extracting the hidden message. The challenge of steganalysis is that, unlike cryptanalysis, where it is evident that intercepted encrypted data contains a message, steganalysis generally starts with several suspect information streams and uncertainty as to whether any of them contains a hidden message.


Figure 2: Generic Steganography and Steganalysis

In this paper, a novel steganalysis method based on the statistical moments of audio signals, together with multiple regression analysis on features derived from those moments, is proposed. The performance of the steganalyzer has been demonstrated by extensive experimental investigation. The paper is organized as follows: Section II reviews some existing work on audio steganography. Section III reviews previous work on audio steganalysis. Section IV describes the various methods for audio feature selection. Experimental results are discussed in Section V, and Section VI draws the conclusion.

II. REVIEW OF RELATED WORKS ON AUDIO STEGANOGRAPHY

This section presents some existing techniques of audio data hiding, namely Least Significant Bit encoding, phase coding, echo hiding and spread spectrum techniques. There are two main areas of modification in an audio signal for data embedding: first, the storage environment, or digital representation, of the signal that will be used, and second, the transmission pathway the signal might travel [4, 11].

A. Least Significant Bit Encoding
The simplest way of embedding information in a digital audio file is Least Significant Bit (LSB) coding. By substituting the least significant bit of each sampling point with a binary message bit, LSB coding encodes data and produces the stego audio. In LSB coding, the ideal data transmission rate is 1 kbps per 1 kHz. The main disadvantage of LSB coding is its low embedding capacity. In some cases an attempt has been made to overcome this limitation by replacing the two least significant bits of a sample with two message bits. This increases the data embedding capacity, but also increases the amount of resulting noise in the audio file. A novel method of LSB coding which increases the limit up to four bits is proposed by Nedeljko Cvejic et al. [13, 16]. To extract the secret message from an LSB-encoded audio file, the receiver needs access to the sequence of sample indices used during the embedding process. Normally, the length of the secret message to be embedded is smaller than the total number of samples. There are two other disadvantages associated with LSB coding. The first is that the human ear is very sensitive and can often detect the presence of a single bit of noise in an audio file. The second is that LSB coding is not very robust: embedded information will be lost through even a small modification of the stego audio.

B. Phase Coding
Phase coding [11, 16] overcomes the disadvantages of the noise-inducing methods of audio steganography. Phase coding is designed based on the fact that the phase components of sound are not as perceptible to the human ear as noise is. This method encodes the message bits as phase shifts in the phase spectrum of a digital signal, achieving an inaudible encoding in terms of signal-to-noise ratio. Figure 3 below shows the original and encoded signals of the phase coding technique.

Figure 3: The original signal and the encoded signal of the phase coding technique.

The phase coding procedure is summarized as follows:
• The original audio signal is broken up into smaller segments whose lengths equal the size of the message to be embedded.
• The Discrete Fourier Transform (DFT) is applied to each segment to create a matrix of phases and Fourier transform magnitudes.
• Phase differences between adjacent segments are calculated next.
• Phase shifts between consecutive segments are easily detected. In other words, the absolute phases of the segments can be changed, but the relative phase differences between adjacent segments must be preserved. Thus the secret message is inserted only in the phase vector of the first signal segment.
• A new phase matrix is created using the new phase of the first segment and the original phase differences.
• Using the new phase matrix and the original magnitude matrix, the audio signal is reconstructed by applying the inverse DFT and concatenating the audio segments.

To extract the secret message from the audio file, the receiver needs to know the segment length and can then recover the message through the reverse process. The disadvantage of phase coding is its low data embedding rate, due to the fact that the secret message is encoded in the first signal segment only. This can be mitigated by increasing the length of the signal segments, but doing so changes the phase relations between the frequency components of each segment more drastically, making the encoding easier to detect. Thus, the phase coding method is useful only when a small amount of data, such as a watermark, needs to be embedded.

C. Echo Hiding
In the echo hiding method [14, 15, 16], information is embedded into an audio file by inducing an echo into the discrete signal. Like the spread spectrum method, echo hiding has the advantage of a high embedding capacity and superior robustness compared to the noise-inducing methods. If only one echo were produced from the original signal, only one bit of information could be encoded. Therefore, the original signal is broken down into blocks before the encoding process begins. Once the encoding process is completed, the blocks are concatenated back together to form the final signal.

To extract the secret message from the final stego audio signal, the receiver must be able to break up the signal into the same block sequence used during the encoding process. The autocorrelation function of the signal's cepstrum (the forward Fourier transform of the logarithm of the signal's frequency spectrum) can then be used to decode the message, because it reveals a spike at each echo time offset, allowing the message to be reconstructed.

Figure 4: Echo Hiding Methodology.
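The embedding side of echo hiding can be sketched in a few lines of pure Python. This is an illustrative simplification, not the implementation used in any of the cited works: a single echo per block, delays `d0`/`d1` and amplitude `alpha` chosen arbitrarily, and the impulse "audio" is synthetic so the inserted echoes are easy to locate.

```python
def embed_echo_block(block, bit, alpha=0.4, d0=50, d1=100):
    """Add one attenuated, delayed copy of the block to itself; the
    delay (d0 for bit 0, d1 for bit 1) carries one message bit."""
    d = d1 if bit else d0
    return [x + alpha * (block[n - d] if n >= d else 0.0)
            for n, x in enumerate(block)]

def embed_echo(signal, bits, block_len=1024):
    """Split the cover into blocks, hide one bit per block, then
    concatenate the blocks back together."""
    out = []
    for i, bit in enumerate(bits):
        out.extend(embed_echo_block(signal[i * block_len:(i + 1) * block_len], bit))
    out.extend(signal[len(bits) * block_len:])  # leave the tail untouched
    return out

# Impulses make the inserted echoes easy to see: each block gains a
# scaled copy of its impulse at the bit-dependent delay.
signal = [0.0] * 3000
signal[0] = 1.0
signal[1024] = 1.0
stego = embed_echo(signal, [0, 1])
```

Decoding, as described above, would examine the cepstrum of each block for a spike at offset `d0` or `d1`; that step is omitted here.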

D. Spread Spectrum
The spread spectrum (SS) method [16] attempts to spread the secret information across the audio signal's frequency spectrum as much as possible. This is analogous to an LSB coding system that randomly spreads the message bits over the entire audio file. The difference is that, unlike LSB coding, the SS method spreads the secret message over the audio file's frequency spectrum using a code that is independent of the actual signal. As a result, the final signal occupies more bandwidth than is actually required for embedding. Two versions of SS can be used for audio steganography: in direct sequence SS, the secret message is spread out by a constant called the chip rate, modulated with a pseudo-random signal and then interleaved with the cover signal, whereas in frequency-hopping SS, the audio file's frequency spectrum is altered so that it hops rapidly between frequencies. The spread spectrum method has a higher embedding capacity than the LSB coding and phase coding techniques while maintaining a high level of robustness. However, the SS method shares with LSB and parity coding the disadvantage that it can introduce noise into the audio file at the time of embedding.
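The direct-sequence idea of spreading each bit by the chip rate and modulating it with a pseudo-random code can be sketched as follows. This is a minimal baseband illustration (no carrier, no mixing with a cover signal); the chip rate and seed are arbitrary choices, and the PRN seed stands in for the shared secret key.

```python
import random

def spread(bits, chip_rate, seed=7):
    """Direct-sequence spreading: each message bit becomes chip_rate
    +/-1 'chips', multiplied by a pseudo-random +/-1 code that is
    independent of the cover signal."""
    rng = random.Random(seed)  # the seed plays the role of the shared key
    chips = [1 if b else -1 for b in bits for _ in range(chip_rate)]
    prn = [rng.choice((-1, 1)) for _ in range(len(chips))]
    return [c * p for c, p in zip(chips, prn)], prn

def despread(received, prn, chip_rate):
    """Correlate with the same PRN code and integrate over each chip
    group; the sign of each sum recovers one message bit."""
    corr = [r * p for r, p in zip(received, prn)]
    return [1 if sum(corr[i:i + chip_rate]) > 0 else 0
            for i in range(0, len(corr), chip_rate)]

message = [1, 0, 1, 1, 0]
tx, prn = spread(message, chip_rate=8)
recovered = despread(tx, prn, chip_rate=8)
```

Because each despread sum integrates `chip_rate` chips, the recovered bits survive a moderate amount of additive noise, which is the property that lets the embedded signal sit at low amplitude inside the cover.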

III. REVIEW OF RELATED WORKS ON AUDIO STEGANALYSIS

Audio steganalysis is very difficult due to the existence of advanced audio steganography schemes, and the very nature of audio signals as high-capacity data streams necessitates scientifically challenging statistical analysis [29].

A. Phase and Echo Steganalysis
Zeng et al. [17] proposed steganalysis algorithms to detect phase coding steganography based on the analysis of phase discontinuities, and to detect echo steganography based on the statistical moments of peak frequency [18]. The phase steganalysis algorithm exploits the fact that phase coding corrupts the extrinsic continuities of the unwrapped phase in each audio segment, causing changes in the phase differences [19]. A statistical analysis of the phase difference in each audio segment can be used to monitor the change and to train classifiers to differentiate an embedded audio signal from a clean audio signal.

B. Universal Steganalysis based on Recorded Speech
Johnson et al. [20] proposed a generic universal steganalysis algorithm that bases its study on the statistical regularities of recorded speech. Their statistical model decomposes an audio signal (i.e., recorded speech) using basis functions localized in both the time and frequency domains, in the form of the Short Time Fourier Transform (STFT). The spectrograms collected from this decomposition are analyzed using non-linear support vector machines to differentiate between cover and stego audio signals. This approach is likely to work only for high bit-rate audio steganography and will not be effective for detecting low bit-rate embeddings.

C. Use of Statistical Distance Measures for Audio Steganalysis
H. Ozer et al. [21] calculated the distribution of various statistical distance measures on cover audio signals and stego audio signals vis-à-vis their de-noised versions and observed them to be statistically different. The authors employed audio quality metrics to capture the anomalies introduced into the signal by the embedded data. They designed an audio steganalyzer that relied on a choice of audio quality measures, which were tested depending on their perceptual or non-perceptual nature. The selection of the proper features and quality measures was conducted using (i) the ANOVA test [22], to determine whether there are any statistically significant differences between the available conditions, and (ii) the SFS (Sequential Floating Search) algorithm, which considers the inter-correlation between the test features in ensemble [23]. Subsequently, two classifiers, one based on linear regression and another based on support vector machines, were used and simultaneously evaluated for their capability to detect stego messages embedded in the audio signals.

D. Audio Steganalysis based on Hausdorff Distance
The audio steganalysis algorithm proposed by Liu et al. [24] uses the Hausdorff distance measure [25] to measure the distortion between a cover audio signal and a stego audio signal. The algorithm takes as input a potentially stego audio signal x and its de-noised version x̂ as an estimate of the cover signal. Both x and x̂ are then subjected to appropriate segmentation and wavelet decomposition to generate wavelet coefficients [26] at different levels of resolution. The Hausdorff distance values between the wavelet coefficients of the audio signals and their de-noised versions are measured. The statistical moments of the Hausdorff distance measures are used to train a classifier on the difference between cover audio signals and stego audio signals with different content loadings.

E. Audio Steganalysis for High Complexity Audio Signals
More recently, Liu et al. [27] proposed the use of stream data mining for the steganalysis of audio signals of high complexity. Their approach extracts the second-order-derivative-based Markov transition probabilities and high frequency spectrum statistics as the features of the audio streams. The variations in the second-order-derivative-based features are explored to distinguish between cover and stego audio signals. This approach also uses the Mel-frequency cepstral coefficients [28], widely used in speech recognition, for audio steganalysis. Recently, two new methods for the audio steganalysis of spread spectrum information hiding have been proposed in [31-32].

IV. AUDIO FEATURE SELECTION

In this section, audio quality measures in terms of moments up to the 7th order, both statistical and invariant, are investigated for the purpose of audio steganalysis. The various moments of an audio signal are sensitive to the presence of an embedded steganographic message. Moment-based features have been extracted as a steganalytic measure in such a way that they reflect the quality of a distorted or degraded audio signal vis-à-vis its original in an accurate, consistent and monotonic way. Such a measure, in the context of steganalysis, should respond to the presence of a hidden message with minimum error, should work for a large variety of embedding methods, and its reaction should be proportional to the embedding strength.

A. Moment based Audio Features
To construct the features of both cover and stego (suspicious) audio signals, the moments of the audio series have been computed. In mathematics, a moment is, loosely speaking, a quantitative measure of the shape of a set of points. The second moment, for example, is widely used and measures the "width" (in a particular sense) of a set of points in one dimension, or, in higher dimensions, the shape of a cloud of points as it could be fit by an ellipsoid. Other moments describe other aspects of a distribution, such as how the distribution is skewed from its mean, or how sharply it is peaked. There are two ways of viewing moments [30], one based on statistics and one based on arbitrary functions such as f(x) or f(x, y); as a result, moments can be defined in more than one way.

Statistical view
Moments are the statistical expectations of certain power functions of a random variable. The most common moment is the mean, which is just the expected value of a random variable, as given in equation (1).


µ = E[X] = ∫_{−∞}^{+∞} x f(x) dx        (1)

Here f(x) is the probability density function of the continuous random variable X. More generally, moments of order p = 0, 1, 2, … can be calculated as m_p = E[X^p]. These are sometimes referred to as the raw moments. There are other kinds of moments that are often useful. One of these is the central moment µ_p = E[(X − µ)^p]. The best known central moment is the second, which is known as the variance, given in equation (2).

σ² = ∫ (x − µ)² f(x) dx = m_2 − µ_1²        (2)

Two less common statistical measures, skewness and kurtosis, are based on the third and fourth central moments. The use of expectation assumes that the pdf is known. Moments are easily extended to two or more dimensions as shown in equation 3.

m_pq = E[X^p Y^q] = ∫∫ x^p y^q f(x, y) dx dy        (3)

Here f(x, y) is the joint pdf.

Estimation
Moments are easy to estimate from a set of measurements x_i. The p-th raw moment is estimated as given in equation (4), and the p-th central moment as in equation (5).

m_p = (1/N) Σ_{i=1}^{N} x_i^p        (4)

(Often 1/N is left out of the definition) and the p-th central moment is estimated as

µ_p = (1/N) Σ_{i=1}^{N} (x_i − x̄)^p        (5)

where x̄ is the average of the measurements, which is the usual estimate of the mean. The second central moment gives the variance of a set of data, s² = µ_2. For multidimensional distributions, the first and second order moments give estimates of the mean vector and covariance matrix. The order of a moment in two dimensions is given by p + q, so for orders above 0 there is more than one moment of a given order. For example, m_20, m_11 and m_02 are the three moments of order 2.

Non-statistical view
This view is not based on probability and expected values, but most of the same ideas still hold. For any arbitrary function f(x), one may compute moments using equation (6), or, for a 2-D function, using equation (7).

m_p = ∫_{−∞}^{+∞} x^p f(x) dx        (6)

m_pq = ∫∫ x^p y^q f(x, y) dx dy        (7)

Notice now that to find the mean value of f(x), one must use m_1/m_0, since f(x) is not normalized to area 1 like the pdf. Likewise, for higher order moments it is common to normalize by dividing by m_0 (or m_00). This yields moments which depend only on the shape and not on the magnitude of f(x). The result of normalizing moments gives measures which contain information about the shape or distribution (not the probability distribution) of f(x).
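As a concrete check of the estimators in equations (4) and (5), the following is a small pure-Python sketch; the sample values are illustrative, not taken from the experiments.

```python
def raw_moment(xs, p):
    """Estimate m_p = (1/N) * sum(x_i ** p), as in equation (4)."""
    return sum(x ** p for x in xs) / len(xs)

def central_moment(xs, p):
    """Estimate mu_p = (1/N) * sum((x_i - mean) ** p), as in equation (5)."""
    mean = raw_moment(xs, 1)
    return sum((x - mean) ** p for x in xs) / len(xs)

# A small illustrative sample: m_1 is the mean, mu_2 the variance,
# and the sign of mu_3 indicates the direction of skew.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = raw_moment(data, 1)
variance = central_moment(data, 2)
```

For this sample the mean is 5.0 and the second central moment is 4.0, matching the usual population-variance formula.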

Digital approximation
For digitized data (including images) the integral must be replaced with a summation over the domain covered by the data. The 2-D approximation is written in equation (8).

m_pq = Σ_{i=1}^{M} Σ_{j=1}^{N} f(x_i, y_j) x_i^p y_j^q = Σ_{i=1}^{M} Σ_{j=1}^{N} f(i, j) i^p j^q        (8)

If f(x, y) is a binary image function of an object, the area is m_00 and the x and y centroids are x̄ = m_10/m_00 and ȳ = m_01/m_00.

Invariance
In many applications, such as shape recognition, it is useful to generate shape features which are independent of parameters that cannot be controlled in an image. Such features are called invariant features, and there are several types of invariance. For example, if an object may occur in an arbitrary location in an image, then the moments need to be invariant to location. For binary connected components, this can be achieved simply by using the central moments µ_pq. If an object is not at a fixed distance from a fixed-focal-length camera, then the size of the object will not be fixed, and size invariance is needed. This can be achieved by normalizing the moments as given in equation (9).

η_pq = µ_pq / µ_00^γ,  where γ = (p + q)/2 + 1        (9)

The third common type of invariance is rotation invariance. This is not always needed; for example, objects may always have a known direction, as in recognizing machine-printed text in a document, where the direction can be established by locating lines of text. M. K. Hu derived a transformation of the normalized central moments that makes the resulting moments rotation invariant, as given in equation (10).

φ_1 = η_20 + η_02
φ_2 = (η_20 − η_02)² + 4η_11²
φ_3 = (η_30 − 3η_12)² + (3η_21 − η_03)²
φ_4 = (η_30 + η_12)² + (η_21 + η_03)²
φ_5 = (η_30 − 3η_12)(η_30 + η_12)[(η_30 + η_12)² − 3(η_21 + η_03)²] + (3η_21 − η_03)(η_21 + η_03)[3(η_30 + η_12)² − (η_21 + η_03)²]
φ_6 = (η_20 − η_02)[(η_30 + η_12)² − (η_21 + η_03)²] + 4η_11(η_30 + η_12)(η_21 + η_03)
φ_7 = (3η_21 − η_03)(η_30 + η_12)[(η_30 + η_12)² − 3(η_21 + η_03)²] − (η_30 − 3η_12)(η_21 + η_03)[3(η_30 + η_12)² − (η_21 + η_03)²]        (10)
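The normalized central moments of equation (9) and the first two Hu invariants can be sketched in pure Python for a binary object given as pixel coordinates; the L-shaped point set below is an arbitrary illustration, and function names are ours.

```python
def mu(pts, p, q):
    """Central moment mu_pq of a binary object given as (x, y) pixel
    coordinates; f(x, y) is 1 on the listed points and 0 elsewhere."""
    n = len(pts)
    xbar = sum(x for x, _ in pts) / n
    ybar = sum(y for _, y in pts) / n
    return sum((x - xbar) ** p * (y - ybar) ** q for x, y in pts)

def eta(pts, p, q):
    """Normalized central moment of equation (9): mu_pq / mu_00 ** gamma."""
    gamma = (p + q) / 2 + 1
    return mu(pts, p, q) / mu(pts, 0, 0) ** gamma

def phi1_phi2(pts):
    """The first two Hu invariants of equation (10)."""
    e20, e02, e11 = eta(pts, 2, 0), eta(pts, 0, 2), eta(pts, 1, 1)
    return e20 + e02, (e20 - e02) ** 2 + 4 * e11 ** 2

# An L-shaped pixel set and the same shape rotated by 90 degrees:
# the invariants should agree for both orientations.
shape = [(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)]
rotated = [(-y, x) for x, y in shape]
```

Under the 90-degree rotation (x, y) → (−y, x), µ_20 and µ_02 swap and µ_11 changes sign, so both φ_1 and φ_2 are unchanged, which is exactly what the transformation is designed to guarantee.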

B. Regression based Analysis Method
In statistics, regression analysis includes any technique for modeling and analyzing several variables when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied while the other independent variables are held fixed. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a probability distribution. Regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. It is also used to understand which among the independent variables are related to the dependent variable and to explore the forms of these relationships. In restricted circumstances, regression analysis can be used to infer causal relationships between the independent and dependent variables. A large body of techniques for carrying out regression analysis has been developed. Familiar methods such as linear regression and ordinary least squares regression are parametric, in that the regression function is defined in terms of a finite number of unknown parameters that are estimated from the data. Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional. The performance of regression analysis methods in practice depends on the form of the data-generating process and how it relates to the regression approach being used.
Since the true form of the data-generating process is in general not known, regression analysis often depends to some extent on assumptions about this process. These assumptions are sometimes (but not always) testable if a large amount of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally. However, in many applications, especially with small effects or questions of causality based on observational data, regression methods can give misleading results.

Regression models: Regression models involve the following variables:
• The unknown parameters, denoted β; this may be a scalar or a vector.
• The independent variables, X.
• The dependent variable, Y.
In various fields of application, different terminologies are used in place of dependent and independent variables. A regression model relates Y to a function of X and β, usually formalized as
E(Y | X) = f(X, β)        (11)
To carry out regression analysis, the form of the function f must be specified. Sometimes the form of this function is based on knowledge about the relationship between Y and X that does not rely on the data. If no such knowledge is available, a flexible or convenient form for f is chosen.

Linear regression: In linear regression, the model specification is that the dependent variable y_i is a linear combination of the parameters (but need not be linear in the independent variables). For example, in simple linear regression for modeling n data points there is one independent variable, x_i, and two parameters, β_0 and β_1:
Y_i = β_0 + β_1 x_i + ε_i,  i = 1, …, n        (12)
In multiple linear regression, there are several independent variables or functions of independent variables.
For example, adding a term x_i² to the preceding regression gives:
Y_i = β_0 + β_1 x_i + β_2 x_i² + ε_i,  i = 1, …, n        (13)
This is still linear regression; although the expression on the right-hand side is quadratic in the independent variable x_i, it is linear in the parameters β_0, β_1 and β_2. In both cases, ε_i is an error term and the subscript i indexes a particular observation. Given a random sample from the population, we estimate the population parameters and obtain the sample linear regression model:
Ŷ_i = β_0 + β_1 x_i        (14)

The residual ei = Yi - Ŷi is the difference between the value of the dependent variable predicted by the model, Ŷi, and its true value, Yi.

C. Audio Steganalysis based on Moment-based Multiple Regression Analysis

The steganalysis technique proposed here tests for the presence of a hidden message by combining statistical-moment and invariant-moment analysis with multiple regression analysis on the cover and stego data series, estimating both the presence of a secret message and its approximate size. Let C and S be the series modeling the distributions of the cover and stego audio signals. A multiple regression analysis expressing S (the dependent variable) as a function of the cover data C (the independent variable) can be modeled as S = I + a * C, where I is the intercept and a is the regression coefficient. Experiments show that regressing a series b on an identical series a yields b = 0 + 1 * a, i.e. intercept 0 and slope 1. The steganalysis approach is designed on this fact, taking the cover audio data as the independent series and the stego audio data as the dependent series. The experimental results show that introducing a secret message, or increasing its length, changes the moment parameters. Hence it can be hypothesized that regression between cover data and stego data will cluster differently for clean and stego signals; this is the basis of the proposed steganalyzer, which aims to classify an audio signal as original or suspicious. Multiple regression modeling is also used to estimate the strength of the association among the variables of a series. The steganalysis system works in four stages: moment-based feature generation, feature selection, SVM classifier selection and training, and finally steganalysis, i.e. detection of the suspicious audio signal. The first three steps were performed using only the training set; the final step was performed on the test collection to generate the submitted results.
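As an illustration of the regression step, the sketch below (my own minimal reconstruction, not the authors' implementation; the function names and the choice of NumPy are assumptions) computes log-moment features of a signal and fits the linear model S = I + a * C between a cover series and a stego series. For two identical series the fit returns intercept ≈ 0 and slope ≈ 1, which is the clean-signal baseline the steganalyzer compares against.

```python
import numpy as np

def moment_features(x, orders=range(2, 8)):
    """Log of the absolute central moments of orders 2..7, used as features."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    return [float(np.log(np.abs(((x - mu) ** k).mean()) + 1e-12)) for k in orders]

def fit_cover_stego(cover, stego):
    """Least-squares fit of stego = intercept + slope * cover."""
    slope, intercept = np.polyfit(cover, stego, 1)
    return float(intercept), float(slope)

# Regressing a series against itself gives intercept ~0 and slope ~1,
# so deviations from (0, 1) hint at embedded data.
rng = np.random.default_rng(0)
cover = rng.normal(size=4096)
intercept, slope = fit_cover_stego(cover, cover)
```

In practice the features of many cover/stego pairs would be fed to the SVM classifier described above; this sketch only shows the per-signal computation.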

Figure 5: Block Diagram of the proposed steganalysis system

V. EXPERIMENTAL RESULTS

The steganalyzer has been designed using a training set and various audio steganographic tools, namely Steganos [15] and S-Tools [16]. In the experiments, 10 WAV audio signals were used for training and 10 for testing. Various stego audios were created by applying the LSB replacement algorithm to embed secret messages into the cover audio at insertion rates of 100, 200 and 400 characters, doubling thereafter up to 51200 characters. Multiple regression analysis based on the statistical moments, the invariant moments, and their combination was used to estimate the various regression statistics of these stego audios. This steganalyzer can predict the approximate area of the hidden message, and the embedded message length can also be predicted on the basis of these

regression statistics. From Figure 6 it can be seen that introducing even a small message changes all the invariant moment values. Figure 7 shows the rate of change of invariant moments of various orders with changes in insertion rate.
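For concreteness, the LSB replacement used to build the stego files can be sketched as below (a hypothetical minimal implementation, not the authors' code; it assumes 16-bit PCM samples held in a NumPy array and an ASCII message).

```python
import numpy as np

def embed_lsb(samples, message):
    """Replace the least significant bit of the first len(bits) samples
    with the message bits (MSB-first within each character)."""
    bits = np.unpackbits(np.frombuffer(message.encode("ascii"), dtype=np.uint8))
    if bits.size > samples.size:
        raise ValueError("message too long for cover signal")
    stego = samples.copy()
    stego[:bits.size] = (stego[:bits.size] & ~1) | bits
    return stego

def extract_lsb(stego, n_chars):
    """Recover n_chars characters from the sample LSBs."""
    bits = (stego[:8 * n_chars] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode("ascii")
```

Each embedded bit perturbs a sample by at most 1 LSB, which is why the moment statistics change only slightly, as the tables below show.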
Insertion Rate (char)   0        100      200      400      800      1600     3200     6400     12800    25600    51200
Φ1 |log|                16.8198  16.8202  16.8207  16.8189  16.8197  16.8193  16.8196  16.8201  16.8159  16.8175  16.8195
Φ2 |log|                33.0476  33.0483  33.0495  33.0458  33.0475  33.0466  33.0472  33.0483  33.0397  33.0429  33.0471
Φ3 |log|                45.4821  45.5179  45.4880  45.4900  45.4867  45.4931  45.5013  45.4945  45.5072  45.5390  45.5394
Φ4 |log|                45.4821  45.5179  45.4880  45.4900  45.4867  45.4931  45.5013  45.4945  45.5072  45.5390  45.5394
Φ5 |log|                90.7469  90.8187  90.7588  90.7628  90.7561  90.7689  90.7854  90.7717  90.7973  90.8609  90.8618
Φ6 |log|                61.8972  61.9335  61.9042  61.9043  61.9018  61.9078  61.9163  61.9100  61.9185  61.9519  61.9544
Φ7 |log|                Inf      12.4766  Inf      12.4766  Inf      12.1951  Inf      Inf      Inf      Inf      Inf

Figure 6: Invariant moment values of the chimes.wav audio signal at various embedding rates.

Figure 7: Rate of change of invariant moments of various orders at various embedding rates.

From Figure 8 it can be seen that introducing a small message also changes all the statistical moment values, and Figure 9 shows the rate of change of statistical moments of various orders with changes in insertion rate. Finally, Figures 10 and 11 show the changes of the combined moment values of various orders in tabular and graphical form respectively.

Insertion Rate (char)   0        100      200      400      800      1600     3200     6400     12800    25600    51200
M1 |log|                Inf      Inf      Inf      Inf      Inf      Inf      Inf      Inf      Inf      Inf      Inf
M2 |log|                5.0144   5.0144   5.0143   5.0144   5.0144   5.0144   5.0144   5.0144   5.0144   5.0144   5.0144
M3 |log|                10.7782  10.7795  10.7790  10.7789  10.7808  10.7793  10.7844  10.7870  10.7800  10.7833  10.7782
M4 |log|                6.7931   6.7931   6.7931   6.7931   6.7931   6.7932   6.7932   6.7932   6.7931   6.7932   6.7932
M5 |log|                10.0539  10.0543  10.0543  10.0540  10.0544  10.0539  10.0570  10.0573  10.0540  10.0548  10.0541
M6 |log|                7.5902   7.5902   7.5902   7.5902   7.5902   7.5904   7.5905   7.5905   7.5902   7.5903   7.5903
M7 |log|                9.9484   9.9485   9.9486   9.9484   9.9486   9.9499   9.9505   9.9506   9.9483   9.9487   9.9487

Figure 8: Statistical moment values of the chimes.wav audio signal at various embedding rates.

Figure 9: Rate of change of statistical moments of various orders at various embedding rates.

Insertion Rate (char)   0         100       200       400       800       1600      3200      6400      12800     25600     51200
CM1 |log|               Inf       Inf       Inf       Inf       Inf       Inf       Inf       Inf       Inf       Inf       Inf
CM2 |log|               38.0620   38.0627   38.0638   38.0602   38.0619   38.0610   38.0616   38.0627   38.0541   38.0573   38.0615
CM3 |log|               56.2603   56.2974   56.2670   56.2689   56.2675   56.2724   56.2857   56.2815   56.2872   56.3223   56.3176
CM4 |log|               52.2752   52.3110   52.2811   52.2831   52.2798   52.2863   52.2945   52.2877   52.3003   52.3322   52.3326
CM5 |log|               100.8008  100.8730  100.8131  100.8168  100.8105  100.8228  100.8424  100.8290  100.8513  100.9157  100.9159
CM6 |log|               69.4874   69.5237   69.4944   69.4945   69.4920   69.4982   69.5068   69.5005   69.5087   69.5422   69.5447
CM7 |log|               Inf       22.4251   Inf       22.4250   Inf       22.1450   Inf       Inf       Inf       Inf       Inf
CAVGM                   39.7357   39.7585   39.7399   39.7404   39.7390   39.7426   39.7489   39.7452   39.7502   39.7712   39.7715

Figure 10: Combined moment values of the chimes.wav audio signal at various embedding rates.

Figure 11: Rate of change of combined moments of various orders at various embedding rates.


Figure 12: Rate of Change of Average Combined Moments value at various insertion rates

To build the classes of cover audio signals, the combined moment values of the 10 original audio signals (chimes.wav, heartbeat.wav, etc.) are inserted into a database. The next step is to produce stego audio signals from those originals, with random bits at different rates, in another database. The original audio signal database with the various moment values is used as the training set from which the classifiers are tuned; with its help, an incoming audio signal can be classified as a stego or cover audio signal. Figure 12 shows the rate of change of the average combined moment values at various insertion rates, whereas Figure 13 shows the combined moment values at the various segments of the original audio signal chimes.wav, together with the change in the moment values after embedding 100 characters in segment 1. From the figure it can be seen that the moment values changed in segment 1 with the introduction of data, whereas the parameters of segments 2, 3 and 4 remain unchanged.
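The segment-wise localization just described can be sketched as follows (an illustrative reconstruction under my own naming, using a single second-order log-moment per segment rather than the full combined-moment set): the segment whose moment differs most between the cover and the suspect signal is flagged as the likely hiding area.

```python
import numpy as np

def log_moment2(x):
    """Log of the second central moment of one segment."""
    x = np.asarray(x, dtype=float)
    return float(np.log(((x - x.mean()) ** 2).mean() + 1e-12))

def locate_hidden_segment(cover, stego, n_segments=4):
    """Return the index of the segment whose moment changed the most."""
    deltas = [abs(log_moment2(c) - log_moment2(s))
              for c, s in zip(np.array_split(cover, n_segments),
                              np.array_split(stego, n_segments))]
    return int(np.argmax(deltas))
```

Untouched segments contribute a delta of exactly zero, so any segment carrying embedded bits dominates the argmax, mirroring the segment-1 result reported above.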

Figure 13: Combined moment values at the various audio segments of the original and stego audio.

Thus it can be concluded that the secret data has been embedded in the segment-1 portion of the cover audio signal. The next step is to predict the length of the hidden message. The classifier is trained with the combined moment values of orders 2, 3, 4, 5 and 6 of the original audio signal and of the stego signals at different embedding rates, in order to form a relation between the embedding capacity and these moment values. The insertion rate can be computed from the various

moments' values using multiple regression analysis. The insertion rate (IR) can be calculated from eqn (a) or eqn (b) below:

Insertion Rate (in char) = -41895724 + 12170436 * CM2|log| - 2204991 * CM3|log| + 17073509 * CM4|log| + 5327565 * CM5|log| - 24850928 * CM6|log|   (a)

Insertion Rate (in %) = -117.1 + 19.32 * CM2|log| - 28.06 * CM3|log| - 40.7 * CM4|log| + 51.82 * CM5|log| - 30.73 * CM6|log|   (b)

[Considering the maximum embedding capacity of chimes.wav to be 169984 characters.]

The embedding capacity can be predicted on the basis of the insertion rate as given in Figure 14, which establishes a prediction accuracy of 70-75% for this steganalyzer.
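Eqn (a) can be applied directly once the combined log-moments are measured; the helper below simply evaluates the fitted regression (coefficients copied from eqn (a); the CM values in the example call are the zero-rate column of Figure 10, and the output is only a rough estimate, as the 70-75% accuracy figure suggests).

```python
def insertion_rate_chars(cm2, cm3, cm4, cm5, cm6):
    """Eqn (a): predicted insertion rate (in characters) from the
    combined log-moments CM2..CM6 of the audio signal."""
    return (-41895724
            + 12170436 * cm2
            - 2204991 * cm3
            + 17073509 * cm4
            + 5327565 * cm5
            - 24850928 * cm6)

# Zero-rate combined moments of chimes.wav, taken from Figure 10.
ir = insertion_rate_chars(38.0620, 56.2603, 52.2752, 100.8008, 69.4874)
```

Note the very large coefficients: the CM values barely move with the embedding rate, so the regression amplifies tiny moment differences, which is also why the prediction error stays in the 30-35% range reported below.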

Figure 14: Error rate in Prediction

From the table in Figure 14 it can be concluded that the insertion rate can be predicted with an average error rate of 30-35%. Thus the embedding length of the secret message can be successfully predicted 65-70% of the time with this steganalyzer.

VI. CONCLUSION

In this paper a new approach to LSB audio steganalysis is proposed, in which a moment-based multiple regression analysis technique is used for steganalysis. The algorithm not only detects the presence of a hidden message but can also predict the length of the secret message. The method is furthermore capable of finding the approximate hidden area of the secret message, which is one of its advantages over the other existing methods.

REFERENCES
[1] N. F. Johnson and S. Jajodia, "Steganography: seeing the unseen," IEEE Computer, 16:26-34, 1998.
[2] T. Mrkel, J. H. P. Eloff and M. S. Olivier, "An overview of image steganography," in Proceedings of the Fifth Annual Information Security South Africa Conference, 2005.
[3] S. Bhattacharyya and G. Sanyal, "Study of Secure Steganography model," in Proceedings of the International Conference on Advanced Computing & Communication Technologies (ICACCT-2008), Panipat, India, Nov. 2008.
[4] S. Bhattacharyya and G. Sanyal, "An Image based Steganography model for promoting Global Cyber Security," in Proceedings of the International Conference on Systemics, Cybernetics and Informatics (ICSCI-2009), Hyderabad, India, Jan. 2009.
[5] S. Bhattacharyya and G. Sanyal, "Implementation and Design of an Image based Steganographic model," in Proceedings of the IEEE International Advance Computing Conference (IACC-2009).
[6] S. Bhattacharyya, A. P. Kshitij and G. Sanyal, "A Novel Approach to Develop a Secure Image based Steganographic Model using Integer Wavelet Transform," in Proceedings of the International Conference on Recent Trends in Information, Telecommunication and Computing (ITC 2010). (Indexed by IEEE Computer Society.)
[7] C. Kraetzer and J. Dittmann, "Pros and Cons of Mel cepstrum based Audio Steganalysis using SVM Classification," Lecture Notes in Computer Science, vol. 4567, pp. 359-377, January 2008.
[8] W. Zeng, H. Ai and R. Hu, "A Novel Steganalysis Algorithm of Phase Coding in Audio Signal," in Proceedings of the 6th International Conference on Advanced Language Processing and Web Information Technology, pp. 261-264, August 2007.
[9] W. Zeng, H. Ai and R. Hu, "An Algorithm of Echo Steganalysis based on Power Cepstrum and Pattern Classification," in Proceedings of the International Conference on Information and Automation, pp. 1667-1670, June 2008.
[10] M. K. Johnson, S. Lyu and H. Farid, "Steganalysis of Recorded Speech," in Proceedings of the Conference on Security, Steganography and Watermarking of Multimedia Contents VII, vol. 5681, SPIE, pp. 664-672, May 2005.
[11] H. Ozer, I. Avcibas, B. Sankur and N. D. Memon, "Steganalysis of Audio based on Audio Quality Metrics," in Proceedings of the Conference on Security, Steganography and Watermarking of Multimedia Contents V, vol. 5020, SPIE, pp. 55-66, January 2003.
[12] Y. Liu, K. Chiang, C. Corbett, R. Archibald, B. Mukherjee and D. Ghosal, "A Novel Audio Steganalysis based on Higher-Order Statistics of a Distortion Measure with Hausdorff Distance," Lecture Notes in Computer Science, vol. 5222, pp. 487-501, September 2008.
[13] D. P. Huttenlocher, G. A. Klanderman and W. J. Rucklidge, "Comparing Images using Hausdorff Distance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 9, pp. 850-863, September 1993.
[14] T. Holotyak, J. Fridrich and S. Voloshynovskiy, "Blind Statistical Steganalysis of Additive Steganography using Wavelet Higher Order Statistics," Lecture Notes in Computer Science, vol. 3677, pp. 273-274, September 2005.
[15] I. Avcibas, N. Memon and B. Sankur, "Steganalysis using image quality metrics," IEEE Transactions on Image Processing, 12(2):221-229, 2003.
[16] J. Fridrich, M. Goljan and D. Hogea, "Attacking the OutGuess," in Proceedings of the 2002 ACM Workshop on Multimedia and Security, ACM Press, 2002.
[17] W. Zeng, H. Ai and R. Hu, "A Novel Steganalysis Algorithm of Phase Coding in Audio Signal," in Proceedings of the 6th International Conference on Advanced Language Processing and Web Information Technology, pp. 261-264, August 2007.
[18] W. Zeng, H. Ai and R. Hu, "An Algorithm of Echo Steganalysis based on Power Cepstrum and Pattern Classification," in Proceedings of the International Conference on Information and Automation, pp. 1667-1670, June 2008.
[19] Paraskevas and E. Chilton, "Combination of Magnitude and Phase Statistical Features for Audio Classification," Acoustical Research Letters Online, Acoustical Society of America, vol. 5, no. 3, pp. 111-117, July 2004.
[20] M. K. Johnson, S. Lyu and H. Farid, "Steganalysis of Recorded Speech," in Proceedings of the Conference on Security, Steganography and Watermarking of Multimedia Contents VII, vol. 5681, SPIE, pp. 664-672, May 2005.
[21] H. Ozer, I. Avcibas, B. Sankur and N. D. Memon, "Steganalysis of Audio based on Audio Quality Metrics," in Proceedings of the Conference on Security, Steganography and Watermarking of Multimedia Contents V, vol. 5020, SPIE, pp. 55-66, January 2003.
[22] A. C. Rencher, Methods of Multivariate Data Analysis, 2nd Edition, John Wiley, New York, NY, March 2002.
[23] P. Pudil, J. Novovicova and J. Kittler, "Floating Search Methods in Feature Selection," Pattern Recognition Letters, vol. 15, no. 11, pp. 1119-1125, November 1994.
[24] Y. Liu, K. Chiang, C. Corbett, R. Archibald, B. Mukherjee and D. Ghosal, "A Novel Audio Steganalysis based on Higher-Order Statistics of a Distortion Measure with Hausdorff Distance," Lecture Notes in Computer Science, vol. 5222, pp. 487-501, September 2008.
[25] P. Huttenlocher, G. A. Klanderman and W. J. Rucklidge, "Comparing Images using Hausdorff Distance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 9, pp. 850-863, September 1993.
[26] T. Holotyak, J. Fridrich and S. Voloshynovskiy, "Blind Statistical Steganalysis of Additive Steganography using Wavelet Higher Order Statistics," Lecture Notes in Computer Science, vol. 3677, pp. 273-274, September 2005.
[27] I. Avcibas, "Audio Steganalysis with Content-independent Distortion Measures," IEEE Signal Processing Letters, vol. 13, no. 2, pp. 92-95, February 2006.
[28] Q. Liu, A. H. Sung and M. Qiao, "Novel Stream Mining for Audio Steganalysis," in Proceedings of the 17th ACM International Conference on Multimedia, Beijing, China, pp. 95-104, October 2009.
[29] C. Kraetzer and J. Dittmann, "Pros and Cons of Mel cepstrum based Audio Steganalysis using SVM Classification," Lecture Notes in Computer Science, vol. 4567, pp. 359-377, January 2008.
[30] B. Bailey, "Moments in Image Processing," Nov. 2002.
[31] Z. Kexin, "Audio steganalysis of spread spectrum hiding based on statistical moment," in Proceedings of the 2nd International Conference on Signal Processing Systems (ICSPS), 2010.
[32] W. Zeng, R. Hu and H. Ai, "Audio steganalysis of spread spectrum information hiding based on statistical moment and distance metric," Multimedia Tools and Applications, vol. 55, no. 3, pp. 525-556.

AUTHORS
Souvik Bhattacharyya received his B.E. degree in Computer Science and Technology from B.E. College, Shibpur, India, presently known as Bengal Engineering and Science University (BESU), and his M.Tech degree in Computer Science and Engineering from the National Institute of Technology, Durgapur, India. Currently he is working as an Assistant Professor in the Computer Science and Engineering Department at the University Institute of Technology, The University of Burdwan. He has more than 30 research publications to his credit. His areas of interest are Natural Language Processing, Network Security and Image Processing.

Gautam Sanyal received his B.E. and M.Tech degrees from the National Institute of Technology (NIT), Durgapur, India, and his Ph.D. (Engg.) from Jadavpur University, Kolkata, India, in the area of Robot Vision. He possesses more than 25 years of experience in teaching and research and has published nearly 72 papers in international and national journals and conferences. Three Ph.D.s (Engg.) have already been awarded under his guidance, and at present he is guiding six Ph.D. scholars in the fields of Steganography, Cellular Networks, High Performance Computing and Computer Vision. He has guided over 10 PG and 100 UG theses. His research interests include Natural Language Processing, stochastic modeling of network traffic, High Performance Computing and Computer Vision. He is presently working as a Professor in the Department of Computer Science and Engineering and also holds the post of Dean (Students' Welfare) at the National Institute of Technology, Durgapur, India. He is also a member of IEEE.


ANALYSIS AND COMPARISON OF COMBINATIONAL CIRCUITS BY USING LOW POWER TECHNIQUES
Suparshya Babu Sukhavasi1, Susrutha Babu Sukhavasi1, Vijaya Bhaskar M2, B Rajesh Kumar3
1 Assistant Professor, Department of ECE, K L University, Guntur, AP, India.
2 M.Tech-VLSI Student, Department of ECE, K L University, Guntur, AP, India.
3 M.Tech-Embedded VLSI Student, Department of ECE, SITE, T P Gudam, AP, India.

ABSTRACT
Power dissipation is a major concern in digital circuits. By applying self-resetting logic to a digital circuit, the power dissipation is drastically reduced; in VLSI design this low-power technique is well suited to DSP applications. Dynamic circuits are becoming increasingly popular because of their speed advantage over static CMOS logic circuits; hence they are widely used today in high-performance and low-power circuits. Self-resetting logic is a commonly used piece of circuitry found, for example, in memory arrays as word-line drivers, and self-resetting implementations of dynamic logic families have been proposed as viable clockless alternatives. Combinational logic is a type of digital logic implemented by Boolean circuits, where the output is a pure function of the present input only; this is in contrast to sequential logic, in which the output depends not only on the present input but also on the history of the input. In this paper self-resetting logic is applied to different combinational circuits and the analysis is carried out in detail. By implementing this low-power technique for different logic circuits and adders, and by comparison with DYNAMIC and SRCMOS logic, power dissipation is reduced by up to 35% relative to CMOS logic circuits, and the observations are tabulated.

KEYWORDS: High speed, VLSI, Self-resetting logic (SRL), topologies, power dissipation

I. INTRODUCTION

Combinational logic is used in computer circuits to perform Boolean algebra on input signals and on stored data. Practical computer circuits normally contain a mixture of combinational and sequential logic. The part of an arithmetic logic unit (ALU) that performs mathematical calculations is constructed using combinational logic, and other computer circuits, such as half adders, full adders, half subtractors, full subtractors, multiplexers, demultiplexers, encoders and decoders, are also built from it. In today's fast processing environment, the use of dynamic circuits is becoming increasingly popular [5]. Dynamic CMOS circuits are defined as those circuits which have an additional clock signal input along with the default combinational inputs of static systems; dynamic systems are faster and more efficient than static systems. A fundamental difficulty with dynamic circuits is the monotonicity requirement, and in the design of dynamic logic circuits numerous difficulties may arise, such as charge sharing, feedthrough, charge leakage and single-event upsets. In this paper, novel energy-efficient self-resetting primitive gates followed by the design of adder logic circuits are proposed. Addition is a fundamental arithmetic operation that is used in many VLSI systems, such as application-specific digital signal processing (DSP) architectures and microprocessors [2]. This module is the core of many arithmetic operations such as addition/subtraction, multiplication, division and address generation. Such high-performance devices need low-power and area-efficient adder

circuits. So this paper presents a design construction for primitive gates and adder circuits which reduces delay and clock skew when compared to the dynamic-logic adder implementation [9].

Fig.1. Basic structure of a self-resetting logic circuit

Another category of dynamic circuits, called self-resetting CMOS (SRCMOS), represents signals as short-duration pulses rather than as voltage levels. When a set of pulses is sent to the inputs of a logic gate, they must arrive at essentially the same time and must overlap with one another for a minimum duration. After a logic gate has processed a set of input pulses, a reset signal is activated that restores the gate to a state in which it can receive another set of input pulses. The reset operation is timed to occur after the input pulses have returned to zero [11]. Thus, there is no need for an evaluate or "foot" transistor, since the pull-down network will be off during the reset operation, and this is one of the factors that leads to high-speed operation. Moreover, since the reset occurs immediately after each gate has evaluated, there is no need for a separate precharge phase. Since short-duration pulses are hard to debug and test, special additional test-mode features are sometimes added for these purposes. Two types of reset structure have been proposed for use in SRCMOS. In globally self-resetting CMOS [4], the reset signal for each stage is generated by a separate timing chain which provides a parallel worst-case delay path. Individual reset signals are obtained at various tap points along this timing chain in such a way that the reset pulse arrives at each stage only after the stage has completed its evaluation. Very careful device sizing, based on extensive simulations over process-voltage-temperature corners, is required in order to ensure correct operation. Moreover, any extra delay margin that is designed into the timing chain simply reduces the throughput by a corresponding amount. On the other hand, in locally self-resetting CMOS [5], the reset signal for each stage is generated by a mechanism local to that stage.
Previous implementations of this technique have been based on single-rail domino stages in which the reset signal is obtained by sending the stage’s own output signal through a short delay chain. Again, this technique requires very careful simulations and device sizing in order to ensure that the reset signals do not arrive too early. As with the other technique, any timing margin that is built in will directly limit the achievable performance.

II. SELF RESETTING CMOS DYNAMIC LOGIC

Self-resetting logic is a commonly used piece of circuitry that automatically precharges (i.e., resets) itself after a prescribed delay. It finds applications where a small percentage of gates switch in a cycle, such as memory decoder circuits. It is a form of logic in which the signal being propagated is buffered and used as the precharge or reset signal. By using a buffered form of the input, the input loading is kept almost as low as in normal dynamic logic, while local generation of the reset ensures that it is properly timed and only occurs when needed [6]. A generic view of a self-resetting logic gate is shown in Fig.1. In the domino case, the clock is used to operate the circuit; in the self-resetting case, the output is fed back to the precharge control input and, after a specified time delay, the pull-up is reactivated. There is an NMOS sub-block, represented as NMOS_LF, in which the logic function performed by the gate is implemented and through which the input

data are loaded. The output of the gate F provides a pulse if the logic function becomes true. This output is buffered and connected to the PMOS structure for precharging [7]. The delay line is implemented as a series of inverters. The signals that propagate through these circuits are pulses.

Fig.2. Precharge

Dynamic logic circuits are widely used in modern low-power VLSI circuits. These dynamic circuits are becoming increasingly popular because of their speed advantage over static CMOS logic circuits; hence they are widely used today in high-performance and low-power circuits. Normally, in the design of flip-flops and registers, the clock distribution grid and routing to dynamic gates present a problem to CAD tools and introduce issues of delay and skew into the circuit design process [1]. There are situations that permit the use of circuits that can automatically precharge themselves (i.e., reset themselves) after a prescribed delay [7]. These circuits are called post-charge or self-resetting logic and are widely used in memory decoders.

Fig.3. Discharge

The width of the pulses must be controlled carefully, or else there may be contention between NMOS and PMOS devices or, even worse, oscillations may occur. Self-resetting logic (SRL) can be classified as a variant of domino logic that allows for asynchronous operation [11]. A basic SRL circuit is shown in Fig.1. A careful inspection of the schematic shows that the primary differences between this gate and the standard domino circuit are

(a) the addition of the inverter chain that provides feedback from the output voltage Vout(t) to the gate of the reset pFET MR, and (b) the elimination of the evaluation nFET. Note that an odd number of inverters is used in the feedback. As discussed below, the feedback loop has a significant effect on both the internal operation of the circuit and the characteristics of the output voltage. Precharging of Cx occurs during the reset interval; the circuit conditions are shown in Figure 2, and this event is identical to the corresponding event in a standard domino circuit. As we will see, the timing of the input signals precludes the possibility of a DC discharge path to ground by ensuring that the inputs to all logic nFETs are 0 during precharge, and the voltage on the gate of MR ensures that MR is in cutoff during this time. The distinct features of SRL arise when a discharge occurs; the circuit conditions are shown in Fig.3.
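The feedback behaviour can be illustrated with a toy discrete-time model (my own sketch, not a circuit simulation; the function name and the unit-step timing are assumptions): the dynamic node discharges when the input pulses overlap, the output pulse begins, and after the inverter-chain delay the reset pFET precharges the node, terminating the pulse.

```python
def srl_and(a_wave, b_wave, reset_delay=3):
    """Toy model of a self-resetting AND gate. The output pulse starts
    when both inputs are high while the node is precharged, and ends
    when the feedback chain re-precharges the node after reset_delay
    time steps."""
    out = []
    precharged = True
    timer = -1  # countdown along the inverter feedback chain
    for a, b in zip(a_wave, b_wave):
        if precharged and a and b:
            precharged = False      # dynamic node discharges -> pulse begins
            timer = reset_delay
        out.append(0 if precharged else 1)
        if timer >= 0:
            timer -= 1
            if timer < 0:
                precharged = True   # reset pFET precharges the node
    return out
```

The model makes the pulse-width concern above concrete: the output pulse lasts reset_delay + 1 steps regardless of the inputs, and if the inputs are still high when the node precharges, a real circuit would see contention between the NMOS and PMOS devices.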

III. ANALYSIS OF LOGIC GATES AND ADDERS

This section presents the basic construction and simulation of primitive gates and adders in all the logics specified above: DYNAMIC logic, SRCMOS logic and SELF RESETTING logic. The cells shown in the figures are self-resetting implementations of 2-input primitive gates. The logical functions are implemented by the NMOS stack with two input signals A and B. The delay path in this circuit is implemented with a single inverter, and the self-resetting mechanism is achieved through the PMOS transistors. AND/OR can be obtained by placing the NMOS stack in series or parallel connection, whereas NAND/NOR can be implemented with De Morgan's law: an OR gate with inverted input signals behaves as a NAND gate, and similarly an AND gate with inverted input signals behaves as a NOR gate [8].
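At the Boolean level, the half and full adders analyzed below are simple compositions of these primitive gates; the sketch below (my own illustration, independent of any particular logic family or transistor implementation) verifies the truth tables.

```python
def half_adder(a, b):
    """Half adder: Sum = A XOR B, Carry = A AND B."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Full adder built from two half adders plus an OR on the carries."""
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2
```

The same sum/carry behaviour is what each of the dynamic, SRCMOS and self-resetting schematics below must reproduce; only the power and delay figures differ between the implementations.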

3.1 Dynamic logic implementation in logic gates and adders
Dynamic logic is one of the low-power techniques applicable to digital circuits; it reduces area, but its power reduction is smaller than that of self-resetting logic. The basic logic gates and their applications are analyzed below.
3.1.1 AND gate implementation

Fig.4. Schematic of AND gate using dynamic logic

Fig.5. Simulation results of AND gate using dynamic logic

3.1.2 XOR gate

Fig.6. Schematic of XOR gate using dynamic logic

Fig.7. Simulation results of XOR gate using dynamic logic

3.1.3 Half adder

Fig.8. Schematic of half adder using dynamic logic

Fig.9. Simulation results of half adder using dynamic logic

3.1.4 Full adder

Fig.10. Schematic of full adder using dynamic logic

Fig.11. Simulation results of full adder using dynamic logic

3.2 SRCMOS logic implementation in logic gates and adders
Self-resetting CMOS is one of the low-power techniques applicable to digital circuits; it yields high performance but has complex circuitry, reduces area, and its power reduction is smaller than that of self-resetting logic. The basic logic gates and their applications are analyzed below.
3.2.1 AND gate

Fig.12. Schematic of AND gate using SRCMOS logic


Fig.13. Simulation results of AND gate using SRCMOS logic

3.2.2 XOR gate

Fig.14. Schematic of XOR gate using SRCMOS logic

Fig.15. Simulation results of XOR gate using SRCMOS logic

3.2.3 Half adder

Fig.16. Schematic of half adder using SRCMOS logic

Fig.17. Simulation results of half adder using SRCMOS logic

3.2.4 Full adder

Fig.18. Schematic of full adder using SRCMOS logic


Fig.19. Simulation results of full adder using SRCMOS logic

3.3 Self Resetting logic implementation in logic gates and adders
Self-resetting logic is a novel low-power technique applicable to digital circuits, in which the power reduction is greater than with the other low-power techniques mentioned above. The basic logic gates and their applications are analyzed below.
3.3.1 AND gate

Fig.20. Schematic of AND gate using self-resetting logic

Fig.21. Simulation results of AND gate using self-resetting logic

3.3.2 XOR gate

Fig.22. Schematic of XOR gate using self-resetting logic

Fig.23. Simulation results of XOR gate using self-resetting logic

3.3.3 Half adder

Fig.24. Schematic of half adder using self-resetting logic


Fig.25. Simulation results of half adder using self-resetting logic

3.3.4 Full adder

Fig.26. Schematic of full adder using self-resetting logic

Fig.27. Simulation results of full adder using self-resetting logic


IV. COMPARATIVE ANALYSIS OF COMBINATIONAL LOGIC CIRCUITS

In the half adder circuit the sum and carry are observed; for this dynamic half adder the power dissipation is calculated and is reduced by up to 15% compared to CMOS logic, with a total power dissipation of 39.05 mW. In the dynamic full adder circuit the power dissipation is likewise reduced by up to 15% compared to CMOS logic, with a total power dissipation of 56.90 mW.
Table 1. Power dissipation and delays of dynamic logic circuits (delays at the output node).

Topology      Rise delay (ns)   Fall delay (ns)   Power dissipation (mW)
AND               0.004             0.002              26.45
OR                0.002             0.001              26.45
XOR               0.003             0.002              47.89
HALF ADDER        0.007             0.005              39.05
FULL ADDER        0.012             0.010              56.90

For the SRCMOS half-adder circuit, the sum and carry outputs were observed. Its power dissipation is reduced by up to 20% compared to CMOS logic [12]; the total power dissipation is 35.19 mW. Likewise, the power dissipation of the SRCMOS full adder is reduced by up to 20% compared to CMOS logic, with a total power dissipation of 50.21 mW.
Table 2. Power dissipation and delays of SRCMOS logic circuits (delays at the output node).

Topology      Rise delay (ns)   Fall delay (ns)   Power dissipation (mW)
AND               0.003             0.005              24.21
OR                0.002             0.005              24.21
XOR               0.005             0.004              43.35
HALF ADDER        0.012             0.033              35.19
FULL ADDER        0.025             0.014              50.21

For the self-resetting logic half-adder circuit, the sum and carry outputs were observed. Its power dissipation is reduced by up to 35% compared to CMOS logic; the total power dissipation is 30.17 mW. Likewise, the power dissipation of the self-resetting logic full adder is reduced by up to 35% compared to CMOS logic, with a total power dissipation of 45.12 mW.
Table 3. Power dissipation and delays of SRL circuits (delays at the output node).

Topology      Rise delay (ns)   Fall delay (ns)   Power dissipation (mW)
AND               0.005             0.001              23.12
OR                0.004             0.001              23.12
XOR               0.015             0.002              36.67
HALF ADDER        0.024             0.022              30.17
FULL ADDER        0.050             0.043              45.12

V. ANALYSIS OF POWER DISSIPATION
Table 4. Comparative analysis of power dissipation (mW).

Topology        AND     OR      XOR     HALF ADDER   FULL ADDER
DYNAMIC LOGIC   26.45   26.45   47.89   39.05        56.90
SRCMOS LOGIC    24.21   24.21   43.35   35.19        50.21
SRL             23.12   23.12   36.67   30.17        45.12

Chart 1. Analysis of power dissipation (mW) for the AND, OR, XOR, half-adder and full-adder circuits under dynamic logic, SRCMOS logic and SRL.
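As a cross-check of the tabulated data, the relative savings of SRCMOS and SRL over conventional dynamic logic can be recomputed directly from the power-dissipation columns of Tables 1-3. (Note that the 15%, 20% and 35% figures quoted in the text are stated relative to plain CMOS logic, whose values are not tabulated here, so this sketch compares against dynamic logic only.)

```python
# Percent power savings of SRCMOS and SRL relative to dynamic logic,
# computed from the power-dissipation columns of Tables 1-3 (values in mW).
dynamic = {"AND": 26.45, "OR": 26.45, "XOR": 47.89, "HALF ADDER": 39.05, "FULL ADDER": 56.90}
srcmos  = {"AND": 24.21, "OR": 24.21, "XOR": 43.35, "HALF ADDER": 35.19, "FULL ADDER": 50.21}
srl     = {"AND": 23.12, "OR": 23.12, "XOR": 36.67, "HALF ADDER": 30.17, "FULL ADDER": 45.12}

def savings(baseline, candidate):
    """Percent reduction of the candidate's power dissipation vs. the baseline."""
    return {gate: round(100.0 * (baseline[gate] - candidate[gate]) / baseline[gate], 1)
            for gate in baseline}

print(savings(dynamic, srcmos))  # SRCMOS vs. dynamic logic
print(savings(dynamic, srl))     # SRL vs. dynamic logic, the largest reduction
```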

VI. CONCLUSION

In this paper, an exhaustive analysis of commonly used high-speed primitive gates and adder circuits using self-resetting logic was implemented in 45-nm CMOS technology. The goal was to obtain a family of gates that simplifies the implementation of fast processing circuits and overcomes the restriction caused by pulses being elongated or shortened as signals traverse the logic stages. An exhaustive comparison between conventional dynamic logic, SRCMOS and SRL was carried out in terms of parasitic values, delays and power dissipation. It is observed that the proposed circuits offer improved performance in power dissipation, charge leakage and clock skew compared to dynamic logic and SRCMOS logic, at the cost of additional silicon area. Hence, it is concluded that the proposed designs provide a platform for designing high-performance, low-power and highly noise-immune digital circuits such as digital signal processors and multiplexers.

REFERENCES
[1] Woo Jin Kim and Yong-Bin Kim, "A localized self-resetting gate design methodology for low power," IEEE, 2001.
[2] M. E. Litvin and S. Mourad, "Self-reset logic for fast arithmetic applications," IEEE Transactions on Very Large Scale Integration Systems, vol. 13, no. 4, pp. 462–475, 2005.
[3] L. Wentai, C. T. Gray, D. Fan, W. J. Farlow, T. A. Hughes, and R. K. Cavin, "250-MHz wave pipelined adder in 2-µm CMOS," IEEE Journal of Solid-State Circuits, vol. 29, no. 9, pp. 1117–1128, 1994.
[4] D. Patel, P. G. Parate, P. S. Patil, and S. Subbaraman, "ASIC implementation of 1-bit full adder," in Proc. 1st Int. Conf. Emerging Trends Eng. Technol., Jul. 2008, pp. 463–467.
[5] M. Lehman and N. Burla, "Skip techniques for high-speed carry propagation in binary arithmetic units," IRE Trans. Electron. Comput., vol. EC-10, pp. 691–698, Dec. 1962.
[6] R. A. Haring, M. S. Milshtein, T. I. Chappell, S. H. Dong, and B. A. Chappell, "Self-resetting logic and incrementer," in Proc. IEEE Int. Symp. VLSI Circuits, 1996, pp. 18–19.
[7] G. Yee and C. Sechen, "Clock-delayed domino for adder and combinational logic design," in Proc. IEEE/ACM Int. Conf. Computer Design, Oct. 1996, pp. 332–337.
[8] P. Ng, P. T. Balsara, and D. Steiss, "Performance of CMOS differential circuits," IEEE Journal of Solid-State Circuits, vol. 31, no. 6, pp. 841–846, June 1996.
[9] P. Srivastava, A. Pua, and L. Welch, "Issues in the design of domino logic circuits," in Proc. IEEE Great Lakes Symposium on VLSI, pp. 108–112, February 1998.
[10] W. Zhao and Y. Cao, "New generation of predictive technology model for sub-45nm design exploration," in IEEE Intl. Symp. on Quality Electronic Design, 2006.
[11] J. P. Uyemura, CMOS Logic Circuit Design, 2002.
[12] P. C. Balla and A. Antoniou, "Low power dissipation MOS ternary logic family," IEEE Journal of Solid-State Circuits, vol. SC-19, no. 5, October 1984.

Authors

Suparshya Babu Sukhavasi was born in A.P., India. He received the B.Tech. degree from JNTU, A.P., and the M.Tech. degree from SRM University, Chennai, Tamil Nadu, India, in 2008 and 2010 respectively. He worked as an Assistant Professor in Electronics Engineering at Bapatla Engineering College during the academic year 2010-2011, and since 2011 he has been working at K L University. His research interests include antennas, FPGA implementation, low-power design and wireless communications.

Susrutha Babu Sukhavasi was born in A.P., India. He received the B.Tech. degree from JNTU, A.P., and the M.Tech. degree from SRM University, Chennai, Tamil Nadu, India, in 2008 and 2010 respectively. He worked as an Assistant Professor in Electronics Engineering at Bapatla Engineering College during the academic year 2010-2011, and since 2011 he has been working at K L University. His research interests include antennas, FPGA implementation, low-power design and wireless communications.

Vijaya Bhaskar Madivada was born in A.P., India. He received the B.Tech. degree in Electronics & Communications Engineering from Jawaharlal Nehru Technological University in 2010. He is presently pursuing an M.Tech. in VLSI Design at K L University. His research interests include FPGA implementation and low-power design.

B. Rajesh Kumar was born in Gudivada, Krishna (Dist.), A.P., India. He received the B.Tech. degree in Electronics & Communications Engineering from Jawaharlal Nehru Technological University, A.P., India, in 2010. He is presently pursuing an M.Tech. in VLSI at SITE. His research interests include low power and the design of VLSI circuits.


HIGH QUALITY DESIGN AND METHODOLOGY ASPECTS TO ENHANCE LARGE SCALE WEB SERVICES
Suryakant B Patil1, Sachin Chavan2, Preeti Patil3
1 Department of Information Technology, MPSTME, NMIMS, Shirpur, India.
2,3 Department of Computer Engineering, MPSTME, NMIMS, Shirpur, India.

ABSTRACT
Network traffic has increased tremendously due to the rise in the number of Internet users. This has affected several characteristics of large-scale networks, such as reduced network bandwidth, increased latency, and higher response time for users who require large-scale web services. This paper proposes a novel design and methodology to address these issues by optimizing network bandwidth and reducing latency and response time for large-scale web applications. The methodology works by analysing content in the proxy cache, identifying content aliasing, suppressing duplicates, and creating the respective soft links. It is necessary to understand the characteristics of web traffic before proposing any design to manage it better: for instance, an expanding network results in reduced network bandwidth, increased latency, and higher response time for users. The present solution makes intelligent use of the proxy cache server to overcome these problems. Proxies were originally designed to enable network administrators to control Internet access from within an intranet, but when a proxy cache is used, the problem of aliasing arises. Aliasing in proxy server caches occurs when the same content is stored in the cache several times. The present methodology improves performance by avoiding storing the same content in the cache multiple times, which otherwise wastes storage space. The investigation proposed here analysed the cache content of the proxy and checked for replicated content, paving the way to increase the efficiency of large-scale web services. This high-quality design approach focused on the content of the access logs and on user habits when accessing large-scale web services.

KEYWORDS: Large Scale Web, Cache, Web Proxy, Mirroring, and Duplicate Suppression.

I. INTRODUCTION

Web caching consists of storing the most frequently accessed objects on a local server instead of on the original server. This allows better use of network bandwidth, reduces the workload on web servers, and improves the response time for users. The local servers used to store frequently referenced objects are called web proxy servers. A proxy server acts as a mediator between the original server and the clients; the proxy server setup is shown in Figure 1. The proxy cache also stores all of the images and sub-files of the visited pages, so a user can quickly jump to a new page within the same site. Aliasing in proxy server caches occurs when the same content is stored in the cache multiple times. On the World Wide Web, aliasing commonly occurs when a client makes two requests and both requests have the same payload [3]. The major problem associated with using a web cache is the storage space required to store the visited pages together with their objects [16]. This paper is divided into four sections. Section 1 introduces the proxy web cache system, describing the role of the proxy server and the placement of the proxy cache server between the client-side browser and the web servers, and lists the advantages and disadvantages of using a proxy cache. Section 2 covers related work: the concepts of web traffic, the web cache, mirroring, and the MD5 hashing algorithm, followed by static and dynamic caching. Section 3 elaborates the survey conducted for data collection and reduction with experimental analysis. The data was gathered from various systems based on user habits and requirements; it reflects the amount of cache space saved on each system after reduction, broken down by file category. Section 4 presents results and discussion based on the experimental analysis. It shows the enhancement of large-scale web services obtained by using the proxy cache for duplication avoidance, with the high-quality design and methodology aspects, for various categories of web data such as image, HTML, formatted, style and other files.

Figure 1. Proxy server setup: (1) the client requests a web page from the proxy caching server; (2) the proxy requests the page from the web server; (3) the web server sends the page to the proxy; (4) the proxy caches the page and returns it to the client.

1.1 Advantages of Caching
1. Web caching reduces the workload of the remote web server.
2. If the remote server is unavailable due to a crash or network partitioning, the client can obtain a cached copy from the proxy.
3. It provides a chance to analyse an organization's usage patterns.

1.2 Disadvantages of Caching
1. The main disadvantage is that a client might see stale data due to the lack of proper proxy updating.
2. Access latency may increase on a cache miss because of the extra proxy processing.
3. A single proxy cache is always a bottleneck.
4. A single proxy is a single point of failure.

II. RELATED WORK

2.1 Mirroring
Shivakumar and Garcia-Molina investigated mirroring in a large crawler data set and reported that far more aliasing happens in the WebTV client trace than expected [11]. Similarly, Bharat et al. surveyed techniques for identifying mirrors on the Internet [17]. Bharat and Broder investigated mirroring in a large crawler data set and reported that roughly 10% of popular hosts are mirrored to some extent [17]. Broder et al. considered approximate mirroring, or "syntactic similarity" [17]. Mirroring is defined as keeping multiple copies of the content of a web site or web pages on different servers under different domain names. A mirrored site is an exact replica of the original site and is updated frequently to ensure that it reflects the updates made to the content of the original site. The main purpose of mirroring is to build in redundancy and ensure high availability of web documents or objects. Mirrored sites also make access faster when the original site is geographically distant.

2.2 Web Traffic
Web traffic is the amount of data sent and received by visitors to a website. It is analysed to gauge the popularity of web sites and of individual pages or sections within a site. Web traffic can be analysed by viewing the traffic statistics found in the web server log file, an automatically generated list of all the pages served. Traffic analysis is conducted using access logs from the web proxy server. Each entry in the access log records the URL of the document being requested, the date and time of the request, the name of the client host making the request, the number of bytes returned to the requesting client, and information describing how the client's request was treated by the proxy [1]. Processing these log entries can produce useful summary statistics about workload volume, document types and sizes, document popularity, and proxy cache performance [7].

2.3 Web cache
Web caching can play a valuable role in improving service quality for a large range of Internet users. A web cache is a mechanism for the temporary storage (caching) of web documents, such as HTML pages and images, to reduce bandwidth usage, server load, and perceived lag. A web cache stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met. There are two types of web caches: the browser cache and the proxy cache [9]. A proxy cache is a shared network device that can undertake web transactions on behalf of a client and, like the browser, stores the content. Subsequent requests for this content, by this or any other client of the cache, trigger the cache to deliver the locally stored copy, avoiding a repeat of the download from the original content source [6]. A client, such as a web browser, can store web content for reuse; for example, if the back button is pressed, the locally cached version of a page may be displayed instead of a new request being sent to the web server. A web proxy sitting between the client and the server can evaluate HTTP headers and choose to store web content. A content delivery network can retain copies of web content at various points throughout a network. The performance of a web cache system is measured in terms of utilization, bandwidth, efficiency, hits, throughput, latency, availability and response time [14].

2.4 Static Caching
Static caching is a newer approach to web caching that uses yesterday's log to predict today's user requests. Static caching is performed only once a day; it is a simpler approach and requires very low CPU overhead. It improves cache performance by using compression techniques: stable web pages can be compressed to reduce their size and free up cache space. Cache server performance typically relies on two major factors: the hit ratio and the byte hit ratio. The hit ratio is the percentage of all accesses that are fulfilled by data in the cache, while the byte hit ratio is the hit rate with respect to the total number of bytes in the cache [8]. The static caching algorithm defines a fixed set of URLs by analysing the logs of previous periods. It calculates a value for each unique URL, arranges the URLs in descending order of value, and selects those with the highest values. This set of URLs is known as the working set. When a user requests a document that is present in the working set, the request is fulfilled from the cache; otherwise, it is fulfilled from the origin server [8].
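The working-set construction described above can be sketched as follows. This is a simplified sketch: the URL "value" is taken to be simply the request count, and the log entries are hypothetical examples, not data from the paper.

```python
# Sketch of static caching's working set: rank yesterday's URLs by request
# count and admit the top entries into today's fixed cache set.
from collections import Counter

yesterday_log = [
    "http://a.example/index.html", "http://a.example/index.html",
    "http://b.example/logo.gif",  "http://a.example/index.html",
    "http://c.example/page.html", "http://b.example/logo.gif",
]

def build_working_set(log, size):
    counts = Counter(log)
    # URLs in descending order of "value" (here simply the request count).
    ranked = [url for url, _ in counts.most_common()]
    return set(ranked[:size])

working_set = build_working_set(yesterday_log, size=2)
print(working_set)   # the two most requested URLs
```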

2.5 Dynamic Caching
In dynamic caching, by contrast, cached documents are updated more than once a day. As mentioned before, the benefit of current web caching schemes is limited by the fact that only a fraction of web data is cacheable, and non-cacheable data makes up a significant percentage of the total. For example, measurement results show that 30% of user requests contain cookies. Dynamic caching is more complex than static caching and requires detailed knowledge of the application. Candidates for dynamic caching must be chosen carefully since, by its very nature, dynamically generated content can differ based on the state of the application. It is therefore important to consider under what conditions dynamically generated content can be cached while still returning the correct response. This requires knowledge of the application, its possible states, and other data, such as parameters that ensure the dynamic data is generated in a deterministic manner [5].

III. DATA COLLECTION, REDUCTION AND EXPERIMENTAL ANALYSIS

3.1 Document Types
While downloading files and viewing websites, you will encounter many file formats. Most are common and encountered frequently; others are rarer and require specialist programs to open or use [2]. The categorized list of documents and files is given below:
• HTML: ".html", ".shtml", ".htm", and ".map".
• Image: ".gif", ".jpeg", "gif89", ".jpg", ".png", ".bmp", ".pcx", ".rgb", ".tif", and ".ras".
• Audio: ".au", ".aiff", ".aif", ".aifc", ".mid", ".snd", ".wav", and ".lha".
• Video: ".mov", ".qt", ".avi", ".mpe", ".movie", ".mpeg", ".mpg", ".mp2", and ".mp3".
• Compressed: ".z", ".gz", ".zip", and ".zoo".
• Formatted: ".ps", ".pdf", ".dvi", ".ppt", ".tex", ".rtf", ".src", ".doc", and ".wsrc".
• Dynamic: ".cgi", ".pl", and ".perl".
When the cache is used to store an object, the URL of the web page is cached in the cache memory; as the URL is stored, the objects associated with the page, such as image, audio and video files, are also stored in the same place for quick retrieval of that page next time [10].
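The extension-to-category mapping above can be sketched as a small classifier (the file names in the example calls are illustrative, and extensions outside the listed categories fall into "other"):

```python
# Sketch of classifying cached objects by file extension, following the
# categories listed in section 3.1.
import os

CATEGORIES = {
    "html":       {".html", ".shtml", ".htm", ".map"},
    "image":      {".gif", ".jpeg", ".jpg", ".png", ".bmp", ".pcx", ".rgb", ".tif", ".ras"},
    "audio":      {".au", ".aiff", ".aif", ".aifc", ".mid", ".snd", ".wav"},
    "video":      {".mov", ".qt", ".avi", ".mpe", ".movie", ".mpeg", ".mpg", ".mp2", ".mp3"},
    "compressed": {".z", ".gz", ".zip", ".zoo"},
    "formatted":  {".ps", ".pdf", ".dvi", ".ppt", ".tex", ".rtf", ".src", ".doc", ".wsrc"},
    "dynamic":    {".cgi", ".pl", ".perl"},
}

def classify(filename):
    """Return the section-3.1 category for a file, or 'other' if unlisted."""
    ext = os.path.splitext(filename)[1].lower()
    for category, exts in CATEGORIES.items():
        if ext in exts:
            return category
    return "other"

print(classify("banner.JPG"))   # image (extension matching is case-insensitive)
print(classify("style.css"))    # other (CSS is tabulated as its own type later)
```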

3.2 Changing of proxy server
In most organizations or institutions the main server does not support a proxy cache, so it is difficult to use the main server as a cache server, and the proxy must be changed from the main server to another server [4]. The following steps switch a machine to another proxy:
1. Open the browser, e.g. Internet Explorer.
2. In Internet Explorer, pull down the Tools menu and click Internet Options.
3. Click the Connections tab.
4. Click the LAN Settings button.
5. In the Address box, change "proxy1 Address" to "proxy2 Address" (or vice versa) and click OK.
6. Click OK on the Internet Options dialogue box to return to the browser screen; you will now be able to reach external sites.
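The manual browser steps above can also be performed programmatically; this sketch routes `urllib` requests through a chosen proxy. The proxy host names and port are hypothetical placeholders standing in for the "proxy1 Address" and "proxy2 Address" of step 5, not real hosts.

```python
# Sketch of switching between two proxies programmatically with urllib.
import urllib.request

def make_opener(proxy_address):
    """Build an opener whose HTTP traffic is routed through the given proxy."""
    handler = urllib.request.ProxyHandler({"http": proxy_address})
    return urllib.request.build_opener(handler)

opener_1 = make_opener("http://proxy1.example:3128")   # "proxy1 Address" (placeholder)
opener_2 = make_opener("http://proxy2.example:3128")   # "proxy2 Address" (placeholder)
# opener_2.open("http://example.com/")  # would fetch the page via proxy2
```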

3.3 Duplication of Data
Duplication of data means storing multiple copies of the same data object. When an object or web page is cached, it is stored in cache memory; but when different users request the same page, multiple copies of that object or page end up in cache memory, which wastes storage space. Since maintaining a cache is an expensive task, such wastage is not affordable. To avoid the problem of duplicated data objects or web pages, a duplicate suppression mechanism is used [9]. If a duplicate copy of the data is saved in the proxy cache it occupies additional storage space; the analysis presented in this work shows the effect of duplication on cache space [6].

3.4 Duplicate Suppression
Storage space requirements can be reduced by avoiding duplicate copies of the same data. Content Engine provides the option to suppress storage of duplicate content elements. Duplicate suppression applies to any kind of content: incoming content is not added to the storage area if identical content already exists there; only unique content is added [14]. Many web pages with different URLs are duplicates of each other. If the cache finds a duplicate copy of the requested page, serving the page from the cache reduces the network cost.
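The duplicate-suppression idea, together with the MD5 hashing mentioned in the introduction, can be sketched as content-addressed storage: identical payloads, even under different URLs, are stored once, and later URLs simply map to the existing copy, mimicking the "soft link" approach described above. This is an illustrative in-memory sketch, not the paper's implementation; the URLs and payload are hypothetical.

```python
# Sketch of duplicate suppression via MD5 content hashing: one physical copy
# per unique payload, with per-URL "soft links" into the store.
import hashlib

store = {}   # md5 digest -> content (one physical copy per payload)
links = {}   # url -> digest (the "soft link" into the store)

def cache_put(url, content):
    """Store content once; aliased URLs just point at the existing copy."""
    digest = hashlib.md5(content.encode()).hexdigest()
    if digest not in store:          # only unique content is added
        store[digest] = content
    links[url] = digest

cache_put("http://a.example/logo.gif", "GIF-BYTES")
cache_put("http://mirror.example/logo.gif", "GIF-BYTES")   # alias: same payload
print(len(links), len(store))   # 2 URLs, but only 1 stored copy
```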

Techniques such as data compression, caching and content simplification can be used to avoid unnecessary use of network bandwidth [6]. Because the network is so large, there are many pages on the web, and most of those pages will not be referenced multiple times by any one cache; references follow a distribution similar to Zipf's law, under which the probability that the Kth most popular page is referenced is proportional to 1/K [12]. The utility of web pages can be extended using several methods, such as delta encoding, prefetching and partial transfer. The simple hit ratio is always higher than the weighted (byte) hit ratio because the number of references to smaller resources exceeds that to larger resources.
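The Zipf-like popularity law cited above can be sketched numerically: normalizing the 1/K weights gives the reference probability of each page by popularity rank (the page count of 5 here is an arbitrary illustration).

```python
# Sketch of a Zipf-like request distribution: the probability of a request
# hitting the K-th most popular page is proportional to 1/K.
def zipf_probabilities(n_pages):
    """Normalized 1/K weights for pages ranked 1..n_pages by popularity."""
    weights = [1.0 / k for k in range(1, n_pages + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_probabilities(5)
print([round(p, 3) for p in probs])   # the most popular page dominates
print(round(sum(probs), 6))           # probabilities sum to 1.0
```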

3.5 Experimental Analysis
The experimentation was carried out in the lab of our institute. Five systems from the lab were under observation for three months. These systems were used to analyse Internet access patterns in order to determine user interests and habits. File types such as .XML, .TXT, .PL, .GIF, .HTML, .CH, .CSS, .PNG, .PHP, and .BMP were considered for the analysis [2]. File sizes in the tables are given in KB. The results of this analysis for the five systems are summarised in Table 1.
Table 1. Size of content (KB) according to type.

Type    Sys 1   Sys 2   Sys 3   Sys 4   Sys 5   Total
XML      586       0     604       0       0     1190
TXT       35      26     142      17       0      220
PL       202       0       0       0     107      309
JPEG    1492     981     151      77     516     3217
GIF      464     312     126      44     196     1142
HTML     423     772     608     174     326     2303
CH       259       0     248       0       0      507
CSS     1169     776     387     315     423     3070
PNG      495     163     252     487     180     1577
PHP      131       0       0       0      50      181
BMP        0     361      49       0       0      410

Table 1 shows that style and image documents together account for close to 60-65 percent of the cache. Unlike web server workloads, however, image files are consistently the most requested document type (40-43 percent), followed by style documents (about 20-22 percent). It has been observed that some file types are not present on all systems. Most of the cache space is occupied by JPEG files.
Table 2. Reduced size of content (KB) on disk.

Type    Sys 1   Sys 2   Sys 3   Sys 4   Sys 5   Total
XML      566       0     594       0       0     1160
TXT        1       6       3       1       0       11
PL       192       0       0       0      97      289
JPEG     757     626     118      20     281     1802
GIF      215     177      18      32      45      487
HTML     300     558     539     150      93     1640
CH       216       0     248       0       0      464
CSS     1065     744     205     300     311     2625
PNG      385     131     135     418     161     1230
PHP       69       0       0       0      44      113
BMP        0     361      13       0       0      374

Table 2 shows the effect of reduction on the content stored in the cache. The reduction effect is greatest for image files: if the duplicate copies of image files are discarded, about 50 percent of the space required to store them is saved, whereas the reduction for XML, PL and CSS files is very small. Reducing the number of copies of the same files also decreases the access latency of the proxy cache [15].
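The per-type savings can be recomputed from the "Total" columns of Tables 1 and 2 as a cross-check (the exact figures differ slightly from the rounded percentages quoted in the text):

```python
# Percent cache space saved per file type, computed from the Total columns of
# Table 1 (before) and Table 2 (after duplicate suppression); sizes in KB.
before = {"XML": 1190, "TXT": 220, "PL": 309, "JPEG": 3217, "GIF": 1142,
          "HTML": 2303, "CH": 507, "CSS": 3070, "PNG": 1577, "PHP": 181, "BMP": 410}
after  = {"XML": 1160, "TXT": 11, "PL": 289, "JPEG": 1802, "GIF": 487,
          "HTML": 1640, "CH": 464, "CSS": 2625, "PNG": 1230, "PHP": 113, "BMP": 374}

saved = {t: round(100.0 * (before[t] - after[t]) / before[t], 1) for t in before}
print(saved["JPEG"])   # ~44% of the JPEG space is recovered
print(saved["XML"])    # ~2.5%: XML barely benefits from deduplication
```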

Figure 2. Type of files present on system 1 (size in KB; first bar: before reduction, second bar: after reduction).

Figure 2 shows the type of data present in the cache of the first system. The maximum space on this system is occupied by image files, while file types such as TXT and BMP are completely absent, which reflects the user's interests and habits. The first bar shows the actual size of the files including replication; the second bar shows the size if replicated data is discarded. An "aging daemon" can generate dummy references to all documents to age their reference-rate estimates [18]. From Figure 2 it is clear that for JPEG image files about 50 percent of the space is saved by discarding replicated data; the space saved for GIF files is similar, near 50 percent. For HTML and PNG it is lower, about 10-20 percent, and for XML, CSS and channel files it is negligible. So after reduction of image files, 50 percent of the storage space occupied by image files can be saved. Kai Cheng, Y. Kambayashi and M. Mohania show that more than 60% of total cache contents are never used, and that the low utilization rate is largely due to users' unawareness of the cache contents [19].
Figure 3. Type of files present on system 2 (size in KB; before vs. after reduction).

Figure 3 shows the type of data present in the cache of the second system, which occupies less space than system 1. Files such as TXT, XML, PL, channel and PHP are completely absent, reflecting the user's interests and habits. The first bar shows the actual size of the files including replication; the second bar shows the size if replicated data is discarded. Storing dynamic content in the cache is not useful because that content is not reused a second time [5]. From Figure 3 it is clear that for JPEG and GIF image files about 40 percent of the space is saved by discarding replicated data; for HTML files the saving is 20 percent, while for PNG, CSS and BMP it is negligible. So after reduction, 20-30 percent of the storage space occupied by these files can be saved. On the second system most file types are absent, so the actual cache size is small, yet replication of data is still observed. A dynamic content cache stores a new copy from the web server each time [5].

Figure 4. Type of files present on system 3 (size in KB; before vs. after reduction).

Figure 4 shows the type of data present in the cache of the third system, where the maximum space is occupied by XML and HTML files, while PL and PHP files are completely absent, reflecting the user's interests and habits. The first bar shows the actual size of the files including replication; the second bar shows the size if replicated data is discarded [13]. The space saved by discarding replicated data is negligible for XML and HTML files. The maximum saving, about 80-90 percent, comes from discarding duplicates of GIF and TXT files; the saving for JPEG and BMP files is near 10-20 percent, and for PNG and CSS about 30-40 percent. Overall, reduction has little effect on the storage space requirement of this system.
Figure 5. Type of files present on system 4 (size in KB; before vs. after reduction).

System 4 also places less load on the cache because several file types, such as XML, PL, channel, PHP and BMP, are not present in its cache, and the files that are present show little effect of reduction. JPEG shows the maximum effect of reduction, about 70 percent, while for GIF, HTML, CSS and PNG it is only 10-20 percent. From the access log of system 4 we infer that a wide variety of content was browsed, which is why reduction has little effect on the cache storage [6].
Figure 6. Type of files present on system 5 (size in KB; before vs. after reduction).

On system 5, XML, TXT, channel and BMP files are not present. Most of the cache is occupied by JPEG and CSS files. Avoiding duplication of JPEG objects saves 40 percent of the storage space they occupy [13]. For GIF and HTML the effect of reduction is better, about 70-80 percent, while for PNG, PHP and PL files it is very small, nearly 10 percent. Avoiding duplication of CSS content saves 20 percent of the storage space required for CSS files.

IV. RESULTS AND DISCUSSION

Figure 7 shows the overall cache space occupied by all files present on the five systems, together with the effect of reduction on cache space. Reducing the XML content saves 40 percent of its cache size; for TXT files it is 50 percent. For PL, PHP and BMP files the graph shows no effect of reduction. JPEG and CSS files require the most cache space, and the effect of reduction is 50 percent for JPEG files and 40 percent for CSS files. The maximum effect of reduction is seen in HTML and PNG files, about 60 percent. The cache space occupied by GIF files is small, but the effect of reduction is substantial, about 50 percent.

Figure 7. Average size of files on all systems.

Figure 7 shows the effect of reduction on the files present in the caches of all systems; from these statistics we obtain the effect of duplicate suppression. Duplicate suppression saves cache space and also decreases the access latency of the cache, as described in [15].

Figure 8. Average size of files on all systems, grouped by category (HTML, image, style, formatted, other); size in KB.

Figure 8 shows a comparative graph of the file types grouped by category. Files such as GIF, JPEG, BMP and PNG are categorised as image files; CSS and PL files as style files; and the remaining files as "other". From Figure 8 we can see that image files are the most requested content of all, occupying 43 percent of the total cache storage space, and the effect of reduction on them has the greatest significance for saving storage space. The major portion of the storage space, 80-90 percent, is occupied by image, HTML and style files. Clearly the largest share of the cache is occupied by image files, and the effect of duplicate suppression is also highest for image files, so reducing the image files saves both cache space and access latency. By using duplicate suppression we save the space required by the files, and the access latency at the proxy also decreases: storing multiple copies of the same object increases the search time for looking up specific files, which increases the response time for the user.

V. CONCLUSION

The experimental results based on this analysis prove the need for a methodology that improves web access performance to enhance bandwidth utilization and connection speed. The suggested design aspects improve web performance in terms of reduced traffic, reduced latency, improved user response time, and optimal use of the existing bandwidth through web caching. Content aliasing was successfully detected using a web-based application, database queries and file system calls, even in large-scale web applications. A considerable amount of duplicate storage can be avoided through the suggested methodology, which avoids content aliasing at the proxy cache and thereby saves the cache space otherwise wasted on multiple copies of the same content. It also helps to reduce access latency and improve the response time of the proxy cache, helping web proxy caches to perform better during large-scale web services. This work can be further optimized by a daemon process, designed to run periodically to check the consistency of the cached data against the data at the web server. It can be scheduled during slack times with less traffic, so it adds no additional toll on the bandwidth, while updating the TTL (time-to-live) period of the cached data; this results in more cache hits with fresh data.

ACKNOWLEDGEMENTS
This paper would not have been possible without the continuous guidance and valuable inputs of Dr. Tapan Bagchi, Director, SVKM's NMIMS Shirpur Campus. The authors also thank the following corporate professionals, who provided the required data and design inputs based on their experience: Mr. Sachin Punadikar, Senior Staff Software Engineer, NAS Team, India Software Lab, IBM India, Ozone2, Pune; Mr. Pramod Ghorpade, Dev Manager, NetApp, Bengaluru; Mr. Vinod Ghorpade, Senior Practice Manager, Wipro Technologies, Reading, United Kingdom (UK); and Mr. Sujay Ghorpade, SAP PS/c Projects Consultant, Robert Bosch Engineering & Business Solutions, Bangalore.

183

Vol. 3, Issue 1, pp. 175-185

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

REFERENCES
[1] Kartik Bommepally, Glisa T. K., Jeena J. Prakash, Sanasam Ranbir Singh and Hema A. Murthy, "Internet Activity Analysis through Proxy Log," IEEE, 2010.
[2] Jun Wu and K. Ravindran, "Optimization Algorithms for Proxy Server Placement in Content Distribution Networks," Integrated Network Management-Workshops, 2009.
[3] S. Ngamsuriyaroj, P. Rattidham, I. Rassameeroj, P. Wongbuchasin, N. Aramkul and S. Rungmano, "Performance Evaluation of Load Balanced Web Proxies," IEEE, 2011.
[4] E-Services Team, "Changing Proxy Server," The Robert Gordon University, Schoolhill, Aberdeen, Scotland, 2006.
[5] W. Chen, P. Martin and H. S. Hassanein, "Caching Dynamic Content on the Web," Canadian Conference on Electrical and Computer Engineering, vol. 2, pp. 947-950, 4-7 May 2003.
[6] Sadhna Ahuja, Tao Wu and Sudhir Dixit, "On the Effects of Content Compression on Web Cache Performance," Proc. International Conference on Information Technology: Computers and Communications, 2003.
[7] Mark S. Squillante, David D. Yao and Li Zhang, "Web Traffic Modeling and Web Server Performance Analysis," Proc. 38th Conference on Decision & Control, Phoenix, Arizona, USA, December 1999.
[8] C. E. Wills and M. Mikhailov, "Studying the Impact of More Complete Server Information on Web Caching," Computer Communications, vol. 24, no. 2, pp. 184-190, May 2000.
[9] J. Wang, "A Survey of Web Caching Schemes for the Internet," Cornell Network Research Group (C/NRG), Department of Computer Science, Cornell University, 1999.
[10] A. Mahanti, C. Williamson and D. Eager, "Traffic Analysis of a Web Proxy Caching Hierarchy," IEEE Network Magazine, May 2000.
[11] N. Shivakumar and H. Garcia-Molina, "Finding Near Replicas of Documents on the Web," Proc. Workshop on Web Databases, March 1998.
[12] L. Breslau, P. Cao, L. Fan, G. Phillips and S. Shenker, "Web Caching and Zipf-like Distributions: Evidence and Implications," Proc. INFOCOM '99, New York, NY, March 1999.
[13] Jeffrey C. Mogul, "A Trace-based Analysis of Duplicate Suppression in HTTP," Compaq Computer Corporation Western Research Laboratory, November 1999.
[14] C. Guerrero, C. Juiz and R. Puigjaner, "Web Performance and Behavior Ontology," International Conference on Complex, Intelligent and Software Intensive Systems (CISIS 2008), pp. 219-225, 4-7 March 2008.
[15] Wei-Kuo Liao and Chung-Ta King, "Proxy Prefetch and Prefix Caching," International Conference on Parallel Processing, pp. 95-102, 3-7 September 2001.
[16] P. Triantafillou and I. Aekaterinidis, "Web Proxy Cache Replacement: Do's, Don'ts, and Expectations," Second IEEE International Symposium on Network Computing and Applications (NCA 2003), pp. 59-66, 16-18 April 2003.
[17] K. Bharat, A. Broder, J. Dean and M. R. Henzinger, "A Comparison of Techniques to Find Mirrored Hosts on the WWW," IEEE Data Eng. Bull., vol. 23, no. 4, 2000.
[18] J. Shim, P. Scheuermann and R. Vingralek, "Proxy Cache Algorithms: Design, Implementation, and Performance," IEEE Transactions on Knowledge and Data Engineering, vol. 11, no. 4, pp. 549-562, July 1999.
[19] Kai Cheng, Y. Kambayashi and M. Mohania, "Efficient Management of Data in Proxy Cache," Proc. 12th International Workshop on Database and Expert Systems Applications, pp. 479-483, 2001.

Authors

Suryakant B Patil was born in Maharashtra, India, in 1975. He received the Bachelor's and Master's degrees, both in Computer Engineering, from the Walchand College of Engineering, Sangli. He is currently pursuing the Ph.D. degree with the Department of Computer Engineering, NMIMS Mumbai, and has submitted his final thesis. His research interests include Software Engineering, Project Management, MIS, Web Engineering, Mobile Communication and Information Security.

Sachin D Chavan was born in Maharashtra, India, in 1987. He received the Bachelor's degree in Computer Engineering from North Maharashtra University, Maharashtra, and is pursuing his Master's in Computer Engineering at NMIMS Mumbai. His research interests include Web Engineering, Design and Analysis of Algorithms, Computer Networking, Operating

Systems and Software Engineering.

Preeti S Patil was born in Karnataka, India, in 1977. She received the Bachelor's degree in Computer Engineering from VTU University and the Master's in Computer Engineering from Mumbai University. She is currently pursuing the Ph.D. degree with the Department of Computer Engineering, NMIMS Mumbai, and has submitted her final thesis. Her research interests include Web Engineering, Mobile Communication, Information Security, Databases, Data Warehousing and Mining.


ENERGY EFFICIENT CLUSTER BASED KEY MANAGEMENT TECHNIQUE FOR WIRELESS SENSOR NETWORKS
T. Lalitha1 and R. Umarani2

1 Research Scholar, Bharatiar University, Coimbatore, Tamilnadu, India
2 Research Supervisor, Bharatiar University, Coimbatore, Tamilnadu, India

ABSTRACT
Wireless Sensor Networks (WSNs) are vulnerable to node capture attacks, in which an attacker captures one or more sensor nodes and reveals all stored security information, enabling him to compromise part of the WSN communications. Due to the large number of sensor nodes and the lack of information about the deployment and hardware capabilities of each node, key management in wireless sensor networks is a complex task. Limited memory resources and energy constraints are further key management issues in WSNs. Hence an efficient key management scheme is needed that reduces the impact of node capture attacks and consumes less energy. In this paper, we develop a cluster based technique for key management in wireless sensor networks. Initially, clusters are formed in the network and the cluster heads are selected based on energy cost, coverage and processing capacity. The sink assigns a cluster key to every cluster and an EBS key set to every cluster head. The EBS key set contains the pairwise keys for intra-cluster and inter-cluster communication. During data transmission towards the sink, the data passes through two phases of encryption, thus ensuring security in the network. By simulation results, we show that our proposed technique efficiently increases the packet delivery ratio with reduced energy consumption.

KEYWORDS: Wireless Sensor Networks, Key Management, Data Transmission, Attacks, Cluster

I. INTRODUCTION

1.1 Wireless Sensor Network
A network comprising several minute wireless sensor nodes organized in a dense manner is called a Wireless Sensor Network (WSN). Every node in this network estimates the state of its surroundings. The estimated results are converted into signal form and processed in order to determine the features related to this technique. Based on the multi-hop technique, all the accumulated data is directed towards special nodes known as sink nodes or the Base Station (BS). The user at the destination receives the data through the internet or a satellite via a gateway. The gateway is not always necessary, as its use depends on the distance between the destination user and the network [1]. Wireless sensor networks are a promising technology for supervising the physical world. In a sensor network application, several minute sensor nodes are organized and made to collaborate in order to collect data from the surroundings. Sensing modules such as image sensors are placed in every node, which has the ability to communicate in the wireless environment [2].

II. ENERGY EFFICIENT CLUSTER BASED KEY MANAGEMENT TECHNIQUE
2.1 Cluster Formation
In the wireless sensor network, after the nodes are deployed in the physical environment, they first report their physical locations to the base station, and then the network starts to select cluster heads. According to the cluster head selection algorithm, each node decides whether it is capable of serving as a cluster head based on the following selection criteria:
a) High energy resources
b) Wide communication range
c) High processing capacity


Vol. 3, Issue 1, pp. 186-190

The encryption mechanism is carried out for the authentication process. When a particular node Ni satisfies the selection criteria, it is capable of being the cluster head, so it broadcasts a cluster head beacon (CH_BEACON) packet encrypted with a key called the primary key, Kpri:

Ni → broadcast : Kpri(CH_BEACON)

When the neighbouring nodes Si receive this message, each node that intends to join the cluster sends a cluster head reply (CH_REPLY) message to Ni. The reply message contains the node ID and the response content Ack:

Si → Ni : CH_REPLY = Kpri(ID{Si} || Ack)

If the number of reply messages received by Ni is greater than a threshold Rth, then Ni is selected as the cluster head, CH. Finally, the cluster head assigns IDs to all the member nodes that intend to join the cluster.
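The selection round above can be sketched in Python. This is only an illustrative sketch of the described protocol; the threshold values, field names and the `eligible`/`elect_cluster_head` helpers are hypothetical, and the Kpri encryption of CH_BEACON and CH_REPLY is omitted for brevity.

```python
def eligible(node, e_min=0.8, r_min=100.0, p_min=1.0):
    """Selection criteria: high energy resources, wide communication
    range, high processing capacity (thresholds are assumed values)."""
    return (node["energy"] >= e_min and node["range"] >= r_min
            and node["cpu"] >= p_min)

def elect_cluster_head(candidate, neighbours, r_th=2):
    """Candidate broadcasts CH_BEACON; it becomes CH only if the number
    of CH_REPLY messages exceeds the threshold R_th."""
    if not eligible(candidate):
        return None
    replies = [n["id"] for n in neighbours if n["wants_to_join"]]
    if len(replies) <= r_th:
        return None
    # The cluster head assigns IDs to all joining member nodes.
    return {"ch": candidate["id"], "members": dict(enumerate(replies))}

node = {"id": "N1", "energy": 0.9, "range": 120.0, "cpu": 1.2}
nbrs = [{"id": f"S{i}", "wants_to_join": True} for i in range(4)]
cluster = elect_cluster_head(node, nbrs)
assert cluster == {"ch": "N1", "members": {0: "S0", 1: "S1", 2: "S2", 3: "S3"}}
```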

III. EBS CONSTRUCTION
An EBS consists of several subsets of the member set collection. In the EBS, every subset corresponds to a particular key, and the nodes which possess that key are the elements of the subset. The dimension of the EBS is represented by (N, K, M); it denotes an N-member secure group, numbered from 1 to N, with a separate key maintained for every subset by the key server. In the EBS, if there exists a subset Ai, then every member of this subset knows the key Ki. In the EBS, for every t ∈ [1, N] there are M elements whose union is equal to [1, N] − {t}. Hence, any member t can be ejected by the key server. Re-keying is then performed to let every remaining member learn the replacement keys for the K keys: the M messages are multicast after being encrypted with the keys which correspond to those M elements whose union equals [1, N] − {t}. To restrict decipherability to selected members, every key is encrypted with its predecessor. A canonical enumeration technique is used for the construction of EBS subsets: in forming a subset of K objects out of a set of K + M objects, every feasible way is taken into consideration. A matrix A is formed to develop the bit string sequence in canonical order: for known K and M, the C(K + M, K) columns of A are the successive bit strings of length K + M, each containing exactly K ones. For EBS(N, K, M), A is known as the canonical matrix. For instance, the canonical matrix A for EBS(8, 3, 2) encloses the enumeration of all C(5, 3) ways to form a subset of 3 keys from 5 keys, as shown in Table 1 (the enumeration matrix for EBS(8, 3, 2)).

After the construction of the matrix A, every row of the table corresponds to a subset Ti, where an entry 1 in the row indicates that the corresponding node is present in the subset. Since N = 8, columns M9 and M10 are not used. In Table 1, T1 = [5, 6, 7, 8], T2 = [2, 3, 4, 8], T3 = [1, 3, 4, 6, 7], T4 = [1, 2, 4, 5, 7] and T5 = [1, 2, 3, 5, 6, 8]. It is easy to verify that [1, 8] − {1} = T1 ∪ T2, [1, 8] − {2} = T1 ∪ T3, [1, 8] − {3} = T1 ∪ T4, and so on. Hence, on the exit of any node from the network, the key information has to be updated through only two node subsets. In this protocol, only five management keys are necessary, whereas 15 keys are necessary in the case of LKH. This in turn minimizes key computation and also saves storage space. During the construction of the EBS(N, K, M) model in this protocol, the values of the parameters N, K and M are raised in order to produce a larger number of management keys. Later on, the spare keys are used for new nodes joining the cluster.
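The canonical construction can be sketched with a lexicographic enumeration of the C(K+M, K) key subsets. Note that this is an assumed ordering: the paper's Table 1 may order the columns differently, so the concrete T_i produced below need not match those listed above, but the eviction property (the M uncovered subsets together reach every member except the evicted one) holds either way.

```python
from itertools import combinations

def ebs_subsets(n, k, m):
    """Build the key subsets T_1..T_{k+m} of an EBS(n, k, m).  Column j of
    the canonical matrix is the j-th k-subset of the k+m keys; member j
    holds exactly those k keys.  T_i collects the members holding key i."""
    columns = list(combinations(range(1, k + m + 1), k))[:n]  # unused columns dropped
    subsets = {i: set() for i in range(1, k + m + 1)}
    for member, keys in enumerate(columns, start=1):
        for key in keys:
            subsets[key].add(member)
    return subsets

T = ebs_subsets(8, 3, 2)
# Evicting member t: re-key using the m subsets whose key t does not hold;
# their union is exactly [1, n] - {t}, so every other member is reached.
for t in range(1, 9):
    cover = [Ti for Ti in T.values() if t not in Ti]
    assert len(cover) == 2                                   # m = 2 messages
    assert set().union(*cover) == set(range(1, 9)) - {t}
```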

IV. SIMULATION RESULTS
The proposed Energy Efficient Cluster Based Key Management (EECBKM) technique is evaluated through NS2 simulation. We consider a random network of 100 sensor nodes deployed in an area of 500 m × 500 m. Two sink nodes are assumed to be situated 100 meters away from this area. In the simulation, the channel capacity of the mobile hosts is set to the same value: 2 Mbps. The simulated traffic is CBR over UDP. Nine clusters are formed; of these, data is transmitted from 4 cluster heads to the sink, with 3 sensor nodes in each cluster sending data to their cluster head. The number of attacker nodes is varied from 2 to 10.
Table 2 summarizes the simulation parameters used.

Table 2: Simulation parameters
No. of Nodes: 100
Area Size: 500 m × 500 m
MAC: 802.11
Routing Protocol: EECBKM
Simulation Time: 50 sec
Traffic Source: CBR
Packet Size: 512 bytes
Rate: 250 kb
Transmission Range: 250 m
No. of Clusters Sending Data: 1, 2, 3 and 4
No. of Nodes per Cluster Sending Data: 3
Transmit Power: 0.395 W
Receiving Power: 0.660 W
Idle Power: 0.035 W
Initial Energy: 17.1 Joules
No. of Attackers: 2, 4, 6, 8 and 10

4.1 Performance Metrics
The performance of the EECBKM technique is compared with the SecLEACH scheme. The performance is evaluated mainly according to the following metrics:
• Average Packet Drop: the number of packets dropped due to various attacks, averaged over all surviving data packets at the destination.
• Average Packet Delivery Ratio: the ratio of the number of packets received successfully to the total number of packets transmitted.
• Energy: the average energy consumed for the data transmission.
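The two ratio metrics above can be computed as in the following sketch. The packet counts used here are hypothetical illustrative figures, not values from the simulation.

```python
def delivery_ratio(received, sent):
    """Average packet delivery ratio: packets received successfully
    divided by packets transmitted."""
    return received / sent

def avg_energy(joules_used, transmissions):
    """Average energy consumed per data transmission."""
    return joules_used / transmissions

# Hypothetical counts from one run (not the simulation's actual numbers):
sent, received = 50000, 21500
pdr = delivery_ratio(received, sent)
drop = sent - received          # packets lost, e.g. to node capture attacks
assert abs(pdr - 0.43) < 1e-12 and drop == 28500
```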


V. RESULTS

5.1 Based on Attackers
In our initial experiment, we vary the number of attackers as 2,4,6,8 and 10 from various clusters performing node capture attacks.
Figure 3: Attackers Vs Delivery Ratio

Figure 4: Attackers Vs Packet Drop

Figure 5: Attackers Vs Energy

(Figures 3-5 compare EECBKM with SecLEACH as the number of attackers varies from 2 to 10.)

When the number of attackers increases, the packet drop naturally increases, thereby reducing the packet delivery ratio. Since EECBKM reduces node capture attacks, the packet drop is lower than in the existing schemes. Figures 3 and 4 give the packet delivery ratio and the packet drop as the number of attackers increases, and Figure 5 gives the corresponding energy consumption.

VI. CONCLUSION
In this paper, we have developed an efficient technique for key management in the wireless sensor network. During the formation of a cluster, initially a cluster head is selected based on eligibility criteria such as energy cost, coverage and processing capacity. After the cluster head selection, the

information about all the members of the cluster is sent to the sink by the cluster head. The sink then provides the cluster head with the cluster key and the EBS key set required for communication between the nodes. These keys are distributed to the nodes by the cluster head prior to communication. After the key distribution, a secure channel is established between the nodes and the cluster head. During data transmission from the cluster members to the sink, the data passes through two phases. In the first phase, the data is encrypted and transmitted to the cluster head. In the second phase, the data is encrypted with another key by the cluster head and then transmitted to the sink. Thus this technique allows inter-cluster as well as intra-cluster communication in a very efficient manner with high security. By simulation results, we have shown that our proposed technique efficiently increases the packet delivery ratio with reduced energy consumption.

REFERENCES
[1] Lina M. Pestana Leão de Brito and Laura M. Rodríguez Peralta, "An Analysis of Localization Problems and Solutions in Wireless Sensor Networks," Polytechnical Studies Review, vol. VI, 2008, ISSN: 1645-9911.
[2] Huang Lee and Hamid Aghajan, "Collaborative Self-Localization Techniques for Wireless Image Sensor Networks," Proc. Asilomar Conf. on Signals, Systems and Computers, October 2005.
[3] D. Saravanan, D. Rajalakshmi and D. Maheswari, "DYCRASEN: A Dynamic Cryptographic Asymmetric Key Management for Sensor Network using Hash Function," International Journal of Computer Applications (0975-8887), vol. 18, no. 8, March 2011.
[4] Yoon-Su Jeong and Sang-Ho Lee, "Secure Key Management Protocol in the Wireless Sensor Network," International Journal of Information Processing Systems, vol. 2, no. 1, March 2006.
[5] Mohammed A. Abuhelaleh and Khaled M. Elleithy, "Security in Wireless Sensor Networks: Key Management Module in SOOAWSN," International Journal of Network Security & Its Applications (IJNSA), vol. 2, no. 4, October 2010.

Authors:

Lalitha T. received a Master's degree in Computer Applications in 2000 from Vysya College, Salem, and an M.Phil. (CS) in 2004 from Bharathidasan University, Trichy. She is now pursuing a part-time Ph.D. at Bharatiar University, Coimbatore. She is also working as a Senior Assistant Professor in the Department of MCA at Sona College of Technology, Salem. Her research interests include network security and network simulation, as well as validation and verification techniques. She has published 14 papers in national and international journals and a book, "Problem Solving Techniques".

UmaRani R. completed her M.C.A. at NIT, Trichy in 1989. She did her M.Phil. at Mother Teresa University, Kodaikanal, and received her Ph.D. from Periyar University, Salem in 2006. Her research topic was information security, and her areas of interest include information security, data mining and mobile communications. She has produced 20 M.Phil. (Computer Science) candidates and is currently guiding 5 Ph.D. (Computer Science) research scholars. She is also working as an Associate Professor in the Department of Computer Science, Sri Sarada College for Women, Salem. She has published 35 papers in international and national journals and 55 papers in national and international conferences.


ON PRODUCT SUMMABILITY OF FOURIER SERIES USING MATRIX EULER METHOD
B. P. Padhy1, Banitamani Mallik2, U. K. Misra3 and Mahendra Misra4

1 Department of Mathematics, Roland Institute of Technology, Berhampur, Odisha
2 Department of Mathematics, JITM, Paralakhemundi, Odisha
3 P.G. Department of Mathematics, Berhampur University, Odisha
4 Principal, N.C. College (Autonomous), Jajpur, Odisha

ABSTRACT
In this paper, a theorem on product summability of Fourier series using Matrix-Euler method is proved.

KEYWORDS: $A$-mean, $A(E, z)$-product mean, Fourier series.
MATHEMATICS SUBJECT CLASSIFICATION: 42B05, 42B08.

I. INTRODUCTION

Let $\sum a_n$ be a given infinite series with the sequence of partial sums $\{s_n\}$. Let $A = (a_{mn})_{\infty\times\infty}$ be a matrix. Then the sequence-to-sequence transformation

(1.1) $t_m = \sum_{\nu=0}^{m} a_{m\nu}\, s_\nu$, $m = 1, 2, \ldots$

defines the sequence $\{t_m\}$ of the $A$-means of the sequence $\{s_n\}$. If

(1.2) $t_m \to s$ as $m \to \infty$,

then the series $\sum_{n=0}^{\infty} a_n$ is said to be $A$-summable to $s$.

The conditions for regularity of $A$-summability are easily seen to be [3]:

(i) $\sup_m \sum_n |a_{mn}| < H$, where $H$ is an absolute constant;
(ii) $\lim_{m\to\infty} a_{mn} = 0$ for each $n$;
(iii) $\lim_{m\to\infty} \sum_{n=0}^{\infty} a_{mn} = 1$.

Let

$(E, z): \quad E_n^z = \dfrac{1}{(1+z)^n} \sum_{\nu=0}^{n} \binom{n}{\nu} z^{n-\nu} s_\nu$.

If

(1.5) $E_n^z \to s$ as $n \to \infty$,

then the series $\sum a_n$ is said to be summable $(E, z)$ to a definite number $s$.

If

(1.6) $T_n = \sum_{k=0}^{n} \dfrac{a_{nk}}{(1+z)^k} \sum_{\nu=0}^{k} \binom{k}{\nu} z^{k-\nu} s_\nu \to s$ as $n \to \infty$,

then the series $\sum a_n$ is said to be summable to $s$ by the $A(E, z)$ method.
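For orientation (this remark is ours, not the authors'), choosing $A$ to be the Nörlund matrix $a_{nk} = p_{n-k}/P_n$ reduces (1.6) to the $(N, p_n)(E, z)$ product mean treated in the known theorem below; this is the sense in which $A(E, z)$ generalizes it:

```latex
% Special case of (1.6) with the Noerlund matrix a_{nk} = p_{n-k}/P_n:
T_n \;=\; \sum_{k=0}^{n} \frac{p_{n-k}}{P_n}\,
          \frac{1}{(1+z)^{k}} \sum_{\nu=0}^{k} \binom{k}{\nu} z^{k-\nu} s_{\nu},
\qquad P_n = \sum_{\nu=0}^{n} p_\nu .
```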

It is known [1] that $(E, z)$ is regular. Throughout this paper it is supposed that the method $A(E, z)$ is regular. Let $f(t)$ be a periodic function with period $2\pi$, integrable in the sense of Lebesgue over $(-\pi, \pi)$; then

(1.7) $f(t) = \dfrac{a_0}{2} + \sum_{n=1}^{\infty} (a_n \cos nt + b_n \sin nt) = \sum_{n=0}^{\infty} A_n(t)$

is the Fourier series associated with $f$. We use the following notation throughout this paper:

$\phi(t) = f(x+t) + f(x-t) - 2f(x)$,

$K_n(t) = \dfrac{1}{2\pi} \sum_{k=0}^{n} \dfrac{a_{nk}}{(1+z)^k} \sum_{\nu=0}^{k} \binom{k}{\nu} z^{k-\nu}\, \dfrac{\sin\left(\nu+\frac{1}{2}\right)t}{\sin\frac{t}{2}}$.

II. KNOWN THEOREM

Dealing with the $(N, p_n)(E, z)$ method for a Fourier series, Nigam et al. [2] proved the following theorem:

THEOREM 2.1: Let $\{p_n\}$ be a positive, monotonic, non-increasing sequence of real constants such that $P_n = \sum_{\nu=0}^{n} p_\nu \to \infty$ as $n \to \infty$. If

(2.1) $\Phi(t) = \displaystyle\int_0^t |\phi(u)|\, du = O\!\left(\dfrac{t}{\alpha(1/t)}\right)$ as $t \to +0$

and

(2.2) $\alpha(n) \to \infty$ as $n \to \infty$,

where $\alpha(t)$ is a positive, non-increasing function of $t$, then the Fourier series $\sum_{n=0}^{\infty} A_n(t)$ is summable $(N, p_n)(E, z)$ to $f(x)$ at the point $t = x$. In this paper, we generalize this result to $A(E, z)$ summability of Fourier series.

III. MAIN THEOREM

THEOREM 3.1: Let $A = (a_{mn})_{\infty\times\infty}$ be a regular matrix and let

(3.1) $\Phi(t) = \displaystyle\int_0^t |\phi(u)|\, du = O\!\left(\dfrac{t}{\alpha(1/t)}\right)$ as $t \to +0$,

where $\alpha(t)$ is a positive, non-increasing function of $t$, and

(3.2) $\alpha(n) \to \infty$ as $n \to \infty$.

Then the Fourier series $\sum_{n=0}^{\infty} A_n(t)$ is summable $A(E, z)$ at the point $t$.

IV. REQUIRED LEMMAS

We require the following lemmas to prove the theorem.

LEMMA 4.1: $K_n(t) = O(n)$ for $0 \le t \le \dfrac{1}{n+1}$.

PROOF: For $0 \le t \le \dfrac{1}{n+1}$ we have $\sin nt \le n \sin t$; then

$|K_n(t)| \le \dfrac{1}{2\pi} \sum_{k=0}^{n} \dfrac{|a_{nk}|}{(1+z)^k} \sum_{\nu=0}^{k} \binom{k}{\nu} z^{k-\nu}\, \dfrac{(2\nu+1)\sin\frac{t}{2}}{\sin\frac{t}{2}}$

$\le \dfrac{1}{2\pi} \sum_{k=0}^{n} \dfrac{|a_{nk}|\,(2k+1)}{(1+z)^k} \sum_{\nu=0}^{k} \binom{k}{\nu} z^{k-\nu}$

$= \dfrac{1}{2\pi} \sum_{k=0}^{n} \dfrac{|a_{nk}|\,(2k+1)}{(1+z)^k}\, (1+z)^k$

$\le \dfrac{2n+1}{2\pi} \sum_{k=0}^{n} |a_{nk}| = O(n)$.

LEMMA 4.2: $K_n(t) = O\!\left(\dfrac{1}{t}\right)$ for $\dfrac{1}{n} \le t \le \pi$.

PROOF: For $\dfrac{1}{n} \le t \le \pi$ we have, by Jordan's lemma, $\sin\dfrac{t}{2} \ge \dfrac{t}{\pi}$ and $|\sin nt| \le 1$. Then

$|K_n(t)| \le \dfrac{1}{2\pi} \sum_{k=0}^{n} \dfrac{|a_{nk}|}{(1+z)^k} \sum_{\nu=0}^{k} \binom{k}{\nu} z^{k-\nu}\, \dfrac{\left|\sin\left(\nu+\frac12\right)t\right|}{\sin\frac{t}{2}}$

$\le \dfrac{1}{2\pi} \sum_{k=0}^{n} \dfrac{|a_{nk}|}{(1+z)^k} \sum_{\nu=0}^{k} \binom{k}{\nu} z^{k-\nu}\, \dfrac{\pi}{t}$

$= \dfrac{1}{2t} \sum_{k=0}^{n} \dfrac{|a_{nk}|}{(1+z)^k}\, (1+z)^k = O\!\left(\dfrac{1}{t}\right)$.

V. PROOF OF THEOREM 3.1

If $s_n$ is the $n$-th partial sum of the Fourier series $\sum_{n=0}^{\infty} A_n(t)$ of $f(t)$, then by using the Riemann-Lebesgue theorem and following Titchmarsh [4] we have

$s_n - f(x) = \dfrac{1}{2\pi} \displaystyle\int_0^{\pi} \phi(t)\, \dfrac{\sin\left(n+\frac12\right)t}{\sin\frac{t}{2}}\, dt$.

Thus, the $(E, z)$ transform $E_n^z$ of $s_n$ is given by

$E_n^z - f(x) = \dfrac{1}{2\pi(1+z)^n} \displaystyle\int_0^{\pi} \dfrac{\phi(t)}{\sin\frac{t}{2}} \sum_{k=0}^{n} \binom{n}{k} z^{n-k} \sin\left(k+\frac12\right)t \, dt$.

If $T_n$ denotes the $A(E, z)$ transform of $s_n$, we then have

$T_n - f(x) = \dfrac{1}{2\pi} \displaystyle\int_0^{\pi} \dfrac{\phi(t)}{\sin\frac{t}{2}} \sum_{k=0}^{n} \dfrac{a_{nk}}{(1+z)^k} \sum_{\nu=0}^{k} \binom{k}{\nu} z^{k-\nu} \sin\left(\nu+\frac12\right)t \, dt = \displaystyle\int_0^{\pi} \phi(t)\, K_n(t)\, dt$.

In order to prove the theorem it is therefore sufficient to show that

$\displaystyle\int_0^{\pi} \phi(t)\, K_n(t)\, dt = o(1)$ as $n \to \infty$.

For $0 < \delta < \pi$ we have

$T_n - f(x) = \displaystyle\int_0^{\pi} \phi(t)\, K_n(t)\, dt = \left( \int_0^{1/n} + \int_{1/n}^{\delta} + \int_{\delta}^{\pi} \right) \phi(t)\, K_n(t)\, dt = I_1 + I_2 + I_3$, say.

Now

$|I_1| \le \displaystyle\int_0^{1/n} |\phi(t)|\, |K_n(t)|\, dt \le O(n) \displaystyle\int_0^{1/n} |\phi(t)|\, dt$, using Lemma 4.1,

$= O(n)\, O\!\left(\dfrac{1}{n\,\alpha(n)}\right)$, using (3.1),

$= O\!\left(\dfrac{1}{\alpha(n)}\right) = o(1)$ as $n \to \infty$, using (3.2).

Next,

$|I_2| \le \displaystyle\int_{1/n}^{\delta} |\phi(t)|\, |K_n(t)|\, dt = O\!\left( \displaystyle\int_{1/n}^{\delta} \dfrac{|\phi(t)|}{t}\, dt \right)$, using Lemma 4.2,

$= O\!\left( \left[\dfrac{\Phi(t)}{t}\right]_{1/n}^{\delta} + \displaystyle\int_{1/n}^{\delta} \dfrac{\Phi(t)}{t^{2}}\, dt \right)$

$= O\!\left(\dfrac{1}{\alpha(n)}\right) + \displaystyle\int_{1/\delta}^{n} O\!\left(\dfrac{1}{u\,\alpha(u)}\right) du$, where $u = 1/t$ and $0 < \delta < 1$,

$= O\!\left(\dfrac{1}{\alpha(n)}\right) + O\!\left(\dfrac{1}{n\,\alpha(n)}\right) \displaystyle\int_{1/\delta}^{n} du$,

using the second mean-value theorem for the integral in the second term, since $\alpha(n)$ is monotonic,

$= o(1) + o(1) = o(1)$ as $n \to \infty$, using (3.2).

Finally,

$|I_3| \le \displaystyle\int_{\delta}^{\pi} |\phi(t)|\, |K_n(t)|\, dt = o(1)$ as $n \to \infty$,

by the Riemann-Lebesgue theorem and the regularity condition of the method of summability.

Thus $T_n - f(x) = o(1)$ as $n \to \infty$. This completes the proof of the theorem.

VI. CONCLUSION

Thus, product summability of Fourier series by the Matrix-Euler method generalizes both the $(N, p_n)(E, z)$ product summability and the $(N, p_n, q_n)(E, z)$ product summability of Fourier series.

REFERENCES
[1] Hardy, G. H., "Divergent Series," Clarendon Press, Oxford University Press, 1949.
[2] Nigam, H. K. and Kusum Sharma, "A Study on $(N, p_n)(E, q)$ Product Summability of Fourier Series," Ultra Scientist, vol. 22(3) M, pp. 927-932, 2010.
[3] Toeplitz, O., "Über allgemeine lineare Mittelbildungen," Prace Matematyczno-Fizyczne, vol. 22, pp. 113-119, 1911.
[4] Titchmarsh, E. C., "The Theory of Functions," Oxford University Press, pp. 402-403, 1939.


Vol. 3, Issue 1, pp. 191-196

Authors Biography
B. P. Padhy received his Ph.D. from Berhampur University in 2012. He has 14 years of teaching experience and has published around 15 research articles in various international and national journals of repute. He has also published a book entitled "Summability Methods and Applications" with Lap Lambert Academic Publishing GmbH & Co. KG, Germany. Presently he is working as an Assistant Professor in Mathematics at Roland Institute of Technology, Berhampur, Odisha.

Banitamani Mallik is currently working as an Assistant Professor in Mathematics at JITM, Paralakhemundi, Gajapati, Odisha, which is under the Centurion University of Technology & Management. She has more than 15 years of teaching experience. She obtained her M.Phil. in Mathematics from Utkal University and has been doing her Ph.D. research in the field of summability theory under Berhampur University since 2007.

U. K. Misra has been working as a faculty member in the P.G. Department of Mathematics, Berhampur University for the last 28 years. To his credit he has guided 13 Ph.D.s and 1 D.Sc., and he has published around 75 papers in various international and national journals of repute. Prof. Misra's fields of research are summability theory, sequence spaces, Fourier series, inventory control and mathematical modeling. He is a reviewer for Mathematical Reviews, published by the American Mathematical Society.

Mahendra Misra received his Ph.D. in 1997 from Berhampur University. He has published around 30 research articles in various international and national journals of repute, and three Ph.D.s have been produced under his guidance. Presently he is working as the H.O.D. of the P.G. Department of Mathematics at N.C. College (Autonomous), Jajpur, Odisha.


MATERIAL HANDLING AND SUPPLY CHAIN MANAGEMENT IN FERTILIZER PRODUCTION – A CASE STUDY
T. K. Jack
Dept. of Mechanical Engineering, University of Port Harcourt, Nigeria

ABSTRACT
The fertilizer bagging line operation for packaging and conveying to the final delivery trucks was a source of problems for one fertilizer producer, referred to here as Company "N". It resulted in frequent maintenance activity from constant machine breakdowns, loss of production, and the attendant loss in plant revenue. To overcome this, a technical study was conducted to address the problem. A plant site visit and technical inspection revealed that the breakdowns were primarily due to fertilizer granules becoming trapped on the packaging/bagging conveyors, and to fast acidic corrosion causing rust and eventual seizure of the conveyor rollers and lines. The resistance to free roller movement induces a drag load on the drive motors, and the additional loading causes failures of the electronic control sensors and stoppages. The unreliability of the bagging operations, with its unplanned, repeated start-stop sequence, also led to safety concerns, with a few incidences of injuries recorded. The engineered solution is presented in this report.

I. INTRODUCTION

Company "N" was set up to meet the fertilizer demands of local farmers, and also to utilize the excess gas resources of the country. The products of the Company are Ammonia, Urea and NPK fertilizers; Ammonia is the major raw material in the production of the Urea and NPK fertilizers. The expected total daily production of Urea- and NPK-type fertilizers, without machine breakdown and with no loss of time on the part of the bagging personnel, over three shifts (8 working hours per shift) in the bagging/material handling section of Company "N", is about 1500 tons for its eight bagging lines (four each for Urea- and NPK-type bagging) [1]. This is operationally managed as six production/packaging line units (three each for Urea and NPK), with one stand-by each. However, due to frequent maintenance activity from bagging machine breakdowns, and improper orientation (in the form of training) on the part of the packaging line/bagging personnel (often hourly paid contract staff), the bagging target was never achieved. This was a huge loss in revenue to the company and a source of worry to the government Agricultural Ministry in meeting annual farming plans and targets. The bagging lines had teething installation challenges before final commissioning [2]. Bagging personnel were often recruited on a contract basis and immediately engaged in active packaging of fertilizers, without proper orientation and training. Plant maintenance crews often had other responsibilities in other plant sections and were unable to give full attention to the increasing bagging machine line breakdowns. The sections that follow give a brief description of the fertilizer packaging line operation (Fig. 1), the identified machine problem source points, and the preventive and corrective maintenance actions required.

II. THE BAGGING/PACKAGING OPERATION OF COMPANY “N”

A typical bagging line arrangement consists of: (1) an in-feeder chute (provided with a bag clamp and sensor, and pneumatically operated), where the bag is fed with the material (either Urea or NPK) to be bagged. (2) The bag, when loaded with 50 kg of product, then drops onto a slat/wooden conveyor, which conveys the bag to (3) the sealing machine unit for sealing of the inner bag (polythene, for protection against moisture); (4) it is then conveyed on to the sewing machine for sewing of the jute-type outer bag, from where it is (5) conveyed on to the bag turner; it is (6) further conveyed on to the compressing unit, and finally on to (7) the loading plate, which is in pallet form. A pallet holds twenty (20) 50 kilogram (kg) bags of product (Urea or NPK fertilizer). (8) The pallet is finally carried off the bagging machine by a forklift to storage, awaiting the delivery trucks.

197

Vol. 3, Issue 1, pp. 197-199

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963
Feeder → Wooden Conveyor → Sealing Machine → Sewing Machine → Bag Turner → Compressing Unit → Pallet Loading Plate → To Storage and Delivery Trucks

Fig.(1): Original Installed Fertilizer Bagging Line-to-Storage/Delivery Trucks

III. SOURCE OF MAINTENANCE PROBLEMS

Two major problem areas were identified: machine-associated problems, and issues that affect worker productivity. The greatest maintenance activity occurs in the sealing machine unit, the pallet pusher chain, and the sewing machine. One source of problems is fertilizer granules becoming trapped at sections of the bagging machine unit. The acidic nature of the fertilizer also has a deteriorating, corrosive effect, particularly on the pusher chain links. The conveyor chain links slowly rust away, leading to complete fracture of chain parts and consequent difficulty in conveying the bags, since the broken ends of the chain links hook onto the conveyor rail. The conveyor rollers are also affected. Issues affecting worker productivity arise from sluggish operational activities such as work-permit issuance delays, supervisory instruction time lags, prioritization in repair personnel allocation, and delays in requests for replacement spares from the materials warehouse, with the result that actual production time did not exceed 4 hours per shift. This implies an actual total production time of 12 hours over the three shifts. The question then is: can these production delays and lapses not be minimized to the barest, or if possible, done away with completely? A careful study of the bagging techniques was conducted to find solutions.

IV. FINDING SOLUTIONS

The technical services engineering consultants, in collaboration with the plant maintenance engineers and the final product distribution unit, made modifications to the production-to-distribution supply chain: the sealing unit and the entire palletizing units were scrapped for six of the eight bagging lines (three each for the Urea and NPK bagging lines), and a just-in-time [3] loading method was adopted, since the breakdowns often resulted in the delivery trucks queuing for hours and, in certain instances, days. The reduced bagging/production-to-distribution chain activities are: 1) feeding the bag with the product; 2) loading it on the wooden/slat conveyor; 3) sewing the bag, conveying it on, and loading it straight onto delivery trucks, or sending it to storage. The modifications made to the original bagging line are shown in Fig. (2).

Feeder → Wooden Conveyor → Sewing Machine → To Storage or Delivery Truck → To Customers

Fig. (2): Modified Fertilizer Bagging Line-to-Storage/Delivery Line

A preventive maintenance approach, with in-house cleaning/routine maintenance to handle the corrosion problems in the conveyor chain links, was adopted. Also, a general weekly maintenance of the entire lines is carried out on Sundays, when there is no bagging activity. These weekly maintenance plans include returning out-of-use lines to service with new chain links, which are often machined at the general service machine shop to avoid delays in the delivery of out-of-stock, overseas re-ordered spare chain replacements. Old and worn conveyor rollers are also replaced during this period.

V. RESULTS AND DISCUSSIONS

These slight modifications allowed for better just-in-time material flow [3] and increased the total production time to between six and seven hours per shift, with some lines recording no lost time due to breakdown in a 24-hour continuous production period. The truck queues were greatly reduced, with slight queues experienced only at the weekend preceding the restart of production on Monday. It was observed that the increased emphasis placed on safety, and proper training for the bagging personnel, also added to the improved productivity.

VI. CLOSING

A significant increase in the life of the machines and improvements in packaging line/bagging activity were achieved; “life” here refers to machine time spent in continuous packaging/bagging with fewer breakdowns. The changes to the originally installed design made the packaging/bagging operation more manual, but reliable and stable. The study was unable to deduce whether the problems were the result of poor installation from the onset, but it was suggested that in future expansions of the plant operation, designs for new bagging line units take into consideration the observations made and the modifications effected.

REFERENCES
[1] Jack, T. K., Statistic Process Control in the Fertilizer Industry, M.E.M. Technical Report, University of Port Harcourt, (Unpublished), 1989.
[2] Maintenance Manager, Company “N”, private communications.
[3] Maynard, H. B., Industrial Engineering Handbook, 5th ed., McGraw-Hill, 2001.
[4] Kuye, O. A., Sampling Inspection and Quality Control, Lecture Notes, University of Port Harcourt, 1988.

Author
Tonye K. Jack is a Registered Engineer and an ASME member. He worked on plant maintenance and rotating equipment in the chemical fertilizer industry, and on gas turbines in the oil and gas industry. He holds a Bachelor’s degree in Mechanical Engineering from the University of Nigeria, and Master’s degrees in Engineering Management from the University of Port Harcourt and in Rotating Machines Design from Cranfield University, England. He is a university teacher in Port Harcourt, Rivers State, Nigeria, teaching undergraduate classes in mechanical engineering. His research interests are rotating equipment engineering, maintenance, engineering management, engineering computer programs, and applied mechanics.


VLSI IMPLEMENTATION OF ERROR TOLERANCE ANALYSIS FOR PIPELINE BASED DWT IN JPEG 2000 ENCODER
Rajamanickam. G & Jayamani. S
Dept. of ECE, K.S. Rangasamy College of Technology, Anna University Coimbatore, Tiruchengode

ABSTRACT
The JPEG 2000 image compression standard is designed for a broad range of data compression applications. The Discrete Wavelet Transform (DWT), central to signal analysis and important in JPEG 2000, is quite susceptible to computer-induced errors, and these errors spread to many output transform coefficients when the DWT is implemented using the lifting scheme. This paper proposes an efficient Error Tolerance Scheme (ETS) to detect errors occurring in the DWT. A pipeline-based DWT structure is also developed in this paper to speed up the error detection process. Some standard images are used as test samples to verify the feasibility of the proposed ETS design. Experimental results and comparisons show that the proposed ETS achieves better performance in error detection time and error tolerance capability.

KEYWORDS: JPEG 2000, DWT, error detection, error tolerance.

I. INTRODUCTION

JPEG 2000, a still-image compression standard, is designed to address many applications, e.g., the internet, printing, and medical imagery [1], [2]. Besides achieving better compression performance than JPEG, JPEG 2000 also provides rich features. For example, it allows efficient lossy and lossless compression within a single unified coding framework, provides superior image quality at low bit rates, supports additional features such as region-of-interest coding, and has a more flexible file format, while avoiding excessive computational and memory complexity. The DWT is central to the JPEG 2000 image compression standard, which includes lifting configurations for implementing the forward and inverse transforms. The main properties of the DWT are its space-frequency localization and inherent multiresolution structure. In other words, wavelets allow efficient representation of a signal with a small number of nonzero coefficients, and they take advantage of data correlation in space and frequency. Because the DWT is ultimately implemented in computer hardware, its processing operations are susceptible to transient failures, primarily single-event upsets, alternately termed soft errors; these influences will grow as VLSI feature sizes shrink [3]. Error tolerance is a new design and test paradigm, which takes into consideration whether the erroneous outputs of defective circuits still produce acceptable results [5], [6]. Error tolerance classifies a system as acceptable or unacceptable by estimating the performance degradation due to errors, rather than relying solely on the conventional perfect/imperfect classification. It analyzes the system-level effects of errors, and accepts circuits if the performance degradation stays within an application-specific range of acceptability.
This paper proposes an ETS design targeted at detecting errors in the DWT subsystem of JPEG 2000, as the DWT is one of its most important subsystems in terms of both computation and memory requirements. The remainder of this paper is organized as follows: Section II reviews the basic principles and key features of the DWT via the lifting scheme. Section III presents the proposed ETS structure, the pipeline-based DWT design, the error model definition, and the error detection and tolerance strategy. Section IV shows the experimental results and comparisons for performance evaluation and discussion. Finally, Section V provides the conclusions.


II. BACKGROUND

The DWT is usually computed through convolution and sub-sampling with a pair of filters to produce an approximation signal L (low-pass filter result) and a detail signal H (high-pass filter result). The multi-resolution decomposition is obtained by iterating the convolution and sub-sampling of these two filters over the approximation components. For 2-D signals, there exist separable wavelets for which the computation can be decomposed into horizontal processing (on the rows) followed by vertical processing (on the columns), using the same 1-D filters. Figure 1 shows a one-level decomposition; subsequent levels are obtained by iterating on the low-pass signal LL [3], [4].

Figure 1. A 1-level 2-D separable wavelet decomposition

The general block diagram of the lifting technique is illustrated in Fig. 2, and consists of four steps:
1. Split step: the original samples are separated into two disjoint sets, the even part and the odd part.
2. Predict step: the even samples, multiplied by the time-domain equivalent of s(z), are added to the odd samples.
3. Update step: the updated odd samples, multiplied by the time-domain equivalent of t(z), are added to the even samples.
4. Scaling step: the even and odd samples are multiplied by k^-1 and k, respectively.

Figure.2. Block diagram of the lifting scheme.

III. PROPOSED ERROR TOLERANCE SCHEME

The proposed ETS design consists of an Input Parity Procedure (IPP), an Output Parity Procedure (OPP), and a parity analyzer. The main objective of the ETS is to compare the differences between the Cin and Cout values to find errors that occurred in the DWT; the parity analyzer then further analyzes whether the errors can be tolerated or not. Each row of pixels of an n×n image is divided into even and odd data samples as input for the DWT operation. Thus, the size of the input-data vector X shown in Fig. 3 is 1×n, and A is the n×n matrix of the wavelet transform via lifting. Additionally, a

1×n tolerance weighting matrix W has to be developed to establish the IPP and OPP structures for error detection [1].

Figure.3. Block diagram of the ETS design

3.1 Modeling Errors
The proposed ETS design mainly focuses on the effects of computer-induced errors, which are modeled through transfer matrices related to the lifting sections. In the forward 2-D DWT, numerical errors are caused by an underlying computer-induced error, which propagates its corrupting influence to the output. Two types of error mechanism are defined: the Intensive Error Model (IEM) and the Distributed Error Model (DEM), as shown in Figure 4. If the numerical errors influence the pixels in contiguous rows of an image, the error model is called the IEM; the DEM, on the other hand, applies when many single rows of pixels are influenced by the numerical errors. Using these error models, the proposed ETS design is demonstrated to be an effective method for exploring the error impact in the DWT and further analyzing the tolerance of errors for JPEG 2000 encoder applications [11].

Figure. 4. Error models. (a) IEM. (b) DEM
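As an illustrative sketch (not taken from the paper), the two error models can be emulated by injecting Gaussian noise into whole rows of an image: a contiguous block of rows for the IEM, scattered single rows for the DEM. The row count, noise level, and seed below are hypothetical:

```python
import random

def inject_errors(image, model, num_rows=4, sigma=25.0, seed=0):
    """Return a copy of `image` (a list of row lists) with Gaussian noise
    added to whole rows: contiguous rows for IEM, scattered rows for DEM."""
    rng = random.Random(seed)
    n = len(image)
    if model == "IEM":                       # contiguous rows of the image
        start = rng.randrange(n - num_rows + 1)
        rows = range(start, start + num_rows)
    elif model == "DEM":                     # scattered single rows
        rows = rng.sample(range(n), num_rows)
    else:
        raise ValueError("model must be 'IEM' or 'DEM'")
    out = [row[:] for row in image]
    for r in rows:
        out[r] = [p + rng.gauss(0.0, sigma) for p in out[r]]
    return out

clean = [[128.0] * 8 for _ in range(8)]
noisy = inject_errors(clean, "DEM")
```

Such an injector is only a test harness for exercising an error detector; real soft errors would of course arise inside the lifting arithmetic rather than in the input pixels.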

3.2 Tolerance Weighting
In order to reflect the error tolerance of the human visual system and to increase the flexibility of the proposed ETS design, a tolerance weighting matrix W has to be developed. The weighting factors of W are very important for error tolerance analysis, since the most significant data are generally concentrated in the central parts of an image; the error influence in the central parts is therefore more serious than at the boundaries. Thus, a weighting matrix has to be built to support the evaluation of error influence when the proposed ETS is active. Fig. 5 shows an image divided into blocks for setting the different parity weighting factors. Generally, good parity weighting factors should have gain factors whose ranges span three to four orders of pixel magnitude. Based on the parity weighting factors, a 1×n tolerance weighting matrix is built for error detection and error tolerance evaluation [1].


Figure.5. Schematic representations of parity weighting of an image
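A minimal sketch of building such a 1×n weighting vector is shown below. The three center-out bands and the particular gain values (spanning three orders of magnitude, per the guideline above) are assumptions for illustration; the paper does not specify them:

```python
def tolerance_weights(n, gains=(1.0, 10.0, 1000.0)):
    """Build a 1 x n weighting vector: higher gain toward the image centre,
    lower gain at the boundaries. Band widths and gains are illustrative."""
    w = [0.0] * n
    for i in range(n):
        # Distance from the centre column, normalised to [0, 1].
        dist = abs(i - (n - 1) / 2) / ((n - 1) / 2)
        if dist < 1 / 3:
            w[i] = gains[2]          # central band: most significant pixels
        elif dist < 2 / 3:
            w[i] = gains[1]          # intermediate band
        else:
            w[i] = gains[0]          # boundary band: least significant
    return w

w = tolerance_weights(12)
```

The vector is symmetric about the centre, so a corrupted coefficient near the middle of a row contributes far more to the weighted parity sum than one at the edge.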

3.3 Pipeline-Based DWT Design
The design is based on the conventional structure for a one-level 9/7 DWT via lifting. Initially, the input data samples are split into odd and even subsequences. For an n×n image, the input-data vector is represented by

X = [x_0 x_1 x_2 x_3 ... x_(n-2) x_(n-1)]                          (1)
s^0 = [x_0 x_2 ... x_(n-2)] = [s^0_0 s^0_1 ... s^0_((n/2)-1)]      (2)
d^0 = [x_1 x_3 ... x_(n-1)] = [d^0_0 d^0_1 ... d^0_((n/2)-1)]      (3)

where s^l_i and d^l_i represent the i-th even and odd samples in the l-th stage of the lifting step.

Split step
The input sequence x_i is split into even and odd parts, s^0_i and d^0_i:

d^0_i = x_(2i+1)    (4)
s^0_i = x_(2i)      (5)

Lifting step
The two split sequences s^0_i and d^0_i are processed by two lifting steps:

Step 1:
d^1_i = d^0_i + α × (s^0_i + s^0_(i+1))    (6)
s^1_i = s^0_i + 2β × d^1_i                 (7)

Step 2:
d^2_i = d^1_i + γ × (s^1_i + s^1_(i+1))    (8)
s^2_i = s^1_i + 2δ × d^2_i                 (9)

Scaling step
Through the normalization factors k^-1 and k, the low-pass and high-pass wavelet coefficients s_i and d_i are obtained:

d_i = k × d^2_i       (10)
s_i = k^-1 × s^2_i    (11)
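As a concrete sketch, the split, lifting, and scaling steps of equations (4)-(11) can be written in a few lines of Python. The numeric coefficient values below are the commonly quoted 9/7 lifting constants and are an assumption (the paper does not list them), and the right-boundary handling is a simple sample repeat, also an assumption:

```python
# One level of the 1-D 9/7 lifting DWT following Eqs. (4)-(11).
# Coefficients: commonly quoted 9/7 lifting constants (assumed, not from
# this paper); the two-term updates 2*beta*d and 2*delta*d follow the
# simplified form of Eqs. (7) and (9).
ALPHA, BETA = -1.586134342, -0.052980118
GAMMA, DELTA = 0.882911075, 0.443506852
K = 1.230174105

def lifting_dwt_1d(x):
    n = len(x)
    assert n % 2 == 0, "even-length input assumed"
    s0 = x[0::2]                                  # even samples, Eq. (5)
    d0 = x[1::2]                                  # odd samples,  Eq. (4)
    m = n // 2
    # Step 1, Eqs. (6)-(7); s0[i+1] is repeated at the right boundary.
    d1 = [d0[i] + ALPHA * (s0[i] + s0[min(i + 1, m - 1)]) for i in range(m)]
    s1 = [s0[i] + 2 * BETA * d1[i] for i in range(m)]
    # Step 2, Eqs. (8)-(9).
    d2 = [d1[i] + GAMMA * (s1[i] + s1[min(i + 1, m - 1)]) for i in range(m)]
    s2 = [s1[i] + 2 * DELTA * d2[i] for i in range(m)]
    # Scaling step, Eqs. (10)-(11): s -> low-pass, d -> high-pass.
    return [v / K for v in s2], [K * v for v in d2]

low, high = lifting_dwt_1d([float(i) for i in range(8)])
```

A quick sanity check on a constant signal: the high-pass output is (numerically) zero and the low-pass output reproduces the constant, which is what the scaling by k^-1 in Eq. (11) arranges.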

The final output of the pipeline-based DWT is expressed in matrix form in (12), where Ma, Mb, Mc, and Md are n×n matrices and the parameters α, β, γ, δ, and k represent the floating-point arithmetic of the 9/7 wavelet transform via lifting [12]. The direct mapping of the 1-D lifting-based architecture for the 9/7 filter is shown in Figure 6.

Figure.6. Direct mapping of 1-D lifting-based architecture for the 9/7 filter

3.4 Parity Analyzer
The parity analyzer plays the role of a comparator: it checks a syndrome (the difference between Cin and Cout) and determines whether the syndrome is tolerable under a chosen threshold. The thresholds for the IEM and DEM are defined as TH_IEM and TH_DEM, respectively, to detect the intensive and distributed errors in an image, as shown in equation (13) [9].

(13)

The human visual system is more sensitive to brightness variations than to changes in chrominance. Thus, a brightness variation detection method, which plays an important role in error detection in an image, is adopted here to redefine the error detection thresholds. For computational efficiency, the criterion for brightness variation is determined using a histogram-based method, which is usually sensitive to image changes within a similar brightness condition. The histogram difference is given by

D_HIS = Σ (Hj − Hi)    (14)

where Hi and Hj signify the histograms of the i-th and j-th rows of an image. By setting a threshold on the histogram difference D_HIS, images containing brightness variations can be detected [1].
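A small sketch of the row-histogram difference of (14) is given below. The 16 equal-width brightness bins over 8-bit pixel values are an assumption, as is taking the absolute per-bin difference (the paper writes Σ Hj − Hi without specifying the sign handling):

```python
def row_histogram(row, bins=16, max_val=256):
    """Histogram of one image row over `bins` equal-width brightness bins."""
    h = [0] * bins
    for p in row:
        h[min(int(p) * bins // max_val, bins - 1)] += 1
    return h

def histogram_difference(row_i, row_j, bins=16):
    """D_HIS of Eq. (14): summed (absolute) difference between the
    histograms of rows i and j -- larger values mean a bigger
    brightness change between the two rows."""
    hi, hj = row_histogram(row_i, bins), row_histogram(row_j, bins)
    return sum(abs(b - a) for a, b in zip(hi, hj))

dark = [10] * 8       # all pixels fall in a low-brightness bin
bright = [240] * 8    # all pixels fall in a high-brightness bin
d = histogram_difference(dark, bright)
```

Two identical rows give D_HIS = 0, while rows whose pixels occupy disjoint brightness bins give the maximum difference, which is the behaviour the threshold on D_HIS exploits.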

In order to accurately evaluate the quality impact of brightness variations on an image, the BVRs are normalized as fractional variations and expressed as percentages. The brightness variation ratios for the intensive and distributed error models, BVR_IEM and BVR_DEM, are given by equations (15) and (16), respectively.

3.5 Acceptable Error Rate
The objective evaluation of a specific image presented here depends on the relation between the BVR and the acceptable error rate (AER). The error rate (ER) is the percentage of vectors for which the values at a set of outputs deviate from the error-free response during normal operation. The AER is the percentage of acceptable errors among all injected errors, and thus represents the capability of error tolerance under error injection in an image [15]:

AER = (number of acceptable errors / total number of injected errors) × 100%    (17)
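The AER bookkeeping reduces to simple counting. In the sketch below, an injected error is counted as acceptable when its syndrome |Cin − Cout| stays below the chosen detection threshold; that acceptability criterion, and the syndrome values themselves, are assumptions for illustration:

```python
def acceptable_error_rate(syndromes, threshold):
    """AER of Eq. (17): the percentage of injected errors whose syndrome
    |Cin - Cout| stays below the chosen detection threshold, i.e. errors
    the parity analyzer judges tolerable (assumed criterion)."""
    acceptable = sum(1 for s in syndromes if abs(s) < threshold)
    return 100.0 * acceptable / len(syndromes)

# Hypothetical syndromes from 8 error injections; values near 1e-10 mimic
# the round-off-only level mentioned in Section IV, and the threshold is
# chosen well above it.
syndromes = [2e-10, 5e-3, 3e-10, 9e-5, 0.02, 1e-10, 4e-5, 0.4]
aer = acceptable_error_rate(syndromes, threshold=1e-4)
```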

IV. EXPERIMENTAL RESULTS

The 9/7 lifting DWT of the JPEG 2000 image compression standard is used as the Circuit Under Test (CUT) to demonstrate the good performance in error detection time and error tolerance capability of the proposed ETS design. Consider the lifting DWT operating with error-injection values modeled by a Gaussian noise source. The error detection performance is strongly dependent on the noise variance and the selected detection threshold. With only round-off errors present, the syndrome Cin − Cout from the parity analyzer of the proposed ETS is on the order of 10^-10; the detection thresholds therefore have to be chosen well above this level. To examine the effectiveness of the proposed ETS design under different experimental conditions, six benchmark images (Lena, Peppers, Baboon, Barbara, Goldhill, and Cameraman) are selected for the experiments. The comparisons between the proposed ETS design and previous work are presented in this section to demonstrate that the proposed ETS design has good performance in error detection time. Additionally, quality observation and objective analysis are discussed to evaluate the error tolerance capability.



Figure 7. Quality observation for the benchmark image “Baboon.” (a) Error free. (b) BVR_DEM = 6.2%. (c) BVR_DEM = 12.1%. (d) BVR_DEM = 18.7%.

4.1 VLSI Simulation output for pipeline based DWT

Figure.8. Simulation output for pipeline based DWT


4.2 Table 1: Acceptable Error Rate (AER) for Test Images
Test images    AER% at BVR, IEM (TH_IEM = 10^-4)    AER% at BVR, DEM (TH_DEM = 10^-7)
                2%     4%     6%     8%               2%     4%     6%     8%
Lena            6.5   12.1   18.9   24.9              4.9   10.9   17.2   22.3
Peppers         5.1    9.8   13.1   19.9              4.2    9.1   12.1   18.7
Baboon          7.0   10.7   14.9   27.1              6.2    5.2   14.1   25.1
Cameraman       5.1   19.8   16.2   22.1              2.1   18.7    8.6   11.8

4.3 Comparisons
The proposed ETS design shown in Fig. 3 can be implemented using a comparator and some multipliers, adders, buffers, and registers. Based on the circuit design shown in [8], the number of logic gates of the proposed ETS design is about 6,972, whereas about 180 k logic gates are needed for a VLSI architecture design of a JPEG 2000 encoder [3]. Thus, the area overhead of the proposed ETS design is only about 3.9 percent, which is reasonable for circuit testing. The comparisons between the proposed method and previous work [9] are shown in Table 2, which clearly indicates that the proposed ETS design has good performance in computational complexity (error detection time) and error tolerance capability with little area overhead.
Table 2: Comparison Results

                 DWT Design                  Computational complexity    Area overhead    Tolerance analysis
Proposed work    Pipeline-based structure    O(n^2)                      3.9%             Error tolerance
Existing work    Conventional structure      O(n^3)                      -                Fault tolerance

V. CONCLUSION

An effective ETS design for lifting DWT error detection and error tolerance evaluation in the JPEG 2000 encoder is presented in this paper. The paper first developed a pipeline-based DWT structure to support the proposed ETS design in speeding up error detection. Then, an IPP, an OPP, and a parity analyzer based on the weighted-sum technique were built into the proposed ETS design to detect the errors. The error detection performance depends on the detection thresholds, which are determined by the brightness variations. Experimental results show that the proposed ETS with the pipeline-based DWT design can significantly improve the error detection time compared with previous work using a conventional DWT structure. Additionally, according to the quality observation and objective evaluation of the test images, the proposed ETS design also demonstrates good error tolerance capability.

ACKNOWLEDGEMENT
The authors would like to thank K.S. Rangasamy College of Technology, Anna University Coimbatore, Tiruchengode, for supporting this project toward the Master’s degree in the Department of Electronics and Communication Engineering. The authors would also like to thank the anonymous reviewers for their useful comments.


REFERENCES
[1] C.-L. Hsu, Y.-S. Huang, M.-D. Chang, and H.-Y. Huang, “Design of an Error-Tolerance Scheme for Discrete Wavelet Transform in JPEG 2000 Encoder,” IEEE Trans. Computers, May 2011.
[2] D. Shin and S.K. Gupta, “A Re-Design Technique for Datapath Modules in Error Tolerant Applications,” Proc. Asian Test Symp., pp. 431-437, Nov. 2008.
[3] Y.H. Seo and D.W. Kim, “VLSI Architecture of Line-Based Lifting Wavelet Transform for Motion JPEG 2000,” IEEE J. Solid-State Circuits, vol. 42, no. 2, pp. 431-440, Feb. 2007.
[4] T. Acharya and C. Chakrabarti, “A Survey on Lifting-Based Discrete Wavelet Transform Architectures,” J. VLSI Signal Processing Systems, vol. 42, no. 3, pp. 321-339, Mar. 2006.
[5] H. Chung and A. Ortega, “Analysis and Testing for Error Tolerant Motion Estimation,” Proc. 20th IEEE Int’l Symp. Defect and Fault Tolerance in VLSI Systems, pp. 514-522, 2005.
[6] M. Breuer, S. Gupta, and T. Mak, “Defect and Error Tolerance in the Presence of Massive Numbers of Defects,” IEEE Design & Test of Computers, vol. 21, no. 3, pp. 216-227, May/June 2004.
[7] K.J. Cho, K.C. Lee, J.G. Chung, and K.K. Parhi, “Design of Low-Error Fixed-Width Modified Booth Multiplier,” IEEE Trans. Very Large Scale Integration (VLSI) Systems, vol. 12, no. 5, pp. 522-531, May 2004.
[8] L. Liu, N. Chen, H. Meng, L. Zhang, Z. Wang, and H. Chen, “A VLSI Architecture of JPEG 2000 Encoder,” IEEE J. Solid-State Circuits, vol. 39, no. 11, pp. 2032-2040, Nov. 2004.
[9] G.R. Redinbo and C. Nguyen, “Concurrent Error Detection in Wavelet Lifting Transforms,” IEEE Trans. Computers, vol. 53, no. 10, pp. 1291-1302, Oct. 2004.
[10] C. Constantinescu, “Trends and Challenges in VLSI Circuit Reliability,” IEEE Micro, vol. 23, no. 4, pp. 14-19, July/Aug. 2003.
[11] N. Seifert, Z. Xiaowei, and L.W. Massengill, “Impact of Scaling on Soft-Error Rates in Commercial Microprocessors,” IEEE Trans. Nuclear Science, vol. 49, no. 6, pp. 3100-3106, Dec. 2002.
[12] C.J. Lian, K.F. Chen, H.H. Chen, and L.G. Chen, “Analysis and Architecture Design of Lifting Based DWT and EBCOT for JPEG 2000,” Proc. Int’l Symp. VLSI Technology, Systems, and Applications, pp. 180-183, 2001.
[13] “ITU-T Recommend. T.800-ISO/IEC FCD15444-1: JPEG 2000 Image Coding System,” Int’l Organization for Standardization, ISO/IEC JTC1 SC29/WG1, 2000.
[14] C. Christopoulos, A. Skodras, and T. Ebrahimi, “The JPEG2000 Still Image Coding System: An Overview,” IEEE Trans. Consumer Electronics, vol. 46, no. 4, pp. 1103-1127, Nov. 2000.
[15] H.W. Haussecker, “Simultaneous Estimation of Optical Flow and Heat Transport in Infrared Image Sequences,” Proc. IEEE Workshop Computer Vision, pp. 85-93, June 2000.

Authors
G. Rajamanickam received the B.E. degree in Electronics and Communication Engineering from an affiliated institution of Anna University, Chennai, in 2009. He is currently pursuing the M.E. degree in VLSI Design at K.S. Rangasamy College of Technology, Namakkal, India. His areas of interest are image processing and VLSI.

S. Jayamani is an Assistant Professor in the Department of Electronics and Communication Engineering, K.S. Rangasamy College of Technology, Namakkal, India. He has 2.6 years of teaching experience. His subjects of interest are embedded systems and VLSI.


LOCATION BASED SERVICES IN ANDROID
Ch. Radhika Rani1, A. Praveen Kumar2, D. Adarsh2, K. Krishna Mohan2, K.V. Kiran2

1 Asst. Professor, Dept. of C.S.E., K L University, A.P., India
2 IV/IV B.Tech Students, Dept. of C.S.E., K L University, A.P., India

ABSTRACT
Location Based Services provide users a set of services that originate from the geographic location of the user’s mobile device. Using these services, it is possible for users to find and locate other persons, vehicles, and resources, to provide location-sensitive services, and to track their own location. The request for location can originate in the mobile device or in another entity such as an application provider or network operator. It is possible to automatically trigger Location Based Services when the mobile device is at a particular location. These services can also originate in the user’s mobile device itself, to satisfy location-based requests such as finding areas of interest, checking traffic conditions, finding friends, vehicles, resources, and machines, and making emergency requests. In this paper we discuss how to implement these location based services in Android, after giving a brief introduction to Android and its constituents.

KEYWORDS: Android, Location based service, Location Manager, Location Provider, GPS, Geocoding, Overlays.

I. INTRODUCTION

Android is a combination of three components:
• A free and open source operating system for mobiles.
• An open source development platform for creating mobile applications.
• Devices, particularly mobile phones, that run the Android operating system and the applications created for it.
The Android SDK was released by the Open Handset Alliance in November 2007. Android is developed on the Linux 2.6 kernel, and its highlight features include the following [7]:
• No fees for licensing, distribution, and release approval
• GSM, 3G, and EDGE networks for telephony
• IPC message passing
• Background processes and applications
• Shared data stores
• Complete multimedia hardware control
• APIs for location based services such as GPS.
Location Based Service (LBS) applications can help the user find hospitals, schools, gas filling stations, or any other facility of interest within a certain range [2]. Just like a GPS device, the location is updated as soon as the user changes position. Android can be considered a unified software package that includes an operating system, middleware, and core applications. The Android SDK provides the tools and APIs required to develop Android applications using the Java programming language. The Android platform provides an open system architecture along with a powerful debugging environment. It is also characterized by an optimized graphics system, rich media support, and an embeddable web browser. It uses the Dalvik Virtual Machine (DVM), which gives the same priority to all processes and hence optimizes application execution. GPS, camera, compass, and 3-D accelerometer are also supported by Android, which provides useful APIs for map and location functions. Anyone can access, control

and process the Google map, which is also free, and implement location based services on a mobile device [3]. The skeleton of the Android framework and the jargon of Android application development are discussed in Sections II and III. The design of location based services in Android is discussed in Section IV, the development of an application that can find the route between two geographical locations is discussed in Section V, and the corresponding results are presented in Section VI. After this, Section VII gives the conclusion and Section VIII the future scope.

II. SKELETON OF ANDROID

The skeleton of the Android framework [9] and its constituents are shown in the following figure:

Figure 1. Skeleton of Android

2.1. Applications Layer
Android ships with a set of core applications, including an email client, SMS program, calendar, maps, browser, contacts, and others. All applications are built using Java. Each application aims at performing the specific task that it is intended to do.

2.2. Application Framework Layer
The next layer is the application framework. This includes the programs that manage the phone’s basic functions, like resource allocation, telephone applications, switching between processes or programs, and keeping track of the phone’s physical location. Application developers have full access to Android’s application framework. This allows them to take advantage of Android’s processing capabilities and support features when building an Android application. We can think of the application framework as a set of basic tools with which a developer can build much more complex tools.

2.3. Libraries Layer
The next layer contains the native libraries of Android. These shared libraries are all written in C or C++, compiled for the particular hardware architecture used by the phone and preinstalled by the phone vendor. Some of the core libraries are listed in Fig.1.

2.4. Android Runtime Layer
The Android Runtime layer includes the Dalvik Virtual Machine (DVM) and a set of core Java libraries. Every Android app gets its own instance of the DVM. Dalvik has been written so that a device can run multiple virtual machines efficiently; it executes files with the .dex (Dalvik executable format) extension, optimized for minimal memory use.

210

Vol. 3, Issue 1, pp. 209-220

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

2.5. Linux Kernel
This layer includes Android’s memory management programs, security settings, power management software and several drivers for hardware, file-system access, networking and inter-process communication. The kernel also acts as an abstraction layer between the hardware and the rest of the software stack.

III. JARGON OF ANDROID APP DEVELOPMENT

The basic components of an Android application include the Activity, Broadcast Receiver, Service, and Content Provider. Each of these components, when used in an application, has to be declared in the AndroidManifest.xml. The user interface of a component is determined by Views. For communication among these basic components we use Intents and Intent Filters, which play a crucial role during app development [1].
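As an illustration of how those declarations fit together, a minimal AndroidManifest.xml might declare one component of each kind. The package and class names below are hypothetical, not taken from the application described in this paper:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical manifest: one Activity, Service, Broadcast Receiver and Content Provider -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.lbsdemo">
    <application android:label="LBS Demo">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
        <service android:name=".RouteService"/>
        <receiver android:name=".ProximityIntentReceiver"/>
        <provider android:name=".RouteProvider"
                  android:authorities="com.example.lbsdemo.routes"/>
    </application>
</manifest>
```

The intent-filter on the Activity marks it as the application's entry point, matching the role described in Section 3.1.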

3.1. Activity
An Activity is, fundamentally, an object that has a lifecycle. An Activity is a chunk of code that does some work; if necessary, that work can include displaying a UI to the user. It doesn't have to, though; some Activities never display a UI. Typically, we designate one of our application's Activities as the entry point to our application [4].

3.2. Broadcast Receiver
A Broadcast Receiver is another type of component, one that can receive and respond to broadcast announcements.

3.3. Service
A Service is a body of code that runs in the background. It can run in its own process, or in the context of another application's process, depending on its needs. Other components "bind" to a Service and invoke methods on it via remote procedure calls. An example of a Service is a media player; even when the user quits the media-selection UI, she probably still intends for her music to keep playing. A Service keeps the music going even when the UI has completed.

3.4. Content Provider
Content Provider is a data storehouse that provides access to data on the device; the classic example is the Content Provider that's used to access the user's list of contacts. Our application can access data that other applications have exposed via a Content Provider, and we can also define our own Content Providers to expose data of our own.

3.5. Intents
An Intent is a simple message object that represents an "intention" to do something. For example, if our application wants to display a web page, it expresses its "intent" to view the URI by creating an Intent instance and handing it off to the system. The system locates some other piece of code (in this case, the Browser) that knows how to handle that Intent, and runs it. Intents can also be used to broadcast interesting events (such as a notification) system-wide. There are two types of Intents, namely implicit and explicit. An implicit Intent does not specify a target component, whereas an explicit Intent does.

3.6. AndroidManifest.xml
The AndroidManifest.xml file is the control file that tells the system what to do with all the top-level components (specifically the activities, services, broadcast receivers, and content providers described above) we have created. For instance, this is the "glue" that actually specifies which Intents our Activities receive.

3.7. Views
A View is an object that knows how to draw itself to the screen. Android user interfaces are comprised of trees of Views. If we want to perform some custom graphical technique (as we might if we're writing a game, or building some unusual new user interface widget) then we would create a View.

3.8. Notification
A Notification is a small icon that appears in the status bar. Users can interact with this icon to receive information. The most well-known notifications are SMS messages, call history, and voicemail, but applications can create their own. Notifications are the strongly-preferred mechanism for alerting the user of something that needs their attention.

IV. DESIGN OF LOCATION BASED SERVICES (LBS)

Location Based Services is an umbrella term used to describe the different technologies used to find the current location of a device. The main LBS elements include:
• Location Manager
• Location Provider

Figure 2. Address search interface

Location Manager acts as a hook to LBS. Each Location Provider represents a different location-finding technology used to find the current location of a device [7]. Using the Location Manager we can perform the following tasks:
• Finding the current location
• Tracking movement
• Setting proximity alerts for specified locations
• Checking all the available location providers
We can use the Location Controls in the DDMS perspective in Eclipse to push location changes directly into the emulator’s GPS Location Provider, either manually or by using the KML or GPX tabs of the Location Controls. In the Manual tab, we can send geographic coordinates (latitude and longitude) by hand. GPX stands for GPS Exchange Format; most GPS systems record tracking files using GPX. KML stands for Keyhole Markup Language; it is used extensively online to define geographic information. We can write a KML file by hand or generate it using Google Earth to find directions between two locations. The most common location providers are the GPS provider and the Network provider. They can be accessed using the static string constants mentioned below, which return the corresponding provider name:
• LocationManager.GPS_PROVIDER
• LocationManager.NETWORK_PROVIDER
To access a specific provider we can use the getProvider() method, as shown in the following code snippet:

String providerName = LocationManager.GPS_PROVIDER;
LocationProvider gpsProvider;
gpsProvider = locationManager.getProvider(providerName);


To see all the available location providers we use the getProviders() method, as shown in the following code snippet:

boolean enabledOnly = true;
List<String> providers = locationManager.getProviders(enabledOnly);

To select the best provider we can set criteria regarding cost, power consumption, altitude, speed, accuracy and direction, and then use the getBestProvider() method, as shown in the following code snippet:

Criteria criteria = new Criteria();
criteria.setAccuracy(Criteria.ACCURACY_COARSE);
criteria.setPowerRequirement(Criteria.POWER_LOW);
criteria.setAltitudeRequired(false);
criteria.setBearingRequired(false);
criteria.setSpeedRequired(false);
criteria.setCostAllowed(true);
String bestProvider = locationManager.getBestProvider(criteria, true);

4.1. Finding our Location
First we need to get a debugging certificate from Google using the MD5 fingerprint, which can be generated using keytool. The MD5 fingerprint is unique for each user. The Maps API key thus generated is used to display the Google map in the emulator. The purpose of location-based services is to find the physical location of the device. Access to the location-based services is handled by the Location Manager system service. To access the Location Manager, request an instance of LOCATION_SERVICE using the getSystemService() method, as shown in the following code snippet:

String serviceName = Context.LOCATION_SERVICE;
LocationManager lm;
lm = (LocationManager) getSystemService(serviceName);

Before we can use the Location Manager we need to add one or more uses-permission tags to our manifest to support access to the LBS hardware. Fine and coarse permissions grant access to the GPS and Network providers respectively; an application holding the fine permission has the coarse permission granted implicitly. We can find the last location fix determined by a particular Location Provider using the getLastKnownLocation() method, passing in the name of the Location Provider. The following example finds the last location fix taken by the GPS provider:

String provider = LocationManager.GPS_PROVIDER;
Location location = locationManager.getLastKnownLocation(provider);

Before using this method, we need to request updates from the Location Manager at least once. We use the requestLocationUpdates() method for this. It takes the location provider, minimum time, minimum distance and a LocationListener object as arguments:

lm.requestLocationUpdates(provider, time, distance, listener);

We can also stop receiving updates using the removeUpdates() method, which takes the listener as its argument:

lm.removeUpdates(listener);



Figure 3. Finding our location

4.2. Proximity Alerts
It’s often useful to have our applications react when a user moves toward, or away from, a specific location. Proximity alerts let our applications set triggers that are fired when a user moves within or beyond a set distance from a geographic location. To set a proximity alert for a given coverage area, select the center point (using longitude and latitude values), a radius around that point, and an expiry time-out for the alert. The alert will fire when the device crosses that boundary, both when it moves from outside to within the radius and when it moves from inside to beyond it. When triggered, proximity alerts fire Intents, most commonly broadcast Intents. To specify the Intent to fire, we use a PendingIntent, a class that wraps an Intent in a kind of method pointer, as shown in the following code snippet:

Intent in = new Intent(MY_ACTION);
PendingIntent pi = PendingIntent.getBroadcast(this, -1, in, 0);

Figure 4. Proximity Alert

If TREASURE_PROXIMITY_ALERT is a static String, then to start listening for proximity alerts we can register a receiver as follows:

IntentFilter filter = new IntentFilter(TREASURE_PROXIMITY_ALERT);
registerReceiver(new ProximityIntentReceiver(), filter);


4.3. Geocoding and Reverse Geocoding
Geocoding lets us translate between street addresses and longitude/latitude map coordinates. This can give us a recognizable context for the locations and coordinates used in location-based services and map-based activities. The geocoding lookups are done on the server, so our applications will require us to include an Internet uses-permission in our manifest. The Geocoder class provides access to two geocoding functions [5]:
• Forward Geocoding, which converts an address into latitude and longitude.
• Reverse Geocoding, which converts latitude and longitude into the corresponding address.
First we create a Geocoder object with which to perform geocoding and reverse geocoding:

Geocoder geocoder = new Geocoder(getApplicationContext(), Locale.getDefault());

For geocoding we use the getFromLocationName() method of the Geocoder class. The following is a sample code snippet:

List<Address> locations = null;
Geocoder gc = new Geocoder(this, Locale.getDefault());
try {
    locations = gc.getFromLocationName(streetAddress, 10);
} catch (IOException e) {}

For reverse geocoding we use the getFromLocation() method of the Geocoder class. The following is a sample code snippet:

List<Address> addresses = null;
Geocoder gc = new Geocoder(this, Locale.getDefault());
try {
    addresses = gc.getFromLocation(latitude, longitude, 10);
} catch (IOException e) {}

4.4. Map based Activities
To use maps in our applications we need to extend MapActivity. The layout for the new class must then include a MapView to display a Google Maps interface element. The Android maps library is not a standard Android package; as an optional API, it must be explicitly included in the application manifest before it can be used. Add the library to our manifest using a uses-library tag within the application node, as shown in the following XML snippet [8]:

<uses-library android:name="com.google.android.maps"/>

To see map tiles in our MapView we need to add a <uses-permission> tag for INTERNET to our application manifest. MapView controls can be used only within an Activity that extends MapActivity. Override the onCreate() method to lay out a screen that includes a MapView, and override isRouteDisplayed() to return true if the Activity will be displaying routing information (such as traffic directions). By default the MapView will show the standard street map. In addition, we can choose to display a satellite view, StreetView, and expected traffic, as shown in the following code snippet:

mapView.setSatellite(true);
mapView.setStreetView(true);
mapView.setTraffic(true);

We can also query the MapView to find the current and maximum available zoom levels, as well as the center point and currently visible longitude and latitude span (in decimal degrees).


int maxZoom = mapView.getMaxZoomLevel();
GeoPoint center = mapView.getMapCenter();
int latSpan = mapView.getLatitudeSpan();
int longSpan = mapView.getLongitudeSpan();

We can also optionally display the standard map zoom controls using the setBuiltInZoomControls() method:

mapView.setBuiltInZoomControls(true);

We can use the MapController to pan and zoom a MapView. We can get a reference to a MapView’s controller using the getController() method:

MapController mapController = myMapView.getController();

We can re-center and zoom the MapView using the setCenter() and setZoom() methods available on the MapView’s MapController:

mapController.setCenter(geoPoint);
mapController.setZoom(1);

4.5. Creating and using Overlays
Overlays enable us to add annotations and click handling to MapViews. Each Overlay lets us draw 2D primitives, including text, lines, images, and shapes, directly onto a canvas, which is then overlaid onto a MapView [6]. We can add several Overlays onto a single map. All the Overlays assigned to a MapView are added as layers, with newer layers potentially obscuring older ones. User clicks are passed through the stack until they are either handled by an Overlay or registered as clicks on the MapView itself. Each Overlay is a canvas with a transparent background that is layered onto a MapView and used to handle map touch events. To add a new Overlay, we create a new class that extends Overlay. We override the draw() method to draw the annotations we want to add, and override onTap() to react to user clicks (generally made when the user taps an annotation added by this Overlay).

Figure 5. Overlays on Maps

V. DISCUSSION

In Android, a wide variety of applications can be developed in the field of Location Based Services. One such application is “Finding the route between two locations”. To prepare this application we first need three activities, namely:
• an Activity requesting the source address from the user;
• an Activity requesting the destination address from the user;
• a MapActivity which displays the actual route between the two locations specified in the above activities.

For preparing the user interface of the activities requesting the source and destination addresses, we use the corresponding XML layout files of those activities. Once the user interface of these two activities is ready, we design the user interface of the MapActivity. As this activity holds the Google Map, the procedure illustrated in the above section should be followed to gain access to the Google Maps API. Then we create two explicit Intents (generated when an event such as a button click is performed by the user): one for communication between the source and destination activities (passing the source address to the destination activity), and the other for communication between the destination activity and the MapActivity (passing the source and destination addresses to the MapActivity). The overall stepwise procedure for developing this application is:
• Preparing the map resource and Internet access for it
• Acquiring the KML route file from Google
• Drawing the path using the following small procedure:
  i) Building the URL from source to destination
  ii) Connecting to the URL and creating a DocumentBuilder for parsing the KML file
  iii) Splitting each point in the path and drawing each line on the map
• Drawing points and lines on the map
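The KML-parsing step of the procedure above can be sketched with the JDK's own XML APIs. This is a self-contained illustration rather than the paper's exact code; the class name, the sample KML string, and its coordinates are our own:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class KmlRouteParser {
    // Extracts the point list from the first <coordinates> element of a KML document.
    // Each entry in a KML coordinate string is "longitude,latitude[,altitude]".
    public static String[] extractCoordinates(String kml) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(
                new ByteArrayInputStream(kml.getBytes(StandardCharsets.UTF_8)));
        NodeList nodes = doc.getElementsByTagName("coordinates");
        if (nodes.getLength() == 0) return new String[0];
        // Split the whitespace-separated points into individual "lon,lat" strings.
        return nodes.item(0).getTextContent().trim().split("\\s+");
    }

    public static void main(String[] args) throws Exception {
        String kml = "<kml><Placemark><LineString><coordinates>"
                   + "80.6480,16.5062 80.6200,16.4419"
                   + "</coordinates></LineString></Placemark></kml>";
        String[] points = extractCoordinates(kml);
        System.out.println(points.length); // 2
        System.out.println(points[0]);     // 80.6480,16.5062
    }
}
```

In the real application the KML would come from the routing URL rather than an in-memory string, and each extracted point would be drawn on an Overlay as described in Section 4.5.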

VI. RESULTS

The application thus developed using the above procedure helps to find the route between two locations. The first activity, which takes the source address from the user, is shown in the following figure.

Figure 6. Source Activity

For instance, if the source address is given as “Benz Circle, Vijayawada, Andhra Pradesh, India”, clicking the “Go” button fires an explicit intent that loads the second activity, which requests the destination address to be entered by the user. The second activity is shown in the following figure.



Figure 7. Destination Activity

For instance, if the destination address is given as “K L University, Vaddeswaram, Andhra Pradesh, India”, clicking the “Go” button fires an explicit intent that loads the MapActivity, which shows the route between these two locations. The MapActivity thus loaded is shown in the following figure.

Figure 8. MapActivity displaying route

In the above figure, the blue circle indicates the source, the green circle indicates the destination, and the green curved line indicates the route between these locations.
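A route like this can be sanity-checked against the straight-line (great-circle) distance between its two endpoints. The sketch below uses the haversine formula with illustrative coordinates of our own choosing (Android itself offers the static Location.distanceBetween() method for the same purpose):

```java
public class GreatCircle {
    static final double EARTH_RADIUS_KM = 6371.0;

    // Haversine formula: great-circle distance between two lat/lon points (in degrees).
    public static double distanceKm(double lat1, double lon1,
                                    double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Illustrative coordinates for the two endpoints of a short route.
        double d = distanceKm(16.5062, 80.6480, 16.4419, 80.6200);
        System.out.printf("Straight-line distance: %.1f km%n", d);
    }
}
```

The road route drawn on the map should always be at least as long as this straight-line figure.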

VII. CONCLUSION

Location Based Services are services that provide both information and entertainment and are accessible with mobile devices through the mobile network. They utilize the geographical position of the mobile device. They can use multiple technologies such as the GPS satellite network, cellular networks, Wi-Fi networks and others. Context awareness is an excellent feature of LBS. Location Based Services can be used in a variety of aspects, like vehicle tracking, monitoring driving habits, locating employees, finding the route between two places, or finding the route to a specified location from the current position. All of these need the use of GPS along with some tracking programs. Android provides a very good platform for developing LBS applications. It provides separate methods and classes for each entity involved in the development of the application. Location Based Services thus help users in a variety of aspects and have great scope for development in the near future.

VIII. FUTURE SCOPE

After performing a detailed survey it is observed that obtaining user location from a mobile device can be complicated. There are several reasons why a location reading (regardless of the source) can contain errors and be inaccurate. Some sources of error in the user location include:

8.1. Multitude of Location Sources
GPS, Cell-ID, and Wi-Fi can each provide a clue to user’s location. Determining which to use and trust is a matter of trade-offs in accuracy, speed, and battery-efficiency.

8.2. User movement
Because the user location changes, you must account for movement by re-estimating user location every so often.

8.3. Varying accuracy
Location estimates coming from each location source are not consistent in their accuracy. A location obtained 10 seconds ago from one source might be more accurate than the newest location from another (or the same) source. These problems can make it difficult to obtain a reliable user location reading. This document provides information to help you meet these challenges and obtain a reliable location reading. It also provides ideas that you can use in your application to give the user an accurate and responsive geo-location experience. The future work is to eliminate these problems and provide an efficient way to find the location of the user accurately.
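A common way to cope with these three challenges is a heuristic that decides whether a newly received fix should replace the current best one, trading freshness against reported accuracy. The sketch below is our own simplification of the pattern recommended in the Android developer guidance; the method name and the two-minute and 200 m thresholds are illustrative assumptions:

```java
public class FixSelector {
    static final long TWO_MINUTES_MS = 2 * 60 * 1000;

    // Decide whether a candidate fix should replace the current best fix.
    // Times are millisecond timestamps; accuracies are in meters (smaller is better).
    public static boolean isBetterFix(long candTimeMs, float candAccM,
                                      long bestTimeMs, float bestAccM) {
        long dt = candTimeMs - bestTimeMs;
        if (dt > TWO_MINUTES_MS) return true;   // much newer: user has likely moved
        if (dt < -TWO_MINUTES_MS) return false; // much older: stale reading
        if (candAccM < bestAccM) return true;   // strictly more accurate
        // Newer but less accurate: accept only if the accuracy loss is modest.
        return dt > 0 && (candAccM - bestAccM) <= 200;
    }
}
```

Applied to each fix delivered by a LocationListener, such a rule lets an application blend GPS, Cell-ID, and Wi-Fi sources while tolerating their varying accuracy.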

ACKNOWLEDGEMENTS
We thank our principal Prof. K. Rajsekhar Rao for providing guidance in understanding the concepts. We also want to thank our beloved head of the computer science department, Prof. S. Venkateswarulu for providing valuable information regarding the various categories of the applications that can be developed in Android.

REFERENCES
[1] Chris Haseman, Android Essentials, firstPress, pp. 9-17, 2008.
[2] Cláudio Bettini, Sushil Jajodia, Pierangela Samarati, X. Sean Wang, Privacy in Location-Based Applications, Springer, pp. 1-2, 2009.
[3] Georg Gartner, Felix Ortag, Advances in Location-Based Services, Springer, pp. 4-5, 2011.
[4] James Steele, Nelson To, Shan Conder, Lauren Darcey, The Android Developer's Collection, Addison-Wesley Professional, pp. 30A-40A, 2012.
[5] Jerome (J. F.) DiMarzio, Android: A Programmer's Guide, Tata McGraw-Hill Education, pp. 203-237, 2010.
[6] Mark L. Murphy, The Busy Coder's Guide to Android Development, CommonsWare, pp. 287-292, 2008.
[7] Reto Meier, Professional Android 2 Application Development, Wrox, pp. 247-264, 2010.
[8] [Online] chengalva.com
[9] [Online] developer.android.com

Authors
Ch. Radhika Rani is a Lecturer in the CSE Dept. of K L University. She is very good at programming and is well versed in subjects like Microprocessors, Embedded Systems, and Digital Logic Design.

A. Praveen Kumar is a student studying final year of B. tech in K L University with CSE as his specialization. He has built some interesting apps in Android.


D. Adarsh is a student studying final year of B. tech in K L University with CSE as his specialization. He has keen interest in the area of Android app development.

K. Krishna Mohan is a student studying final year of B .tech in K L University with CSE as his specialization. He developed many Android applications.

K. Venkata Kiran is a student studying final year of B. tech in K L University with CSE as his specialization. He has built some interesting apps in Android.



A ROBUST DESIGN AND SIMULATION OF EFFICIENT MICROMECHATRONIC SYSTEM USING FSV CONTROLLED AUXILIARY DAMPED PMLSM
Sarin CR1, Ajai M2, Santhosh Krishnan3, Arul Gandhi4
1,3&4 Department of Mechatronics Engineering, Karpagam College of Engineering, Coimbatore, India.
2 Department of Industrial Refrigeration & Cryogenics, TKM College of Engineering, Kollam, India.

ABSTRACT
A Micromechatronic arm is best suited for many low-range material handling systems where the power requirement is small but the force in linear movement may be relatively high. Because of its simple structure and improved characteristics, the PM stepper motor is an interesting alternative for low power applications where pneumatic or hydraulic linear drives are to be avoided. This paper studies and analyses a robust design and simulation of an efficient Micromechatronic system using a Finite Set Value controlled auxiliary damped system to provide better operation of a permanent magnet linear stepper motor. The auxiliary damping control and current control circuit can improve the efficiency by reducing the vibration of the forcer and permitting a better mechanical design. The processor-based FSV program implementation can provide smooth damping to achieve steady state in the least time.

KEYWORDS: Auxiliary winding, damping control, FSV control, Linear Motor, Micromechatronic actuator, PMLSM

I. INTRODUCTION

Over the last few years the development of linear actuators has led to their increasing use in the linear motion of mechatronic systems. Processor-controlled motors are one of the most versatile forms of positioning systems. Industrial applications of a Micromechatronic linear motor can be found in high speed pick-and-place equipment and multi-axis CNC machines [1]. Hydraulic or pneumatic systems were formerly widely used, but they lack a higher degree of automation and require routine maintenance to avoid safety hazards and messy oil leaks [2]. Electric actuator systems are quiet, clean, non-toxic and energy efficient, and can be integrated into sophisticated control systems, even using data bus communication [3]. The major disadvantages of electric actuators are component costs about 40% higher and a relatively low degree of generated force [4]. The micro-mechatronic system consists of an articulated arm with sensors and hydraulic actuators that are activated electronically. Externally controlled micro-actuation benefits from the ability to operate on arbitrary surfaces, high power at small velocities, more power per weight, increased reliability, reduced noise and longer lifetime [5][6]. This paper gives preliminary work devoted to the design and control of the system with a Finite Setting Value algorithm for a flexible permanent magnet linear stepper motor associated with a hydraulic system, made more efficient by its novel design for friction compensation and momentum control by means of a force-current feedback loop control of the auxiliary damp winding and a feed forward controller [7]. Linear stepping motors are an excellent solution for positioning applications that require rapid acceleration and high-speed moves with low mass payloads. This interface is intended to be used for grasping tasks, where unilateral constraints are usually present and very high power is needed, which is obtained by the hydraulic block controlled


by PMLSM. Dramatic reductions in voltage and power requirements can be obtained by making micro-electric motors and drive systems and by interfacing them with mechanical systems. The projected design bestows a robust design and simulation of an efficient Micromechatronic system for the actuation of various linear mechanisms, using a Finite Set Value controlled auxiliary winding for the damping and vibration control of a permanent magnet linear stepper motor, which has many advantages over the conventional system design and control.

II. MICRO-ELECTRIC PM LINEAR STEPPER MOTOR

The proposed linear motor is a brushless permanent magnet electric motor that converts digital control pulses into axial shaft motion by dividing a full motion into a large number of steps. It has its stator "unrolled" so that instead of producing a rotational torque it produces a linear force along its length. One of the most common modes of operation is as a Lorentz-type actuator, in which the applied force is linearly proportional to the current and the magnetic field, as described in Figure 1 [7][8]:

F = qv × B (1)

Figure 1 : Representations of force distribution due to Lorentz force
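Equation (1) gives the force on a single moving charge; for a straight conductor of length L carrying current I perpendicular to a uniform field B, the aggregate form is F = BIL. A minimal numeric illustration (the values below are arbitrary and not taken from the motor designed in this paper):

```java
public class LorentzForce {
    // Force on a straight conductor perpendicular to a uniform field: F = B * I * L.
    public static double forceNewtons(double fluxDensityT, double currentA,
                                      double lengthM) {
        return fluxDensityT * currentA * lengthM;
    }

    public static void main(String[] args) {
        // e.g. B = 0.5 T, I = 2 A, L = 0.1 m  ->  F = 0.1 N
        System.out.println(forceNewtons(0.5, 2.0, 0.1));
    }
}
```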

A permanent magnet linear micro-stepper motor, as shown in Figure 2, on the other hand, effectively has multiple "toothed" permanent magnets arranged on the periphery of a central iron shaft and linearly distributed alternating electromagnets on the inner periphery of the stator. The electromagnets are energized step by step by an external control circuit such as a microcontroller. To make the motor shaft move, first one electromagnet is given power, which makes the nearest gear's teeth magnetically attracted to the electromagnet's teeth. When the gear's teeth are thus aligned to the first electromagnet, they are slightly offset from the next electromagnet.

Figure 2 : Schematic Design of Linear PMSM

A PMLSM has two parts, a stator and a forcer, as shown in Figure 3. The stator hosts a three-phase winding which generates a travelling magnetic field. The mover has a number of permanent magnets. A PMLSM is capable of generating two forces, longitudinal and normal, which can be more or less independently assigned by an appropriate choice of the three-phase currents. So, when the next electromagnet is turned on and the first is turned off, the forcer moves slightly to align with the next one, and from there the process is repeated. Each of those slight movements is called a "step," with an integral number of steps making a full motion. In that way, the motor can be moved by a precise increment [9].



Figure 3 : Basic Design of Linear PMSM

Many advantages are achieved using this kind of motor, such as simplicity (no brushes or contacts are present), low cost, high reliability, high linear force at low speeds, and high accuracy of motion. The linear stepping motor is not subject to the same linear velocity and acceleration limits as a rotary one. The block diagram representation of a Mechatronic actuation system composed of a mechanical system actuated by a controlled electrical linear motor is shown in Figure 4.

Figure 4 : Mechatronic actuation system

III. RELATED WORKS AND STRATEGIES

S. Seshagiri [10] demonstrated that a stepper system could be fast and accurate with advanced feedback position control. Bodson M. and Chiasson J. [11] used the technique of exact feedback linearization using full state-feedback, with extensions to partial state-feedback, and an experimental validation of the controller. Crnosija P., Kuzmanvoic B., and Ajdukovic S. [12] improved performance by field control using a microcomputer model-based feedback controller with a programmable procedure which is more user friendly. Chen et al. [13] proposed a new improved model for profile tracking using an advanced feedback controller with a least-squares-based identification procedure, which was later extended by learning-based control for precision control. These gave new ways of intelligent control of the stepper motor, facilitating new intelligent methods like knowledge-based learning and artificial intelligence control. B. K. Bose [14] put forward a system for speed control of a motor using intelligent neural network algorithms, which gave an initiation for applying such methods in the control process. However, many of the above systems have shown much complication in vibration and damping control of a linear actuator, particularly of a permanent magnet linear stepper motor. Many systems with stepper motors need to control the acceleration/deceleration when changing speed, and the motor is not subject to the same linear velocity and acceleration as a rotary one. The PM linear motor has the highest force density among micromotors, but it has several disadvantages. In any of the mover's positions, one of its poles generates a significant braking force, as shown in Figure 5, diminishing the total tangential force produced by the motor and thus reducing its efficiency.
Beside this, the magnetic flux passing through both the mover and the platen gives rise to a very strong normal force of attraction between the two armatures. This can be over 10 times the peak holding tangential force of the motor. Due to this high attractive force produced at all the poles, sophisticated linear bearing systems are required to maintain the precise air-gap between the mover and platen [15].



Figure 5 : Alignment of forcer position

Moreover, as the forcer speed increases, linear force decreases and the force curve may be extended by using current limiting drivers and increasing the driving voltage. Attempting a high-speed move with a low mass payload results in the majority of the motor linear force applied to overcoming the friction. Linear motors are not compact force generators compared to a rotary motor with a transmission offering mechanical advantage. For example to produce even 6.5 N (1.5 lb) of continuous force, a linear motor’s cross section is approximately 50 mm x 40 mm (2”x 1.5”) [16]. One of the major disadvantages of steppers is that it exhibits more vibration, as the discrete step tends to snap the shaft from one position to another due to the effect of momentum which is more at some speeds and can cause the motor to lose linear force. Thus it may affect effective force distribution, step angle resolution and accurate stepping position. So satisfactory damping of oscillations is an important issue to be addressed when dealing with the stability of shaft systems. The effect can be mitigated by accelerating quickly through the speed range, physically damping the system, or using a auxiliary flux distribution driver [17]. Tai-Sik , Seok [18] and Dong-Hun developed an active control scheme to damp the vibration of LHSM using a reluctance network based on the finite-element. But it suffers difficulties from a complex design and size of the control system. Barhoumi and Ben salah [19] designed a new model for positioning control of stepper motor using BP Neural Networks, showed many advantages over the conventional systems which make use of adaptive neural networking and knowledge base algorithms for the control. Kenneth Wang-, Norbert Chow [20] reported the satisfactory result of electronic damping based on the force constant and advanced algorithms for excitation and damping control, which is the motivation of this paper. 
Still, the Tsui-Cheung system suffers from some drawbacks: the derived algorithms are time-consuming and the resulting control is not much smoother. The proposed study is designed to achieve the same result using an auxiliary winding placed at definite intervals between the stator poles, excited by a digitally controlled current from a processor. A finite-settling-value based algorithm is implemented in this system, which provides a fast iterative mode of current control by probabilistic selection of the finite error-value coefficient.

IV. MECHANICAL DESIGN CRITERIA

An oscillating mass will always have momentum, which is a function of its shape, mass distribution and rate of motion. Notice, for example, that a compact object with all its mass concentrated near the centre of mass is set in motion much more easily than an object with much of its mass located far from the centre of mass. Euler's first law states that the linear momentum of a body is equal to the product of the mass of the body and the velocity of its centre of mass [21][22][23]. The distribution of mass describes an object's mass moment (I). Knowing the mass (m) and the object's velocity (v), the linear momentum (I) can be found:

I = mv (2)

Using the continuity condition, the mass flow rate is

ṁ = ρAV (3)

and the momentum equation simplifies to

ΣF = Σ ṁ Vi = ρAVi² (4)
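As an illustrative numerical check of Eqs. (2)-(4), the relations can be evaluated directly. The sketch below assumes SI units; the density, area and velocity values used in the test are hypothetical and not taken from the motor described here.

```python
# Illustrative sketch of Eqs. (2)-(4): linear momentum, continuity,
# and the steady-flow momentum force.  All numeric values used with
# these functions are hypothetical, for illustration only.

def linear_momentum(m, v):
    """Eq. (2): momentum I = m*v of a body of mass m at velocity v."""
    return m * v

def mass_flow_rate(rho, area, velocity):
    """Eq. (3), continuity: mass flow rate = rho * A * V."""
    return rho * area * velocity

def momentum_force(rho, area, velocity):
    """Eq. (4): net force = rate of momentum flux = rho * A * V**2."""
    return mass_flow_rate(rho, area, velocity) * velocity
```

For instance, a 0.5 kg forcer moving at 2 m/s carries 1.0 kg·m/s of linear momentum by Eq. (2).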


Vol. 3, Issue 1, pp. 221-232

The change of momentum has two parts: the momentum inside the control volume and the momentum passing through its surface. The mass, velocity and dimensions of the shaft cannot be varied to reduce the momentum, as this may compromise the optimum shaft dimensions needed to obtain a sufficient linear Lorentz force. In the case of the micro stepper motor, variation of the dimensions gives poor control over the momentum. Moreover, the density of internal forces is not necessarily equal at every point: the stress is concentrated at the shaft pole and the momentum is maximum along the shaft axis. This variation of internal forces throughout the body is governed by the law of conservation of linear momentum, which is normally applied to a mass particle but is extended in continuum mechanics to a body of continuously distributed mass. The most convenient way to control the momentum is to use a damping system. Mechanical damping systems cannot be used in micromotors because of their large size, and fluid damping systems cannot be used in electrical motors; thus the most favoured mode of operation is electrical damping.

V. AUXILIARY ELECTRICAL SYSTEM

To avoid instability of the shaft after the step movement, a constant position control system is designed and used in the winding mechanism. Auxiliary-coil actuators are a special form of linear motor, capable of moving an inertial load at extremely high accelerations and relocating it to an accuracy of millionths of an inch over a limited range of travel [25].

Figure 6 : Auxiliary winding slots portioning

A method of reducing vibration in shaft systems using a distortion-damper system with two auxiliary windings is proposed, in which an additional distortion source is adopted [26], as described in Figure 6. It eliminates the distortion within one sampling period by using a sampled-data control system based on the finite-settling-value method [27][28]. The system works by applying an additional field, opposite in polarity to the main field flux, at both ends to eliminate the distortion produced. The auxiliary windings must be switched on in an initialised sequence: the winding facing the motion is switched on only after the forcer has passed over it, while the other is excited instantaneously with the pole excitation. The switching of the windings is done under processor control, which creates an opposing flux so that the forcer movement is limited within the main field-flux area; after the damping action the auxiliary supply is removed by letting the current decay, to save power and to give smooth, effective distortion control [29]. The value of the excitation current in the auxiliary winding is controlled by the processing system, which makes the current decay gradually until the system acquires stability, as represented by the block diagram in Figure 7.
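The switching sequence described above can be sketched as a small decision rule. The function and signal names below are illustrative assumptions, not part of the actual processor firmware.

```python
# Hedged sketch of the auxiliary-winding switching sequence: the leading
# winding (facing the motion) is energised only after the forcer has
# passed it, the trailing winding is energised together with the pole
# excitation, and both are de-energised once damping is achieved.
# Signal names are illustrative, not taken from the paper.

def auxiliary_states(forcer_passed_leading, step_active, damped):
    """Return the on/off state of (trailing_aux, leading_aux)."""
    if damped:
        return (False, False)          # supply removed to save power
    trailing = step_active             # excited with the pole excitation
    leading = forcer_passed_leading    # only after the forcer passes it
    return (trailing, leading)
```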

Figure 7 : Block diagram of the auxiliary winding current control


VI. MODELING OF THE AUXILIARY DAMPED PMLSM

The characteristics of the auxiliary winding control are based on the solution of the electromagnetic field equations, on the control mode of the processor-based switches, and on the solution of the travelling-magnetic-field equations [3][23][31]. The basic problems of field design revolve around estimating the distribution of magnetic flux in the circuit, which may include permanent magnets, air gaps, high-permeability conduction elements and electrical currents. Exact solutions of magnetic fields require complex analysis of many factors, although approximate solutions are possible based on certain simplifying assumptions [30]. Obtaining an optimum magnet design often involves experience and trade-offs. There are two systems to be designed: a system to produce the auxiliary field and a current-limiting circuit [31].
6.1. Auxiliary Damping Control

The momentum equations for the motor allow the design of a feedback controller which can follow a desired momentum even in the case of large acceleration [32]. The mechanical part of the motor is described by

ΣF = I(dv/dt) + Kf w(t) + Km i(t) (5)

The field excitation required in the auxiliary windings to eliminate the distortion over one time period is

Fi = −(Vapp(t) + R i(t) + Kb i(t)) (6)

The field excitation in the main winding is

Vapp(t) = L(di/dt) + R i(t) + Kb w(t) (7)

This sequence of equations gives the current to be applied to the auxiliary winding initially,

i(t) = (Vapp(t) − L(di/dt) − Kb w(t)) / R (8)

Thus the resulting force developed in the auxiliary winding is

F = −Kf w(t) + Km i(t) (9)

Equations (6) to (9) describe the magnitude of the force F to be applied in order to obtain the resultant magnetic-field force that can hold the forcer head for a given time period dt.
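As a minimal numeric sketch, the damping-force relation of Eq. (9) can be written directly. The constants Kf and Km and the signal values used in the test are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of Eq. (9): the force developed in the auxiliary
# winding, F = -Kf*w(t) + Km*i(t).  Kf, Km, w and i are hypothetical
# placeholder values when this function is exercised.

def auxiliary_force(kf, km, w, i):
    """Damping force from Eq. (9): -Kf*w(t) + Km*i(t)."""
    return -kf * w + km * i
```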
6.2. Auxiliary Current Control

Once the forcer reaches the main magnetic field, the auxiliary supply is initiated; it holds the forcer position within the limit so that the distortion and vibration of the forcer head are reduced and the damping stays within the limit [33]. Once stability is achieved, the system current must be switched off, which helps to save energy. A gradually decaying current gives better performance at the forcer than instant switching, and may be attained using a current-limiting system. Chopping can limit the current in the winding without generating excess heat; it is quite elegant and efficient [34]. The essence of chopping is to switch the operating voltage on and off at a frequency higher than the operating range and allow the winding itself to act as a filter, i.e. pulse-width modulation, in which the duty cycle determines the behaviour of the current.


Figure 8 : Conceptual model of Chopper circuit

The most common approach is to automatically adjust the duty cycle of the switches. In the conceptual model of a chopper circuit shown in Figure 8, when an incoming activate signal is received the coil is switched on. The current through the coil develops a voltage across the sense resistor; this voltage is proportional to the amount of current in the coil and forms an important part of the feedback loop. The value of the resistor is very low, typically 1 ohm, so that 1 A of coil current corresponds to 1 V at the sense resistor. If R is large, the current through the motor winding decays quickly when the higher-level control system turns the winding off, but when the winding is turned on the current ripple is large and the power lost in R is significant. If R is small, the circuit is very energy efficient, but the current through the motor winding decays only slowly when the winding is turned off, and this reduces the cutoff speed of the motor [25]. The peak power dissipated in R is I²R during Toff and zero during Ton; thus the average power dissipated in R while the motor winding is on is

P = I²R · Toff / (Ton + Toff) (10)

The sensed voltage is compared with a reference voltage, and when the sensed voltage becomes greater than the reference the logic reduces the current value. When the voltage drops below the reference, the coil circuit signals the processor for further action, unless of course the incoming activate signal has been removed, in which case the coil stays off. Thus the feedback logic flips the switch on and off whenever the current is too high, maintaining a smooth current decay. The reference voltage is typically adjustable, which allows the current in the circuit to be matched to the motor's rated current.
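The comparator rule and the sense-resistor power estimate can be sketched as follows. The duty-weighted form of the average power is an assumption consistent with the statement that the peak power is I²R during Toff and zero during Ton; all names and numeric values are illustrative.

```python
# Sketch of the chopper feedback rule described above: the sense-resistor
# voltage (I*R) is compared with a reference; the coil is kept on while
# the sensed voltage is below the reference and switched off otherwise.

def chopper_switch(i_coil, r_sense, v_ref):
    """True = coil switched on (sensed voltage I*R below the reference)."""
    return i_coil * r_sense < v_ref

def average_sense_power(i_peak, r_sense, t_on, t_off):
    """Duty-weighted average power in R: I^2*R during Toff, zero during
    Ton, averaged over one chopping period (cf. Eq. 10)."""
    return (i_peak ** 2) * r_sense * t_off / (t_on + t_off)
```

With the typical 1-ohm sense resistor mentioned above, a 1 V reference corresponds to a 1 A current limit.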
6.3. Finite-Settling-Value Microprocessor Control

An algorithmic model of the microcontroller architecture is designed and the desired operation is formulated using the finite-settling-value method [27][28]; a software simulator implementing the auxiliary-controlled actuator model, including all the nonlinear elements, is formulated and predicts the real system's behaviour and operation quite well [35][36].

1. START.
2. The initial values of the main field flux, the current shaft position, the shaft dimensional characteristics, the proportional momentum and the frictional loss are determined.
3. The FSV error coefficient is initialised; it is based on the current-decay constant and is maximum at the excitation level.
4. The direction of forcer movement is determined and the corresponding switches are turned on, along with the auxiliary winding supply on the moving side.
5. Once the forcer has moved, Ton, Toff, I and Fi are determined and the auxiliary coil on the stationary side is energised.
6. From the inertia-force equation the value of the auxiliary winding current is found and the reference voltage is set.
7. Once the forcer attains stability above 60%, identified from the position-sensor current, the current-control circuit is activated.
8. The current-control circuitry is kept on until the value reaches zero, which is determined by

Toff = L (11)

9. The conditions are reset to new values when the next step procedure is reached.
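A hedged sketch of the iterative part of the sequence above (steps 3 and 7-8) is given below. The names `read_position` and `set_aux_current` are hypothetical stand-ins for the sensor and driver interfaces, and the halving update is an assumed decay law, not the paper's exact FSV coefficient update.

```python
# Sketch of the FSV current-decay loop: once the forcer stability
# (0..1, from the position sensor) exceeds the step-7 threshold, the
# auxiliary current is decayed gradually toward zero (step 8).
# The 0.5 decay factor and interface names are assumptions.

def fsv_decay_loop(read_position, set_aux_current, stability_threshold=0.6):
    """Run the decay loop; return the history of commanded currents."""
    aux = 1.0                          # step 3: coefficient maximum at excitation
    history = []
    for _ in range(100):               # bounded iteration
        stability = read_position()
        if stability >= stability_threshold:
            aux *= 0.5                 # gradual current decay
        set_aux_current(aux)
        history.append(aux)
        if aux < 1e-3:                 # step 8: stop when the value is ~zero
            break
    return history
```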

VII. RESULTS AND DISCUSSIONS
The experimental results of both the modelling and Simulink procedures and the control-system test circuit are shown in Fig. 9 to Fig. 14, and the influence of the various parameters on the working conditions and characteristics is described. In the full-step sequence, only one coil sector is energised at a time and the forcer moves to that sector; however, this mode of stepping reduces the step resolution. The change in the forcer position with respect to time for an auxiliary damped system is shown in Figure 9.

Figure 9 : Auxiliary Damped Full Step position Control

The microcontroller-based continuously varying mode of excitation shown in Figure 9, used instead of instant switching, enables a smooth forcer movement with less vibration, improved damping control and higher accuracy. The isolated auxiliary current distribution for a complete full-step motion, for both auxiliary windings related to the corresponding poles, is described in Figure 10.

Figure 10 : Distribution for auxiliary winding current between two isolated auxiliary windings

The table below gives the current distribution of the main and auxiliary windings in the half-step sequence. Here the step angle reduces to half the full-step angle, so the angular resolution is doubled compared with full-step mode.

Sl. No  Step  Time (s)  Coil A (%)  Coil B (%)  Coil C (%)  Aux A1 (%)  Aux A2 (%)  Aux A3 (%)  Aux FSV error (%)
1       1     0.04      100         0           0           0           100         0           100
2       1     0.08      100         0           0           100         100         0           100
3       1     0.12      100         0           0           79.5        79.5        0           80
4       1     0.16      100         0           0           52.7        52.7        0           60
5       1     0.20      100         0           0           28.2        28.2        0           40
6       1     0.24      100         0           0           14.9        14.9        0           20
7       1     0.28      100         0           0           0           0           0           0
8       1     0.32      100         19.0        0           0           0           0           0
9       1     0.36      100         38.1        0           0           0           0           0
10      1     0.40      100         56.9        0           0           0           6.2         0
11      1     0.44      100         78.7        0           0           0           16.4        0
12      1     0.48      100         96.3        0           0           0           21.3        0
13      1     0.52      100         100         0           0           0           31.9        100

The auxiliary current distribution for a half-step motion, for the corresponding auxiliary windings, is described in Figure 11, which shows that at each step angle the auxiliary winding current related to the moving position (A2) may decrease gradually from its pre-distributed level while its counterpart (A1) rises from zero to maximum, enabling a smooth current decay. For the next half step, before full excitation is achieved, A3 may increase gradually to a level dependent on the full excitation value, so as to obtain an evenly varying main flux and reduced vibration.


Figure 12 : Current variation in auxiliary windings (A1,A2,A3) for a half step

The velocity distribution of the forcer with respect to the duty cycle shows that the acceleration due to the field excitation varies in a quadratic form, but later deviates owing to the initial excitation in the auxiliary winding, as shown in Figure 13.


Figure 13 : Auxiliary current Switching Characteristics

The force-speed curve plotted in Figure 14 shows that as the linear velocity increases, the output power decreases gradually, in inverse proportion, towards a minimum. As the speed-force trade-off cannot be compromised, it is favourable to use an electromechanical system for an effective mode of operation.

Figure 14 : Linear Velocity Vs Force

VIII. CONCLUSION
The paper proposes new design criteria over the conventional system which give a smooth, energy-efficient operation of the linear actuation system. The limits imposed by the auxiliary current control and the selected decay mode give better performance and reduced distortion of the forcer, specifically in its ability to follow a smooth operation. As long as these limits allow the designer to achieve the desired resolution in the micro-stepping application, the devices provide a cost-effective implementation. The novel algorithm is feasible and can produce significant energy savings with minimal performance degradation. Future work will involve developing alternative on-line intelligent algorithms, including approaches for reducing friction characteristics and improving damping control.

REFERENCES
[1]. V. F. Cardoso, Graça Minas and S. Lanceros-Méndez, "Design and fabrication of piezoelectric micro actuators for microfluidic applications," Semana de Engenharia 2010, Guimarães, 11-15 October 2010.
[2]. M. T. Mason, Mechanics of Robotic Manipulation. Cambridge, MA: MIT Press, 2001.
[3]. Boldea, I., "Linear Electromagnetic Actuators and their Control: A Review," Proceedings of the 10th International Power Electronics and Motion Control Conference (EPE-PEMC 2002), Cavtat & Dubrovnik (Croatia), 2002.
[4]. Cassat, A., Corsi, N., Moser, R., Wavre, N., "Direct Linear Drives: Market and Performance Status," Proceedings of the 4th International Symposium on Linear Drives for Industry Applications (LDIA 2003), Birmingham (UK), 2003.

[5]. D. Popa and H. E. Stephanou, "Micro and meso scale robotic assembly," SME J. Manuf. Processes, vol. 6, no. 1, pp. 52-71, 2004.
[6]. P. Dario, R. Valleggi, M. C. Carrozza, M. C. Montesi, and M. Cocco, "Microactuators for microrobots: A critical survey," J. Micromech. Microeng., vol. 2, no. 3, pp. 141-157, Sep. 1992.
[7]. Viorel, I. A., Szabó, L., "Hybrid Linear Stepper Motors," Mediamira, Cluj (Romania), 1998.
[8]. I. Boldea and S. A. Nasar, "Permanent magnet linear alternators part I: Fundamental equations," IEEE Transactions on Aerospace and Electronic Systems, AES-23(1), 1987.
[9]. S. A. Nasar and I. Boldea, Linear Electric Motors: Theory, Design and Practical Applications. Englewood Cliffs, NJ: Prentice-Hall, 2004.
[10]. Perriard, Y., Scaglione, O. and Markovic, M., "Self-Sensing Methods for PM Motors in Mechatronics Applications," Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2010).
[11]. Bodson, M., Chiasson, J. N., Novotnak, R. T., Rekowski, R. B., "High performance nonlinear feedback control of a permanent magnet stepper motor," IEEE Trans. Control Syst. Technol., 1993, 1, (1), pp. 5-14.
[12]. Crnosija, P., Kuzmanovic, B., and Ajdukovic, S., "Microcomputer implementation of optimal algorithms for closed-loop control of hybrid stepper motor drives," IEEE Transactions on Industrial Electronics, vol. 47, pp. 1319-1325, 2000.
[13]. W. D. Chen, K. L. Yung, and K. W. Cheng, "Profile tracking performance of a low ripple hybrid stepping motor servo drive," Proc. Inst. Elect. Eng. - Control Theory Appl., vol. 150, no. 1, pp. 69-76, Jan. 2003.
[14]. B. K. Bose, "Neural network applications in power electronics and motor drives - An introduction and perspective," IEEE Trans. Ind. Electron., vol. 54, no. 1, pp. 14-33, Feb. 2007.
[15]. E. Fitan, F. Messine, and B. Nogarède, "An electromagnetic actuator design problem: A general and rational approach," IEEE Transactions on Magnetics, 40(3), 2004.
[16]. D. Zarko, "A Systematic Approach to Optimized Design of Permanent Magnet Motors," PhD thesis, University of Wisconsin-Madison, 2004.
[17]. B. Murphy, "Design and Construction of a Precision Linear Motor and Controller," Master's thesis, Dept. of Mechanical Engineering, Texas A&M University, May 2003.
[18]. T.-S. Hwang, J.-K. Seok, and D.-H. Kim, "Active damping control of linear hybrid stepping motor for cogging force compensation," IEEE Trans. Magn., vol. 42, no. 2, pt. 2, pp. 329-334, Feb. 2006.
[19]. Barhoumi, E. M., Ben Salah, B., "New Positioning Control of Stepper Motor using BP Neural Networks," Journal of Emerging Trends in Computing and Information Sciences, Volume 2, No. 6, June 2011.
[20]. Kenneth Wang-Hay Tsui, Norbert Chow Cheung, Kadett Chi-Wah Yuen, "Novel Modeling and Damping Technique for Hybrid Stepper Motor," IEEE Transactions on Industrial Electronics, Vol. 56, No. 1, January 2009.
[21]. Tai-Sik Hwang, Jul-Ki Seok, Dong-Hun Kim, "Active damping control of linear hybrid stepping motor for cogging force compensation," IEEE Trans. Magn., Volume 42, Issue 2, pp. 329-334, Feb. 2006.
[22]. Budig, Peter-Klaus, "Simplification of the mechanical design of drives with the application of direct drives," Publication of the 4th International Symposium "Topical Problems of Education in the Field of Electrical and Power Engineering," Tallinn Technical University, Kuressaare, 2007.
[23]. Basak, A., Flores Filho, A. F., Nakata, T., Takahashi, N., "Three dimensional computation of force in a novel brushless DC linear motor," IEEE Transactions on Magnetics, 1997, Vol. 33, No. 2.
[24]. Bobrow, J. E., Desai, J., "Modeling and Analysis of a High-Torque, Hydrostatic Actuator for Robotic Applications," Experimental Robotics I, the First Int. Symp., V. Hayward, O. Khatib (Eds.), Springer-Verlag, pp. 215-228, 1990.
[25]. K.-F. Böhringer, B. R. Donald, R. R. Mihailovich, and N. C. MacDonald, "A theory of manipulation and control for microfabricated actuator arrays," in Proc. IEEE Workshop MEMS, Jan. 1994, pp. 102-107.
[26]. Liu, C., Kuo, J., "Experimental investigation and modeling of linear variable reluctance machine with magnetic-flux decoupled windings," IEEE Transactions on Magnetics, 1994, Vol. 30, No. 6.
[27]. S. Choura, "Design of finite time settling regulators for linear systems," Trans. ASME, vol. 116, pp. 602-608, 1994.

[28]. J. M. Coron, "On the stabilization in finite time of locally controllable systems by means of continuous time-varying feedback law," SIAM J. Contr. Optim., vol. 33, pp. 804-833, 1995.
[29]. Pfister, P.-D. and Perriard, Y., "Very High Speed Slotless Permanent Magnet Motors: Analytical Modeling, Optimization, Design and Torque Measurement Methods," IEEE Transactions on Industrial Electronics, 57(1), pp. 296-303, 2010.
[30]. S. F. Bart, M. Mehregany, L. S. Tavrow, J. H. Lang, and S. D. Senturia, "Electric micromotor dynamics," IEEE Trans. Electron Devices, vol. 39, no. 3, pp. 566-575, Mar. 1992.
[31]. K. J. Binns, P. J. Lawrenson, and C. W. Trowbridge, The Analytical and Numerical Solution of Electric and Magnetic Fields. John Wiley and Sons.
[32]. J. Lidenholm, "Power System Stabilizer Performance," ISSN 1401-5757, UPTEC F07 109.
[33]. M. Wahlén, "Transfer function for excitation system and automatic voltage regulator," VG Power AB, 2004.
[34]. AN1495, Application Note, "Microstepping Stepper Motor Drive Using Peak Detecting Current Control," Thomas Hopkins, 2010, pp. 4-9.
[35]. S. Brückl, "Feed-drive system with a permanent magnet linear motor for ultra precision machine tools," Proc. of IEEE International Conference on Power Electronics and Drive Systems, pp. 821-826, July 1999.
[36]. Ziqing Ye, Min Chen, "DSP based field oriented hybrid stepper motor control," IEEE International Conference on E-Product E-Service and E-Entertainment (ICEEE), 2010, pp. 1-4.

Authors
Sarin C. R. received his B.E. in Electrical and Electronics Engineering from Dhanalakshmi Srinivasan Engineering College, Perambalur, India, in 2010 and is currently pursuing his M.E. in Mechatronics Engineering at Karpagam College of Engineering, Coimbatore, India. He has published one paper on intelligent non-destructive testing and evaluation, and two further papers are expected to be published shortly. His special interests are robotics, artificial intelligence, micro-mechatronic systems and intelligent control of electrical drives.

Ajai M. received his B.E. in Mechanical Engineering from Dhanalakshmi Srinivasan Engineering College, Perambalur, India, in 2010 and is currently pursuing his M.Tech in Industrial Refrigeration and Cryogenic Engineering at Thangal Kunju Musaliar College of Engineering, Kollam, India. His special interests are mechanical machine design, quality control, fluid-systems control and machine dynamics.

Santhosh Krishnan received his B.E. in Electronics and Instrumentation Engineering from MAM College of Engineering, Trichy, India, in 2010 and is currently pursuing his M.E. in Mechatronics Engineering at Karpagam College of Engineering, Coimbatore, India. His special interests are robotics, sensor-based automation and control of machines, and non-destructive testing and evaluation.

Arul Gandhi received his B.E. in Electrical and Electronics Engineering from Karpagam College of Engineering, Coimbatore, India, in 2007 and is currently pursuing his M.E. in Mechatronics Engineering at Karpagam College of Engineering, Coimbatore, India. He has wide industrial experience in embedded control of machines and systems. His special interests are robotics, embedded systems, navigation and wireless communication, and knowledge-based algorithm design.


INFLUENCE OF FILLER MATERIAL ON GLASS FIBER/ EPOXY COMPOSITE LAMINATES DURING DRILLING
M. C. Murugesh1 and K. Sadashivappa2

1 Senior Lecturer, Dept. of Mechanical Engg., GM Institute of Technology, Davangere
2 Professor & Head, Department of IP Engineering, Bapuji Institute of Engineering and Technology, Davangere

ABSTRACT
The use of polymeric composite materials has increased considerably over the last decade. Drilling is a frequently practiced machining process in industry due to the need for component assembly in mechanical pieces and structures. Machining processes are generally used to cut, drill or contour composite laminates for building products; in fact, drilling is one of the most commonly used manufacturing processes to install fasteners for the assembly of laminate composites. The material anisotropy resulting from fiber reinforcement heavily influences machinability. Machining of fiber reinforced plastic (FRP) components is often needed in spite of the fact that most FRP structures can be made to near-net shape, and drilling is the most frequently employed secondary machining process for fiber reinforced materials. The use of filler materials such as TiO2 and graphite has been shown to improve fiber-matrix bonding, which in turn affects the thrust force and delamination factor values.

KEYWORDS: Drilling; Polymer-matrix composites; Thrust force; Delamination

I. INTRODUCTION
Fiber reinforced plastics have been widely used for manufacturing aircraft and spacecraft structural parts because of their particular mechanical and physical properties, such as high specific strength and high specific stiffness. Another relevant application for fiber reinforced polymeric composites (especially glass fiber reinforced plastics) is in the electronics industry, in which they are employed for producing printed wiring boards. Drilling of these composite materials, irrespective of the application area, can be considered a critical operation owing to their tendency to delaminate when subjected to mechanical stresses. With regard to the quality of the machined component, the principal drawbacks are related to surface delamination, fiber/resin pullout and inadequate surface smoothness of the hole wall. Among the defects caused by drilling, delamination appears to be the most critical [1]. As Figure 1 shows, factors such as cutting parameters and tool geometry/material must be carefully selected to obtain the best performance in the drilling operation, i.e., the best hole quality, which represents minimal damage to the machined component and a satisfactory machined surface.

Figure 1. Principal aspects to be considered when drilling fibre reinforced plastics


Vol. 3, Issue 1, pp. 233-239

Composite materials are constituted of two phases: the matrix, which is continuous and surrounds the other phase, often called the reinforcing phase [3]. Epoxy resins are widely used as the matrix in many fibre reinforced composites; they are a class of thermoset materials of particular interest to structural engineers, owing to the fact that they provide a unique balance of chemical and mechanical properties combined with wide processing versatility [4]. Among reinforcing materials, glass fibres are the most frequently used in structural constructions because of their specific strength properties [3]. The present study focuses on the machinability of GFRP laminated composites with the filler materials TiO2 and graphite, and on the evaluation of the thrust force and delamination factor for two drill diameters at different speeds.

1.1. Machining of composite materials

The machining of GFRP is quite different from that of metals and results in many undesirable effects, such as rapid tool wear, rough surface finish and defective subsurface layers caused by cracks and delaminations. At the beginning of the drilling operation the thickness of the laminated composite material is able to withstand the cutting force, but as the tool approaches the exit plane the stiffness provided by the remaining plies may not be enough to bear the cutting force, causing the laminate to separate and resulting in delamination. The delaminations that occur during drilling severely influence the mechanical characteristics of the material around the hole. In order to avoid these problems, it is necessary to determine the optimum conditions for a particular machining operation. Drilling is a particularly critical operation for fiber reinforced plastic (FRP) laminates because the large concentrated forces generated can lead to widespread damage. The major damage is certainly the delamination that can occur on both the entrance and exit sides of the workpiece [4]. The delamination on the exit surface, generally referred to as push-down delamination, is more extensive and is considered more severe. Hocheng and Tsao have clearly explained the causes and mechanisms of formation of these push-down delaminations, and they have also reasoned out the dependence of the extent of delamination on the feed rate [5]. In earlier studies it has been observed that the extent of delamination is related to the thrust force, feed, material properties, speed, etc., and that there is a critical value of the thrust force (dependent on the type of material drilled) below which the delamination is negligible [6].

1.2. Specimen Preparation
The method used in the present work for manufacturing the laminated composite plates is hand lay-up, shown in Figure 2, which is the oldest method of producing composite materials. The type of glass fiber mat selected to make the specimens was Mat II (330 GSM). The matrix material was a medium-viscosity epoxy resin (LAPOX L-12) with a room-temperature-curing polyamine hardener (K-6), both manufactured by ATUL India Ltd, Gujarat, India. This matrix was chosen since it provides good resistance to alkalis and has good adhesive properties. Calculations based on volume fraction showed that a 60-40 combination (60% glass fiber, 40% epoxy resin) gave better results. Two filler materials, TiO2 and graphite, were added to the Mat II 60-40 combination while keeping the epoxy percentage constant at 40%. Based on the literature survey, the amount of filler added was 3, 6 and 9% of graphite and 1, 2 and 3% of TiO2; the details are shown in Table 1. After preparation, the specimens were tested in an impact testing machine to obtain the impact toughness value.

Figure 2. Hand lay-up Technique
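Assuming that "keeping the epoxy percentage constant" means the filler share is taken out of the glass-fiber fraction (an interpretation, not stated explicitly above), the laminate composition can be tabulated as follows; the function name is illustrative.

```python
# Illustrative calculation of the laminate composition: the epoxy
# fraction is held at 40% and the filler percentage is assumed to be
# taken out of the glass-fibre share.  This interpretation is an
# assumption made for illustration.

def composition(filler_pct, epoxy_pct=40.0):
    """Return (glass %, epoxy %, filler %), summing to 100."""
    glass = 100.0 - epoxy_pct - filler_pct
    return (glass, epoxy_pct, filler_pct)
```

For example, under this assumption a 3% graphite specimen would contain 57% glass fiber, 40% epoxy and 3% graphite.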

Table 1. Filler material specimen details

1.3. Experimental Setup

The high-speed steel twist drills have a 118° point angle. Two diameters, 6.35 mm and 4.76 mm, were selected, and the work was carried out on a radial drilling machine with a maximum spindle speed of 2650 rpm; these details are shown in Table 2. A piezoelectric dynamometer was used to acquire the thrust force, as shown in Figure 3. The damage around the holes was measured using a tool maker's microscope.
Table 2: Drill tool dia and corresponding speed

Figure 3. Schematic setup to measure thrust force

II. RESULTS AND DISCUSSIONS
2.1 Thrust force
Cutting forces are very useful for drill-wear monitoring, because these forces generally increase with tool wear. Thus, within the tool-wear region, cutting forces provide a good assessment of the tool condition. If the tool cannot withstand the increased cutting forces, catastrophic tool failure becomes inevitable. Consequently, tool life, which is a direct function of tool wear, is best determined by monitoring the thrust force. Many common problems arise from the thrust developed during drilling; among the problems caused are fiber breakage, matrix cracking, fiber/matrix debonding, fiber pull-out, fuzzing, thermal degradation, spalling and delamination. The thrust force and torque developed in the drilling operation are therefore an important concern, and monitoring of the thrust force in drilling is needed by industry. Figure 4 shows a qualitative trend of the thrust force as a function of drilling time. As can be seen, a pushing action is exerted by the drill on the workpiece. In the first phase the thrust force continues to increase as an increasing portion of the cutting lips engages the material; in the second phase the thrust force remains almost constant as the drill sinks into the workpiece; in the third phase the thrust force decreases rapidly as the twist drill exits.

235

Vol. 3, Issue 1, pp. 233-239


Figure 4. Responses of cutting forces during drilling showing key process points

The value of thrust force was measured using a piezoelectric dynamometer. Figures 5 and 6 show the results of the thrust force for the two sets of drilling tests, as a function of the cutting parameters.


Figure 5. Effect of spindle speed on thrust force for 6.35mm and 4.76mm twist drill for TiO2 filler material.


Figure 6. Effect of spindle speed on thrust force for 6.35mm and 4.76mm twist drill for graphite filler material.

Drilling tests were performed on glass fiber reinforced epoxy composite laminates manufactured by hand lay-up, using two different HSS twist drills and various cutting speeds. Cutting speed is the cutting parameter with the highest physical as well as statistical influence on the thrust force and surface roughness in GFRP material. The following conclusions can be drawn:
• From Figures 5 and 6 it can be seen that as the speed increases, the thrust force decreases for both drill diameters. This is due to the inherent abrasiveness of the filler material.


• As the filler percentage increased from 1% to 3% for TiO2 and from 3% to 6% for graphite, the thrust force values were lower at all speed intervals and showed a downward trend. Comparing drill diameters, the 4.76 mm drill recorded higher values than the 6.35 mm drill.

2.2 Delamination factor (Fd)
Delamination is influenced by the various drilling parameters. Several ratios have been established for damage evaluation; one of them is the delamination factor (Fd), the ratio between the maximum delaminated diameter (Dmax) and the drill diameter (D0): Fd = Dmax/D0. Figure 7 shows the tool maker’s microscope with which delamination was measured.

Figure 7. Schematic view of the delamination factor and a view of the tool maker’s microscope
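As a quick check of the ratio defined above, a minimal sketch; the measured damage diameter is a hypothetical illustration, not a value from this study:

```python
def delamination_factor(d_max: float, d0: float) -> float:
    """Delamination factor Fd = Dmax / D0: maximum delaminated diameter
    over nominal drill diameter. Fd = 1.0 means no visible delamination."""
    if d_max < d0:
        raise ValueError("Dmax cannot be smaller than the drill diameter")
    return d_max / d0

# Hypothetical measurement: a 7.0 mm damage zone around a 6.35 mm hole.
print(round(delamination_factor(7.0, 6.35), 3))  # 1.102
```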

Delamination is commonly classified as peel-up delamination at the twist drill entrance and push-down delamination at the twist drill exit, as shown in Figure 8.

Figure 8. Delamination at the twist drill entrance (left) and exit (right) when drilling laminates


Figure 9. Effect of spindle speed on the delamination factor (Fd) for 6.35 mm and 4.76 mm twist drill for TiO2 filler material.


Figure 10. Effect of spindle speed on the delamination factor (Fd) for 6.35 mm and 4.76 mm twist drill for graphite filler material.

• From Figures 9 and 10 it can be seen that as the speed increases, the delamination factor decreases for both drill diameters. A decrease in spindle speed results in an increase in the thrust force, and this increased thrust force causes increased delamination; hence, thrust force is recognized as the main cause of damage.
• As the filler percentage increased from 1% to 3% for TiO2 and from 3% to 9% for graphite, the delamination factor values were lower at all speeds and showed a downward trend.

III. CONCLUSIONS

Based on the experimental results presented, the following conclusions can be drawn:
• Considerable effort has been focused on better understanding the phenomena associated with the cutting mechanism. Conventional high speed steel twist drills were used for the drilling operation.
• Abrasion was the principal wear mechanism and led to the elevation of the thrust force. The increase in thrust force resulting from increasing drill pre-wear destroys the matrix and causes micro-cracking at the ply interfaces, which deteriorates the surface finish.
• The principal factors used to evaluate the performance of the process are undoubtedly the damage caused at drill entry and at drill exit of the hole produced. The damage decreases with cutting speed, which means that the composite damage is smaller at higher cutting speeds within the range tested.
• Delamination decreases as the spindle speed is raised, probably because the cutting temperature rises with spindle speed, promoting softening of the matrix and inducing less delamination.
• The addition of filler materials such as TiO2 and graphite showed that the higher the filler percentage, the lower the thrust force and delamination factor values, indicating that better bonding of the filler material with the fiber matrix increased the capacity of the laminate to sustain force.


[16]. Malhotra, SK: Some studies on drilling of fibrous composites, ’Journal of Materials Processing Technology’, vol. 24, 1990, 291-300. [17]. Hocheng, H; Tsao, CC: The path towards delamination-free drilling of composite materials, 'Journal of Materials Processing Technology’, vol. 167, 2005, 251–264. [18]. Hocheng, H; Tsao, CC: Effects of exit back-up on delamination in drilling composite materials using a saw drill and a core drill, 'International Journal of Machine Tools & Manufacture’, vol. 45, 2005, 1261–1270. [19]. Chung - Chen Tsao; Wen - Chou Chen: Prediction of the location of delamination in the drilling of composite laminates, ’Journal of Materials Processing Technology’, vol. 70, 1997, 185-189. [20]. Caprino, G; Tagliaferri, V: Damage development in drilling glass fibre reinforeced plastics, 'International Journal of Machine tools and Manufacturing’, vol. 35,1995, 817-829. [21]. S. Basavarajappa;K.V.Arun; J Paulo Davim: Effect of Filler Materials on Dry Sliding Wear Behavior of Polymer Matrix Composites – A Taguchi Approach, Journal of Minerals & Materials Characterization & Engineering, Vol. 8, No.5, pp 379-391, 2009.

M. C. MURUGESH received his Diploma in Mechanical Engineering from Bapuji Polytechnic, Davangere, and his BE and Master’s degrees in Mechanical Engineering from UBDT Engineering College, Mysore University, Karnataka, in 1993 and 2004 respectively. He served as a mechanical engineer in an Aditya Birla Group company, Harihar, for seven years. His research interest is composite materials, and he is currently pursuing Ph.D work in the same field under Visvesvaraya Technological University, Karnataka. He is a senior faculty member in the Department of Mechanical Engineering, GM Institute of Technology, Davangere, Karnataka State, India.

K. SADASHIVAPPA received his BE and Master’s degrees in the production field from Mysore University in 1984 and 1990 respectively, and his Ph.D degree in Mechanical Engineering from IIT Madras in 1997. He is currently Professor and Head of the Department of Industrial & Production Engineering, BIET, Davangere, Karnataka. He has guided two Ph.D scholars and received two national awards from CSIR and AICTE, New Delhi. He has published a number of papers in journals and conferences, and has also published four book chapters under the AICTE CEP publication for engineering teachers.


LFSR TEST PATTERN FOR FAULT DETECTION AND DIAGNOSIS FOR FPGA CLB CELLS
Fazal Noorbasha, K. Harikishore, Ch. Hemanth, A. Sivasairam, V. Vijaya Raju
Department of ECE, KL University, Vaddeswaram, Guntur (Dist.), AP, India

ABSTRACT
The rapid growth of sub-micron technology has increased the difficulty of VLSI testing. Test and design for testability are recognized today as critical to a successful design. Field Programmable Gate Arrays (FPGAs) are used in many areas of digital design. Because FPGAs are reprogrammable, faults can be easily tolerated once the fault sites are located. In this paper, we discuss fault detection and fault diagnosis techniques for FPGA CLBs. Most of the discussion uses a single Configurable Logic Block (CLB) instead of the whole FPGA for simplicity. A Linear Feedback Shift Register (LFSR) is used to generate the test patterns, and fault detection and location are carried out using an output response analyzer; diagnosis is discussed accordingly. The analysis is based on the XC4000 FPGA, with VHDL as the hardware description language.

KEYWORDS: FPGA, CLB, LFSR, XC4000, VHDL
I. INTRODUCTION

Field programmable gate arrays (FPGAs) are a popular choice among VLSI devices; any logical circuit can be implemented in an FPGA at low cost. An FPGA consists of an array of configurable logic blocks (CLBs), programmable interconnect and programmable input/output blocks. Many methods have been proposed to test FPGAs [1]. In some works, the circuits under consideration are programmed FPGAs, in which logic circuits have been implemented. Since an FPGA can be programmed in many different ways, this approach is not applicable to manufacturing-time testing, as we do not know the final configuration. Testing faults in general FPGAs has been proposed by many researchers. In these methods, the FPGA under test is not mapped to a specific logic function; as a result, multiple test sessions are usually required, with each session dealing with one configuration. The BIST architecture requires the addition of three hardware blocks to a digital circuit: a pattern generator, a response analyzer and a test controller. Examples of pattern generators are a ROM with stored patterns, a counter and a linear feedback shift register (LFSR). An LFSR is constructed using flip-flops connected as a shift register with feedback paths that are linearly related using XOR gates. An LFSR can be used for generation of pseudo-random patterns, polynomial division, response compaction, etc. [2]. The LFSR is popular for implementing both the TPG and the ORA due to its compact and simple structure. Figure 1 shows the generic FPGA architecture for VLSI design.

Figure 1. Generic FPGA architecture

240

Vol. 3, Issue 1, pp. 240-246

A typical response analyzer is a comparator with stored responses or an LFSR used as a signature analyzer. Traditional chip testing deals with fault detection only, while fault diagnosis is often conducted at the system level. This is because components in the chip cannot be repaired. However, faults in FPGAs can be easily tolerated by not including faulty elements in the final circuit. Therefore, FPGA chips with faults can still be used if we can identify the fault sites. In this paper, we propose a chip-level diagnosis methodology for faults in CLBs. Our method is also based on the BIST technique, which means that the testing process is conducted by the chip itself, and the requirement for external ATE support is limited.

II. PRELIMINARIES AND FAULT DIAGNOSIS

In order to diagnose faults, there must first be a way to test modules in FPGAs. A candidate for this purpose is the built-in self-test (BIST). This structure reconfigures part of the functional circuit to be a test pattern generator (TPG), and another part to be an output response analyzer (ORA) [4]. The remaining circuit constitutes the circuit under test (CUT). The TPG is either a linear feedback shift register (LFSR) that generates pseudorandom test sequences, or simply a counter that provides an exhaustive test set.

Figure 2. Connections between testing module and set of CLBs under test

The test inputs are fed to the CUT, while the output responses are collected and analyzed by the ORA. The ORA can be either a signature analyzer or a comparator-based analyzer [5]. Our test architecture works as follows. An FPGA is divided into disjoint sets of CLBs, where each set can be configured into a TPG and ORA as shown in figure 2. Such a set acts as a module in the PMC model, since it is able to test another module and determine whether the CUT passes or fails the given test. All the CLBs under test are programmed in the same way; therefore, they perform the same logic function and can be given the same test patterns. Thus the outputs of the TPG are fed to all CLBs in the set under test, and the results are analyzed by the ORA. Since each CLB can be programmed in many ways, it is not possible to fully test a CLB in a single test run. As a result, a complete test of all faults in a CLB usually requires several steps, and in each step a CLB is programmed in a particular way. Our fault diagnosis process contains two sessions: 1) horizontal diagnosis, and 2) vertical diagnosis. Horizontal diagnosis is illustrated in figure 3(a). The outputs of the rightmost cells are compared with correct results to identify faulty rows.

Figure 3. (a) Horizontal Diagnosis (b) Vertical diagnosis

The interconnection structures for vertical diagnosis are shown in figure 3(b). The outputs of the bottommost cells are compared with correct results to identify faulty columns. By intersecting the faulty columns with the faulty rows, the faulty cells can be identified. The diagnosis resolution of our approach is a single CLB, due to the structure of a basic cell. We arranged the CLBs in an 8×8 array for the experiment. Since CLBs are configurable, we can ‘inject’ faults into CLBs by simulating the effects of logic faults. We consider two types of faults: stuck-at faults and open faults. We simulate the effect of a stuck-at fault by connecting the faulty line to logic ‘0’ or ‘1’; to simulate the effect of an open fault, all we have to do is disconnect the faulty line from its driving source [6]. Using the above diagnosis, the faulty CLBs can be found easily. Once their locations are known, those CLBs can be excluded from programming and the FPGA can still be used efficiently.
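The row/column intersection step above can be sketched in a few lines; the 8×8 array size follows the experiment, while the particular faulty rows and columns are hypothetical ORA outcomes:

```python
def locate_faulty_clbs(faulty_rows, faulty_cols):
    """Cross faulty rows (from horizontal diagnosis) with faulty columns
    (from vertical diagnosis) to obtain the candidate faulty CLB
    coordinates in the 8x8 array."""
    return sorted((r, c) for r in faulty_rows for c in faulty_cols)

# Hypothetical result: horizontal diagnosis flags rows 2 and 5,
# vertical diagnosis flags column 5.
print(locate_faulty_clbs({2, 5}, {5}))  # [(2, 5), (5, 5)]
```

With a single faulty row/column pair the intersection pins down one CLB, matching the single-CLB resolution stated above; multiple simultaneous faults yield a candidate set that further test configurations would narrow.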

III. LFSR FOR TEST PATTERN GENERATION

Different structures of LFSR will generate different sequences of test patterns. This means that if the BIST time is limited, the structure of the LFSR will affect the BIST time and the fault coverage (FC) of the circuit under test (CUT). LFSRs are popular because of their compact and simple design. Cellular-automaton-based generators are more complex to design but provide patterns with higher randomness, and perform better in detecting faults such as stuck-open or delay faults, which need two-pattern testing. A software tool is used which automatically generates built-in self-test blocks in VHDL models of digital circuits, given suitable values of the initial seed and primitive polynomial in the TPG block [3]. Figure 4 shows the LFSR for the test patterns of the G and F inputs of the XC4000 FPGA. It generates 8 bits: 4 bits for G and 4 bits for F.

Figure 4. LFSR test pattern signals generator of G and F inputs of XC4000 FPGA
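A behavioural model of this TPG can be sketched in software. The sketch below uses the polynomial x^8 + x^6 + x^5 + x + 1 quoted in Section IV; the external (Fibonacci) feedback structure and the 0x01 seed are our assumptions:

```python
def lfsr_patterns(seed: int, n: int):
    """8-bit external-feedback LFSR for x^8 + x^6 + x^5 + x + 1
    (taps at stages 8, 6, 5 and 1). Yields n states, each split into
    the 4-bit G and 4-bit F test inputs of the CLB."""
    state = seed & 0xFF
    assert state != 0, "an all-zero seed locks the LFSR at zero"
    for _ in range(n):
        yield state, (state >> 4) & 0xF, state & 0xF  # state, G, F
        # XOR of the tap bits: stages 8, 6, 5, 1 -> state bits 7, 5, 4, 0.
        feedback = ((state >> 7) ^ (state >> 5) ^ (state >> 4) ^ state) & 1
        state = ((state << 1) | feedback) & 0xFF

for pattern, g, f in lfsr_patterns(seed=0x01, n=4):
    print(f"pattern={pattern:08b}  G={g:04b}  F={f:04b}")
```

Splitting the upper nibble into G and the lower nibble into F mirrors the 4+4 bit arrangement described in the text.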

IV. TESTING OF FPGA CLB

A simplified diagram of the CLB in the Xilinx XC4000 family is shown in figure 5. This CLB has 13 inputs (I = 13): one is the clock input (K), nine signals are inputs to the combinational part (F1 to F4, G1 to G4, and H1), and the other three are for the sequential part (DIN, S/R, and EC). There are four outputs in a CLB (O = 4): two are combinational outputs and the other two are for the sequential circuits. The combinational part consists of three look-up tables (LUTs) and three multiplexers (MUXs), whose outputs are H1, X, and Y, respectively. The sequential part is made up of the remaining components: two D flip-flops (F = 2), the S/R control, and the remaining MUXs.


Figure 5. Simplified diagram of a CLB in XC4000

The testing of FPGAs falls into two categories: testing of unprogrammed FPGAs (configuration-independent testing) and testing of programmed FPGAs (configuration-dependent testing) [7]. Here, we focus on the testing of unprogrammed FPGAs. The LUTs consist of SRAM. To test the memory elements, each bit has to be set to both ‘1’ and ‘0’; therefore, at least two phases are required to exercise all possible faults in the LUTs. There are 4-to-1 MUXs in a CLB; as a result, we need at least four test phases so that each input-output connection of these MUXs can be exercised. We found that four test phases are enough to exercise all possible configurations in the CLB. These configurations are given as follows. Phase 1: The LUTs are configured as the exclusive-OR (XOR) of the nine inputs; F’ is connected to both flip-flops, G’ is connected to Y and H’ is connected to X. Phase 2: The LUTs are configured as the exclusive-NOR (XNOR) of the nine inputs; G’ is connected to both flip-flops, F’ is connected to X, and H’ is connected to Y. Phase 3: The LUTs implement XOR, and DIN is connected to both flip-flops. Phase 4: The LUTs implement XNOR, and H’ is connected to both flip-flops.
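The four phases above are essentially configuration data, and can be captured as a small table; a sketch, where the dictionary keys and field names are our own labels (the X/Y routing in phases 3 and 4 is not specified above, so it is left as None):

```python
# Test-phase configurations for the XC4000 CLB as described in the text.
# "lut" is the function programmed into the LUTs; "ff_source" is the
# signal routed to both flip-flops; "x"/"y" are the combinational outputs.
TEST_PHASES = {
    1: {"lut": "XOR",  "ff_source": "F'",  "x": "H'", "y": "G'"},
    2: {"lut": "XNOR", "ff_source": "G'",  "x": "F'", "y": "H'"},
    3: {"lut": "XOR",  "ff_source": "DIN", "x": None, "y": None},
    4: {"lut": "XNOR", "ff_source": "H'",  "x": None, "y": None},
}

# Both LUT functions appear, so every SRAM bit is driven to 0 and to 1,
# and each flip-flop data source (F', G', DIN, H') is exercised once.
assert {p["lut"] for p in TEST_PHASES.values()} == {"XOR", "XNOR"}
assert [p["ff_source"] for p in TEST_PHASES.values()] == ["F'", "G'", "DIN", "H'"]
```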

Figure 6. Comparator based ORA

The combinational part is tested in the first two phases. Flip-flops are tested in all four phases. In order to fully test the upper MUXs, in phase 1 the four connections are (C1, C2, C3, C4) => (H1, DIN, S/R, EC) (i.e., H1 is connected to C1, DIN is connected to C2, etc.). In phase 2, the connection is (C2, C3, C4, C1) => (H1, DIN, S/R, EC), and in phase 3 we have (C3, C4, C1, C2) => (H1, DIN, S/R, EC). Finally, in phase 4 the connection is (C4, C1, C2, C3) => (H1, DIN, S/R, EC). As a result, all connections are exercised in four phases. The testing pattern is generated using an 8-bit LFSR whose polynomial is x^8 + x^6 + x^5 + x + 1. The output response analyzer we used here is a comparator-based ORA. There are B CLBs in a node, and each CLB has four output lines. Since all C-mode CLBs are configured in the same way, they should give the same output response if all CLBs are fault-free [8]. Thus the same output signals in all CLBs are fed to one ORA for comparison, as shown in figure 6 (e.g., the YQ outputs of all C-mode CLBs are sent to the same ORA). We need four ORAs, as each CLB has four output lines. In each ORA, all B inputs are fed to an XOR gate. If all CLBs under test are fault-free, the output of the XOR gate should always be zero. Unless the number of faulty CLBs is even and those faults are identical, the XOR gate will generate a 1 output at least once, which sets the FF to 1 and records the detected fault.
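The comparator-based ORA described above reduces to an XOR across the same output line of all B CLBs, feeding a sticky flip-flop; a behavioural sketch (B and the response streams are hypothetical):

```python
from functools import reduce
from operator import xor

def ora_compare(responses_per_cycle):
    """Comparator-based ORA: each cycle, XOR one output line of all B
    C-mode CLBs; a sticky flip-flop latches 1 the first time the XOR
    gate outputs 1, i.e. when the CLBs disagree."""
    fault_ff = 0
    for bits in responses_per_cycle:   # one list of B output bits per cycle
        fault_ff |= reduce(xor, bits)  # XOR gate drives the set input
    return bool(fault_ff)

# Four fault-free CLBs agree in every cycle: no fault latched.
print(ora_compare([[0, 0, 0, 0], [1, 1, 1, 1]]))  # False
# One CLB diverges in the second cycle: fault latched.
print(ora_compare([[0, 0, 0, 0], [1, 0, 1, 1]]))  # True
```

Note the masking case stated in the text: an even number of identically faulty CLBs keeps the XOR at zero, e.g. `ora_compare([[1, 1, 0, 0]])` also returns False.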

V. SIMULATION RESULTS

The following results were obtained using the Xilinx tool. The Xilinx XC4000 model is used for fault modeling. Initially we designed the front-end model, and the function generators were programmed with simple functions. For testing purposes we generated the TPG using an 8-bit LFSR. The S/R control test pattern results are shown in figure 7 below, and the LFSR G and F test patterns are shown in figure 8. The outputs of the LFSR are fed to the CLB function generators. The LFSR used here is of the external type; it generates 8 bits, from which two 4-bit groups are taken for G and F to test the FPGA CLBs. The design was initially implemented in DSCH and later in Xilinx for a better view of the results. Faults were injected into the CLBs for testing purposes; the faults injected here are stuck-at faults and open faults, and using this test pattern we have tested only these two fault types.

Figure 7. S/R control signals for XC4000 FPGA CLBs

Figure 8. G and F input test pattern signals of XC4000 FPGA generated by 8-bit LFSR


VI. CONCLUSION

In this paper we presented a methodology for the diagnosis of faulty CLBs in FPGAs. The reference CLB used is from the XC4000 series, and an 8-bit LFSR is used for test pattern generation. For analysis purposes, we injected common faults such as stuck-at and open faults. In this method all testing is done within the FPGA, which avoids the use of external hardware. The main advantage of this method is that the testing time for the CLBs depends mainly on the number of faults rather than on the chip size; hence it yields greater advantages when diagnosing larger chips. For testing delay faults and stuck-open faults, test patterns that are more random in nature provide better fault coverage, so in many cases such generators are being considered as alternatives to conventional LFSRs for test pattern generation and output response analysis in BIST. Also, the resolution of the fault diagnosis algorithm in interconnect testing depends greatly on the structure of the original application configuration, and the complexity of the CLB test configuration will increase in some worst cases. All these problems need to be studied in future research in this area.

REFERENCES
[1]. D. Gil, J. Gracia, J.C. Baraza, P.J. Gil, (2003) “Study, Comparison and Application of different VHDL-Based Fault Injection Techniques for the Experimental Validation of a Fault-Tolerant System”, Microelectronics Journal, vol. 34, no. 1, pp. 41-51.
[2]. Guihai Yan, Yinhe Han, and Xiaowei Li, (2011) “SVFD: A Versatile Online Fault Detection Scheme via Checking of Stability Violation”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 19, no. 9, pp. 1627-1640.
[3]. Shikha Kakar, Balwinder Singh and Arun Khosla, (2009) “Implementation of BIST Capability using LFSR Techniques in UART”, International Journal of Recent Trends in Engineering, vol. 1, no. 3, pp. 301-304.
[4]. Mehdi B. Tahoori, (2011) “High Resolution Application Specific Fault Diagnosis of FPGA”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 19, no. 10, pp. 1775-1786.
[5]. I. Pomeranz and S. Reddy, (2004) “Improving the stuck-at fault coverage of functional test sequences by using limited-scan operations”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 12, no. 7, pp. 780-788.
[6]. P. Bernardi, (2006) “A pattern ordering algorithm for reducing the size of fault dictionaries”, in Proc. IEEE VLSI Test Symp., pp. 45-51.
[7]. N. Hernandez and V. Champac, (2008) “Testing skew and logic faults in SOC interconnects”, in Proc. IEEE Comput. Soc. Annu. Symp. VLSI, pp. 151-156.
[8]. A. Namazi, M. Nourani, and M. Saquib, (2007) “A robust interconnect mechanism for nanometer VLSI”, presented at the Int. Test Synthesis Workshop (ITSW), San Antonio, TX.

Authors

Fazal Noorbasha was born on 29th April 1982. He received his B.Sc. degree in Electronics Sciences from BCAS College, Bapatla, Guntur, A.P., affiliated to Acharya Nagarjuna University, Guntur, Andhra Pradesh, India, in 2003; his M.Sc. degree in Electronics Sciences from Dr. HariSingh Gour University, Sagar, Madhya Pradesh, India, in 2006; his M.Tech. degree in VLSI Technology from North Maharashtra University, Jalgaon, Maharashtra, India, in 2008; and his Ph.D. degree in VLSI from the Department of Physics and Electronics, Dr. HariSingh Gour Central University, Sagar, Madhya Pradesh, India, in 2011. He is presently working as an Assistant Professor in the Department of Electronics and Communication Engineering, KL University, Guntur, Andhra Pradesh, India, where he has been engaged in teaching, research and development of low-power, high-speed CMOS VLSI SoC, memory processor LSIs, fault testing in VLSI, embedded systems and nanotechnology. He is a Scientific and Technical Committee & Editorial Review Board member in Engineering and Applied Sciences of the World Academy of Science, Engineering and Technology (WASET), an Advisory Board member of the International Journal of Advances in Engineering & Technology (IJAET), a member of the International Association of Engineers (IAENG) and a senior member of the International Association of Computer Science and Information Technology (IACSIT). He has published over 20 scientific and technical papers in various reputed international and national journals and conferences.

Harikishore Kakarla was born in Vijayawada, Krishna (Dist.), AP, India. He received his B.Tech. in Electronics & Communication Engineering from JNTU, Hyderabad, AP, India, and his M.Tech. from SKD University, Anantapur, AP, India. He is pursuing a Ph.D in the area of VLSI at KL University, Vijayawada, AP, India. He has published 03 papers in international journals and 01 at national conference level. He is presently working as an Assistant Professor in the Department of Electronics and Communication Engineering, KL University, Guntur, Andhra Pradesh, India, where he has been engaged in teaching, research and development of low-power, high-speed CMOS VLSI SoC, memory processor LSIs, fault testing in VLSI, embedded systems and nanotechnology.


FUZZY BASED ROTOR GROUND FAULT LOCATION METHOD FOR SYNCHRONOUS MACHINES
Mohanraj. M, Rani Thottungal and Manobalan.M
Department of Electrical and Electronics Engineering, Kumaraguru College of Technology, Coimbatore, India

ABSTRACT
This paper presents a fuzzy rule based on-line rotor ground fault location method for synchronous machines with static excitation systems which, combined with rotor ground fault protection, can detect and locate faults in the rotor. The synchronous machine field winding is fed by rectifiers through an excitation transformer. The main contribution of this technique is to locate the position of the ground fault in the rotor winding online, reducing the repair time. The system is based on the analysis of the ac and dc components of the excitation voltage and of the voltage measured across a grounding resistance located at the neutral terminal of the excitation transformer. This technique has been validated through computer simulations.

KEYWORDS: AC generator excitation, fault location, power generation protection, synchronous generator excitation.

I. INTRODUCTION

The use of protection devices in power systems is absolutely necessary in order to safeguard them against short circuits, overloads and, in general, abnormal operations, or faults. The protection system in generating units is especially important since it must reliably guarantee the power supply as per IEEE Standard C37.102 [1]. Some of the most common malfunctions in synchronous generators, such as vibrations and unbalanced stator voltages, are caused by ground faults in the rotor. Generally, the initial ground fault does not cause any damage to the machine, because this circuit is usually ungrounded. However, the probability of a second fault increases after the first one since it establishes a reference for voltages induced in the field, thereby increasing the stress to ground at other points on the field winding. If this second fault occurs, the field winding will be partially short circuited, producing unbalanced fluxes in the machine, with the consequent vibrations and unbalance in the stator voltage.

Fig. 1. Device for rotor fault detection.

247

Vol. 3, Issue 1, pp. 247-254

Most rotor ground fault detection devices for synchronous generators are based on detecting abnormal values of certain electrical variables, such as the stator no-load voltage [2] or the air gap flux [3], so they can only detect double faults. To achieve early detection of the initial rotor-to-ground failure before severe damage occurs in the generator, the authors presented in [4] a detection technique for synchronous generators with static excitation systems, where the excitation field windings are fed by an excitation transformer through a rectifier. Fig. 1 shows a scheme of the proposed technique. This technique exhibited two important advantages when compared to current fault detection systems: 1) it could discriminate between a ground fault on the ac or the dc side of the excitation circuit; and 2) it did not need any additional injection source to detect the fault, because the proposed detection technique is supplied by the power network itself. This technique required a wye secondary in the excitation transformer for the rectifier power supply. In this situation, the proposed technique requires a high-value grounding resistance. Otherwise, if the system is fed by a transformer with a delta secondary winding, an artificial neutral will be required (i.e., an earthing transformer or zig-zag impedance). In any case, once a fault is detected, locating it can be a long and laborious process, especially with multipole synchronous generators. When the fault is on the rotor winding, the connections between poles must be opened in order to locate the ground fault. Except for some cases [6], in order to determine whether the fault is on the rotor winding or in the external buses and dc source, the machine has to be removed from service.
This paper presents a fuzzy rule based rotor ground fault location method for synchronous machines, applicable to the fault detection system presented in [4]. The fuzzy rules are based on the minimum and maximum variation of the values of Vdc, Vac, Rf, Vfdc and Vfac due to the effect of a fault at different positions of the winding; the variation of these values is studied in [5]. This new system eliminates the current need to disconnect all poles to locate the fault. Moreover, having information available about the position of the ground fault in the excitation winding could save time in the generator repair process. This fact is of particular interest in the case of hydro generators, because of the feasibility of removing one pole without requiring a complete rotor extraction. We aim to make a contribution to the field of fault location in synchronous machines, where there are already developments in ground fault location for the stator winding [7], in fault location between the core laminations [8], and even in insulation fault location using partial discharges [9]. The subject of rotor fault location in electrical machines has been extensively studied in the case of induction machines, where there are very significant contributions in the detection of broken bars, running the machine in transient state [10], [11] and even in magnetic saturation conditions [12], and in its combination with the effects of eccentricity [13]. However, with a few exceptions [6], there are not many contributions to rotor fault location methods for synchronous machines. This is the field that will be covered in this paper. Section II presents the principles of the proposed method. Then, Section III analyses the results of the simulations obtained by applying the proposed method. Finally, Section IV concludes with the main contributions of the proposed technique.

II.

PRINCIPLES OF THE ROTOR GROUND FAULT LOCATION METHOD

This procedure for rotor ground fault location was developed through the analysis of numerous laboratory tests, which followed the development of the rotor ground fault protection technique presented by the authors in [4]. That protection technique was based on the analysis of the ac component of the voltage at the grounding resistance; however, a dc component also appears across the grounding resistance in case of an earth fault in the field winding. The procedure proposed in this paper uses both the ac and dc components to locate the rotor ground fault. The neutral of the excitation transformer is grounded, so the dc voltage supplied by the rectifier has a ground reference. In normal operation, the voltage between the midpoint of the field winding and ground is zero, the voltage between the positive brush and ground is half the voltage supplied by the rectifier (Vfdc/2), and the voltage at the negative brush has the same amplitude with negative polarity (−Vfdc/2). The distribution of the voltage to ground is linear along the

248

Vol. 3, Issue 1, pp. 247-254

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963
field winding. For a general position X (%) in the field winding, the voltage between that point and ground, VT, is given by (1). This voltage relationship is illustrated in Fig. 2.
VT = Vfdc (X − 50)/100    (1)
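A quick numerical check of (1), written as a Python sketch (the 20 V dc field voltage is the value used later in the simulation section):

```python
def voltage_to_ground(x_percent, v_fdc):
    """Voltage between winding position X (%) and ground, per Eq. (1):
    VT = Vfdc * (X - 50) / 100."""
    return v_fdc * (x_percent - 50.0) / 100.0

# With a 20 V dc field voltage:
#   X = 0 %   -> -10 V (negative brush)
#   X = 50 %  ->   0 V (midpoint)
#   X = 100 % -> +10 V (positive brush)
```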

Fig. 2. Excitation dc equivalent circuit.

Fig. 3. Equivalent circuit for rotor ground fault dc current calculation for any rotor position X (%).

The dc current that flows from the field winding to earth in case of an earth fault is calculated using Thevenin’s theorem. The equivalent circuit between any point of the field winding and earth then consists of a voltage source and a resistance. The voltage source corresponds to the open-circuit voltage, according to (1). The resistance of this equivalent circuit can be approximated by the grounding resistance Rg, as this resistance is much higher (around 5 kΩ, to limit the ground fault current) than the resistances of the field winding, the transformer, or the rectifier. Finally, the dc equivalent circuit for any rotor position X and fault resistance is presented in Fig. 3. Hence, the dc current depends on the position of the rotor earth fault and on the fault resistance. The principle behind this method is the relationship between the dc voltage component in the grounding resistance, Vdc, and the position of the fault along the field winding. It is important to realize that the voltage measurements available with the machine online are the ac and dc components of the field winding voltage and, in the event of a fault, of the voltage in the grounding resistance.

A. Relationship between Vdc and Fault Position

The dc component of the voltage in the grounding resistance is related to the rotor fault position. The amplitude of Vdc is maximum, with positive polarity, when the fault occurs at the negative terminal, which is considered the start of the winding (0%). If, on the other hand, the fault occurs at the positive terminal, at the end of the winding (100%), Vdc has the same amplitude but negative polarity. In contrast, Vdc is negligible for faults at the midpoint of the winding (50%).

Fig. 4. V dc-Fault position relationship.

Fig. 5. V dc-Fault position relationship for different Rf.

The value of Vdc for different fault positions and a particular fault resistance Rf is shown in Fig. 4. The maximum value is referred to as Vdc0; it depends on the fault resistance Rf and on the dc component of the excitation field voltage Vfdc.

B. Estimate of the Fault Resistance Rf

To estimate the fault resistance Rf, the ac components are used. In a similar way, the ac current that flows from the field winding to earth depends on the ac component of the excitation

voltage Vfac and the fault resistance Rf. The equivalent ac circuit is composed of a voltage source and an impedance: the voltage source corresponds to the ac component of the excitation voltage Vfac, and the impedance corresponds to the grounding resistance Rg in series with the fault resistance Rf. In this case, the ground fault position does not affect the value of the current.

Fig. 6. Equivalent dc circuit for V dc0 calculation.

Fig. 7. Equivalent ac circuit for Rf calculation.

After analyzing numerous tests, it was determined that the fault resistance Rf can be reasonably estimated using this ac equivalent circuit, shown in Fig. 7. The dc and ac equivalent circuits of Figs. 6 and 7 were checked against all test measurements, where the fault resistance Rf and the rotor fault position X were known in advance. The fault resistance Rf can therefore be estimated according to the following expression:

Rf = Rg ((Vfac / Vac) − 1)

C. Fuzzy Rules

The fuzzy rules are based on the minimum and maximum variations of Vdc, Vac, Rf, Vfdc and Vfac caused by faults at different positions along the winding:

If (fdc is PS) and (dc is PS) and (rf is NVB) then (output1 is 60) (1)
If (fdc is PS) and (dc is PS) and (rf is NB) then (output1 is 80) (1)
If (fdc is PS) and (dc is PS) and (rf is NS) then (output1 is 100) (1)
If (fdc is PS) and (dc is PS) and (rf is mf4) then (output1 is 0) (1)
If (fdc is PS) and (dc is PS) and (rf is PS) then (output1 is 20) (1)
If (fdc is PS) and (dc is PS) and (rf is PB) then (output1 is 40) (1)
If (fdc is PS) and (dc is PS) and (rf is PVB) then (output1 is 50) (1)
If (fdc is NB) and (dc is NB) and (rf is mf4) then (output1 is 0) (1)
If (fdc is NS) and (dc is NS) and (rf is mf4) then (output1 is 0) (1)
If (fdc is NS) and (dc is NS) and (rf is PS) then (output1 is 20) (1)
If (fdc is Z) and (dc is Z) and (rf is PVB) then (output1 is 50) (1)
If (fdc is PS) and (dc is PS) and (rf is PB) then (output1 is 60) (1)
If (fdc is PB) and (dc is PB) and (rf is PB) then (output1 is 80) (1)
If (fdc is PB) and (dc is PB) and (rf is NS) then (output1 is 100) (1)
If (fdc is Z) and (dc is NB) and (rf is mf4) then (output1 is 0) (1)
If (fdc is Z) and (dc is NS) and (rf is PS) then (output1 is 20) (1)
If (fdc is Z) and (dc is Z) and (rf is PB) then (output1 is 40) (1)
If (fdc is Z) and (dc is PS) and (rf is NVB) then (output1 is 60) (1)
If (fdc is Z) and (dc is PB) and (rf is NB) then (output1 is 80) (1)
If (fdc is Z) and (dc is NB) and (rf is PVB) then (output1 is 50) (1)
If (fdc is Z) and (dc is NS) and (rf is NS) then (output1 is 100) (1)
If (fdc is PS) and (dc is NB) and (rf is mf4) then (output1 is 0) (1)
If (fdc is PS) and (dc is NS) and (rf is PS) then (output1 is 20) (1)
If (fdc is PS) and (dc is Z) and (rf is PB) then (output1 is 40) (1)
If (fdc is PS) and (dc is PS) and (rf is NVB) then (output1 is 60) (1)
If (fdc is PS) and (dc is PB) and (rf is NB) then (output1 is 80) (1)

If (fdc is PS) and (dc is NB) and (rf is PVB) then (output1 is 50) (1)
If (fdc is PS) and (dc is Z) and (rf is NS) then (output1 is 100) (1)
If (fdc is PB) and (dc is NB) and (rf is mf4) then (output1 is 0) (1)
If (fdc is PB) and (dc is NS) and (rf is PS) then (output1 is 20) (1)
If (fdc is PB) and (dc is Z) and (rf is PB) then (output1 is 40) (1)
If (fdc is PB) and (dc is PB) and (rf is NVB) then (output1 is 60) (1)
If (fdc is PB) and (dc is PS) and (rf is NB) then (output1 is 80) (1)
If (fdc is PB) and (dc is PS) and (rf is PVB) then (output1 is 50) (1)
If (fdc is PB) and (dc is PS) and (rf is NS) then (output1 is 100) (1)

D. Rotor Ground Fault Location Block Diagram

To summarize the complete rotor fault location procedure, the rotor location block diagram is included in Fig. 8. The algorithm steps for estimating the rotor fault position are as follows. 1) The measurement of the excitation voltage Vf and the grounding resistance voltage V yields their ac and dc components Vfac, Vfdc, Vac, and Vdc, respectively. 2) The ground fault resistance Rf is estimated from the ac components using the expression given above. 3) Based on the values of Rf, Vfdc, and Vdc, the fault position is determined using the fuzzy logic controller.
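Steps 2) and 3) can be sketched in Python. The function names are hypothetical, and the linear inversion in `locate_fault` is only an illustrative stand-in for the fuzzy logic controller actually used in the paper: it encodes the linear Vdc-position relationship of Section II-A (Vdc = +Vdc0 at 0 %, 0 at 50 %, −Vdc0 at 100 %):

```python
def estimate_fault_resistance(v_fac, v_ac, r_g):
    """Step 2: fault resistance from the ac components,
    Rf = Rg * (Vfac / Vac - 1)."""
    return r_g * (v_fac / v_ac - 1.0)

def locate_fault(v_dc, v_dc0):
    """Step 3, simplified: invert the linear Vdc-position relationship
    (Vdc = +Vdc0 at 0 %, 0 at 50 %, -Vdc0 at 100 % of the winding).
    The paper uses a fuzzy controller here instead of this inversion."""
    return 50.0 * (1.0 - v_dc / v_dc0)

# Example: Vfac = 12 V and Vac = 10 V with a 5 kOhm grounding resistance
# give Rf = 5000 * (12/10 - 1) = 1000 Ohm.
```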

Fig. 8. Rotor fault location device layout.

The proposed rotor ground fault location method was simulated using the MATLAB/Simulink Power Systems software. The simplified schematics of the MATLAB simulation are shown in Fig. 9, representing a 5-kVA synchronous machine and its excitation system, similar to the machine used in the experimental tests. Detailed data on the simulated devices can be found in Tables I and II. In the simulation, the excitation transformer is modelled as a three-phase voltage source; the rotor winding of the synchronous machine is modelled with ten sets of resistances, inductances, and capacitors; and the rectifier is modelled with a three-phase thyristor-controlled rectifier bridge. In the example, the excitation transformer neutral grounding resistance is 5 kΩ.


Fig. 9. Simplified schematics of the excitation system for rotor ground fault location simulation

TABLE I
CHARACTERISTICS OF SYNCHRONOUS GENERATOR USED IN THE EXPERIMENT

TABLE II
SYNCHRONOUS GENERATOR STATIC EXCITATION SYSTEM TECHNICAL DATA

III.

SIMULATION RESULTS

Fig. 10. Voltage waveform measured at the grounding resistance for a fault at 0% winding position with Rf = 0.

Fig. 11. Voltage waveform measured at the grounding resistance for a fault at 50% winding position with Rf =0.


Fig. 12. Voltage waveform measured at the grounding resistance for a fault at 100% winding position with Rf = 0.

Fig. 13. Voltage waveform measured at the grounding resistance for a fault at 0% winding position with Rf = 5k

Fig. 14. Voltage waveform measured at the grounding resistance for a fault at 50% winding position with Rf = 5k .

Fig. 15. Voltage waveform measured at the grounding resistance for a fault at 100% winding position with Rf = 5k .

Figs. 10–12 show the voltage in the grounding resistance for a fault at the negative brush (0%), the midpoint (50%), and the positive brush (100%), respectively. A zero fault resistance and a field voltage of 20 V dc (corresponding to a firing angle of 48°) are used in the simulations. Simulations under the same conditions, but with a fault resistance of 5 kΩ, are included in Figs. 13–15. As expected, the values are lower than in the previous case without fault resistance.

IV.

CONCLUSION

The method is able to locate the fault with the machine online, and it offers a significant improvement over the detection technique proposed by the authors in [4]. The fuzzy controller locates the fault position in the rotor of a synchronous machine quickly, because it is pre-programmed with laboratory values; in practical applications this reduces the calculation time, and the controller directly determines the fault position in the rotor winding. The main advantage of this technique is that it contributes valuable information on the fault position before the generator is tripped, and it can easily be incorporated into existing rotor ground protection systems. Having information on the position of a ground fault in the excitation winding saves time in the generator repair process.

REFERENCES
[1] IEEE Guide for AC Generator Protection, IEEE Standard C37.102, 2006.
[2] M. Kiani, W-J. Lee, R. Kenarangui, and B. Fahimi, “Detection of rotor faults in synchronous generators,” in Proc. IEEE Int. Symp. Diagnosis Elect. Mach., Power Electron. Drives, 2007, pp. 266–271.
[3] R. L. Stoll and A. Hennache, “Method of detecting and modelling presence of shorted turns in DC field winding of cylindrical rotor synchronous machines using two airgap search coils,” IEEE Proc. Electric Power Appl., vol. 135, no. 6, pp. 281–294, Nov. 1988.

[4] C. A. P. Gaona, F. Blázquez, P. Frías, and M. Redondo, “A novel rotor ground fault detection technique for synchronous machines with static excitation,” IEEE Trans. Energy Convers., vol. 18, no. 4, pp. 965–973, Dec. 2010.
[5] C. A. Platero, F. Blázquez, P. Frías, and M. Pardo, “New on-line rotor ground fault location method for synchronous machines with static excitation,” IEEE Trans. Energy Convers., vol. 26, no. 2, pp. 572–580, Jun. 2011.
[6] I. Kerszenbaum and J. Lopetrone, “Novel Hall-effect turbo-generator’s rotor DC ground-fault localizer,” in Proc. IEEE Int. Electric Machines Drives Conf. Rec., 1997, pp. TC1/5.1–5.3.
[7] M. Zielichowski and M. Fulczyk, “Analysis of operating conditions of ground-fault protection schemes for generator stator winding,” IEEE Trans. Energy Convers., vol. 18, no. 1, pp. 57–62, Mar. 2008.
[8] G. B. Kliman, S. B. Lee, M. R. Shah, R. M. Lusted, and N. K. Nair, “A new method for synchronous generator core quality evaluation,” IEEE Trans. Energy Convers., vol. 19, no. 3, pp. 576–582, Sep. 2004.
[9] J. Borghetto, A. Cavallini, A. Contin, G. C. Montanari, M. de Nigris, G. Pasini, and R. Passaglia, “Partial discharge inference by an advanced system. Analysis of online measurements performed on hydrogenerator,” IEEE Trans. Energy Convers., vol. 19, no. 2, pp. 333–339, Jun. 2004.
[10] M. Riera-Guasp, M. F. Cabanas, J. A. Antonino-Daviu, M. Pineda-Sánchez, and C. H. R. García, “Influence of nonconsecutive bar breakages in motor current signature analysis for the diagnosis of rotor faults in induction motors,” IEEE Trans. Energy Convers., vol. 25, no. 1, pp. 80–89, Mar. 2010.
[11] H. Douglas, P. Pillay, and A. K. Ziarani, “Broken rotor bar detection in induction machines with transient operating speeds,” IEEE Trans. Energy Convers., vol. 20, no. 1, pp. 135–141, Mar. 2005.
[12] J. Sprooten and J. C. Maun, “Influence of saturation level on the effect of broken bars in induction motors using fundamental electromagnetic laws and finite element simulations,” IEEE Trans. Energy Convers., vol. 24, no. 3, pp. 557–564, Sep. 2009.
[13] Z. Liu, X. Yin, Z. Zhang, D. Chen, and W. Chen, “Online rotor mixed fault diagnosis way based on spectrum analysis of instantaneous power in squirrel cage induction motors,” IEEE Trans. Energy Convers., vol. 19, no. 3, pp. 485–490, Sep. 2004.

Biographies

M. Mohanraj, from Erode, finished his UG at Bharathiar University, obtained his PG in Power System Engineering at Annamalai University, and is currently working as Assistant Professor in the EEE department, Kumaraguru College of Technology, Coimbatore; he is a Life Member of ISTE. His research areas include wind energy conversion, solar energy, machines, and power quality issues.

Rani Thottungal, obtained her B.E and M.E degrees from Andhra University and Doctorate from Bharathiar University. She is currently working as Professor and Head in EEE department, Kumaraguru College of Technology, Coimbatore. Her research interest includes Power System, Power Inverter and Power Quality Issues.

M. Manobalan, from Neyveli, finished his UG in St.Joseph’s college of Engineering and pursuing his final year M.E. in Power Electronics and Drives in Kumaraguru College of Technology, Coimbatore.


TURNING PARAMETER OPTIMIZATION FOR SURFACE ROUGHNESS OF ASTM A242 TYPE-1 ALLOYS STEEL BY TAGUCHI METHOD
Jitendra Verma1, Pankaj Agrawal2, Lokesh Bajpai3
1 Department of Mechanical Engineering, Samrat Ashok Technological Institute, Vidisha (M.P.) 464001, India.
2,3 Professor, Department of Mechanical Engineering, Samrat Ashok Technological Institute, Vidisha (M.P.) 464001, India.

ABSTRACT
The purpose of this research paper is the analysis of optimum cutting conditions to obtain the lowest surface roughness in turning ASTM A242 Type-1 alloy steel by the Taguchi method. The experiment was designed using the Taguchi method and 9 experimental runs were conducted. The results were analyzed using the analysis of variance (ANOVA) method. The Taguchi analysis showed that cutting speed plays the most significant role in producing low surface roughness (about 57.47%), followed by feed rate (about 23.46%); the depth of cut has a lesser role (about 16.27%). The results obtained by this method will be useful to other researchers for similar studies and may motivate further research on tool vibrations, cutting forces, etc.

KEYWORDS: ASTM A242 Type-1 alloy steel; Machining; Dry turning; Signal-to- noise ratio; Taguchi method

I.

INTRODUCTION

Increasing the productivity and the quality of machined parts are the main challenges of the metal-based industry; there has been increased interest in monitoring all aspects of the machining process. Surface finish is an important parameter in manufacturing engineering. It is a characteristic that can influence the performance of mechanical parts and the production costs. The ratio between cost and quality of products at each production stage has to be monitored, and immediate corrective actions have to be taken in case of deviation from the desired trend. Surface roughness measurement is an important task in many engineering applications, and many life attributes are also determined by how well the surface finish is maintained. Machining operations have been the core of the manufacturing industry since the industrial revolution [1]. Existing optimization research for Computer Numerical Controlled (CNC) turning has either been simulated within particular manufacturing circumstances or achieved through numerous equipment operations; such computing simulations have uncertain applicability to real-world industry, and therefore a general optimization scheme that does not require repeated equipment operations needs to be developed. Surface roughness is commonly considered a major manufacturing goal for turning operations in much of the existing research. The machining process on a CNC lathe is programmed [13]. Many surface roughness prediction systems have been designed using a variety of sensors, including dynamometers for force and torque. Taguchi analysis and Analysis of Variance (ANOVA) can conveniently optimize the cutting parameters with a few well-designed experimental runs [15]. Taguchi parameter design can optimize the performance characteristics through the settings of design parameters.
This study describes how to select the control factor levels (cutting speed, feed rate, depth of cut) that minimize the effect of noise factors on the response (surface roughness). An experimental work is conducted to analyse the influence of

cutting parameters (control factors) on surface roughness (signal factor), and then to select the optimal cutting parameters which lead to the optimal response [18]. Sundaram and Lambert [20, 21] considered six variables, i.e., speed, feed, depth of cut, time of cut, nose radius, and type of tool, to monitor surface roughness. To improve the efficiency of these turning processes, a complete process understanding and model are necessary. To this end, a great deal of research has been performed to quantify the effect of various hard-turning process parameters on surface quality. These factors can be divided into a) setup variables, b) tool variables, and c) workpiece variables. To gain a greater understanding of the turning process, it is necessary to understand not only the impact of each of these variables, but also the interactions between them. It is impossible to consider all the variables that impact surface roughness in turning operations, and it is costly and time-consuming to discern the effect of every variable on the output. To simplify the problem, one needs to select the specific variables that correspond to practical applications.

II.

TAGUCHI METHOD

The Taguchi method is a powerful tool for the design of high-quality systems. It provides a simple, efficient, and systematic approach to optimize designs for performance, quality, and cost [22]. It is an efficient method for designing processes that operate consistently and optimally over a variety of conditions. Determining the best design requires a strategically designed experiment [23]. The Taguchi approach to design of experiments is easy to adopt and apply for users with limited knowledge of statistics, and has hence gained wide popularity in the engineering and scientific community [17-18]. The desired cutting parameters are determined based on experience or from a handbook. The steps of the Taguchi method are as follows:
(1) Identify the main function, side effects, and failure mode.
(2) Identify the noise factors, testing conditions, and quality characteristics.
(3) Identify the main function to be optimized.
(4) Identify the control factors and their levels.
(5) Select the orthogonal array and design the matrix experiment.
(6) Conduct the matrix experiment.
(7) Analyse the data and predict the optimum levels and performance.
(8) Perform the verification experiment and plan future action. [4]

III.

EXPERIMENTAL SET UP AND CUTTING CONDITIONS

MATERIALS AND METHODS
Experimental procedures and conditions: In this study, a 250 mm long, 50 mm diameter bar of ASTM A242 Type-1 alloy steel was used as the work material for experimentation on a lathe turning machine. The chemical composition of the selected workpiece is shown in Table 1.

Table 1 Composition of ASTM A242 Type-1 alloy steel
C: 0.15% | Mn: 1.0% | Si: 0.4% | P: 0.15% | Cr: 0.5% | Cu: 0.2% | S: 0.05% | P: 0.045% | Nb+V+Ti: 0.15%

A universal turning machine tool was used in the experiments. Cutting speed, feed rate, and depth of cut were selected as the machining parameters to analyse their effect on surface roughness. A total of 9 experiments based on Taguchi’s orthogonal array were carried out with different combinations of the levels of the input parameters. The settings of cutting speed were 100, 125, and 150 rpm; those of feed rate were 0.05, 0.1, and 0.15 mm/rev; the depth of cut was set at 0.5, 1.0, and 1.5 mm. The three levels of each cutting parameter are shown in Table 2, and the alloying elements present in the workpiece are shown in Table 1.
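The nine-run design can be written out programmatically. A minimal Python sketch, with the level-index triples read off the experimental layout of Table 3, that also checks the balance property of an orthogonal array (every level of every factor occurs equally often):

```python
from collections import Counter

# Factor levels as described above
speeds = [100, 125, 150]      # cutting speed
feeds  = [0.05, 0.10, 0.15]   # feed rate, mm/rev
depths = [0.5, 1.0, 1.5]      # depth of cut, mm

# Level-index triples of the nine runs (read off Table 3)
l9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 3), (2, 2, 1), (2, 3, 2),
      (3, 1, 2), (3, 2, 3), (3, 3, 1)]

runs = [(speeds[a - 1], feeds[b - 1], depths[c - 1]) for a, b, c in l9]

# Orthogonal-array balance: every level of every factor occurs 3 times
for col in range(3):
    assert Counter(t[col] for t in l9) == {1: 3, 2: 3, 3: 3}
```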

Table 2 Cutting parameters
Symbol | Parameter | Level 1 | Level 2 | Level 3 | Units
A | Cutting speed | 100 | 125 | 150 | m/min
B | Feed rate | 0.05 | 0.10 | 0.15 | mm/rev
C | Depth of cut | 0.5 | 1.0 | 1.5 | mm

IV.

RESULTS AND DISCUSSION

4.1 Analysis of S/N Ratio Based on the Taguchi Method
To select an appropriate orthogonal array for the experiments, the total degrees of freedom need to be computed. The degrees of freedom are defined as the number of comparisons between process parameters that need to be made to determine which level is better and specifically how much better it is. For example, a three-level process parameter counts for two degrees of freedom. The degrees of freedom associated with an interaction between two process parameters are given by the product of the degrees of freedom of the two process parameters [8]. The mean S/N ratio for each level of the other cutting parameters can be computed in a similar manner. The mean S/N ratio for each level of the cutting parameters is summarized in the S/N response table for surface roughness. In addition, the total mean S/N ratio for the nine experiments for surface roughness is listed in Table 3. The greater the S/N ratio, the smaller the variance of the output characteristic around the desired value.
Table 3 Experimental Results
Test No. | Cutting Speed | Feed Rate | DOC | Mean Surface Roughness Ra (µm) | Signal-to-Noise Ratio (S/N)
1 | 100 | 0.05 | 0.5 | 5.62 | −14.99
2 | 100 | 0.10 | 1.0 | 10.04 | −20.03
3 | 100 | 0.15 | 1.5 | 9.88 | −19.89
4 | 125 | 0.05 | 1.5 | 10.25 | −20.21
5 | 125 | 0.10 | 0.5 | 9.30 | −19.36
6 | 125 | 0.15 | 1.0 | 11.60 | −21.29
7 | 150 | 0.05 | 1.0 | 5.04 | −14.18
8 | 150 | 0.10 | 1.5 | 6.93 | −17.18
9 | 150 | 0.15 | 0.5 | 6.27 | −15.99

According to the Taguchi method, the S/N ratio is the ratio of the “signal,” representing the desirable value, i.e., the mean of the output characteristic, to the “noise,” representing the undesirable value, i.e., the squared deviation of the output characteristic. It is denoted by η and its unit is dB. The S/N ratio is used to measure the quality characteristic and to identify the significant process parameters [9].
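For the smaller-the-better characteristic used here (surface roughness), the S/N ratio is η = −10 log10((1/n) Σ yᵢ²). A minimal Python sketch; with a single Ra reading per run, as in Table 3, it reduces to −20 log10(Ra):

```python
import math

def sn_smaller_is_better(readings):
    """Smaller-the-better S/N ratio in dB:
    eta = -10 * log10( (1/n) * sum(y_i^2) )."""
    n = len(readings)
    return -10.0 * math.log10(sum(y * y for y in readings) / n)

# Example: Ra = 5.62 um (test 1 in Table 3) gives about -14.99 dB.
```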
Table 4 Response table
Parameter | Level 1 | Level 2 | Level 3 | Max−Min | Rank
Cutting speed | −18.30 | −20.28 | −15.78 | 4.5 | 1
Feed rate | −16.46 | −18.85 | −19.05 | 2.59 | 2
Depth of cut | −16.78 | −18.5 | −19.09 | 2.31 | 3
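The response table can be reproduced from the nine S/N values of Table 3: for each factor, average the S/N ratios of the runs at each level, then rank the factors by the max−min delta. A Python sketch (level assignments read off Table 3):

```python
sn = [-14.99, -20.03, -19.89, -20.21, -19.36, -21.29, -14.18, -17.18, -15.99]

# Level (1..3) of each factor in the nine runs of Table 3
level_map = {
    "cutting speed": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "feed rate":     [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "depth of cut":  [1, 2, 3, 3, 1, 2, 2, 3, 1],
}

def response_row(levels):
    """Mean S/N per level plus the max-min delta used for ranking."""
    means = [sum(s for s, l in zip(sn, levels) if l == lev) / 3.0
             for lev in (1, 2, 3)]
    return means, max(means) - min(means)

deltas = {f: response_row(lv)[1] for f, lv in level_map.items()}
# The largest delta identifies the most influential factor
# (cutting speed, delta close to 4.5 dB, as in the response table).
```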

Fig. 1 CUTTING SPEED VS S/N RESPONSE

Fig. 2 FEED RATE VS S/N RESPONSE

Fig. 3 DEPTH OF CUT VS S/N RATIO

4.2 Analysis of Variance (ANOVA)
The main aim of ANOVA is to investigate the design parameters and to indicate which of them significantly affect the output. In the analysis, the sum of squares and the variance are calculated. The F-test value at the 95% confidence level is used to decide the significant factors affecting the process, and the percentage contribution is calculated [19]. The ANOVA results for percentage contribution are shown in Table 5.

Table 5 Result of ANOVA for surface roughness contribution
Symbol | Parameter | DOF | Sum of Squares | Mean Square | Percentage Contribution
A | Cutting Speed | 2 | 30.5208 | 15.2604 | 57.47%
B | Feed Rate | 2 | 12.4602 | 6.2301 | 23.46%
C | Depth of Cut | 2 | 8.6427 | 4.3213 | 16.27%
— | Error | 2 | 1.4823 | 0.74115 | 2.79%
— | Total | 8 | 53.106 | 26.553 | —
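The percentage contributions in Table 5 are each factor's sum of squares divided by the total. A Python sketch using the values from the table:

```python
sum_sq = {"cutting speed": 30.5208, "feed rate": 12.4602,
          "depth of cut": 8.6427, "error": 1.4823}

total = sum(sum_sq.values())                      # 53.106
contribution = {k: 100.0 * v / total for k, v in sum_sq.items()}
# cutting speed ~ 57.47 %, feed rate ~ 23.46 %,
# depth of cut ~ 16.27 %, error ~ 2.79 %
```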

Fig. 4 Percentage contribution by Pie Chart for surface roughness

V.

CONCLUSION

The following conclusions are drawn from the tests conducted on turning a 250 mm long, 50 mm diameter bar of ASTM A242 Type-1 alloy steel.
1. From the ANOVA in Table 5, cutting speed is the most significant factor contributing to surface roughness, with a contribution of 57.47%.
2. The second factor contributing to surface roughness is the feed rate, at 23.46%.
3. The third factor contributing to surface roughness is the depth of cut, at 16.27%.
4. The validation experiment confirms that the error between the equation and the actual value was less than 2.79%.
5. Based on the above results, the cutting speed level with the highest S/N ratio (Level 3, S/N = −15.78 dB) is recommended to obtain the lowest surface roughness.
6. The Taguchi method gives a systematic, simple, and efficient approach for finding the optimum operating conditions; this research shows how industrial engineers can use Taguchi parameter design to obtain the optimum condition at the lowest cost with a minimum number of experiments.

REFERENCES
[1] Elias N. Malamas, Euripides G. M. Petrakis, and Michalis Zervakis, “A survey on industrial vision systems, applications and tools,” Department of Electronic and Computer Engineering, Technical University of Crete, Chania, Crete, Greece.
[2] J. Z. Zhang, J. C. Chen, and E. D. Kirby, “Surface roughness optimization in an end-milling operation using the Taguchi design method,” Journal of Materials Processing Technology, 184, pp. 233-239, 2007.
[3] P. J. Ross, “Taguchi technique for quality engineering,” New York: McGraw-Hill, 1988.
[4] Onkar N. Pandey, “Total quality management.”
[5] N. Nalbant, H. Gokkaya, and G. Sur, “Application of Taguchi method in the optimization of cutting parameters for surface roughness in turning,” Materials and Design, received 21.07.2006, accepted 06.01.2006.
[6] I. A. Choudhury and M. A. El-Baradie, “Surface roughness prediction in the turning of high strength steel by factorial design of experiments,” Journal of Materials Processing Technology, 67 (1997) 55-67.
[7] B. Erginc, Z. Kampu, and B. Sustarsic, “The use of the Taguchi approach to determine the influence of injection-molding parameters on the properties of green parts,” Journal of Achievements in Materials and Manufacturing Engineering, 15, 2006.
[8] P. J. Ross, “Taguchi Techniques for Quality Engineering,” 2nd Ed., Tata McGraw Hill, 2005.
[9] Ugur Esme, “Application of Taguchi method for the optimization of resistance spot welding process,” The Arabian Journal for Science and Engineering, 34(28): 519-528, 2009.
[10] A. G. Thakur, T. E. Rao, M. S. Mukhedkar, and V. M. Nandedkar, “Application of Taguchi method for resistance spot welding of galvanized steel,” ARPN Journal of Engineering and Applied Sciences.
[11] S. Thamizhmanii, S. Saparudin, and S. Hasan, “Analyses of roughness, forces and wear in turning gray cast iron,” Journal of Achievements in Materials and Manufacturing Engineering, vol. 20, issues 1-2, 2007.
[12] Palanikumar, L. Karunamoorthy, and R. Karthikeyan, “Assessment of factors influencing surface roughness on the machining of glass-reinforced polymer composites,” Journal of Materials and Design, 27 (2006) 862-871.
[13] Xue Ping, C. Richard Liu, and Zhenqiang Yao, “Experimental study and evaluation methodology on hard surface integrity,” International Journal of Advanced Manufacturing Technology, DOI 10.1007/s00170-006-0576-6.
[14] T. Tamizharasan, T. Selvaraj, and A. Noorul Haq, “Analysis of tool wear and surface finish in hard turning,” International Journal of Advanced Manufacturing Technology (2005), DOI 10.1007/s00170-004-2411-1.
[15] W. H. Yang and Y. S. Tarng, “Design optimization of cutting parameters for turning operations based on the Taguchi method,” Journal of Materials Processing Technology, 84 (1998) 122-129.
[16] Ersan Aslan, Necip Camuscu, and Burak Bingoren, “Design optimization of cutting parameters when turning hardened AISI 4140 steel (63 HRC) with Al2O3 + TiCN mixed ceramic tool,” Materials and Design, received 21.07.2005, accepted 06.01.2006.
[17] D. C. Montgomery, Design and Analysis of Experiments, 4th edition, New York: Wiley, 1997.
[18] N. Nalbant, H. Gokkaya, and G. Sur, “Application of Taguchi method in the optimization of cutting parameters for surface roughness in turning,” Materials and Design.
[19] S. Thamizhmanii, S. Saparudin, and S. Hasan, “Analyses of surface roughness by turning process using Taguchi method,” Journal of Achievements in Materials and Manufacturing Engineering, vol. 20, issues 1-2, January-February 2007.
[20] R. M. Sundaram and B. K. Lambert, “Mathematical models to predict surface finish in fine turning of steel, Part I,” International Journal of Production Research, 19 (1981) 547-556.
[21] R. M. Sundaram and B. K. Lambert, “Mathematical models to predict surface finish in fine turning of steel, Part II,” International Journal of Production Research, 19 (1981) 557-564.
[22] W. H. Yang and Y. S. Tarng, “Design optimization of cutting parameters for turning operations based on the Taguchi method,” Journal of Materials Processing Technology, 84 (1998) 122-129.
[23] Ersan Aslan, Necip Camuscu, and Burak Bingoren, “Design optimization of cutting parameters when turning hardened AISI 4140 steel (63 HRC) with Al2O3 + TiCN mixed ceramic tool,” Materials and Design, received 21.07.2005, accepted 06.01.2006.

ABOUT THE AUTHORS
Jitendra Verma was born on 15th September 1986. He received his B.Tech. in Manufacturing Technology from JSS Academy of Technical Education, Noida (U.P.) in 2007. He is currently pursuing his M.Tech. (C.I.M.) at Samrat Ashok Technological Institute, Vidisha (M.P.). His research interests are surface roughness, welding, turning and machining.

Pankaj Agrawal was born on 18th August 1967. He is currently working as a Professor in the Mechanical Engineering Department of Samrat Ashok Technological Institute, Vidisha (M.P.). He has more than 18 years of teaching experience, one year of industry experience and 10 years of research experience. He did his B.E. in Mechanical Engineering from Samrat Ashok Technological Institute, Vidisha (M.P.) in 1990, followed by an M.Tech. and then a Ph.D. from Barkatullah University, Bhopal in 2003. He has published many papers in various journals and conferences of international repute. His main interests are hybrid manufacturing, stereolithography, supply chain management and flexible manufacturing systems.

Lokesh Bajpai was born on 19th December 1960. He is currently working as a Professor in the Mechanical Engineering Department of Samrat Ashok Technological Institute, Vidisha (M.P.). He did his B.E. from GEC Jabalpur (M.P.) in 1984, followed by an M.E. and Ph.D.; he is an F.I.E. (India), MISME and M.I.S.C.A. He has more than 24 years of teaching experience and 14 years of research experience. He has published many papers in various journals and conferences of international repute. His main interests are computer integrated manufacturing, flexible manufacturing and computer aided process planning.

Vol. 3, Issue 1, pp. 255-261

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

PREDICTION OF HEAT-RELEASE PATTERNS FOR MODELING DIESEL ENGINE PERFORMANCE AND EMISSIONS
B. Venkateswara Rao1 and G. Amba Prasad Rao2
1Graduate Student, 2Associate Professor
Department of Mechanical Engineering, National Institute of Technology, Warangal, India

ABSTRACT
In the present paper, phenomenological modelling has been adopted for diesel engine processes, considering heat losses and variable specific heats, and using a double-Wiebe function for the heat release. High-speed diesel fuel C10.8H18.7 is considered for the calculations. Fuel injection timing, engine speed, compression ratio, and inlet charge pressure and temperature are observed to be pertinent parameters affecting diesel engine performance. The effect of exhaust gas recirculation on the formation and emission of oxides of nitrogen and on soot density is also studied. Numerical experiments are performed with a computer code written in C++, and the heat release (both premixed and diffusion phases), in-cylinder pressure and temperature histories, and emissions are predicted. It was found that early injection timing leads to higher levels of pressure and temperature in the cylinder. Fuel injection timing and the fraction of exhaust gas recirculation are observed to be critical factors affecting engine performance as well as emissions.

KEYWORDS: Diesel engine, double-Wiebe function, EGR, Performance, Emissions

I. INTRODUCTION

Direct injection diesel engines exhibit better performance as far as fuel economy is concerned compared to gasoline engines. Of late, due to stringent emission norms, researchers and leading manufacturers are aiming at the development of clean diesel engines. In this regard, computer simulations have proved to be prominent tools for arriving at optimum designs and making diesel technology more competitive. Though various models, such as thermodynamics- and fluid-dynamics-based models, are available, phenomenological models are attractive in light of the lower computational complexity involved. These models can be made more realistic by imposing all the practical conditions the diesel engine experiences, so as to predict performance close to actual cycle simulations. A typical direct injection diesel engine combustion process comprises four phases, viz. ignition delay, pre-mixed, diffusion and late burning. Abu-Nada et al. [1-3] carried out engine simulations taking into account the effect of heat transfer, friction, and temperature-dependent specific heats on overall engine performance. The model of Miyamoto et al. [4] was originally developed for spark ignition (SI) engines; they claimed that it could be extended and modified to simulate compression ignition (CI) engines as well. This results in a significant shift in the rate of heat release model from the simple Wiebe function commonly used for SI engines; a double-peak heat release model is more representative of CI engines [4]. Arregle et al. [6] studied the influence of injection parameters and running conditions on heat release in a Diesel engine. Galindo et al. [7] used four different Wiebe functions to account for pilot injection, premixed, diffusion and late combustion in the heat release model. Chemla et al. [8] used a zero-dimensional rate of heat release model for the simulation of a direct injection diesel engine.
Aithal [9] studied the effect of EGR fraction on diesel engine performance considering heat loss and temperature-dependent properties of the working fluid. The objective of the present work is to analyze the performance of a CI engine using a phenomenological-thermodynamic model that adopts a double-Wiebe function for the diesel combustion process. The developed model predicts the in-cylinder temperatures and pressures as functions of the crank angle, and the variation of the premixed and diffusion heat release patterns with the engine operating and design parameters. The novelty of the present model lies in considering the effects of heat losses (both convection and radiation) and temperature-dependent specific heats in addition to ignition delay and EGR fraction. These features can provide more realistic estimations of the performance and emission parameters in comparison with existing models for CI engines.

II. PHENOMENOLOGICAL MODELLING

The governing equation for the variation of in-cylinder pressure with respect to crank angle is:

dP/dθ = [(k − 1)/V] (dQin/dθ − dQloss/dθ) − k (P/V) (dV/dθ) + [P/(k − 1)] (dk/dθ)   (1)

In Eq. (1), the rate of heat loss dQloss/dθ is expressed as:

dQloss/dθ = (1/ω) h A(θ) (Tg − Tw)   (2)

The convective heat transfer coefficient h is given by the Woschni model as [10]:

h = 3.26 D^(−0.2) P^(0.8) Tg^(−0.55) w^(0.8)   (3)

where the characteristic gas velocity w is related to the mean piston speed:

Up = 2NS/60   (4)
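As a quick numerical sketch of Eqs. (2)-(4), the correlations can be coded as below. This is an illustration, not the authors' code: the function names are ours, and consistent units (and the exact relation between w and the piston speed) must be chosen to match the Woschni correlation variant being used.

```cpp
#include <cassert>
#include <cmath>

// Mean piston speed, Eq. (4): Up = 2*N*S/60  (N in rpm, stroke S in m)
double mean_piston_speed(double N, double S) {
    return 2.0 * N * S / 60.0;
}

// Woschni convective heat transfer coefficient, Eq. (3):
// h = 3.26 * D^-0.2 * P^0.8 * Tg^-0.55 * w^0.8
double woschni_h(double D, double P, double Tg, double w) {
    return 3.26 * std::pow(D, -0.2) * std::pow(P, 0.8)
                * std::pow(Tg, -0.55) * std::pow(w, 0.8);
}

// Rate of heat loss per unit crank angle, Eq. (2):
// dQloss/dtheta = (1/omega) * h * A(theta) * (Tg - Tw)
double heat_loss_rate(double omega, double h, double A, double Tg, double Tw) {
    return (1.0 / omega) * h * A * (Tg - Tw);
}
```

For the engine of Table 1 (stroke 0.125 m) at 2500 rpm, the mean piston speed works out to about 10.4 m/s.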

Woschni's formula gives the convective mode of heat loss, which is assumed to account for 70% of the total heat loss; the remaining 30% is attributed to radiation. The rate of heat input dQin/dθ (heat release) can be modeled using a dual Wiebe function [5, 7, 8]:

dQin/dθ = a (Qp/θp) mp (θ/θp)^(mp−1) exp[−a (θ/θp)^mp] + a (Qd/θd) md (θ/θd)^(md−1) exp[−a (θ/θd)^md]   (5)

where the subscripts p and d refer to the premixed and diffusion phases of combustion. The parameters θp and θd represent the durations of the premixed and diffusion combustion phases, and Qp and Qd represent the integrated energy release of the premixed and diffusion phases, respectively. The constants a, mp and md are selected to match experimental data; for the current study these values are taken as 6.9, 4 and 1.5 respectively [5]. The total heat input to the cylinder by combustion over one cycle is assumed to be:

Qin = mf LHV   (6)

Equations for the ignition delay and the fraction of fuel burned in premixed combustion are taken from Heywood [10]. Eq. (1) is discretized using a first-order finite difference method to solve for the pressure at each crank angle θ. Once the pressure is calculated, the temperature of the gases in the cylinder is obtained from the equation of state:

Tg = P(θ) V(θ) / (m Rg)   (7)

The instantaneous cylinder volume, area and displacement are taken from the literature [10]. The initial NO formation rate is written as:

d[NO]/dt = (6 × 10^16 / T^(1/2)) exp(−69,090/T) [O2]e^(1/2) [N2]e  mol/cm³·s   (8)
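Equations (7) and (8) can be sketched as below (a minimal illustration; the variable names are ours, and the equilibrium concentrations must be supplied in mol/cm³ as in Eq. (8)):

```cpp
#include <cassert>
#include <cmath>

// Gas temperature from the equation of state, Eq. (7): Tg = P*V/(m*Rg)
// P [Pa], V [m^3], m [kg], Rg [J/(kg K)]
double gas_temperature(double P, double V, double m, double Rg) {
    return P * V / (m * Rg);
}

// Initial NO formation rate, Eq. (8), in mol/(cm^3 s); O2e and N2e are the
// equilibrium concentrations [O2]e and [N2]e in mol/cm^3.
double no_rate(double T, double O2e, double N2e) {
    return (6.0e16 / std::sqrt(T)) * std::exp(-69090.0 / T)
         * std::sqrt(O2e) * N2e;
}
```

The exp(−69,090/T) factor makes the rate extremely temperature-sensitive: lowering the burned-gas temperature from 2500 K to 1800 K suppresses the rate by several orders of magnitude, which is the basis of the EGR strategy discussed later.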
The formation rate of soot particles is expressed in the Arrhenius form:

dmsoot/dt = Af mfuel P^(0.5) exp(−Ef/(R T))   (9)

The initial charge temperature and pressure are given by:

T[0] = (ma Ta + megr Tegr) / (ma + megr)   (10)

P[0] V[0] = (ma + megr) Rg T[0]   (11)
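The initial conditions of Eqs. (10) and (11) amount to a mass-weighted mixing rule between fresh air and recirculated exhaust; a minimal sketch (function names are ours):

```cpp
#include <cassert>
#include <cmath>

// Mass-weighted initial charge temperature with EGR, Eq. (10):
// T0 = (ma*Ta + megr*Tegr) / (ma + megr)
double initial_temperature(double ma, double Ta, double megr, double Tegr) {
    return (ma * Ta + megr * Tegr) / (ma + megr);
}

// Initial pressure from the equation of state, Eq. (11):
// P0 = (ma + megr) * Rg * T0 / V0
double initial_pressure(double ma, double megr, double Rg, double T0, double V0) {
    return (ma + megr) * Rg * T0 / V0;
}
```

For example, mixing 1 kg of air at 300 K with 0.25 kg of EGR at 400 K gives an initial charge temperature of 320 K.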

The following assumptions were made in modelling the compression and power strokes:
i. All thermodynamic variables were assumed to be spatially uniform throughout the cylinder at any instant of time (zero-dimensional model) and to vary temporally during the engine cycle.
ii. Compression and expansion were assumed to take place as a series of quasi-equilibrium processes.
iii. The compression stroke began at θ = 0° and the expansion stroke ended at θ = 360°. Fuel injection began at θ = θi and ended at θ = θf.
iv. For θi ≤ θ ≤ θig, fuel was injected but not burned due to the ignition delay; fuel combustion took place for θ ≥ θig.
v. For θ ≥ θig, only a prescribed fraction of the instantaneous fuel mass available in the cylinder was burned.
vi. The EGR fraction was defined as the ratio of the mass of exhaust gas to the mass of fresh air (megr/ma) at BDC.
vii. The effect of soot formation on the reduction of the flame temperature (due to radiation) was considered.
viii. EGR was assumed to consist of CO2, H2O, O2 and N2 (the effect of residual gases was neglected).

The energy equation expressing the relationship between pressure and crank angle was solved using Euler's method (a first-order finite difference expression) to obtain the work output and cycle efficiency. The specifications of the engine and other pertinent parameters are given in Table 1.
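The Euler (first-order finite difference) scheme described above can be sketched as a single pressure-update step. This is a simplified illustration that drops the dk/dθ term of Eq. (1) (i.e. it assumes locally constant k); it is not the authors' C++ code:

```cpp
#include <cassert>
#include <cmath>

// One explicit-Euler step of Eq. (1) with dk/dtheta = 0:
// dP/dtheta = (k-1)/V * dQnet/dtheta - k*(P/V)*dV/dtheta
// dQnet is the net heat added over the step (heat release minus heat loss).
double euler_step(double P, double V, double dVdtheta, double dQnet,
                  double k, double dtheta) {
    double dPdtheta = (k - 1.0) / V * dQnet - k * (P / V) * dVdtheta;
    return P + dPdtheta * dtheta;
}
```

With dQnet = 0 the update reproduces an isentropic process, so the product P·V^k stays (nearly) constant over many small steps; this is a convenient sanity check before the heat-release and heat-loss terms are added.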
Table 1. Specifications of engine and fuel
Fuel cetane number: 45
Lower heating value: 44.3 × 10^3 kJ/kg
Molecular weight: 148.6 kg/kmol
Stoichiometric air-fuel ratio: 14.36:1
Compression ratio: 18:1
Bore x Stroke: 0.105 m x 0.125 m
Connecting rod length: 0.1 m
Swept volume: 1.082 × 10^−3 m^3
Engine speed range: 1000-5000 rpm
Equivalence ratio Φ: 0.6
Injection timing: −12° to −8°
Duration of combustion: 70°
Wall temperature: 400 K

III. RESULTS AND DISCUSSION

Numerical experiments are performed, making use of the equations and imposing relevant conditions for the chosen diesel engine configuration, for predicting engine performance and emissions.

3.1. Engine performance modeling: To begin with, heat release calculations are carried out over the crank angle range; the pattern is shown in Figure 1, taking into consideration the heat transfer in the engine and the variable specific heats of the air, using a dual Wiebe function. This result is treated as baseline data; it can be observed that there are three dominant combustion stages, namely the premixed, diffusion and late combustion phases. The predicted combustion phases are in good agreement with the Lyn and Way model [10].

Figure 1. Heat release pattern for the diesel engine at N=2500 rpm, Φ=0.6, and FIB 8°bTDC

The predicted heat release pattern is then used to derive the in-cylinder pressure history and hence the engine performance. The in-cylinder pressure variation with crank angle is shown in Figure 2; it can be observed that the combustion curve deviates from the motoring pressure, with the peak pressure of about 90 bar occurring very near TDC.

Figure 2. In-cylinder pressure for the diesel engine at N=2500 rpm, Φ=0.6, and FIB 8°bTDC.

Engine heat transfer from the hot gases to the walls comprises both convection and radiation, and affects performance and emissions. The predominant portion of the heat is lost through convection, while radiation losses are significant in diesel engines due to soot formation, accounting for about 25 to 35 per cent of the total heat transfer. For a given mass of fuel within the cylinder, higher heat transfer to the combustion chamber walls lowers the average combustion gas temperature; the simultaneous reduction in pressure decreases the work per cycle transferred to the piston. Figure 3 represents the rate of heat loss versus crank angle for the diesel engine running at 2500 rpm and Φ=0.6.

Figure 3 Diesel engine heat loss history for N=2500 rpm and Φ=0.6 and FIB 8°bTDC.

By advancing the fuel injection timing, the ignition delay increases, which increases the fraction of fuel burned in the pre-mixed combustion phase and thus enhances the premixed combustion stage, as illustrated in Figure 4. With early injection timing, most of the heat release takes place before the piston reaches TDC, whereas with late injection timing heat release still continues even after the piston crosses TDC. An interesting observation is that at 20 degrees after TDC the gas temperature and pressure in the cylinder are still highest for late injection timing, since heat release continues after the piston crosses TDC. Overall, early injection timing develops higher pressures and temperatures, leading to higher power output compared to delayed injection timing. The effect is illustrated in Figures 5 and 6.

Figure 4 Variation of premixed combustion heat profiles for different injection timings.

Figure 5. In-cylinder pressures for different injection timings.

Figure 6. In-cylinder temperatures for different injection timings.

With increase in engine speed, the absolute time available per cycle (in milliseconds) is reduced, so less fuel accumulates during the delay period. This effect reduces the ignition delay in milliseconds, causing a decrease in the fraction of fuel burned in pre-mixed combustion; however, the duration of pre-mixed combustion in crank angle degrees increases with speed. As the pre-mixed heat release decreases with speed, the combustion phasing shifts towards diffusion combustion with relatively higher values. Speed also has a significant effect on heat loss: as the speed decreases, the absolute time available for heat exchange increases, resulting in more heat loss to the cylinder walls. These effects are well predicted by the present numerical experiments. With the consequent reduction in heat losses at higher speed, more of the energy is utilized in developing work, which can be observed as an increase in in-cylinder pressures for the given conditions. Increased engine speed also decreases the time for heat exchange with the cylinder walls, yielding higher temperatures within the cylinder. The ultimate effect of speed is an increase in engine efficiency.

3.2. Engine emissions modeling: Though direct injection diesel engines are fuel efficient compared to gasoline engines, challenges still exist with regard to the emission of higher levels of NOx and PM, and researchers and leading manufacturers are therefore resorting to techniques that limit these harmful emissions. Numerical experiments are performed to observe the emission formation phenomena. Dilution of the intake air with cooled recirculated exhaust gas limits the production of high temperatures, with a subsequent reduction of NOx due to the lowering of the adiabatic flame temperature, the reduction in oxygen content of the intake mixture and the reduction of heat input inside the combustion chamber. The popular technique to accomplish this is exhaust gas recirculation (EGR). EGR also reduces the mixture-averaged ratio of specific heats (k) of the charge, as well as the pressure and temperature, leading to a reduction in the thermodynamic cycle efficiency. With increase in EGR fraction, the heat release in both the premixed and diffusion combustion phases reduces, as shown in Figures 7 and 8 respectively. Utilizing the Zeldovich-mechanism equations for NOx formation together with the in-cylinder temperature history, the appropriate equations are solved to predict NOx emissions. Apart from EGR, retarding the beginning of fuel injection is also a known technique for NOx reduction.
The effect of EGR alone is plotted in Figure 9. It can be noticed that as the EGR fraction increases there is a substantial reduction in NOx formation and emission, and with retarded injection timing there is a further reduction in NOx emission [10].

Figure 7. Variation of pre-mixed combustion with EGR fraction

Figure 8. Variation of diffusion combustion with EGR fraction


Figure 9. Variation of NOx emissions with EGR fraction

In engines, the formation mechanisms of NOx and soot (or PM) run counter to each other. High levels of EGR can also lead to high levels of PM due to the reduction in oxygen levels and poor combustion quality. Additionally, EGR leads to higher specific fuel consumption and engine noise while adversely affecting the lubricating oil quality and engine durability. Early injection timing leads to higher levels of pressure and temperature, which causes a drop in the net soot density. The effect of EGR is shown in Figure 10. From Figures 9 and 10, it can be clearly observed that the factors which decrease NOx increase soot, and vice versa. Therefore, a trade-off should be sought that avoids emitting harmful pollutants without deteriorating gross engine performance [10].

Figure 10. Variation of soot emissions with EGR fraction

IV. CONCLUSIONS
In the present work, DI diesel engine processes were modelled for performance and emissions using a dual Wiebe function for the heat release analysis. Based on the work, the following conclusions are drawn:
• Advancing the fuel injection timing increases the ignition delay, which increases premixed combustion with only a minor effect on diffusion combustion.
• Early injection timing leads to higher levels of pressure and temperature inside the combustion chamber, and hence to higher levels of NOx emission and lower levels of soot emission.
• With increase in engine speed, the ignition delay increases in crank angle degrees but decreases in milliseconds, causing a drop in premixed combustion with an increased diffusion phase.
• Thermal efficiency, pressure and temperature increase with increasing engine speed.
• Increasing the fraction of EGR supplied to the engine decreases the heat input, pressure, temperature, adiabatic flame temperature and efficiency; it reduces the formation of NOx emissions and increases the formation of soot emissions.

ACKNOWLEDGEMENTS
The authors wholeheartedly thank the authorities of NIT Warangal for their cooperation in carrying out this work and for permitting them to publish it.


REFERENCES
[1] E. Abu-Nada, I. Al-Hinti, B. Akash, A. Al-Sarkhi, Thermodynamic analysis of spark ignition engine using a gas mixture model for the working fluid, International Journal of Energy Research 31 (2007) 1031-1046.
[2] E. Abu-Nada, A. Sakhrieh, I. Al-Hinti, A. Al-Ghandoor, B. Akash, Computational thermodynamic analysis of compression ignition engine, International Communications in Heat and Mass Transfer 37 (2010) 299-303.
[3] E. Abu-Nada, I. Al-Hinti, A. Al-Sarkhi, B. Akash, Effect of piston friction on the performance of SI engine: a new thermodynamic approach, ASME Journal of Engineering for Gas Turbines and Power 130 (2) (2008) 022802-1.
[4] J.I. Ghojel, Review of the development and applications of the Wiebe function: a tribute to the contribution of Ivan Wiebe to engine research, International Journal of Engine Research 11 (2010).
[5] N. Miyamoto, T. Chikahisa, T. Murayama, R. Sawyer, Description and analysis of diesel engine rate of combustion and performance using Wiebe's functions, SAE paper 850107, 1985.
[6] J. Arrègle, J.M. Garcia, J.J. Lopez, C. Fenollosa, Development of a zero-dimensional Diesel combustion model. Part 1: analysis of the quasi-steady diffusion combustion phase, Applied Thermal Engineering 23 (2003) 1301-1317.
[7] J. Galindo, J.M. Lujan, J.R. Serrano, L. Hernandez, Combustion simulation of turbocharged HSDI Diesel engines during transient operation using neural networks, Applied Thermal Engineering 25 (2005) 877-898.
[8] F.G. Chemla, G.H. Pirker, A. Wimmer, Zero-dimensional ROHR simulation for DI diesel engines – a generic approach, Energy Conversion and Management 48 (2007) 2942-2950.
[9] S.M. Aithal, Impact of EGR fraction on diesel engine performance considering heat loss and temperature dependent properties of the working fluid, International Journal of Energy Research 33 (2008) 415-430.
[10] J.B. Heywood, Internal Combustion Engine Fundamentals, New York: McGraw Hill; 1988.

Authors Biographies
B. Venkateswara Rao is a graduate student in the Department of Mechanical Engineering, NIT Warangal, India. His areas of interest are IC engines and engine simulation.

G. Amba Prasad Rao is working as an Associate Professor in the Department of Mechanical Engineering, NIT Warangal-506004, India. His total teaching experience is 22 years. His areas of interest are IC engines, alternative fuels, emissions and their control, and engine simulation.


Vol. 3, Issue 1, pp. 262-269


MULTI-OBJECTIVE OPTIMIZATION OF CUTTING PARAMETERS FOR SURFACE ROUGHNESS AND METAL REMOVAL RATE IN SURFACE GRINDING USING RESPONSE SURFACE METHODOLOGY
M. Janardhan1 and A. Gopala Krishna2

1Department of Mechanical Engineering, Abdul Kalam Institute of Technological Sciences, Kothagudem-507120, A.P., India.
2Department of Mechanical Engineering, University College of Engineering, JNTU, Kakinada-533003, A.P., India.

ABSTRACT
Surface grinding is the most common process used in the manufacturing sector to produce a smooth finish on flat surfaces. Surface quality and metal removal rate are the two important performance characteristics to be considered in the grinding process. The economics of the machining process is affected by several factors such as abrasive wheel grade, wheel speed, depth of cut, table speed and material properties. In this work, empirical models are developed for surface roughness and metal removal rate by considering wheel speed, table speed and depth of cut as control factors, using response surface methodology. Response surface methodology (RSM) has been applied to determine the optimum machining parameters leading to minimum surface roughness and maximum metal removal rate in the surface grinding of EN24 steel. Second-order mathematical models in terms of the machining parameters were developed for metal removal rate (MRR) and surface roughness on the basis of experimental results. The model selected for optimization has been validated with the F-test, and the adequacy of the models for the output responses has been established with analysis of variance (ANOVA). An attempt has also been made to optimize the cutting parameters using a multi-objective characteristic for the developed prediction models using RSM.

KEYWORDS: Surface grinding, MRR, Surface roughness, RSM, Optimization.

I. INTRODUCTION

Grinding is a complex machining process with a lot of interactive parameters, which depend upon the grinding type and the requirements of the product. The surface quality produced in surface grinding is influenced by various parameters, given as follows: (i) wheel parameters: abrasives, grain size, grade, structure, binder, shape and dimension, etc.; (ii) work piece parameters: fracture mode, mechanical properties and chemical composition, etc.; (iii) process parameters: wheel speed, depth of cut, table speed and dressing condition, etc.; (iv) machine parameters: static and dynamic characteristics, spindle system and table system, etc. [1]. Surface roughness is a performance index used to meet technical standards and achieve customer satisfaction; this performance index depends on various machining parameters. The selection of a proper combination of machining parameters yields the desired surface finish and metal removal rate [2]. Choosing the proper combination of machining parameters is an important task, as it determines the optimal values of surface roughness and metal removal rate, and it is necessary to develop mathematical models to predict the influence of the operating conditions [3]. In the present work, mathematical models have been developed to predict the surface roughness and metal removal rate with the help of response surface methodology and design of experiments [30]. Response surface methodology (RSM) is practical, accurate and easy to implement. The study of the most important variables affecting the quality characteristics, together with a plan for conducting such experiments, is called design of experiments (DOE) [31]. The experimental data is used to develop second-order mathematical models using regression methods. Analysis of variance is employed to verify the validity of the models. The RSM optimization procedure has been employed to optimize the output responses, surface roughness and metal removal rate, subject to the grinding parameters, namely wheel speed, table speed and depth of cut, using a multi-objective function model.

II. METHODOLOGY

In this work, the experimental results were used for modeling using response surface methodology, which is practical, accurate and easy to implement. The experimental data was used to build first-order and second-order mathematical models using the regression analysis method. These developed mathematical models were then optimized using the RSM optimization procedure for the output responses, by imposing lower and upper limits on the input machining parameters, namely table speed, wheel speed and depth of cut.

2.1 Design of Experiments (DOE)
The study of the most important variables affecting the quality characteristics, and a plan for conducting such experiments, is called the Design of Experiments. G. Taguchi (1959) of Japan, by developing the associated concept of the linear graph, was able to devise numerous variants based on the orthogonal array (OA) design, which can easily be applied by an engineer or a scientist without acquiring advanced statistical knowledge for working out the design and analysis of even complicated experiments (Ross J. Philip, 1989). These methods have the advantage of being highly flexible and readily enable the allocation of different levels of factors, even when these levels are not the same in number for all the factors studied [5]. The beauty of these methods lies in cutting the size of the experimentation to the bare minimum while still yielding results with high precision; thus, by a mere 27 experiments, we may be able to evaluate all the main effects. The design layout in Taguchi's method is explained below:
1. List down the response, factors and levels along with the desired interactions.
2. Find the degrees of freedom for each factor and for each interaction.
3. Compute the total degrees of freedom (TDF).
4. The minimum number of trials (MNE) is equal to the total degrees of freedom plus one (+1).
5. Choose the nearest orthogonal array series, like L4, L8, L16 or L9, L27, etc.
6. Draw the required linear graph (LG).
7. Number the linear graph by starting with the number 1 for factor A and the number 2 for factor B. Then check whether any interaction exists. If not, proceed with the number 3 for factor C. If there is an interaction, check with the interaction table which column is to be allotted to the interaction, then proceed with the next number for the next factor.
8. Complete the numbering as described until the following is achieved: all the factors and interactions are numbered; there is no repetition of numbers; the interaction numbers are as per the interaction table; and the numbers used do not exceed the number of columns permitted for the orthogonal array table.
9. Write the column numbers against each factor. That is the design assignment. Rewrite the OA table with only those columns represented by factors and all the rows as per the OA table. Replace the 1, 2 and 3 in the table with the physical values of the levels from the factors and levels identified. This completes the design layout.
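Steps 2-5 of the layout above can be sketched for the main-effects-only case (interactions would contribute their own degrees of freedom; leaving them out is a simplification of ours):

```cpp
#include <cassert>

// Total degrees of freedom = sum of (levels - 1) over all factors
// (main effects only), and minimum number of trials = TDF + 1.
int minimum_trials(const int levels[], int nFactors) {
    int tdf = 0;
    for (int i = 0; i < nFactors; ++i) tdf += levels[i] - 1;
    return tdf + 1;
}
```

For three factors at three levels the minimum is 7 trials, for which L9 is the nearest array; this paper uses the larger L27 array, which accommodates the same factors with room for interactions.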

2.2 Response Surface Methodology (RSM)
Response surface methodology is a combination of mathematical and statistical techniques [30-31], used to develop the mathematical model for analysis and optimization. By conducting experimental trials and applying regression analysis, the output responses can be expressed in terms of the input machining parameters, namely table speed, depth of cut and wheel speed. The major steps in response surface methodology are:
1. Identification of the predominant factors which influence the surface roughness and metal removal rate.
2. Developing the experimental design matrix and conducting the experiments as per the design matrix.

3. Developing the mathematical model.
4. Determination of the constant coefficients of the developed model.
5. Testing the significance of the coefficients.
6. Adequacy testing of the developed model using analysis of variance (ANOVA).
7. Analyzing the effect of the input machining parameters on the output responses, surface roughness and metal removal rate.

III. MATHEMATICAL FORMULATION

The first-order and second-order mathematical models were developed using multiple regression analysis for both output responses, namely surface roughness and metal removal rate. Multiple regression analysis [17-22] is a statistical technique that is practical, easy to use and accurate. The aim of developing the mathematical models is to relate the output responses to the input machining parameters and thereby optimize the machining process. Using these models, the optimization problem can be solved with Taguchi's optimization procedure as a multi-objective function model. The mathematical models can be represented by:

Y = f(V, N, d)   (1)

where Y is the output grinding response and V, N, d are the table speed, wheel speed and depth of cut respectively. In this work the following mathematical models were formulated:

Metal removal rate: MRR = K1 V^a1 N^b1 d^c1 e   (2)
Surface roughness: Ra = K2 V^a2 N^b2 d^c2 e   (3)

To determine the above constants and exponents, these mathematical models have to be linearized by performing a logarithmic transformation, as follows:

ln MRR = ln K1 + a1 ln V + b1 ln N + c1 ln d   (4)
ln Ra = ln K2 + a2 ln V + b2 ln N + c2 ln d   (5)

The constants and exponents can be determined by the method of least squares. The first-order and second-order linear models, developed from the above functional relationship using least-squares regression analysis, can be represented as follows:

Y1 = Y − e = b0 x0 + b1 x1 + b2 x2 + b3 x3   (6)

where Y1 is the first-order output response for metal removal rate, Y is the measured metal removal rate, and x1, x2, x3 are the logarithmic transformations of the table speed, wheel speed and depth of cut, respectively. The second-order polynomial of the output response is given as:

Y2 = Y − e = b0 x0 + b1 x1 + b2 x2 + b3 x3 + b12 x1 x2 + b13 x1 x3 + b23 x2 x3 + b11 x1^2 + b22 x2^2 + b33 x3^2   (7)

where Y2 is the second-order output response for metal removal rate, and b0, b1, b2, b3, b12, b13, b23, b11, b22, b33 are estimated by the method of least squares.
The validity of these mathematical models will be tested using the F-test and the Chi-square test before proceeding to optimization.

IV. EXPERIMENTAL DETAILS

A set of experiments was conducted on a surface grinding machine to determine the effect of the machining parameters, namely table speed (m/min), wheel speed (RPM) and depth of cut (µm), on the output responses, namely surface roughness and metal removal rate. The machining conditions are listed in Table 1. Three levels and three factors were used to design the orthogonal array using design of experiments (DOE); the relevant ranges of the parameters are shown in Table 2. The grinding wheel used for the present work is an aluminium oxide abrasive wheel with vitrified bond, WA 60K5V. The selected L27 orthogonal array used to conduct the experiments is shown in Table 3, along with the output responses, MRR and surface roughness. MRR was calculated as the ratio of the volume of material removed from the work piece to the machining time. The surface roughness Ra was measured perpendicular to the cutting direction with a Surface Roughness Tester SJ-201 at a 0.8 mm cutoff value. An average of six measurements taken at six different places was recorded as the output response, surface roughness. These results are further used to analyze the effect of the input machining parameters on the output responses with the help of RSM and the Design-Expert software.
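The three-factor, three-level design above is a full 3³ factorial, which is exactly the 27 trials of Table 3. A minimal sketch of generating that design from the levels in Table 2:

```python
from itertools import product

# Coded levels (-1, 0, +1) for each factor, taken from Table 2
levels = {
    "N": [1250, 1650, 2050],   # wheel speed, RPM
    "V": [7.5, 10.0, 12.5],    # table speed, m/min
    "d": [5, 10, 15],          # depth of cut, um
}

# Full 3^3 factorial: 27 combinations, in the same order as Table 3
design = [(n, v, dc) for n, v, dc
          in product(levels["N"], levels["V"], levels["d"])]

print(len(design))   # 27
print(design[0])     # (1250, 7.5, 5) -- trial 1 of Table 3
```

Each tuple corresponds to one trial row; pairing the list with the measured Ra and MRR columns gives the data set used for the regression fits.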


Vol. 3, Issue 1, pp. 270-283

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963
Table 1 Machining conditions
(a) Work piece material: EN 24 steel
(b) Chemical composition: Carbon 0.35-0.45 / Silicon 0.10-0.35 / Manganese 0.45-0.70 / Nickel 1.30-1.80 / Chromium 0.90-1.40 / Molybdenum 0.20-0.35 / Sulphur 0.050 (max) / Phosphorus 0.050 (max), balance Fe
(c) Work piece dimensions: 155 mm x 38 mm x 38 mm
(d) Physical properties: Hardness 201 BHN, Density 7.85 gm/cc, Tensile strength 620 MPa
(e) Grinding wheel: Aluminium oxide abrasives with vitrified bond, WA 60K5V
(f) Grinding wheel size: 250 mm OD x 25 mm width x 76.2 mm ID

Table 2 Levels of independent control factors
S.No | Control factor | Symbol | Unit | Level -1 | Level 0 | Level +1
1 | Wheel speed | N | RPM | 1250 | 1650 | 2050
2 | Table speed | V | m/min | 7.5 | 10 | 12.5
3 | Depth of cut | d | µm | 5 | 10 | 15

Table 3 Experimental observations
Trial no | Wheel speed N (RPM) | Table speed V (m/min) | Depth of cut d (µm) | Surface roughness Ra (µm) | Metal removal rate (gm/min)
1 | 1250 | 7.5 | 5 | 1.034 | 5.510
2 | 1250 | 7.5 | 10 | 1.440 | 10.28
3 | 1250 | 7.5 | 15 | 1.624 | 15.505
4 | 1250 | 10.0 | 5 | 1.324 | 7.350
5 | 1250 | 10.0 | 10 | 1.591 | 13.28
6 | 1250 | 10.0 | 15 | 1.721 | 22.07
7 | 1250 | 12.5 | 5 | 1.38 | 9.190
8 | 1250 | 12.5 | 10 | 1.679 | 16.89
9 | 1250 | 12.5 | 15 | 1.940 | 26.09
10 | 1650 | 7.5 | 5 | 1.180 | 6.260
11 | 1650 | 7.5 | 10 | 1.56 | 13.13
12 | 1650 | 7.5 | 15 | 1.684 | 17.68
13 | 1650 | 10.0 | 5 | 1.490 | 11.63
14 | 1650 | 10.0 | 10 | 1.641 | 15.61
15 | 1650 | 10.0 | 15 | 1.716 | 21.17
16 | 1650 | 12.5 | 5 | 1.501 | 12.42
17 | 1650 | 12.5 | 10 | 1.697 | 18.17
18 | 1650 | 12.5 | 15 | 1.826 | 25.87
19 | 2050 | 7.5 | 5 | 1.361 | 7.760
20 | 2050 | 7.5 | 10 | 1.582 | 13.21
21 | 2050 | 7.5 | 15 | 1.703 | 19.40
22 | 2050 | 10.0 | 5 | 1.460 | 10.35
23 | 2050 | 10.0 | 10 | 1.632 | 15.16
24 | 2050 | 10.0 | 15 | 1.805 | 24.16
25 | 2050 | 12.5 | 5 | 1.513 | 12.64
26 | 2050 | 12.5 | 10 | 1.734 | 22.44
27 | 2050 | 12.5 | 15 | 2.072 | 30.44

V. DEVELOPMENT OF EMPIRICAL MODELS

In the present study, empirical second-order models for the output responses, surface roughness (Ra) and metal removal rate (MRR), in terms of the input machining parameters in actual factors were developed using RSM [23-27]. The developed models are further used for optimization of the machining process. To determine the regression coefficients of the developed models, the statistical analysis software MINITAB 16 was used. Second-order models were developed for the output responses because of the lower predictability of a first-order model for the present problem. The following equations were obtained in terms of uncoded factors:

Ra = -0.4485 + 0.0005N + 0.1236V + 0.0975d - 0.0022V² - 0.0017d² - 0.00002NV - 0.000013Nd + 0.000053Vd (8)

MRR = -0.9022 + 0.0023N - 0.3760V - 0.20041d - 0.000001N² + 0.0118V² + 0.0203d² + 0.00036NV + 0.00007Nd + 0.1006Vd (9)
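The fitted quadratics of Eqs. (8) and (9) can be transcribed as plain functions, so that predictions can be compared against the measured values in Table 3. Note the check below uses the rounded coefficients as printed, so predictions differ slightly from the measurements.

```python
def ra_model(N, V, d):
    """Second-order response surface for surface roughness Ra (um), Eq. (8)."""
    return (-0.4485 + 0.0005*N + 0.1236*V + 0.0975*d
            - 0.0022*V**2 - 0.0017*d**2
            - 0.00002*N*V - 0.000013*N*d + 0.000053*V*d)

def mrr_model(N, V, d):
    """Second-order response surface for MRR (gm/min), Eq. (9)."""
    return (-0.9022 + 0.0023*N - 0.3760*V - 0.20041*d
            - 0.000001*N**2 + 0.0118*V**2 + 0.0203*d**2
            + 0.00036*N*V + 0.00007*N*d + 0.1006*V*d)

# Trial 1 of Table 3: N = 1250 RPM, V = 7.5 m/min, d = 5 um
print(ra_model(1250, 7.5, 5))    # ~1.16 vs measured 1.034
print(mrr_model(1250, 7.5, 5))   # ~5.34 vs measured 5.51
```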

Analysis of variance (ANOVA) is employed to test the significance of the developed models. The multiple regression coefficients (R²) of the second-order models for surface roughness and metal removal rate were found to be 0.9325 and 0.9781 respectively. These R² values are very high, close to one, indicating that the second-order models are adequate to represent the machining process. The "Pred R-Squared" of 0.8027 is in reasonable agreement with the "Adj R-Squared" of 0.8967 in the case of surface roughness, and the "Pred R-Squared" of 0.9498 is in reasonable agreement with the "Adj R-Squared" of 0.9666 in the case of MRR. Similarly, the Model F-value of 26.09 for surface roughness and of 84.51 for metal removal rate imply that the models are significant; there is only a 0.01% chance that a "Model F-Value" this large could occur due to noise. The ANOVA of the response surface quadratic models for surface roughness and metal removal rate are shown in Table 4 and Table 5 respectively. "Adeq Precision" measures the signal-to-noise ratio, for which a ratio greater than 4 is desirable; the S/N ratios of 18.415 and 32.524 for surface roughness and MRR indicate an adequate signal, so the models can be used to navigate the design space. The P-value for both models is lower than 0.05 (at the 95% confidence level), indicating that both models are statistically significant. The normal probability plots of the residuals for Ra and metal removal rate are shown in Fig. 1 and Fig. 2 respectively. From these plots it can be concluded that the residuals lie on a straight line, which implies that the errors are normally distributed and the developed regression models fit the observed values well. The plots of predicted versus actual response for surface roughness and MRR, shown in Fig. 3 and Fig. 4 respectively, show that the models are adequate without any violation of the independence or constant variance assumptions.
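The R² and adjusted R² statistics quoted above follow from the residual and total sums of squares; a minimal sketch (the tiny data set below is illustrative, not from the paper, and `n_terms` is the number of model terms excluding the intercept — 9 for the quadratics here):

```python
def r_squared(y, y_hat, n_terms):
    """Multiple R^2 and adjusted R^2 of a fitted regression model."""
    y_bar = sum(y) / len(y)
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)               # total SS
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual SS
    r2 = 1 - ss_res / ss_tot
    n = len(y)
    adj = 1 - (1 - r2) * (n - 1) / (n - n_terms - 1)          # df-adjusted
    return r2, adj

# Illustrative data only:
r2, adj = r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8], n_terms=1)
print(r2, adj)   # ~0.98, ~0.97
```

Applying the same function to the 27 measured responses and the predictions of Eq. (8) or (9), with `n_terms=9`, would reproduce the R² figures reported by the software.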
Table 4 ANOVA for Response Surface Quadratic Model of Ra
Source | Sum of Squares | df | Mean Square | F Value | p-value Prob>F
Model | 1.18 | 9 | 0.13 | 26.09 | 0.0001 (significant)
N | 0.071 | 1 | 0.071 | 14.09 | 0.0016
V | 0.26 | 1 | 0.26 | 52.24 | < 0.0001
d | 0.82 | 1 | 0.82 | 163.66 | < 0.0001
NV | 4.332E-003 | 1 | 4.332E-003 | 0.86 | 0.3662
Nd | 7.550E-003 | 1 | 7.550E-003 | 1.50 | 0.2371
Vd | 5.333E-006 | 1 | 5.333E-006 | 1.061E-003 | 0.9744
N² | 4.603E-007 | 1 | 4.603E-007 | 9.211E-005 | 0.9925
V² | 1.157E-003 | 1 | 1.157E-003 | 0.23 | 0.6374
d² | 0.011 | 1 | 0.011 | 2.23 | 0.1537
Residual | 0.085 | 17 | 5.026E-003 | |
Cor. Total | 1.27 | 26 | | |
Std. Dev. 0.071 | Mean 1.59 | C.V. 4.46 | PRESS 0.25
R-Squared 0.9325 | Adj R-Squared 0.8967 | Pred R-Squared 0.8027 | Adeq Precision 18.415

Table 5 ANOVA for Response Surface Quadratic Model of MRR
Source | Sum of Squares | df | Mean Square | F Value | p-value Prob>F
Model | 1098.50 | 9 | 122.06 | 84.51 | < 0.0001 (significant)
N | 48.00 | 1 | 48.00 | 33.24 | < 0.0001
V | 237.73 | 1 | 237.73 | 164.61 | < 0.0001
d | 790.36 | 1 | 790.36 | 547.25 | < 0.0001
NV | 1.52 | 1 | 1.52 | 1.05 | 0.3189
Nd | 0.22 | 1 | 0.22 | 0.15 | 0.6994
Vd | 18.99 | 1 | 18.99 | 13.15 | 0.0021
N² | 0.086 | 1 | 0.086 | 0.060 | 0.8101
V² | 0.033 | 1 | 0.033 | 0.023 | 0.8825
d² | 1.55 | 1 | 1.55 | 1.07 | 0.3144
Residual | 24.55 | 17 | 1.44 | |
Cor. Total | 1123.05 | 26 | | |
Std. Dev. 1.2 | Mean 15.69 | C.V. 7.66 | PRESS 56.37
R-Squared 0.9781 | Adj R-Squared 0.9666 | Pred R-Squared 0.9498 | Adeq Precision 32.524

Figure 1. Normal probability plot of residuals for Ra


Figure 2. Normal probability plot of residuals for MRR

Figure 3. Comparison of Predicted and actual values for Ra

Figure 4. Comparison of Predicted and actual values for MRR

VI. INTERPRETATION OF DEVELOPED MODELS

The detailed main effects and interaction effects for both the outputs are discussed in the following sections. It should be noted that if a particular parameter does not influence the output during the course of evaluation, it gets eliminated.


6.1 Effect of process parameters on surface roughness (Ra)
6.1.1 Direct effects
The direct effects of the process parameters on the output response, surface roughness, are shown in Figs 5 to 7. From Fig. 5 it is observed that an increase in wheel speed tends to improve the finish. With carbide tools in particular, a slow speed is not at all desirable, since it wastes time and money and the tools wear out faster. Fig. 6 shows the effect of table speed on roughness: as the table speed increases, the finish becomes poorer because the tool marks show on the work piece. The effect of depth of cut on surface roughness is shown in Fig. 7, from which it is noted that an increase in depth of cut makes the finish poorer. Hence smaller values of table speed and depth of cut and a larger value of wheel speed must be selected in order to achieve better surface roughness during the process.

Figure 5. Direct effect of wheel speed on Ra

Figure 6. Direct effect of table speed on Ra

Figure 7. Direct effect of depth of cut on Ra

6.1.2 Interaction effects
The three-dimensional surface plots for surface roughness are shown in Figs 8-10. In each of these graphs, two cutting parameters are varied while the third parameter is held at its mid value. From Fig. 8 it is observed that the best surface roughness was obtained at the combination of lowest depth of cut and high wheel speed, whereas a poor surface finish results for a lower depth of cut at a lower wheel speed; this behavior is due to the ploughing action of the tool on the work piece surface at low depth of cut. It is seen from these graphs that there is a significant amount of curvature, indicating non-linearity in the variation. From Fig. 9 it is observed that the best surface roughness is obtained at low depth of cut and low table speed. The 3D surface graph of Ra at a constant depth of cut of 10 µm is shown in Fig. 10. From these graphs a switching of the curvature effect is observed, indicating a reversal in behavior depending on the combination of the machining parameters; it also points towards a significant contribution from the interaction of the machining parameters.

Figure 8. Interaction effect of wheel speed and depth of cut on Ra

Figure 9. Interaction effect of depth of cut speed and table speed on Ra.

Figure 10. Interaction effect of wheel speed and table speed on Ra.


6.2 Effect of process parameters on MRR
6.2.1 Direct effects
The direct effects of the process parameters on the output response, MRR, are shown in Figs 11 to 13. From Fig. 11 it is observed that an increase in wheel speed tends to increase the MRR while the other two machining parameters are kept at their mid values. It is observed from the direct effects that depth of cut plays a more vital role on MRR than the other two parameters. Material removal rate is an important factor in a machining process because of its vital effect on machining economics. Increasing the table speed, wheel speed and depth of cut all lead to an increase in material removal rate, but the most influential factors are table speed and depth of cut. The highest value of MRR is obtained at the extreme range of the input parameters in all the interaction plots, and the MRR increases gradually with the depth of cut.

Figure 11. Direct effect of wheel speed on MRR

Figure 12. Direct effect of Table speed on MRR

Figure 13. Direct effect of depth of cut on MRR

6.2.2 Interaction effects
The 3D surface graphs for metal removal rate, shown in Figs 14 to 16, have curvilinear profiles because the empirical model developed is quadratic. In each of these graphs, two cutting parameters are varied while the third parameter is held at its mid value. From fig 14, it is

observed that the MRR increases with both wheel speed and table speed, reaching its highest value at the extreme range of the input parameters. It is seen from these graphs that there is a significant amount of curvature, indicating non-linearity in the variation. A switching of the curvature effect is also observed, indicating a reversal in behavior depending on the combination of the machining parameters; this points towards a significant contribution from the interaction of the machining parameters.

Figure 14. Interaction effect of wheel speed and table speed on MRR

Figure 15. Interaction effect of wheel speed and depth of cut on MRR

Figure 16. Interaction effect of table speed and depth of cut on MRR

VII. FORMULATION OF THE PROBLEM

In the process of optimization, the aim is to maximize the MRR and minimize the surface roughness (Ra), which forms a multi-objective optimization problem, since these two objectives are conflicting in nature [24]. The optimization problems for surface roughness (Ra) and MRR, with feasible limits on the control variables, are represented in equations (10) and (11) respectively after eliminating the insignificant terms.

Minimize Ra = -0.4485 + 0.0005N + 0.1236V + 0.0975d - 0.0022V² - 0.0017d² - 0.00002NV - 0.000013Nd + 0.000053Vd (10)
Maximize MRR = -0.9022 + 0.0023N - 0.3760V - 0.20041d - 0.000001N² + 0.0118V² + 0.0203d² + 0.00036NV + 0.00007Nd + 0.1006Vd (11)
Subject to
1250 RPM ≤ N ≤ 2050 RPM
7.5 m/min ≤ V ≤ 12.5 m/min
5 µm ≤ d ≤ 15 µm
Once the optimization problem is formulated, it is solved using response surface optimization.
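The constrained problems above can also be solved by a simple brute-force grid search over the feasible box — a stand-in sketch for the Minitab response surface optimizer the paper actually uses. With the rounded coefficients as printed in Eqs. (10)-(11), the grid optima come out near 1.16 and 28.6, slightly different from the paper's reported 1.128 and 29.48; the gap presumably reflects coefficient rounding.

```python
import numpy as np

def ra(N, V, d):     # Eq. (10)
    return (-0.4485 + 0.0005*N + 0.1236*V + 0.0975*d
            - 0.0022*V**2 - 0.0017*d**2
            - 0.00002*N*V - 0.000013*N*d + 0.000053*V*d)

def mrr(N, V, d):    # Eq. (11)
    return (-0.9022 + 0.0023*N - 0.3760*V - 0.20041*d
            - 0.000001*N**2 + 0.0118*V**2 + 0.0203*d**2
            + 0.00036*N*V + 0.00007*N*d + 0.1006*V*d)

# Grid over the feasible region given by the constraints
N, V, d = np.meshgrid(np.linspace(1250, 2050, 41),
                      np.linspace(7.5, 12.5, 41),
                      np.linspace(5, 15, 41), indexing="ij")

print(ra(N, V, d).min())    # minimum Ra over the feasible region (~1.16)
print(mrr(N, V, d).max())   # maximum MRR over the feasible region (~28.6)
```

Both quadratics are monotone in each variable over this box, so the optima fall at corners of the feasible region (lowest N, V, d for Ra; highest for MRR), consistent with the direct-effect trends in Section VI.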

VIII. OPTIMIZATION OF THE PROBLEM

Since optimization of the machining parameters improves machining economics, a response surface optimization was carried out using the Minitab software for the individual machining responses in surface grinding. Table 6 shows the RSM optimization results for surface roughness and MRR, together with the results of confirmation experiments conducted at the optimum conditions.
Table 6 RSM optimization for output responses
Parameter ranges: 1250 ≤ N ≤ 2050 RPM; 7.5 ≤ V ≤ 12.5 m/min; 5 ≤ d ≤ 15 µm
Response | Objective | Predicted response | Experimental value | % of error
Ra | Minimize | 1.128 | 1.034 | 8.3
MRR | Maximize | 29.48 | 30.44 | 3.25

IX. RESULTS

The optimum results for the output responses, surface roughness and metal removal rate, in terms of the machining parameters (wheel speed, table speed and depth of cut) for EN 24 steel on a CNC surface grinding machine were determined using the Minitab software and are presented in Table 6. Confirmation experiments were conducted and show good agreement between the predicted and experimental values; the error in prediction at the optimum conditions is about 3 to 8%. Thus the response optimization predicts the optimum conditions fairly well.

X. CONCLUSIONS
In this study an experimental investigation performed to evaluate the surface roughness and MRR of EN 24 steel in a surface grinding operation has been presented. A plan of experiments was prepared to test the influence of wheel speed, table speed and depth of cut on the output parameters. The obtained data were statistically processed using the response surface method. Empirical models of the output parameters were established and tested through analysis of variance to validate their adequacy. It is found that the surface roughness and MRR greatly depend on the work piece material. A response surface optimization was carried out using the Minitab software for the output responses in surface grinding.

REFERENCES
[1] Anne Venu Gopal, P. V. Rao, Selection of optimum conditions for maximum material removal rate with surface finish and damage as constraints in SiC grinding, International Journal of Machine Tools & Manufacture 43 (2003) 1327-1336.
[2] Jae-Seob Kwak, Application of Taguchi and response surface methodologies for geometric error in surface grinding process, International Journal of Machine Tools & Manufacture 45 (2005) 327-334.
[3] P. V. S. Suresh, P. V. Rao, et al., A genetic algorithmic approach for optimization of surface roughness prediction model, International Journal of Machine Tools & Manufacture 42 (2002) 675-680.

[4] I. A. Choudhury, Surface roughness prediction in the turning of high strength steel by factorial design of experiments, Journal of Materials Processing Technology 67 (1997) 55-61.
[5] Anne Venu Gopal, P. V. Rao, The optimization of the grinding of silicon carbide with diamond wheels using genetic algorithms, International Journal of Advanced Manufacturing Technology 22 (2003) 475-480.
[6] P. G. Benardos, Predicting surface roughness in machining, International Journal of Machine Tools & Manufacture 43 (2003) 833-844.
[7] P. Krajnik, J. Kopac, A. Sluga, Design of grinding factors based on response surface methodology, Journal of Materials Processing Technology 162 (2005) 629-636.
[8] L. P. Khoo and C. H. Chen, Integration of response surface methodology with genetic algorithm, International Journal of Advanced Manufacturing Technology 18 (2001) 483-489.
[9] N. Suresh Kumar Reddy and P. V. Rao, Selection of an optimal parametric combination for achieving a better surface finish in dry milling using genetic algorithms, International Journal of Advanced Manufacturing Technology 28 (5-6), March 2006, 463-473.
[10] T. S. Lee, T. O. Ting, Y. J. Lin, Than Htay, A particle swarm approach for grinding process optimization analysis, International Journal of Advanced Manufacturing Technology.
[11] Radu Pavel, Anil Srivastava, An experimental investigation of temperatures during conventional and CBN grinding, International Journal of Advanced Manufacturing Technology, 2006.
[12] X. M. Wen, A. A. O. Tay and A. Y. C. Nee, Micro-computer-based optimization of the surface grinding process, Journal of Materials Processing Technology 29 (1992) 75-90.
[13] K. Ramesh, S. H. Yeo, S. Gowri and L. Zhou, Experimental evaluation of super high speed grinding of advanced ceramics, International Journal of Advanced Manufacturing Technology 17 (2001) 87-92.
[14] K. Palanikumar, Modeling and analysis of surface roughness in machining glass fibre reinforced plastics using response surface methodology, Materials and Design, 2006.
[15] Yusuf Sahin, A. Riza Motorcu, Surface roughness model for machining mild steel with coated carbide tool, Materials and Design 26 (2005) 321-326.
[16] Jae-Seob Kwak, An analysis of grinding power and surface roughness in external cylindrical grinding of hardened SCM 440 steel using the response surface method, International Journal of Machine Tools & Manufacture 46 (2006) 304-312.
[17] M. Y. Noordin, V. C. Venkatesh, S. Sharif, S. Elting, A. Abdullah, Application of response surface methodology in describing the performance of coated carbide tools when turning AISI 1045 steel, Journal of Materials Processing Technology 145 (2004) 46-58.
[18] Tugrul Ozel, Yigit Karpat, Predictive modeling of surface roughness and tool wear in hard turning using regression and neural networks, International Journal of Machine Tools & Manufacture 45 (2005) 467-479.
[19] Sanjay Agarwal, P. Venkateswara Rao, Experimental investigation of surface/subsurface damage formation and material removal mechanisms in SiC grinding, International Journal of Machine Tools & Manufacture 48 (2008) 698-710.
[20] N. Alagumurthi, K. Palaniradja and V. Soundararajan, Cylindrical grinding - a review on surface integrity, International Journal of Precision Engineering and Manufacturing, vol. 3.
[21] Stephen Malkin and Changsheng Guo, Model based simulation of grinding process, Department of Mechanical & Industrial Engineering, University of Massachusetts, Amherst, MA 01003; United Technologies Research Center, 411 Silver Lane, East Hartford, CT 06108.
[22] Shaji, S. and Radhakrishnan, V., 2003, "Analysis of process parameters in surface grinding with graphite as lubricant based on the Taguchi method", Journal of Materials Processing Technology, 141:51-59.
[23] Dhavlikar, M. N., Kulkarni, M. S. and Mariappan, V., 2003, "Combined Taguchi and dual response method for optimization of a centerless grinding operation", Journal of Materials Processing Technology, 132:90-94.
[24] Hecker, R. L. and Liang, S. Y., 2003, "Predictive modeling of surface roughness in grinding", International Journal of Machine Tools & Manufacture, 43:755-761.
[25] Zhong, Z. W., Khoo, L. P. and Han, S. T., 2006, "Prediction of surface roughness of turned surfaces using neural networks", International Journal of Advanced Manufacturing Technology, 28:688-693.
[26] Sun, X., Stephenson, D. J., Ohnishi, O. and Baldwin, A., 2006, "An investigation into parallel and cross grinding of BK7 glass", Precision Engineering, 30:145-153.
[27] Atzeni, E. and Iuliano, L., 2008, "Experimental study on grinding of a sintered friction material", Journal of Materials Processing Technology, 196:184-189.
[28] Choi, T. J., Subrahmanya, N., Li, H. and Shin, Y. C., 2008, "Generalized practical models of cylindrical plunge grinding process", International Journal of Machine Tools and Manufacture, 48:61-72.

[29] Liu, C. H., Chen, A., Chen, C. A. and Wang, Y. T., 2005, "Grinding force control in an automatic surface finishing system", Journal of Materials Processing Technology, 170:367-373.
[30] Montgomery, D. C., 1991, Design and Analysis of Experiments, Wiley, India.
[31] Myers, R. H. and Montgomery, D. C., Response Surface Methodology, Second Edition, John Wiley.

Authors
M. Janardhan obtained his B.Tech (Mechanical Engineering) and M.Tech (Mechanical Engineering) from S.V. University in 1989 and 1999 respectively. He is at the finishing stage of his PhD at JNT University, Kakinada, India. His teaching career spans more than 22 years at the levels of lecturer, assistant professor, associate professor and professor at various engineering colleges. Presently he is working as Principal at Abdul Kalam Institute of Technological Sciences, Kothagudem 507120, A.P., India. He has published 6 research papers in international conferences and journals, and authored a textbook entitled "Dynamics of Machinery" published by Hi-Tech Publishers, Hyderabad. He is a life member of ISTE.

A. Gopalakrishna obtained his B.Tech (Mechanical Engineering) from S.K. University in 1994 and M.Tech (Mechanical Engineering) from S.V. University in 1996. He received his doctorate degree (PhD) in Mechanical Engineering from JNT University, Kakinada, India in 2006. His teaching experience spans more than 12 years at JNT University, Kakinada, at the levels of lecturer, assistant professor and associate professor. Presently he is working as an associate professor at JNT University, Kakinada, A.P., India. He has published 80 research papers in national/international conferences and journals. He is a life member of ISTE and the Institution of Engineers.


HARMONIC REDUCTION IN CASCADED MULTILEVEL INVERTER WITH REDUCED NUMBER OF SWITCHES USING GENETIC ALGORITHMS
C. Udhaya Shankar¹, J. Thamizharasi¹, Rani Thottungal¹, N. Nithyadevi²
¹Department of EEE, Kumaraguru College of Technology, Coimbatore, India
²Department of Mathematics, Bharathiyar University, Coimbatore, India

ABSTRACT
In this paper, a new topology of cascaded multilevel inverter using a reduced number of switches is proposed. The new topology has the advantage of a reduced number of devices compared to traditional configurations and can be extended to any number of levels. It results in a reduction of installation area, cost and computational time, and has a simple control system. The structure consists of series-connected sub-multilevel inverter blocks. The GA technique finds the optimal set of switching angles, if it exists, for each required harmonic profile. Both simulation results and experimental verification of the proposed inverter topology for different numbers of levels and different harmonic profiles are presented.

KEYWORDS: Multilevel inverter, Cascaded multilevel inverter, H-bridge, Full-bridge, Sub-multilevel inverter, Selective harmonic elimination, Programmed PWM, Genetic algorithms.

I. INTRODUCTION

A multilevel inverter is a power electronic system that synthesizes a desired output voltage from several DC input voltages. The concept of utilizing multiple small voltage levels to perform power conversion was presented by MIT researchers [1,2]. Advantages of this approach include good power quality, good electromagnetic compatibility, low switching losses and high voltage capability. The first introduced topology is the series H-bridge design [1]. This was followed by the diode-clamped inverter [2-4], which utilizes a bank of series capacitors to split the dc bus voltage. The flying-capacitor (or capacitor-clamped) topology [5] uses floating capacitors to clamp the voltage levels. Another multilevel design involves parallel connection of inverter phases through interphase reactors [6]. One particular disadvantage of multilevel inverters is the great number of power semiconductor switches needed, so in practical implementation reducing the number of switches and gate driver circuits is very important. Genetic algorithms (GAs) are stochastic optimization techniques. GAs are applied here to compute the switching angles in a cascaded multilevel inverter so as to produce the required fundamental voltage while, at the same time, the 3rd and 5th harmonics are reduced. It is shown in [7-9] that the problem of harmonic elimination can be converted into an optimization task using binary-coded genetic algorithms. The various components of GAs, such as chromosomes, fitness function, reproduction, crossover and mutation, are illustrated as applied to the present work.

II. CONVENTIONAL CASCADED MULTILEVEL INVERTER

The cascaded multilevel inverter consists of series connections of n full-bridge cells. Fig. 1 shows the configuration of the cascaded multilevel inverter.

Vol. 3, Issue 1, pp. 284-294

Fig. 1. Configuration of cascaded multilevel inverter

III. SUGGESTED TOPOLOGY

Fig. 2 shows the suggested basic unit for a sub-multilevel inverter. It consists of a capacitor (with dc voltage equal to Vdc) with two switches S1 and S2. Table 1 indicates the values of Vo for the states of switches S1 and S2. Both switches cannot be on simultaneously, because a short circuit across the voltage Vdc would be produced. It is noted that two values can be achieved for Vo.

The basic unit shown in Fig. 2 can be cascaded as shown in Fig. 3.


Fig. 3. Cascaded basic unit

The output voltage of the cascaded basic units always has zero or positive value. In the following, we propose a new method for determining the magnitudes of the dc voltage sources used in the proposed multilevel inverter. The maximum number of output voltage steps of the n series basic units is
N_step = n + 1
and the maximum output voltage is given by
V_o,max = n × Vdc
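The two relations above follow directly from counting: each of the n cascaded basic units contributes either 0 or Vdc to the output, so the distinct levels are 0, Vdc, ..., n·Vdc. A toy check (n = 4 units of 12 V each is an assumed example):

```python
def output_levels(n, vdc=1.0):
    """Distinct output levels of n series basic units (each adds 0 or Vdc)."""
    return [k * vdc for k in range(n + 1)]

levels = output_levels(4, vdc=12.0)   # assumed example: n = 4, Vdc = 12 V
print(len(levels))   # N_step = n + 1 = 5
print(max(levels))   # V_o,max = n * Vdc = 48.0
```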

IV. EXTENDED STRUCTURE

Fig 4. The proposed structure for generating both positive and negative voltages


V. COMPARISON OF THE SUGGESTED STRUCTURE WITH THE CONVENTIONAL MULTILEVEL INVERTER

Tables 4 and 5 compare the power component requirements of the conventional and suggested multilevel inverters for the same number of output voltage steps.


VI. SIMULATION DIAGRAM OF 11-LEVEL (CONVENTIONAL) CASCADED MULTILEVEL INVERTER

Fig 5. 11-level cascaded multilevel inverter (conventional)
Table 5: Switching sequence for conventional topology

VII. OUTPUT WAVEFORMS (CONVENTIONAL)

Fig 6. Output voltage waveform


Fig 7. Output current waveform

Fig 8. THD analysis (conventional)

VIII. SIMULATION DIAGRAM OF 11-LEVEL (PROPOSED) CASCADED MULTILEVEL INVERTER

Fig 9. 11-level cascaded multilevel inverter (proposed)

Table 6: Switching sequence for proposed topology

IX. OUTPUT WAVEFORMS (PROPOSED)

Fig 10. Output voltage waveform

Fig 11. Output current waveform


Fig 12. THD analysis (proposed)

X. MATHEMATICAL METHOD OF SWITCHING

The control task for this new family of multilevel inverters is to choose a series of switching angles to synthesize a desired sinusoidal voltage waveform with a (2s + 1)-level inverter, where s is the number of switching angles, which also equals the number of dc sources. To reduce the 3rd and 5th order harmonics in the 11-level inverter, three nonlinear equations can be set up as follows:
cos(α1) + cos(α2) + cos(α3) + cos(α4) + cos(α5) = M
cos(3α1) + cos(3α2) + cos(3α3) + cos(3α4) + cos(3α5) = 0
cos(5α1) + cos(5α2) + cos(5α3) + cos(5α4) + cos(5α5) = 0
where the modulation index M = Vm / 5Vdc.
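The three equations above can be packaged as a residual function; any optimizer (including the GA of the next section) then only needs to drive the residuals toward zero:

```python
import math

def harmonic_residuals(alphas, M):
    """Residuals of the three transcendental equations for the 11-level
    inverter (s = 5 switching angles, in radians). A root (0, 0, 0) gives
    the required fundamental while cancelling the 3rd and 5th harmonics."""
    f1 = sum(math.cos(a) for a in alphas) - M
    f3 = sum(math.cos(3 * a) for a in alphas)
    f5 = sum(math.cos(5 * a) for a in alphas)
    return f1, f3, f5

# Sanity check: with all angles at 0 the fundamental term sums to 5, so
# M = 5 gives f1 = 0 (the harmonic equations are of course not satisfied).
print(harmonic_residuals([0.0] * 5, 5.0))   # (0.0, 5.0, 5.0)
```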

XI. SOLUTION USING GENETIC ALGORITHMS

A GA differs from "classical" optimization methods in several ways: random versus deterministic operation, a population versus a single best solution, and selection of solutions via "survival of the fittest". The solution to the harmonic elimination problem is five switching angles α1, α2, α3, α4, α5. Each switching angle is called a gene; a chromosome consists of all the genes, in this case five genes per chromosome. Thus each chromosome represents a possible solution to the problem.

A. Encoding of a Chromosome
The population size remains constant throughout the whole process. The most common way of encoding is a binary string, though there are many other ways; the encoding depends mainly on the problem considered. In this study, a binary coding system is used. A string could then look like this (in the binary case):
String 1: 1101100100110110
String 2: 1101111000011110
A string in a GA may be divided into a number of substrings; the number of substrings usually equals the number of problem variables.

B. Fitness Function

291

Vol. 3, Issue 1, pp. 284-294

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963
The fitness function is responsible for evaluating each candidate solution at every step. The objective here is to determine the switching angles such that the selected harmonics are minimized or driven to zero.

C. Selection
The GA performs a selection process in which the ‘‘most fit’’ members of the population survive, and the ‘‘least fit’’ members are eliminated.

D. Crossover
Crossover operates on selected genes from the parent chromosomes and creates new offspring. Crossover can be illustrated as follows (| marks the crossover point j):

Chromosome 1: 11011 | 00100110110
Chromosome 2: 11011 | 11000011110
Offspring 1:  11011 | 11000011110
Offspring 2:  11011 | 00100110110

E. Mutation
After performing crossover, mutation takes place. Mutation prevents all the solutions in the population from falling into a local optimum of the problem. In the case of binary encoding, a few randomly chosen bits are switched from 1 to 0 or from 0 to 1:

Original offspring 1: 1101111000011110
Original offspring 2: 1101100100110110
Mutated offspring 1:  1100111000011110
Mutated offspring 2:  1101101100110110

The technique of mutation (as well as crossover) depends mainly on the encoding of the chromosomes.
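Putting encoding, selection, crossover and mutation together, a minimal binary-coded GA for the five switching angles can be sketched as below. The bit length, population size, generation count and mutation rate are illustrative assumptions; the squared-residual fitness and the sorting of decoded angles are editorial choices, not details from the paper:

```python
import random
import numpy as np

BITS, NVARS, M = 8, 5, 4.2     # bits per angle, number of angles, mod. index

def decode(ch):
    """Chromosome (bit string) -> five sorted switching angles in radians."""
    frac = [int(ch[i*BITS:(i+1)*BITS], 2) / (2**BITS - 1) for i in range(NVARS)]
    return np.radians(sorted(90.0 * f for f in frac))

def fitness(ch):
    """Negative squared residual of the harmonic elimination equations."""
    a = decode(ch)
    r = [np.cos(a).sum() - M, np.cos(3 * a).sum(), np.cos(5 * a).sum()]
    return -sum(x * x for x in r)

def evolve(pop_size=60, gens=200, pm=0.02):
    pop = ["".join(random.choice("01") for _ in range(BITS * NVARS))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)        # "survival of the fittest"
        nxt = pop[:2]                              # elitism: keep the best two
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)   # selection
            j = random.randrange(1, BITS * NVARS)            # crossover point
            child = p1[:j] + p2[j:]
            child = "".join(b if random.random() > pm else "10"[int(b)]
                            for b in child)                  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return decode(max(pop, key=fitness))

print(np.degrees(evolve()))    # best angle set found (degrees)
```

Because the GA is stochastic, repeated runs can yield different solution sets, which matches the observation in the conclusion that more than one solution set may exist per modulation index.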

XII. SIMULATION DIAGRAM WITH GA CONTROLLER

Fig. 13: 11-level cascaded multilevel inverter with GA controller


Fig. 14: THD analysis (with GA)

XIII. RESULTS OBTAINED
Table 7: THD Comparison
Type               THD      3rd Harmonic   5th Harmonic
Conventional       18.75%   13.57%         4.9%
Proposed           13.27%   11.8%          2.8%
Proposed with GA   7.86%    0.6%           Eliminated

In the conventional cascaded multilevel inverter, the observed THD is 18.75%. In the proposed topology with a reduced number of switches, the observed THD is 13.27%, and with the GA control technique the THD is further reduced to 7.86%, with the 3rd and 5th order harmonics effectively eliminated.

XIV. CONCLUSION

The selective harmonic elimination of a new family of multilevel inverters using GA has been presented. The new configuration has the advantage of a reduced number of switching devices compared to traditional configurations of the same number of levels. The GA technique usually produces more than one possible solution set for each harmonic profile and a given specific modulation index. For multiple solutions, the solution that gives the lowest THD is selected. Both simulation and experimental results show that the algorithm can be effectively used for selective harmonic elimination of the new family of multilevel inverters and results in a dramatic decrease in the output voltage THD.

REFERENCES
[1] Baker RH. Electric power converter. US Patent 03-867-643; February 1975.
[2] Baker RH. High-voltage converter circuit. US Patent 04-203-151; May 1980.
[3] Nabae A, Takahashi I, Akagi H. A new neutral-point clamped PWM inverter. In: Proceedings of the Industry Application Society Conference; 1980.

[4] Fracchia M, Ghiara T, Marchesini M, Mazzucchelli M. Optimized modulation techniques for the generalized N-level converter. In: Proceedings of the IEEE Power Electronics Specialist Conference, vol. 2; 1992. p. 1205-13.
[5] Meynard TA, Foch H. Multi-level conversion: high voltage choppers and voltage source inverters. In: Proceedings of the IEEE Power Electronics Specialist Conference, vol. 1; 1992. p. 397-403.
[6] Ogasawara S, Takagali J, Akagi H, Nabae A. A novel control scheme of a parallel current-controlled PWM inverter. IEEE Trans Ind Appl 1992;28(5):1023-30.
[7] B. Ozpineci, L.M. Tolbert, J.N. Chaisson. Harmonic optimization of multilevel converters using genetic algorithms. IEEE Power Electronics Lett. (2005) 1-4.
[8] B. Ozpineci, L.M. Tolbert, J.N. Chaisson. Harmonic optimization of multilevel converters using genetic algorithms. In: Proceedings of the IEEE Power Electronics Specialist Conference, 2004, pp. 3911-3916.
[9] K. Sundareswaran, A.P. Kumar. Voltage harmonic elimination in PWM A.C. chopper using genetic algorithm. IEE Proc.-Electr. Power Appl. 151 (1) (2004) 26-31.

Author Biographies
C. UDHAYA SHANKAR, MIEEE, received his B.E. degree in Electrical and Electronics Engineering from Bharathiyar University, Coimbatore, India in 2001 and his M.E. degree in Power Electronics and Drives from Vellore Institute of Technology, India in 2002. He is currently a PhD candidate at Anna University of Technology, Coimbatore, India, and is working as a Senior Grade Assistant Professor at Kumaraguru College of Technology, Coimbatore, India. His main research interests are optimization techniques and their application to power electronics, power quality, FACTS devices and their control.

J. THAMIZHARASI was born in Salem, India, in 1989. She received the B.E. degree in Electronics and Instrumentation Engineering from Vivekanandha College of Engineering for Women, Thiruchengode, India in 2010. She is now pursuing the M.E. in Power Electronics and Drives at Kumaraguru College of Technology, Coimbatore, India. Her areas of interest are power electronics and drives, power quality, and renewable energy.

N. NITHYADEVI received her U.G. degree in Mathematics from Kongunadu Arts and Science College, Coimbatore, India in 2000, her M.Sc. degree in Mathematics from the same college in 2002, and her M.Phil. there in 2003. She received her PhD in Applied Mathematics from Bharathiyar University in 2006 and held a Post Doctoral Fellowship at National Cheng Kung University, Taiwan, ROC in 2008. She has five years of research and teaching experience and has published many papers in international journals. She is currently working as an Assistant Professor at Bharathiyar University, Coimbatore, India.


FINGERPRINT BASED GENDER IDENTIFICATION USING FREQUENCY DOMAIN ANALYSIS
Ritu Kaur 1 and Susmita Ghosh Mazumdar 2
1 M. Tech Student, RCET Bhilai, India
2 Reader, Department of Electronics & Telecom, RCET Bhilai, India

ABSTRACT
Although fingerprints are one of the most mature biometric technologies and are considered legitimate proof of evidence in courts of law all over the world, relatively few machine vision methods have been proposed for gender identification. Few researchers have addressed the use of fingerprints for gender identification, which would be helpful in shortlisting suspects. In this paper, a novel method is proposed to estimate gender by analysing fingerprints using the fast Fourier transform (FFT), discrete cosine transform (DCT) and power spectral density (PSD). A dataset of 220 persons of different age and gender was collected as an internal database. Initially the fingerprints of the subjects were tested, and thresholds were specified after manual analysis. Frequency domain measurements are compared with the predetermined thresholds and gender is determined. Of the samples tested, 99 of 110 female samples and 87 of 110 male samples were identified correctly.

KEYWORDS: Discrete cosine transform, frequency domain, fast Fourier transform, gender identification, power spectral density.

I. INTRODUCTION

Within today's environment of increased importance of security and organization, identification and authentication methods have developed into a key technology. The requirement for reliable personal identification in computerized access control has resulted in increased interest in biometrics. Fingerprints are one of the most mature biometric technologies and are considered legitimate proof of evidence in courts of law all over the world. Based on the variety of information available from a fingerprint, we are able to establish identity along with gender, age and ethnicity. A fingerprint is an impression of the friction ridges on the surface of the fingertip. Fingerprints have been used for personal identification for many decades, more recently becoming automated due to advancements in computing capabilities. Fingerprints have some important characteristics that make them invaluable evidence in crime scene investigations:
1. A fingerprint is unique to a particular individual, and no two fingerprints possess exactly the same set of characteristics.
2. Fingerprints do not change over the course of a person's lifetime (even after superficial injury to the fingers).
3. Fingerprint patterns can be classified, and those classifications then used to narrow the range of suspects.
In this paper, we propose a method that detects the gender of a person from fingerprints by frequency domain analysis. We obtain the fundamental frequency of various transforms and use them for gender classification. This application is helpful in shortlisting suspects and victims in crime investigations and in boosting the performance of systems used for person recognition and human-computer interfaces. The remainder of this paper is organized as follows: a brief literature review of gender recognition algorithms using fingerprints is given in Section 2; frequency domain analysis is discussed in Section 3; the proposed system is described in Section 4.
The dataset and experimental results are described in section 5.

295

Vol. 3, Issue 1, pp. 295-299


II. RELATED WORKS

Although the fingerprint plays a vital role in identification and verification, relatively few machine vision methods have been proposed for gender identification. In this section we briefly review and summarize prior research on gender classification. M.D. Nithin et al. [1] applied Bayes' theorem to rolled fingerprint images belonging to a South Indian population and found that a fingerprint with ridge density < 13 ridges/25 mm² is most likely of male origin, while a ridge count > 14 ridges/25 mm² is most likely of female origin. Similar results were obtained by Dr. Sudesh Gungadin [2], who counted the ridges in the upper portion of all fingers and showed that a ridge density < 13 ridges/25 mm² is more likely of male origin and > 14 ridges/25 mm² more likely of female origin. Acree [6] used ridge density in a given area to classify gender from fingerprints and showed that females have a higher ridge density than males. Kralik and Novotny [5] showed that males have a higher ridge breadth, defined as the distance between the centers of two adjacent valleys, than females. Ahmed Badawi et al. [7] used the ridge thickness to valley thickness ratio (RTVTR) and the white line count as classification features; according to them, the female fingerprint is characterized by a high RTVTR, while the male fingerprint is characterized by a low RTVTR. Dr. A. Bharadwaja et al. [8] correlated fingerprint patterns with blood groups for males and females. In this paper, instead of traditional ridge-related analysis, we propose a frequency domain analysis of fingerprints using the FFT, 2D DCT and PSD.

III. FREQUENCY DOMAIN ANALYSIS

The gender identification is made in the frequency domain instead of the traditional spatial domain. The FFT, DCT and PSD are chosen for the fingerprint analysis. The Fourier transform plays a vital role in image processing applications and contains most of the information of the spatial domain image. The DCT transforms an image from the spatial domain to the frequency domain and provides a good approximation of the image; it maps a set of data sampled at a given rate to its frequency components. The fundamental frequencies of these transforms are used for gender identification. The 2D FFT pair is given by

F(k, l) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m, n) · e^(-j2π(mk/M + nl/N))
f(m, n) = (1/MN) Σ_{k=0}^{M-1} Σ_{l=0}^{N-1} F(k, l) · e^(j2π(mk/M + nl/N))

where 0 ≤ m, k ≤ M-1 and 0 ≤ n, l ≤ N-1.

The discrete cosine transform (DCT) is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The two-dimensional DCT is

D(m, n) = (2/√(MN)) C(m) C(n) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) · cos[(2x+1)mπ / 2M] · cos[(2y+1)nπ / 2N]

where C(m), C(n) = 1/√2 for m, n = 0 and C(m), C(n) = 1 otherwise. The 2D DCT is computed by applying the 1D DCT to the columns (vertically) and then applying the 1D DCT (horizontally) to the result. Let F(signal) be the Fourier transform of the signal; the PSD is then

PSD = |F(signal)|² / N

where N is a normalization factor.

IV. PROPOSED SYSTEM LEVEL DESIGN

A fingerprint-based gender identification system takes digital fingerprint images as input; each image is transformed into the frequency domain, compared with the predetermined thresholds

and finally, gender is declared. The figure below shows the block diagram of the proposed gender identification system by frequency domain analysis of fingerprints.

Figure 1: Block diagram of the proposed gender identification scheme.

The proposed gender identification system follows these steps:
1. An input fingerprint from the database is given to the gender identification system.
2. The FFT transforms the given input and generates the output. The threshold is set to TH1. If the fundamental frequency (FF) is greater than TH1 the decision is female; if the FF is less than TH1 the decision is male.
3. The DCT transforms the given input and generates the output. The threshold is set to TH2. If the FF is greater than TH2 the decision is female; if the FF is less than TH2 the decision is male.
4. The PSD transforms the given input and generates the output. The threshold is set to TH3. If the FF is less than TH3 the decision is female; if the FF is greater than TH3 the decision is male.
5. Comparing the decisions of all three transforms, if two decisions are male the result is announced as male, and if two decisions are female the result is announced as female.
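Steps 2 to 5 amount to a three-way majority vote. A sketch using the thresholds of Table 1 (Section 5.2), with the PSD rule direction as printed in the text; the two spot checks use sample values quoted from Tables 2 and 3:

```python
# Thresholds from Table 1 (Section 5.2).
TH1, TH2, TH3 = 1500000, 17000, 9000000000

def classify_gender(fft_ff, dct_ff, psd_ff):
    """Majority vote over the three per-transform decisions (steps 2-5)."""
    votes = [
        "female" if fft_ff > TH1 else "male",   # step 2: FFT rule
        "female" if dct_ff > TH2 else "male",   # step 3: DCT rule
        "female" if psd_ff < TH3 else "male",   # step 4: PSD rule as printed
    ]
    return max(set(votes), key=votes.count)     # step 5: majority of three

# Spot checks against sample rows quoted from Tables 2 and 3:
assert classify_gender(2279661, 21010.92, 16050830074) == "female"  # Table 2, sample 1
assert classify_gender(1010376, 9312.32, 3124565177) == "male"      # Table 3, sample 2
```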

V. EXPERIMENTAL RESULTS

5.1 Data Set
A database is a basic requirement for any research work. An internal dataset of fingerprints (left and right forefinger) of 220 persons of different ages and gender (110 males and 110 females) was obtained from colleges that use biometric fingerprint sensors for marking attendance, and was analysed using frequency domain analysis. The internal database consists of 8-bit gray level images of size 109 x 108 each. The developed algorithm was tested using the MATLAB 7.1 image processing tool.

Figure 2: Sample fingerprint images from our database


5.2 Threshold Setting

Setting the threshold for each transform is an important part of the gender identification process. Initially 50 fingerprints each of males and females were examined with the FFT, DCT and PSD, and the fundamental frequencies were obtained for each case. After this manual analysis, a proper threshold was set for gender classification. The table below gives the transform, threshold and threshold condition for the identification of gender.
Table 1: Threshold settings for FFT, DCT and PSD

Transform   Male               Female
FFT         FF < 1500000       FF > 1500000
DCT         FF < 17000         FF > 17000
PSD         FF > 9000000000    FF < 9000000000

5.3 Results

Table 2 shows the FFT, DCT and PSD results for the fingerprints of 10 female subjects. The proposed system performs well and successfully classifies the subjects as female. A few readings showed deviations from the desired results.
Table 2: Results of FFT, DCT and PSD for female samples

Sample   FFT (TH > 1500000)   DCT (TH > 17000)   PSD (TH < 9000000000)
1        2279661              21010.92356        16050830074
2        1570108              14471.19513        7648114668
3        1454314              13403.958          6569115014
4        2330334              21477.96078        16606900769
5        2258674              20817.49294        15758103976
6        1580512              14567.08555        7749077120
7        2284175              21052.52769        16114143245
8        2000917              18441.82716        12248922996
9        2182986              20119.90027        14578942394
10       1848986              17041.52657        10583366169

Table 3 shows the FFT, DCT and PSD results for the fingerprints of 10 male subjects. The proposed system performs well and successfully classifies the subjects as male. A few readings showed deviations from the desired results.
Table 3: Results of FFT, DCT and PSD for male samples

Sample   FFT (TH < 1500000)   DCT (TH < 17000)   PSD (TH > 9000000000)
1        1720734              15859.46794        9059365136
2        1010376              9312.320079        3124565177
3        1817095              16747.59719        10097352413
4        1447936              13345.17396        6512066008
5        2249503              20732.96669        15480756894
6        1736023              16000.38188        9337090299
7        1666175              15356.61467        8489722112
8        1246556              11489.11739        4838737388
9        1369293              12620.34599        5737451302
10       1777807              16385.49196        9665436481

Of the total samples tested, the performance of the proposed system was found to be 90% for females (99 of 110 female samples identified correctly) and 79.09% for males (87 of 110 male samples identified correctly).


VI. CONCLUSIONS

In this paper, instead of traditional ridge-related analysis, we proposed a frequency domain analysis of fingerprints to identify gender. From the internal database, 110 male samples and 110 female samples were tested. An optimal threshold was chosen for each transform for better results. The proposed algorithm produces an accurate decision for 90% of female samples and 79.09% of male samples. The performance can be enhanced further by using good quality fingerprint images. In future, more work can be done in the frequency domain to find different parameters and transforms for gender identification that are more accurate and suitable for all types of applications. The results show that this method could be considered a prime candidate for use in forensic anthropology to minimize the suspect search list and give a likelihood value for the gender of a suspect.

REFERENCES
[1] M.D. Nithin, B. Manjunatha, D.S. Preethi and B.M. Balaraj, "Gender differentiation by finger ridge count among South Indian population," Journal of Forensic and Legal Medicine, vol. 18(2), pp. 79-81, Feb 2011.
[2] Sudesh Gungadin, "Sex Determination from Fingerprint Ridge Density," Internet Journal of Medical Update, vol. 2, no. 2, Jul-Dec 2007.
[3] G.G. Reddy, "Finger dermatoglyphics of the Bagathas of Araku Valley (A.P.)," American Journal of Physical Anthropology, vol. 42, issue 2, pp. 225-228, March 1997.
[4] Yvonne K. Dillon, Julie Haynes and Maciej Henneberg, "The relationship of the number of Meissner's corpuscles to dermatoglyphic characters and finger size," Journal of Anatomy, vol. 199, pp. 577-584, November 2001.
[5] M. Kralik and V. Novotny, "Epidermal ridge breadth: an indicator of age and sex in paleodermatoglyphics," Variability and Evolution, vol. 11, pp. 5-30, 2003.
[6] M. Acree, "Is there a gender difference in fingerprint ridge density?," Forensic Science International, vol. 102(1), pp. 35-44, May 1999.
[7] A. Badawi, M. Mahfouz, R. Tadross, and R. Jantz, "Fingerprint-based gender classification," The International Conference on Image Processing, Computer Vision, and Pattern Recognition, June 2006.
[8] A. Bharadwaja, P.K. Saraswat, S.K. Aggarwal, P. Banerji, and S. Bharadwaja, "Pattern of finger prints in different ABO blood groups," Journal of Indian Academy of Forensic Medicine, vol. 26(1), p. 69, March 2004.
[9] Anil K. Jain, Sarat C. Dass, and Karthik Nandakumar, "Soft Biometric Traits for Personal Recognition Systems," Proceedings of the International Conference on Biometric Authentication, LNCS 3072, pp. 731-738, Hong Kong, July 2004.
[10] S.M.E. Hossain and G. Chetty, "Next Generation Identity Verification Based on Face-Gait Biometrics," 2011 International Conference on Biomedical Engineering and Technology, IPCBEE vol. 11, IACSIT Press, Singapore, 2011.
[11] Michael D. Frick, Shimon K. Modi, Stephen J. Elliott, and Eric P. Kukula, "Impact of Gender on Fingerprint Recognition Systems," 5th International Conference on Information Technology and Applications (ICITA 2008), ISBN: 978-0-9803267-2-7.

Authors
Ritu Kaur is currently pursuing a Master's degree in Digital Electronics at Chhattisgarh Swami Vivekananda Technical University, India.

Susmita Ghosh Mazumdar is a Reader at Rungta College of Engineering and Technology, Chhattisgarh Swami Vivekananda Technical University, India.


APPLYING GENETIC ALGORITHMS WITH ELITIST NON-DOMINATED SORTING TO THE CASE STUDY "RESOURCE ALLOCATION"
Khashayar Teimoori
Department of Mechanical and Aerospace Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran

ABSTRACT
To achieve the goals of sustainable development, a good energy policy should take into account a wide range of political, economic, social and environmental considerations. On the other hand, the energy resources and technologies at hand are limited. Due to the complexity of the factors affecting the decision-making process, modelling methods are of major importance in the energy sector. Unfortunately, the older methodologies of energy system modelling are not suitable for multi-objective purposes and cannot lead the system to the desired optimum point. The development of multi-objective evolutionary algorithms and their increasing use in engineering problem solving motivate this research on optimizing energy systems with multiple objectives that are inconsistent and not commensurable. In this paper, using a genetic algorithm with elitist non-dominated sorting, resource allocation is considered as a sample problem, which is modelled and analyzed. The present study shows that applying such decision-making methods based on multi-objective optimization, besides making it possible to analyze the balance among multiple inconsistent and non-commensurable objectives in energy systems, will also increase the power of policy-makers and politicians in predicting and improving the results of different decisions and support them in making better decisions. Applying other new optimization methods to the analysis of energy systems is recommended.

KEYWORDS: Energy Policy, Multi-Objective Optimization, Genetic Algorithms, Resource Allocation.

I. INTRODUCTION

According to recent studies, the effects of energy carrier prices on economic sectors and inputs can be calculated and analyzed within the framework of a computable general equilibrium (CGE) model. The developed general equilibrium pattern contains six blocks: 1) production, 2) energy, 3) the commercial sector, 4) production factors, 5) prices and 6) market clearing. The results show that rising energy prices reduce relative price distortions and decrease indiscriminate energy use in production and by households. On the other hand, rising input prices bring inflation and reduce economic prosperity, while higher energy prices increase the country's revenues from energy. These revenues can compensate for the reduced welfare and fund compensating infrastructure. In the short run, paying subsidies directly to people with low incomes compensated for the reduced level of welfare.

1. CGE: Computable General Equilibrium
2. CES: Constant Elasticity of Substitution

300

Vol. 3, Issue 1, pp. 300-305

In addition, efficient investment in transportation infrastructure, with a remarkable increase in public transportation, is required; according to recent studies, its high fees and production expenditure have caused welfare to decline.

I.1 Computable General Equilibrium (CGE) Pattern Framework

The developed general equilibrium pattern contains six blocks, each of which may cover several sub-blocks: 1) production, 2) energy, 3) commercial, 4) production factors, 5) prices and 6) market clearing.

I.1.1 Production block

The manufacturing block in the proposed pattern is modelled as a three-level nested production function, assuming imperfect substitution of production factors via constant elasticity of substitution (CES) functions (see Fig. 1). At the first level, the output of each sector (AD_i) is a function of two composite inputs, the intermediate inputs (QINT_i) and the total value added-energy aggregate (QVAE_i): [1]

AD_i = a_i [ δ_i · QVAE_i^(-ρ_i) + (1 - δ_i) · QINT_i^(-ρ_i) ]^(-1/ρ_i)    (1)
Fig. 1: Structure of the production section

The first-order condition of profit maximization gives the optimal input ratio:

QVAE_i / QINT_i = [ δ_i · PINT_i / ((1 - δ_i) · PVAE_i) ]^(1/(1+ρ_i))    (2)

where PVAE_i and PINT_i are the total value added-energy price and the intermediate input price for sector i. It is worth mentioning that equation (2) is the optimality condition of the CES input function, and the CES function satisfies Euler's law: [2]

(PAD_i)(AD_i) = (PVAE_i)(QVAE_i) + (PINT_i)(QINT_i)    (3)
where PAD_i is the value of total production. The demand for produced inputs is then given by: [3]


QINT_{j,i} = a_{j,i} × QINT_i,   j = 1, …, n    (4)

where a_{j,i} is the technical (input-output) coefficient and QINT_{j,i} denotes the inputs produced by sector j and consumed by sector i. The total value added-energy input (QVAE_i), assuming imperfect substitution between primary factors and energy inputs in a nested CES form, is a function of the total value added (QVA_i) and the total energy input (QVE_i): [4]

QVAE_i = a_i^vae [ δ_i^vae · QVA_i^(-ρ_i^vae) + (1 - δ_i^vae) · QVE_i^(-ρ_i^vae) ]^(-1/ρ_i^vae)    (5)

As with the first production function, the optimal input mix for this CES function follows from Euler's law, which specifies the demand and price relations: [5]

QVA_i / QVE_i = [ δ_i^vae · PVE_i / ((1 - δ_i^vae) · PVA_i) ]^(1/(1+ρ_i^vae))    (6)

(PVAE_i)(QVAE_i) = (PVA_i)(QVA_i) + (PVE_i)(QVE_i)    (7)
Also, the total value added (QVA_i) is, in nested CES form, a function of the factor inputs QF_{f,i}, which comprise the workforce and capital: [6]

QVA_i = a_i^va [ Σ_f δ_f^va · QF_{f,i}^(-ρ_i^va) ]^(-1/ρ_i^va)    (8)

The optimal input equation (the first-order condition) is obtained from the equality of factor prices and the value of marginal products:

WF_{f,i} = PVA_i · a_i^va · QVA_i · [ Σ_f δ_f^va · QF_{f,i}^(-ρ_i^va) ]^(-1) · δ_f^va · QF_{f,i}^(-ρ_i^va - 1)    (9)

where WF_{f,i} is the price of factor f in sector i. [7, 8]
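The CES structure and Euler's law of equations (1)-(3) can be checked numerically. All parameter and quantity values below are illustrative assumptions, not calibrated values from the study, and the output price PAD_i is normalized to 1:

```python
import numpy as np

# Illustrative parameters for one sector (not calibrated values).
a, delta, rho = 1.2, 0.6, 0.5
QVAE, QINT = 40.0, 25.0

# Eq. (1): CES aggregate output.
S = delta * QVAE ** -rho + (1 - delta) * QINT ** -rho
AD = a * S ** (-1 / rho)

# Competitive input prices = marginal products (with PAD = 1).
PVAE = a * S ** (-1 / rho - 1) * delta * QVAE ** (-rho - 1)
PINT = a * S ** (-1 / rho - 1) * (1 - delta) * QINT ** (-rho - 1)

# Eq. (2): the optimal input ratio is recovered from the prices.
assert np.isclose(QVAE / QINT,
                  (delta * PINT / ((1 - delta) * PVAE)) ** (1 / (1 + rho)))

# Eq. (3): Euler's law (adding-up) holds for the degree-1 CES function.
assert np.isclose(AD, PVAE * QVAE + PINT * QINT)
print(AD, PVAE, PINT)
```

Because the CES function is homogeneous of degree one, the adding-up condition holds exactly at any input bundle, which is why it can serve as a consistency check on a calibrated model.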

I.1.2 Energy section

Six energy carriers are considered, including petrol, kerosene, fuel oil, LPG and electricity. Since the aim of this study is to investigate the effect of carrier prices on the various economic sectors, the CGE pattern has been developed to determine these prices. For this purpose the total energy input (QVE_i) is a CES function of the six energy carrier inputs: [9]

QVE_i = a_i^ve [ Σ_e δ_i^ve · QFE_{i,e}^(-ρ_i^ve) ]^(-1/ρ_i^ve)    (10)

where QFE_{i,e} are the energy carrier inputs and e = 1, 2, …, 6 indexes the individual carriers. The first-order condition for choosing each energy carrier is

PDE_{i,e} = PEE_i · ∂QVE_i / ∂QFE_{i,e}    (11)

where PDE_{i,e} is the price of each carrier and PEE_i the aggregate energy input price.

The marginal product of each carrier is

∂QVE_i / ∂QFE_{i,e} = a_i^ve · δ_i^ve · QFE_{i,e}^(-ρ_i^ve - 1) · [ Σ_e δ_i^ve · QFE_{i,e}^(-ρ_i^ve) ]^(-(1+ρ_i^ve)/ρ_i^ve)    (12)

Using equation (10), this can be written as

∂QVE_i / ∂QFE_{i,e} = δ_i^ve · (a_i^ve)^(-ρ_i^ve) · (QVE_i / QFE_{i,e})^(1+ρ_i^ve)    (13)

Substituting (13) into (11) gives the price of each carrier:

PDE_{i,e} = PEE_i · δ_i^ve · (a_i^ve)^(-ρ_i^ve) · (QVE_i / QFE_{i,e})^(ρ_i^ve + 1)    (14)

Solving (14) for QFE_{i,e}, the demand for each carrier is determined: [10, 11, 12]

QFE_{i,e} = QVE_i · [ PEE_i · δ_i^ve · (a_i^ve)^(-ρ_i^ve) / PDE_{i,e} ]^(1/(ρ_i^ve + 1))    (15)

And the aggregate energy price is defined by the adding-up condition

PEE_i · QVE_i = Σ_e PDE_{i,e} · QFE_{i,e}    (16)
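The carrier-demand relations (10) and (14)-(16) can be checked numerically. The parameters and quantities below are illustrative assumptions (not calibrated values), with PEE_i taken as the numeraire:

```python
import numpy as np

# Illustrative parameters for one sector's energy aggregate (not calibrated).
a_ve, rho = 1.0, 0.8
delta = np.array([0.30, 0.20, 0.20, 0.15, 0.10, 0.05])   # six carriers
QFE = np.array([10.0, 6.0, 5.0, 4.0, 3.0, 2.0])          # carrier quantities

# Eq. (10): CES aggregate energy input.
S = np.sum(delta * QFE ** -rho)
QVE = a_ve * S ** (-1 / rho)

# Eq. (14): carrier prices from the first-order condition (PEE = 1).
PEE = 1.0
PDE = PEE * delta * a_ve ** -rho * (QVE / QFE) ** (1 + rho)

# Eq. (15): inverting (14) recovers the carrier demands.
QFE_back = QVE * (PEE * delta * a_ve ** -rho / PDE) ** (1 / (1 + rho))
assert np.allclose(QFE_back, QFE)

# Eq. (16): the adding-up condition holds (degree-1 homogeneity).
assert np.isclose(PEE * QVE, np.sum(PDE * QFE))
```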

I.2 Foreign Trade Block

I.2.1 Export section

In this block, domestic production (xd_i) is allocated between the domestic market (xxd_i) and exports (qe_i) using a constant elasticity of transformation (CET) function [13]. One of the crucial properties of this function is the transformation between production for the foreign and the domestic market.

xd_i = B_i [ θ_i · qe_i^(ρ_i^T) + (1 - θ_i) · xxd_i^(ρ_i^T) ]^(1/ρ_i^T)    (17)

I.2.1.1 Export Supply Equation

The export equation implies that each sector can produce two kinds of goods, one for the domestic market and one for the foreign market. The combination of goods manufactured for the domestic and foreign markets is chosen to maximize sales revenue:

px_i · xd_i = pe_i · qe_i + pd_i · xxd_i    (18)

Subject to the transformation constraint (17), the export supply function becomes:

qe_i = xxd_i · [ (pe_i / pd_i) · (1 - θ_i) / θ_i ]^(1/(ρ_i^T - 1))    (19)

where px_i is the aggregate output price, xd_i the amount of domestic production, qe_i the export quantity, pd_i the price of goods manufactured and sold domestically, and xxd_i the domestic sales. [14, 15, 16]

I.2.1.2 Export Prices

The export price is defined in terms of the world export price pwe_i, the exchange rate exr, and the export tax/duty rate te_i: [18, 19]

pe_i = pwe_i · exr · (1 + te_i)    (20)

It is important to note that equation (20) assumes that the export price and the exchange rate (governed by each country's financial rules) are flexible, while the world export price is fixed. This price-taking assumption stems from the economy being small compared with the world economy. According to the absorption equation, the value of the composite commodity equals the sum of domestic and imported supply:

p_i · x_i = pd_i · xxd_i + pm_i · qm_i    (21)

where p_i is the price of the composite commodity, x_i the composite supply, pm_i the import price and qm_i the total imports. Imperfect substitution between domestic and imported products is one of the aspects considered by modern trade theories. Krugman and Helpman's research, set in the framework of differentiated products, implies that a given manufacturing sector can both export and import the goods it produces. [20]

303

Vol. 3, Issue 1, pp. 300-305

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

II.

ANALYSIS OF RESULTS AND DISCUSSION

The present study attempted to show, within a computable general equilibrium (CGE) framework, the effects of increasing energy carrier prices on the economic sectors and to scrutinize this important factor in every sector, as demonstrated in Fig. 2. The figure shows the effects on twelve sectors of the accounting matrix for each year, illustrating how each sector influences the accounts of a single year. [21]

Fig. 2 Effects of each sector on the accounts of the financial economy

III.

CONCLUSIONS AND FUTURE SCOPE

Many utilities are faced with staggering capital requirements for new plants, significant fluctuations in demand and energy growth rates, declining financial performance, and political, regulatory, and consumer concern about rising prices. The results show that increasing energy carrier prices, by reducing the distortion of relative prices, decreased wasteful household consumption. Until the beginning of the industrial revolution several centuries ago, land use was the major environmental impact of human activities. Today we are approaching the limit of available agricultural land, and only more intensive use of it can provide food for future increases of the world population.

REFERENCES
[1] Bhattacharyya, S.C. (1996). Applied General Equilibrium Models for Energy Studies: A Survey. Energy Economics, 18, 145-164.
[2] Blitzer, C.R. (1986). Analyzing Energy-Economy Interactions in Developing Countries. Energy Journal, pp. 471-501.
[3] Boyd, R. and N.D. Uri (1993). The Economic Impact of Taxes on Refined Petroleum Products in the Philippines. Energy: The International Journal, 18, 31-47.
[4] Boyd, R. and N.D. Uri (1999). A Note on the Economic Impact of Higher Gasoline and Electricity Prices in Mexico. Journal of Policy Modeling, 21(4), 527-534.

[5] Boyd, R. and N.D. Uri (1991). The Impact of a Broad-Based Energy Tax on the U.S. Economy. Energy Economics, 13(4), 258-273.
[6] Clarete, R.L. and J.A. Roumasset (1986). CGE Models and Development Policy Analysis: Problems, Pitfalls, and Challenges. American Journal of Agricultural Economics, 68, 1212-1216.
[7] De Melo, J. (1988). Computable General Equilibrium Models for Trade Policy Analysis in Developing Countries: A Survey. Journal of Policy Modeling, 10(4), 469-503.
[8] Dervis, K., J. De Melo and S. Robinson (1982). General Equilibrium Models for Development Policy. A World Bank Research Publication, Cambridge University Press.
[9] Dixon, P.B., Parmenter, B.R., Sutton, J. and Vincent, D.P. (1997). ORANI: A Multisectoral Model of the Australian Economy. North-Holland Press.
[10] Hazilla, M. and R.J. Kopp (1990). Social Cost of Environmental Quality Regulations: A General Equilibrium Analysis. Journal of Political Economy, 98(4), 853-873.
[11] Hudson, E.A. and D.W. Jorgenson (1974). U.S. Energy Policy and Economic Growth, 1975-2000. Bell Journal of Economics and Management Science, 5(2), 461-514.
[12] Jorgenson, D.W. (1984). Econometric Methods for Applied General Equilibrium Analysis, in H. Scarf and J. Shoven (eds.), Applied General Equilibrium Analysis, Cambridge: Cambridge University Press, 139-207.
[13] Jorgenson, D.W. and P.J. Wilcoxen (1993a). Energy Prices, Productivity and Economic Growth. Annual Review of Energy and the Environment, 18, 343-395.
[14] Jorgenson, D.W. and P.J. Wilcoxen (1993b). Reducing US Carbon Emissions: An Econometric General Equilibrium Assessment. Resource and Energy Economics, 15, 7-25.
[15] Lenjosek, G. and J. Whalley (1985). A Small Open Economy Model Applied to an Evaluation of Canadian Energy Policies Using 1980 Data. Journal of Policy Modeling, 8(1), 89-110.
[16] Washington, D.C.: International Food Policy Research Institute.
[17] Manne, A.S. and R.G. Richels (1977). ETA-MACRO: A Model of Energy-Economy Interaction, in J. Hitch (ed.), Modeling Energy-Economy Interactions: Five Approaches, Research Paper No. 5, Resources for the Future, Washington, DC.
[18] Naqvi, F. (1998). A Computable General Equilibrium Model of Energy, Economy and Equity Interactions in Pakistan. Energy Economics, 20(4), 347-373.
[19] Norman, V.D. and J. Haaland (1987). VEMOD: A Ricardo-Viner-Heckscher-Ohlin-Jones Model of Factor Price Determination. Scandinavian Journal of Economics, 89(3), 251-270.
[20] Resosudarmo, B. (2003). Computable General Equilibrium Model on Air Pollution Abatement Policies with Indonesia as a Case Study. Economic Record, 79, 63-73.
[21] Robinson, S. (1989). Multisectoral Models, in H.B. Chenery and T.N. Srinivasan (eds.), Handbook of Development Economics. Amsterdam: North-Holland.

Author

Khashayar Teimoori was born in Tehran, Iran, in 1992. He is currently pursuing the final year of a B.Tech degree in mechanical engineering at the Islamic Azad University, Science and Research Branch, Tehran, Iran. He is a manager of the Backstretch team (in the field of robotics) at www.backstretch-team.info and a manager of web design for the Rheosociety research group at www.rheosociety.com. He is a member of technical societies including ASME, ISME (Iranian Society of Mechanical Engineering), and IMS (Iranian Mathematical Society). His special interests are computational mechanics, rheology, robotics, and the field of energy considerations.


POWER CONDITIONING IN BATTERY CHARGERS USING SHUNT ACTIVE POWER FILTER THROUGH NEURAL NETWORK
1P. Thirumoorthi, M IEEE, 2Jyothis Francis and 3N. Yadaiah, SM IEEE

1,2Dept. of Electrical and Electronics Engg., Kumaraguru College of Technology, Coimbatore, India
3Dept. of Electrical and Electronics Engg., JNTUH College of Engg., Hyderabad, India

ABSTRACT
In this paper, a single-layer perceptron control algorithm is presented for a single-phase shunt active power filter to improve the power factor and reduce source harmonics. The control scheme is based on a neural harmonic estimator. The reference current signal generated by this estimator is used to generate gating pulses for the active power filter switches. The control scheme has two control loops, namely a DC voltage regulation loop and a current control loop. The performance of the proposed neural harmonic estimator is evaluated and compared with a linear reference generator based control scheme incorporating the same voltage and current control loops as the proposed controller. The proposed APF controller forces the supply current to be sinusoidal, with low current harmonics, and in phase with the voltage. Simulations are carried out in Matlab Simulink, and the results show that the proposed system is capable of compensating the harmonic current to an acceptable level.

KEYWORDS: Active power filter, Harmonics, Single-layer perceptron, Feedforward neural network, Selective compensation.

I.

INTRODUCTION

Harmonic compensation has become increasingly important due to the intensive use of power converters and other nonlinear loads, which deteriorate power system voltage and current waveforms. The current waveform can become quite complex depending on the type of load and its interaction with other components in the system. One of the major effects of power system harmonics is to increase the current in the system; harmonics also cause other problems, such as greater power losses in distribution and operation failure of protection devices. Because of these problems, the quality of electrical power is an object of great concern. A power line conditioner such as an active power filter (APF) can be used to minimize the harmonic distortion current [1]-[3]. The main purpose of a shunt active power filter is to supply the harmonic current absorbed by the system; to this end, the control of APF systems is of great importance. For reference-signal generation, the direct method consists of sensing the load current and extracting its harmonic content [4]-[5]. As an alternative, the indirect method generates a sinusoidal reference signal by means of grid-voltage sensing. In that case, the grid current is forced to follow this sinusoidal signal, and thus the load harmonics are indirectly supplied by the APF inductor current [6]-[8]. The linear reference generator based control strategy consists of two control loops and a resonant selective harmonic compensator. The reference signal is generated by the indirect method [6], that is, by sensing the grid voltage. A resonant selective harmonic compensator is used for generating

the reference signal. It consists of several generalized integrators, essentially second-order band-pass filters with high gain and low bandwidth, so it does not affect the dynamics of the control loop. Reference-signal tracking is performed by the inner current loop, while the outer voltage loop is accountable for regulating the capacitor voltage. The main advantages of this method are its good dynamic and transient performance; with this approach the most harmful harmonics can be eliminated from the load current, and the harmonics are processed individually. To improve the performance of the inner current loop, optimal, neural, and model reference adaptive controls have been used recently [13]-[15]. Other approaches utilize nonlinear regulators, such as sliding-mode control and hysteresis control [16], [17]. All the previously mentioned controls attenuate the current harmonics only to a certain level. In this paper, the design of a shunt active power filter based on a single-layer feedforward neural network is presented. The proposed ANN control scheme consists of three control blocks, namely voltage control, current control, and a neural reference generator. The reference current signal generated by the neural reference generator contains the harmonic components that are to be eliminated. This reference signal is controlled by means of a PI controller, which in turn controls the pulse-width modulation (PWM) switching pattern generator [5]-[8]. The output of the PWM generator controls the power switches. The neural reference generator and the other two control blocks play an important role in the dynamic response of the system; these blocks determine the accuracy and the order of the harmonics to be injected. The neural reference generator has an inherent learning capability that gives improved precision by interpolation, unlike the standard look-up table method and space vector modulation. The reference current can be determined from the distorted source current of the system [16].
This paper is organized as follows. Section II describes the active power filter topology. Section III presents the estimation of the compensating current. Section IV explains the control of the active filter. Section V verifies the expected features of both controllers by means of Matlab simulation results. Section VI concludes the paper.

II.

ACTIVE POWER FILTER FOR BATTERY CHARGER

A battery charger is used as the nonlinear load in the system shown in Fig. 1; it draws a non-sinusoidal current from the supply. In this work, the load consists of a four-diode full-bridge rectifier with a capacitor in parallel with a resistor on the DC side. Active power filters are the best known tool for current harmonic compensation. Fig. 1 shows the schematic diagram of a single-phase active power filter in closed loop. The diode bridge rectifier with the resistor and capacitor connected in parallel acts as the nonlinear load. The APF can be controlled in a proper way to attain high resistance against the higher-order harmonics produced by the load; through its control mechanism, the active power filter shapes the grid current to a sinusoid. The compensation principle of the shunt active power filter can be explained as follows. Under normal conditions the supply voltage can be represented as

vs(t) = Vm sin(ωt)    (1)

When the nonlinear load is connected to the supply, it draws a non-sinusoidal current, so the load current contains the fundamental component and all other higher-order harmonics. It can be represented as

iL(t) = Σ_n I_n sin(nωt + θ_n)    (2)
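The load-current model of Eq. (2) can be sketched directly; the amplitudes and orders below are illustrative values typical of a rectifier load, not measured data:

```python
# Minimal sketch of Eq. (2): fundamental plus odd harmonics, as drawn by a
# full-bridge rectifier load. Amplitudes and phases are illustrative.
import math

F = 50.0                       # grid frequency (Hz)
W = 2 * math.pi * F
HARMONICS = {1: 10.0, 3: 4.0, 5: 2.5, 7: 1.2}   # I_n per order n (A)

def i_load(t, phases=None):
    """Load current i_L(t) = sum_n I_n sin(n*w*t + theta_n)."""
    phases = phases or {}
    return sum(I * math.sin(n * W * t + phases.get(n, 0.0))
               for n, I in HARMONICS.items())
```

Sampling this function over one period reproduces the flat-topped, peaky waveform characteristic of rectifier loads.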

The shunt-connected active power filter generates a harmonic current iF(t) which compensates the harmonics present in the source current and makes the source current purely sinusoidal in nature:

is(t) = iF(t) + iL(t)    (3)

The compensation current iF is exactly equal to the harmonic content of the load current iL; hence the APF needs to calculate iF accurately and instantaneously.
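The compensation principle around Eq. (3) can be sketched as follows: the filter injects the load harmonics with opposite sign, leaving only the fundamental for the source. The harmonic amplitudes here are illustrative:

```python
# Sketch of the compensation principle of Eq. (3): the APF current cancels
# the load harmonics so the source supplies only the fundamental.
# Amplitudes are illustrative, not measured values.
import math

W = 2 * math.pi * 50.0
LOAD = {1: 10.0, 3: 4.0, 5: 2.5}          # harmonic amplitudes of i_L

def i_L(t):
    return sum(I * math.sin(n * W * t) for n, I in LOAD.items())

def i_F(t):
    # filter current: negated harmonic content, so that i_s = i_F + i_L
    return -sum(I * math.sin(n * W * t) for n, I in LOAD.items() if n != 1)

def i_s(t):
    return i_F(t) + i_L(t)                # purely sinusoidal source current
```

Evaluating i_s at any instant gives exactly the 10 A fundamental, which is the condition the APF controller tries to enforce in real time.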


Fig. 1. Active power filter circuit

Fig. 2. Linear Reference generator

III.

ESTIMATION OF COMPENSATING CURRENT

This section deals with the reference current generating schemes, including the linear resonant selective harmonic compensator and proposed neural harmonic estimator.

3.1 Linear Reference Generator
In the basic approach to reference generation, the grid voltage vs is multiplied by a gain k(t) [6]-[8]:

isref = vs · k(t)    (4)

In this method the reference signal follows the sensed grid voltage, so any distortion in the grid voltage is reflected in the reference. To overcome this drawback, a controller with a linear harmonic compensator can be used, as shown in Fig. 2. This generator uses a band-pass filter G1(s) and a harmonic compensator G2(s); the grid voltage vs and supply current is are processed through these filters to produce the reference signal. The band-pass filters of G2(s) operate in closed loop:

G1(s) = k · 2δωs / (s² + 2δωs + ω²)    (5)

G2(s) = Σ_n [k_n · 2δnωs / (s² + 2δnωs + (nω)²)]    (6)

where δ is the damping factor, ω = 2πf, and k is the gain at the fundamental frequency f. The index n takes the values 3, 5, . . ., N, where N is the highest current-harmonic order to be attenuated, and k_n is the band-pass gain of each filter.
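The selectivity of the band-pass section in Eq. (5) can be seen by evaluating its magnitude directly on the imaginary axis. The gain and damping values below are hypothetical design choices, not those of the paper:

```python
# Direct evaluation of |G1(j2*pi*f)| for the band-pass filter of Eq. (5).
# K and DELTA are hypothetical design values.
import math

K, DELTA, W0 = 100.0, 0.01, 2 * math.pi * 50.0

def g1_mag(f):
    """|G1(s)| at s = j*2*pi*f, for G1(s) = k*2*delta*w0*s / (s^2 + 2*delta*w0*s + w0^2)."""
    s = 1j * 2 * math.pi * f
    return abs(K * 2 * DELTA * W0 * s / (s * s + 2 * DELTA * W0 * s + W0**2))
```

At f = 50 Hz the gain equals k, while components well off resonance are strongly attenuated; this high gain in a narrow band is why the compensator can target individual harmonics without disturbing the loop dynamics.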

3.2 Neural Harmonic Estimator
In this method, the linear reference generator scheme is replaced by an artificial neural network (ANN) made up of a single-layer perceptron. The ANN is trained offline, using a set of training data generated

by Fourier analysis of the source current. Neural networks involve two main processes: training and testing. In the training process, the network is trained with suitable input and output patterns, called the data set, so that the outputs of the neural network approximate the target values for the various input patterns in the training set. In the testing process, the performance of the network is verified using data outside the training set. The neural network for harmonic component detection consists of a two-layer network with an input layer of 49 units and a single output layer. Before data are fed to the ANN, the source current signals are sampled at a uniform interval ∆t over a half cycle of the voltage source, as shown in Fig. 3. The time values are therefore discrete, k∆t with k = 0, 1, 2, . . ., and are given to the ANN for training together with the expected output. The input is given as a continuous variable, and each input signal flows through a gain, or weight. The weights can be positive or negative, corresponding to acceleration or inhibition of the flow of signals. The summing node produces a weighted sum of the signals plus a bias value, which is initially taken as zero. This sum then passes to the output through the transfer function, which is usually nonlinear, such as a sigmoid, inverse-tangent, hyperbolic, or Gaussian type. When a set of input values is presented to the ANN, step-by-step calculations are made in the forward direction to derive the output pattern. A cost function, given by the squared difference between the network output and the desired output for the set of input patterns, is minimized by the gradient descent method, altering the weights one at a time starting from the output layer. After training comes the testing phase, in which the trained weight vectors are used to find the best value of the generated reference current.
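The training procedure above can be sketched in miniature: 49 uniform samples of a half cycle feed a single linear layer trained by gradient descent to recover harmonic amplitudes. The layer sizes follow the text, but the synthetic training data, the learning rate, and the linear (rather than nonlinear) output unit are simplifying assumptions of this sketch:

```python
# Minimal offline-training sketch of the single-layer harmonic estimator:
# 49 half-cycle current samples -> amplitudes of selected harmonic orders.
# Training data, learning rate, and the linear output are assumptions.
import math
import random

N_IN = 49                      # input layer: 49 samples per half cycle
ORDERS = (1, 3, 5)             # harmonic orders estimated at the output

def half_cycle(amps):
    """Sample sum_n amps[n]*sin(n*theta) at 49 uniform points over a half cycle."""
    return [sum(a * math.sin(n * math.pi * k / (N_IN - 1))
                for n, a in zip(ORDERS, amps)) for k in range(N_IN)]

rng = random.Random(0)
targets = [[rng.uniform(0.0, 5.0) for _ in ORDERS] for _ in range(200)]
inputs = [half_cycle(t) for t in targets]

weights = [[0.0] * N_IN for _ in ORDERS]   # one weight row per harmonic output
LR = 0.0005
for _ in range(30):                         # gradient descent on squared error
    for x, target in zip(inputs, targets):
        for row, t in zip(weights, target):
            err = sum(w * v for w, v in zip(row, x)) - t
            for i, v in enumerate(x):
                row[i] -= LR * err * v

def estimate(x):
    """Trained network output: one amplitude per harmonic order."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]
```

On a fresh half cycle the trained weights recover the amplitudes closely, which is precisely the role the neural harmonic estimator plays in the controller.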
Current regulation is performed using a PI controller. Fig. 3 shows the block diagram of the APF control scheme based on the neural reference generator. The neural reference generator performs the task of current harmonic computation and generates the reference current signal. This signal is sent to the current control block, which is realized using a PI controller. The output of the PI controller is a control voltage signal which produces the gating pulses for inverter control through the PWM generator.

Fig.3. APF control scheme based on neural harmonic estimator

IV.

CONTROL OF ACTIVE FILTER

The indirect control method includes an outer voltage control loop and an inner current control loop. In indirect control, the reference signal is generated by sensing the grid voltage, and the grid current is forced to follow this sinusoidal signal, which reduces the harmonics present in the grid current [7]-[8]. The indirect current control scheme attenuates the current harmonics to a high degree while maintaining stability, and by this method the most harmful harmonics can be easily attenuated. The reference current

generated by this method contains information about the dominant harmonic content that needs to be eliminated.

4.1 DC Voltage Regulation Loop
The outer voltage loop is responsible for capacitor voltage regulation; Fig. 4 shows its block diagram. In the outer voltage loop, the square of the capacitor voltage is compared with the squared reference voltage, and the error is regulated with the help of a PI compensator [13]-[14]. Squared values are used to simplify the design of the loop. To reduce the ripple at the output of the PI compensator, an LPF can be added, with its cutoff frequency set below twice the grid frequency.
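The squared-voltage PI loop described above can be sketched as follows; the gains, sampling period, and reference are hypothetical illustration values (the LPF stage is omitted for brevity):

```python
# Sketch of the outer voltage loop: a PI compensator acting on the error
# between squared capacitor voltage and squared reference. Gains, dt, and
# the reference are hypothetical values.

class SquaredVoltagePI:
    def __init__(self, kp=0.002, ki=0.05, dt=1e-3, v_ref=350.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.v_ref2 = v_ref ** 2          # comparing squares simplifies the design
        self.integral = 0.0

    def step(self, v_cap):
        """One control update from the measured capacitor voltage."""
        error = self.v_ref2 - v_cap ** 2
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral
```

A capacitor voltage below the reference yields a positive command (drawing energy in), one above it a negative command, which is the regulating action the loop needs.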

4.2 Current Control Loop
The inner current control loop tracks the reference signal generated by the reference generator; Fig. 5 shows its block diagram. Through this current control, the grid current is forced to follow the generated reference current. The error signal is regulated using a PI compensator, and the resulting control signal is used to generate the gating pulses for the shunt active power filter through the PWM technique.

Fig.4. Outer voltage loop

Fig.5 Current control loop.

System parameters are specified in Table I.

Table I. System Parameters
Symbol | Parameter                        | Value
Vs     | Grid voltage                     | 230 V
f      | Grid frequency                   | 50 Hz
Rs     | Nonlinear load series resistance | 4 ohm
R      | Nonlinear load resistance        | 50 ohm
C      | Nonlinear load capacitance       | 65 uF
L      | Active filter inductance         | 15 mH
C1     | Active filter capacitance        | 1 mF

V.

SIMULATION RESULTS

The active power filter circuits using the neural reference generator and the linear reference generator are built in the Matlab Simulink environment; the Simulink model using the neural network is shown in Fig. 6. The system parameters used in these simulations are provided in Table I. A system with a 230 V, 50 Hz supply is used, the DC reference voltage is set at 350 V, and the filter inductor is 15 mH

and the DC bus capacitor is 1 mF. This control system is able to detect and eliminate most of the harmonics present in the source current.

Fig. 6 Simulation model of APF based on neural harmonic estimator

Fig. 7 Grid current waveform and FFT analysis without filter


Fig. 8 Grid current waveform and FFT analysis with filter using linear reference generating control technique.

Fig.9 Grid current waveform and FFT analysis with filter using neural reference generator.

The nonlinear load consists of a series resistor Rs with an uncontrolled bridge rectifier connected to a capacitor C and a resistive load R. A proportional-integral (PI) voltage loop has been used to set the proper magnitude of the line current. The proposed neural network based control strategy and the linear controller have been tested in Matlab Simulink and the results compared. These results reveal that, with the use of the neural controller, the magnitude of the harmonic components in the grid current is considerably reduced. The proposed neural control system is able to detect the largest load harmonics and to compensate them properly. Fig. 7 shows the distorted current waveform when the nonlinear load is connected across the grid. Analysis of the given waveforms shows that the THD of the grid current is 26.28% when the load is connected. It is reduced to 9.88% by the use of the APF with the linear control scheme, as shown in Fig. 8. Fig. 9 shows that the neural controller is able to reduce the harmonics further, to 7.82%. Compared to the linear controller, the proposed neural controller has the advantage of increased overall efficiency owing to its high learning rate, and it can adapt itself to compensate for variations in the nonlinear current or nonlinear loads. Higher-order harmonics are attenuated very efficiently using the neural controller.
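THD figures like those quoted above can be computed from one sampled period of a current waveform: single-bin DFT magnitudes per harmonic order, then the ratio of the harmonic RMS to the fundamental. The waveform below is synthetic, not the measured one:

```python
# THD from one sampled period: single-bin DFT magnitudes, then
# THD = sqrt(sum_{n>=2} I_n^2) / I_1. The test waveform is synthetic.
import math

def harmonic_mag(samples, n):
    """Peak magnitude of harmonic order n over one sampled period."""
    N = len(samples)
    re = sum(s * math.cos(2 * math.pi * n * k / N) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * n * k / N) for k, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / N

def thd(samples):
    fund = harmonic_mag(samples, 1)
    harm = math.sqrt(sum(harmonic_mag(samples, n) ** 2
                         for n in range(2, len(samples) // 2)))
    return harm / fund

# synthetic distorted current: 10 A fundamental plus a 2 A third harmonic
wave = [10.0 * math.sin(2 * math.pi * k / 200) +
        2.0 * math.sin(6 * math.pi * k / 200) for k in range(200)]
```

For this synthetic wave the computation gives a THD of 0.2 (20%); applied to the simulated grid currents it is the kind of measurement behind the 26.28%, 9.88%, and 7.82% figures.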

VI.

CONCLUSIONS

In this paper, an active power filter has been designed and its control methods presented. The proposed controller is designed to mitigate selective harmonic currents, and the shunt active filter has been used to compensate the harmonic current of a nonlinear load. The filter's parameters are made adaptive to grid current fluctuations. The inverter of the shunt active filter is current-controlled by a PI controller whose reference is given by a neural filter. Compared to the linear current control scheme, the neural controller gives high attenuation of high-order harmonics. In the linear current

control scheme, a conventional PI loop that regulates the average level of the filter capacitor voltage and a resonant selective harmonic compensator are used. Compared with this controller, the neural network shows better performance in the compensation of current harmonics.

ACKNOWLEDGEMENTS
The authors would like to thank the management of Kumaraguru College of Technology and JNTUH College of Engineering for providing infrastructure and support.

REFERENCES
[1] M. El-Habrouk, M. K. Darwish, and P. Mehta, "Active power filters: A review," Proc. Inst. Elect. Eng.—Elect. Power Appl., vol. 147, no. 5, pp. 403–413, Sep. 2000.
[2] T. C. Green and J. H. Marks, "Control techniques for active power filters," Proc. Inst. Elect. Eng.—Elect. Power Appl., vol. 152, no. 2, pp. 369–381, Mar. 2005.
[3] J. C. Wu and H. L. Jou, "Simplified control method for single-phase active power filter," Proc. Inst. Elect. Eng.—Elect. Power Appl., vol. 143, no. 3, pp. 219–224, May 1996.
[4] C. Y. Hsu and H. Y. Wu, "A new single-phase active power filter with reduced energy-storage capacity," Proc. Inst. Elect. Eng.—Elect. Power Appl., vol. 143, no. 1, pp. 25–30, Jan. 1996.
[5] H. Komurcugil and O. Kukrer, "A new control strategy for single-phase shunt active power filters using a Lyapunov function," IEEE Trans. Ind. Electron., vol. 53, no. 1, pp. 305–312, Feb. 2006.
[6] B. N. Singh, "Sliding mode control technique for indirect current controlled active filter," in Proc. IEEE Region 5 Annu. Tech. Conf., New Orleans, LA, Apr. 2003, pp. 51–58.
[7] D. A. Torrey and A. M. A. M. Al-Zamel, "Single-phase active power filters for multiple nonlinear loads," IEEE Trans. Power Electron., vol. 10, no. 3, pp. 263–272, May 1995.
[8] V. F. Corasaniti, M. B. Barbieri, P. L. Arnera, and M. I. Valla, "Hybrid active filter for reactive and harmonics compensation in a distribution network," IEEE Trans. Ind. Electron., vol. 56, no. 3, pp. 670–677, Mar. 2009.
[9] P. Kumar and A. Mahajan, "Soft computing techniques for the control of an active power filter," IEEE Trans. Power Del., vol. 24, no. 1, pp. 452–461, Jan. 2009.
[10] L. Asiminoaei, F. Blaabjerg, S. Hansen, and P. Thogersen, "Adaptive compensation of reactive power with shunt active power filters," IEEE Trans. Ind. Appl., vol. 44, no. 3, pp. 867–877, May/Jun. 2008.
[11] M. Cirrincione, M. Pucci, G. Vitale, and S. Scordato, "A single-phase shunt active power filter for current harmonic compensation by adaptive neural filtering," in Proc. 12th IEEE EPE-PEMC, Portoroz, Slovenia, Aug. 30–Sep. 1, 2006, pp. 1830–1835.
[12] M. Cirrincione, M. Pucci, and G. Vitale, "A single-phase DG generation unit with shunt power filter capability by adaptive neural filtering," IEEE Trans. Ind. Electron., vol. 55, no. 5, pp. 2093–2110, May 2008.
[13] G. Bhuvaneswari and M. G. Nair, "Design, simulation, and analog circuit implementation of a three-phase shunt active filter using the I cos phi algorithm," IEEE Trans. Power Del., vol. 23, no. 2, pp. 1222–1235, Apr. 2008.
[14] D. Casadei, G. Grandi, R. K. Jardan, and F. Profumo, "Control strategy of a power line conditioner for cogeneration plants," in Proc. IEEE PESC, Charleston, SC, Jun. 1999, pp. 607–612.
[15] D. Niebur and A. J. Germond, "Unsupervised neural net classification of power system static security states," Int. Journal of Electrical Power and Energy Systems, vol. 14, no. 2-3, pp. 233–242, 1992.
[16] M. Rukonuzzaman and M. Nakaoka, "Single-phase shunt active power filter with adaptive neural network method for determining compensating current," in Proc. 27th Annu. IEEE IECON, Nov. 29–Dec. 2, 2001, vol. 3, pp. 2032–2037.
[17] D. Gao and X. Sun, "A shunt active power filter with control method based on neural network,
[18] W. M. Grady, M. J. Samotyj, and A. H. Noyola, "Survey of active power line conditioning methodologies," IEEE Trans. on Power Delivery, vol. 5, no. 3, pp. 1536–1542, 1990.
[19] M. El-Habrouk and M. K. Darwish, "Design and implementation of modified Fourier analysis harmonic current computation technique for power active filters using DSPs," IEE Proc. Electr. Power Appl., vol. 148, no. 1, pp. 21–28, 2001.
[20] J. Miret, M. Castilla, J. Matas, J. M. Guerrero, and J. C. Vasquez, "Selective harmonic-compensation control for single-phase active power filter with high harmonic rejection," IEEE Transactions on Industrial Electronics, vol. 56, no. 8, Aug. 2009.
[21] M. A. M. Radzi and N. A. Rahim, "Neural network and bandless hysteresis approach to control switched capacitor active power filter for reduction of harmonics," IEEE Transactions on Industrial Electronics, vol. 56, no. 5, May 2009.

[22] A. Bhattacharya and C. Chakraborty, "A shunt active power filter with enhanced performance using ANN-based predictive and adaptive controllers," IEEE Transactions on Industrial Electronics, vol. 58, no. 2, Feb. 2011.

Authors
P. Thirumoorthi received the B.E. in Electrical and Electronics Engineering from Madras University, India, in 1993, and the M. Tech. in Power Electronics and Drives from SASTRA University, India, in 2002. He is working as Associate Professor at Kumaraguru College of Technology, India, and is currently pursuing a Ph.D. in Electrical Engineering at J. N. T. University, Hyderabad, India. His research interests include power quality and power filters.

Jyothis Francis received the B.Tech. degree in Electrical and Electronics Engineering from Calicut University, Kerala, India, in 2010. She is currently pursuing the M.E. (Power Electronics and Drives) at Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India. Her areas of interest include power electronics, industrial drives, and power quality improvement. N. Yadaiah received the B.E. in Electrical Engineering from the College of Engineering, Osmania University, Hyderabad, India, in 1988, the M. Tech. in Control Systems from I.I.T. Kharagpur, India, in 1991, and the Ph.D. in Electrical Engineering from J. N. T. University, Hyderabad, India, in 2000. He received the Young Scientist Fellowship (YSF) of the Andhra Pradesh State Council for Science and Technology in 1999. He is currently Professor and Head of Electrical & Electronics Engineering at JNTUH College of Engineering, Hyderabad, and holds two research projects. He has 75 publications to his credit in international journals and conferences. He was a Visiting Professor at the University of Alberta from May to July 2007. He is a Fellow of the Institution of Engineers (India), a Fellow of IETE (India), a Senior Member of IEEE, and a Life Member of the Systems Society of India and ISTE. He is an editorial board member of the Journal of Computer Science (India). His research interests include adaptive control, artificial neural networks, fuzzy logic, nonlinear systems, and process control.


DATA LEAKAGE DETECTION
Archana Vaidya, Prakash Lahange, Kiran More, Shefali Kachroo & Nivedita Pandey

Department of Computer Engineering, S.V.I.T., Nashik, M.H., India

ABSTRACT
Modern business activities rely on extensive email exchange. Email leakages have become widespread, and the severe damage caused by such leakages constitutes a disturbing problem for organizations. We study the following problem: a data distributor has given sensitive data to a set of supposedly trusted agents (third parties). If the data distributed to third parties is found in a public/private domain, then finding the guilty party is a nontrivial task for the distributor. Traditionally, this leakage of data is handled by a watermarking technique, which requires modification of the data; if the watermarked copy is found at some unauthorized site, the distributor can claim ownership. To overcome the disadvantages of using watermarks [2], data allocation strategies are used to improve the probability of identifying guilty third parties. The distributor must assess the likelihood that the leaked data came from one or more agents, as opposed to having been independently gathered by other means. In this project, we implement and analyze a guilt model that detects the agents using allocation strategies without modifying the original data. The guilty agent is one who leaks a portion of the distributed data. We propose data allocation strategies that improve the probability of identifying leakages. In some cases we can also inject "realistic but fake" data records to further improve our chances of detecting leakage and identifying the guilty party. The algorithms implemented using fake objects improve the distributor's chance of detecting guilty agents; it is observed that minimizing the sum objective increases this chance. We also developed a framework for generating fake objects.

KEYWORDS: Sensitive Data, Fake Objects, Data Allocation Strategies

I.

INTRODUCTION

Demanding market conditions encourage many companies to outsource certain business processes (e.g. marketing, human resources) and associated activities to a third party. This model is referred to as Business Process Outsourcing (BPO); it allows companies to focus on their core competency by subcontracting other activities to specialists, resulting in reduced operational costs and increased productivity. Security and business assurance are essential for BPO. In most cases, the service providers need access to a company's intellectual property and other confidential information to carry out their services. For example, a human resources BPO vendor may need access to employee databases with sensitive information (e.g. social security numbers), a patenting law firm to some research results, a marketing service vendor to customer contact information, or a payment service provider to the credit card or bank account numbers of customers. The main security problem in BPO is that the service provider may not be fully trusted or may not be securely administered. Business agreements for BPO try to regulate how the data will be handled by service providers, but it is almost impossible to truly enforce or verify such policies across different administrative domains. Due to their digital nature, relational databases are easy to duplicate, and in many cases a service provider may have financial incentives to redistribute commercially valuable data or may simply fail to handle it properly. Hence, we need powerful techniques that can detect and deter such dishonest behavior. We study unobtrusive techniques for detecting leakage of a set of objects or records. Specifically, we study the following scenario: after giving a set of objects to agents, the distributor discovers some of

315

Vol. 3, Issue 1, pp. 315-321

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963
those same objects in an unauthorized place. (For example, the data may be found on a web site, or may be obtained through a legal discovery process.) At this point the distributor can assess the likelihood that the leaked data came from one or more agents, as opposed to having been independently gathered by other means. We develop a model for assessing the “guilt” of agents. We also present algorithms for distributing objects to agents, in a way that improves our chances of identifying a leaker. Finally, we also consider the option of adding “fake” objects to the distributed set.

II. PROBLEM DEFINITION

Suppose a distributor owns a set T = {t1, …, tm} of valuable data objects. The distributor wants to share some of the objects with a set of agents U1, U2, …, Un, but does not wish the objects to be leaked to other third parties. An agent Ui receives a subset Ri of the objects in T, determined either by a sample request or an explicit request. Sample request Ri = SAMPLE(T, mi): any subset of mi records from T can be given to Ui. Explicit request Ri = EXPLICIT(T, condi): agent Ui receives all the T objects that satisfy condi. The objects in T could be of any type and size, e.g., they could be tuples in a relation, or relations in a database. After giving objects to agents, the distributor discovers that a subset S of T has leaked. This means that some third party, called the target, has been caught in possession of S. For example, this target may be displaying S on its web site, or perhaps as part of a legal discovery process, the target turned over S to the distributor. Since the agents U1, U2, …, Un have some of the data, it is reasonable to suspect them of leaking it. However, the agents can argue that they are innocent and that the target obtained S through other means.

2.1 Agent Guilt Model
An agent Ui is guilty if it contributes one or more objects to the target. The event that agent Ui is guilty for a given leaked set S is denoted by Gi | S. The next step is to estimate Pr{Gi | S}, i.e., the probability that agent Ui is guilty given evidence S. To compute Pr{Gi | S}, we estimate the probability that values in S can be “guessed” by the target. For instance, say some of the objects in T are email addresses of individuals. We can conduct an experiment and ask a person to find the email addresses of, say, 100 individuals; if the person discovers only 20, we estimate the guessing probability as 0.2. Call this estimate pt, the probability that object t can be guessed by the target. We make two assumptions regarding the relationship among the various leakage events. Assumption 1: For all t, ť ∈ S such that t ≠ ť, the provenance of t is independent of the provenance of ť. The term provenance in this assumption refers to the source of a value t that appears in the leaked set. The source can be any of the agents who have t in their sets, or the target itself. Assumption 2: An object t ∈ S can only be obtained by the target in one of two ways: a single agent Ui leaked t from its own Ri set, or the target guessed (or obtained through other means) t without the help of any of the n agents. To find the probability that an agent Ui is guilty given a set S, consider that the target guessed t1 with probability p and that some agent leaked t1 to S with probability 1 − p. First compute the probability that an agent leaks a single object t to S. To this end, define the set of agents Vt = {Ui | t ∈ Ri} that have t in their data sets. Then, using Assumption 2 and the known probability p, we have Pr{some agent leaked t to S} = 1 − p ……1.1

Assuming that all agents belonging to Vt can leak t to S with equal probability, and using Assumption 2, we obtain

Pr{Ui leaked t to S} = (1 − p) / |Vt|, if Ui ∈ Vt; 0, otherwise ……1.2

Given that agent Ui is guilty if he leaks at least one value to S, Assumption 1 and Equation 1.2 give the probability Pr{Gi | S} that agent Ui is guilty:

Pr{Gi | S} = 1 − ∏ t ∈ S ∩ Ri (1 − (1 − p) / |Vt|) ……1.3
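As an illustration, the guilt model above can be computed directly from the agents' sets. The following Python sketch is only illustrative: the sets, agent names and guessing probability p are hypothetical.

```python
def guilt_probability(Ri, S, Vt, p):
    """Pr{Gi|S}: probability that the agent holding set Ri is guilty,
    given leaked set S, per the guilt model above.
    Vt maps each object t to the set of agents holding it;
    p is the probability the target guessed an object by itself."""
    prob_innocent = 1.0
    for t in S & Ri:                        # objects this agent could have leaked
        prob_innocent *= 1.0 - (1.0 - p) / len(Vt[t])
    return 1.0 - prob_innocent

# Hypothetical example: two agents share t1; only U1 holds t2.
Vt = {"t1": {"U1", "U2"}, "t2": {"U1"}}
S = {"t1", "t2"}                            # leaked set
p = 0.2
print(guilt_probability({"t1", "t2"}, S, Vt, p))   # U1: 0.88
print(guilt_probability({"t1"}, S, Vt, p))          # U2: 0.4
```

As the model predicts, the agent whose set overlaps more with the leaked data, and who holds objects no other agent has, is the more likely leaker.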

2.2 Data Allocation Problem
The distributor “intelligently” gives data to agents in order to improve the chances of detecting a guilty agent. There are four instances of this problem, depending on the type of data requests made by agents and on whether “fake objects” [4] are allowed. Agents make two types of requests, called sample and explicit. Based on the requests, fake objects are added to the data list. Fake objects are objects generated by the distributor that are not in the set T. They are designed to look like real objects and are distributed to agents together with the T objects, in order to increase the chances of detecting agents that leak data.

Fig. 1 represents the four problem instances, with the names EF, EF̄, SF and SF̄, where E stands for explicit requests, S for sample requests, F for the use of fake objects, and F̄ for the case where fake objects are not allowed. The distributor may be able to add fake objects to the distributed data in order to improve his effectiveness in detecting guilty agents. However, since fake objects may impact the correctness of what agents do, they may not always be allowable. The use of fake objects is inspired by the use of “trace” records in mailing lists. The distributor creates and adds fake objects to the data that he distributes to agents. In many cases, the distributor may be limited in how many fake objects he can create. In EF problems, objective values are initialized by the agents’ data requests. Say, for example, that T = {t1, t2} and there are two agents with explicit data requests such that R1 = {t1, t2} and R2 = {t1}. The distributor cannot remove or alter the R1 or R2 data to decrease the overlap R1 ∩ R2. However, say the distributor can create one fake object (B = 1) and each agent can receive one fake object (b1 = b2 = 1). If the distributor were able to create more fake objects, he could further improve the objective.
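The effect of a fake object on this example can be checked numerically. The sketch below (hypothetical object names) computes the relative overlap |Ri ∩ Rj| / |Ri| for the two agents, before and after the distributor adds one fake object to R2:

```python
def rel_overlap(Ri, Rj):
    # relative overlap |Ri ∩ Rj| / |Ri| used by the minimization objective
    return len(Ri & Rj) / len(Ri)

R1 = {"t1", "t2"}
R2 = {"t1"}
print(rel_overlap(R2, R1))           # 1.0: all of U2's data is shared with U1

R2_fake = R2 | {"f1"}                # distributor adds one fake object (b2 = 1)
print(rel_overlap(R2_fake, R1))      # 0.5: overlap diluted, improving detection
```

The fake object dilutes the overlap between the agents' sets, which is exactly what makes the sets more distinguishable if one of them leaks.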

2.3 Optimization Problem
The distributor’s data allocation to agents has one constraint and one objective. The constraint is to satisfy agents’ requests, by providing them with the number of objects they request or with all available objects that satisfy their conditions. The objective is to be able to detect an agent who leaks any portion of his data. We consider the constraint strict: the distributor may not deny serving an agent’s request and may not provide agents with different perturbed versions of the same objects. We consider fake object distribution as the only possible constraint relaxation. The objective is to maximize the chances of detecting a guilty agent that leaks all his data objects. Pr{Gj | S = Ri}, or simply Pr{Gj | Ri}, is the probability that agent Uj is guilty if the distributor discovers a leaked table S that contains all Ri objects. The difference function Δ(i, j) is defined as:

Δ(i, j) = Pr{Gi | Ri} − Pr{Gj | Ri} ……1.4

1) Problem definition: Let the distributor have data requests from n agents. The distributor wants to give tables R1, …, Rn to agents U1, …, Un respectively, so that

the distribution satisfies agents’ requests, and maximizes the guilt probability differences Δ(i, j) for all i, j = 1, …, n with i ≠ j. Assuming that the Ri sets satisfy the agents’ requests, we can express the problem as a multi-criterion
2) Optimization problem:

maximize (over R1, …, Rn)  (…, Δ(i, j), …),  i ≠ j ……1.5

The approximation [3] of the objective in the above equation does not depend on the agents’ guilt probabilities, and therefore we minimize the relative overlap among the agents:

minimize (over R1, …, Rn)  (…, |Ri ∩ Rj| / |Ri|, …),  i ≠ j ……1.6

This approximation is valid if minimizing the relative overlap maximizes Δ(i, j).
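For concreteness, the scalar sum version of the minimization objective can be sketched as follows; the three-agent allocation used as input is hypothetical.

```python
from itertools import permutations

def sum_objective(R):
    """Scalar (sum) version of the relative-overlap minimization objective:
    sum over ordered pairs i != j of |Ri ∩ Rj| / |Ri|."""
    return sum(len(R[i] & R[j]) / len(R[i])
               for i, j in permutations(range(len(R)), 2))

# Hypothetical allocation for three agents:
R = [{"t1", "t2"}, {"t1"}, {"t2", "t3"}]
print(sum_objective(R))   # 2.5 for this allocation
```

A lower value means the agents' sets overlap less relative to their sizes, so a leaked set points more clearly at one agent.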

2.4 Objective Approximation
In the case of sample requests, all requests are of fixed size. Therefore, maximizing the chance of detecting a guilty agent that leaks all his data by minimizing |Ri ∩ Rj| / |Ri| is equivalent to minimizing |Ri ∩ Rj|, since |Ri| is fixed; the minimum value of |Ri ∩ Rj| maximizes Δ(i, j). If agents have explicit data requests, then the overlaps |Ri ∩ Rj| are defined by their requests and are fixed. Therefore, minimizing |Ri ∩ Rj| / |Ri| is equivalent to maximizing |Ri| (with the addition of fake objects); the maximum value of |Ri| maximizes Δ(i, j), since |Ri ∩ Rj| is fixed. Our paper focuses on identifying the leaker, so we propose tracing the IP address of the leaker. The file is sent to the agents as an email attachment that requires a secret key to download. This secret key is generated using a random function and sent to the agent, either to the mobile number used at registration or to another global email service account such as Gmail. Whenever a secret key mismatch occurs, the fake file gets downloaded instead. To further enhance our objective approximation, IP address tracking [5][6][7] is done for the system where the fake object is downloaded. Various commands are available for obtaining IP address information, such as ping, tracert and nslookup; any of these may be used. The IP address is traced together with the time, so as to overcome the problem of dynamic IP addressing. Since we are doing this within an organisation, dynamic IPs pose no problem; and even when looking up an IP address globally, it is unique for that period of time and can therefore be traced to the unique system of the leaker.
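The secret-key step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the key generation, file names and delivery mechanism are hypothetical, and the IP-logging step is only indicated in a comment.

```python
import secrets

def issue_secret_key(length=8):
    # Hypothetical: distributor generates a one-time random key per download
    return secrets.token_hex(length)

def serve_file(supplied_key, issued_key, real_file, fake_file):
    # On key mismatch the agent receives the fake file; in the scheme
    # above the requester's IP address would also be logged for tracing.
    if secrets.compare_digest(supplied_key, issued_key):
        return real_file
    return fake_file

key = issue_secret_key()
print(serve_file(key, key, "report.pdf", "decoy.pdf"))      # real file served
print(serve_file("wrong", key, "report.pdf", "decoy.pdf"))  # decoy served
```

`secrets.compare_digest` is used instead of `==` so the key comparison does not leak timing information.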

III. ALLOCATION STRATEGIES

In this section, allocation strategies [1] solve, exactly or approximately, the scalar versions of Equation 1.6 for the different problem instances presented in Fig. 1. Section 3.1 deals with problems with explicit data requests, and Section 3.2 with problems with sample data requests.

3.1 Explicit Data Request
In the case of an explicit data request with fake objects not allowed, the distributor may not add fake objects to the distributed data, so the data allocation is fully defined by the agents' data requests. In the case of an explicit data request with fake objects allowed, the distributor cannot remove or alter the requests R from the agents, but he can add fake objects. The input to the data allocation algorithm for explicit requests is a set of requests R1, R2, …, Rn from n agents and the different conditions for those requests. The e-optimal algorithm finds the agents that are eligible to receive fake objects, then creates one fake object per iteration and allocates it to the selected agent. The e-optimal algorithm minimizes every term of the objective summation by adding the maximum number bi of fake objects to every set Ri, yielding the optimal solution.
Step 1: Calculate the total number of fake records as the sum of fake records allowed.
Step 2: While total fake objects > 0:
Step 3: Select the agent that will yield the greatest improvement in the sum objective.

Step 4: Create a fake record.
Step 5: Add this fake record to the agent's set and also to the fake record set.
Step 6: Decrement the total fake record count.
The algorithm makes a greedy choice by selecting the agent that will yield the greatest improvement in the sum objective.
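The greedy steps above can be sketched in Python. The agents' sets, per-agent limits b and total budget B are hypothetical; the sum objective is the relative-overlap sum used earlier.

```python
def sum_objective(R):
    # sum over ordered pairs i != j of |Ri ∩ Rj| / |Ri|
    return sum(len(R[i] & R[j]) / len(R[i])
               for i in range(len(R)) for j in range(len(R)) if i != j)

def allocate_fakes(R, b, B):
    """Greedy sketch: while fake objects remain, give the next fake to the
    agent whose set yields the largest drop in the sum objective.
    R: agents' sets; b[i]: per-agent fake limit; B: total fake budget."""
    fake_id = 0
    while B > 0:
        best_i, best_obj = None, None
        for i in range(len(R)):
            if b[i] == 0:
                continue
            trial = [set(s) for s in R]
            trial[i].add(f"fake{fake_id}")     # Step 4: create fake record
            obj = sum_objective(trial)
            if best_obj is None or obj < best_obj:
                best_i, best_obj = i, obj      # Step 3: greedy choice
        if best_i is None:
            break
        R[best_i].add(f"fake{fake_id}")        # Step 5: add to agent's set
        b[best_i] -= 1
        B -= 1                                 # Step 6: decrement budget
        fake_id += 1
    return R

R = [{"t1", "t2"}, {"t1"}]
allocate_fakes(R, b=[1, 1], B=1)
print(R)   # the single fake goes to the second agent, where it helps most
```

With one fake available, giving it to R2 drops the objective from 1.5 to 1.0, whereas giving it to R1 only reaches about 1.33, so the greedy choice picks R2.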

3.2 Sample Data Request
With sample data requests, each agent Ui may receive any subset of T of size mi, out of (|T| choose mi) different ones.

Hence, there are ∏i (|T| choose mi) different object allocations. In every allocation, the distributor can permute the T objects and keep the same chances of guilty agent detection. The reason is that the guilt probability depends only on which agents have received the leaked objects, and not on the identity of the leaked objects. Therefore, from the distributor’s perspective there are ∏i (|T| choose mi) / |T|! different allocations. An object allocation that satisfies requests but ignores the distributor’s objective is to give each agent a unique subset of T of size mi. The s-max algorithm allocates to an agent the data record that yields the minimum increase of the maximum relative overlap among any pair of agents. The s-max algorithm is as follows.
Step 1: Initialize min_overlap ← 1, the minimum out of the maximum relative overlaps that the allocations of different objects to Ui yield.
Step 2: For k ∈ {k | tk ∉ Ri} do: initialize max_rel_ov ← 0, the maximum relative overlap between Ri and any set Rj that the allocation of tk to Ui yields.
Step 3: For all j = 1, …, n with j ≠ i and tk ∈ Rj do: calculate the absolute overlap as abs_ov ← |Ri ∩ Rj| + 1 and the relative overlap as rel_ov ← abs_ov / min(mi, mj).
Step 4: Find the maximum relative overlap as max_rel_ov ← MAX(max_rel_ov, rel_ov). If max_rel_ov ≤ min_overlap then min_overlap ← max_rel_ov and ret_k ← k. Return ret_k.
It can be shown that algorithm s-max is optimal for the sum-objective and the max-objective in problems where M ≤ |T| and n < |T|. It is also optimal for the max-objective if |T| ≤ M ≤ 2|T| or all agents request data of the same size. It is observed that the relative performance of the algorithms and the main conclusions do not change. As p approaches 0, it becomes easier to find guilty agents and algorithm performance converges; on the other hand, as p approaches 1, the relative differences among the algorithms grow, since more evidence is needed to find an agent guilty.
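The s-max selection step can be sketched as follows; the sets and sizes are hypothetical. The function returns the object whose allocation to agent Ui causes the minimum increase of the maximum relative overlap with any other agent.

```python
def s_max_pick(R, i, T, m):
    """Sketch of the s-max steps above. R: agents' current sets;
    i: agent being served; T: all objects; m[i]: requested sample size."""
    min_overlap, ret_k = 2.0, None            # 2.0 exceeds any relative overlap
    for tk in T - R[i]:                       # candidate objects not yet in Ri
        max_rel_ov = 0.0
        for j in range(len(R)):
            if j == i or tk not in R[j]:
                continue
            abs_ov = len(R[i] & R[j]) + 1     # overlap if tk were added to Ri
            rel_ov = abs_ov / min(m[i], m[j])
            max_rel_ov = max(max_rel_ov, rel_ov)
        if max_rel_ov <= min_overlap:
            min_overlap, ret_k = max_rel_ov, tk
    return ret_k

T = {"t1", "t2", "t3"}
R = [{"t1"}, set()]        # U1 already holds t1; now allocating to U2
print(s_max_pick(R, 1, T, m=[2, 2]))   # picks an object U1 does not hold
```

Here t1 would create overlap with U1's set, so the algorithm prefers t2 or t3, which create none.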
The algorithms presented implement a variety of data distribution strategies that can improve the distributor’s chances of identifying a leaker. It is shown that distributing objects judiciously can make a significant difference in identifying guilty agents, especially in cases where there is large overlap in the data that agents must receive.

IV. RELATED WORK

The presented guilt detection approach is related to the data provenance problem [8]: tracing the lineage of the S objects essentially amounts to detecting the guilty agents. Suggested solutions are domain specific, such as lineage tracing for data warehouses [9], and assume some prior knowledge of the way a data view is created out of data sources. Our problem formulation with objects and sets is more general and simplifies lineage tracing, since we do not consider any data transformation from the Ri sets to S. As far as the allocation strategies are concerned, our work is mostly relevant to watermarking, which is used as a means of establishing original ownership of distributed objects. Watermarks were initially

used in images, video and audio data [2], whose digital representation includes considerable redundancy. Our approach and watermarking are similar in the sense of providing agents with some kind of receiver-identifying information. However, by its very nature, a watermark modifies the item being watermarked. If the object to be watermarked cannot be modified, then a watermark cannot be inserted, and methods that attach watermarks to the distributed data are not applicable. Finally, there is also much other work on mechanisms that allow only authorized users to access sensitive data. Such approaches prevent data leakage, in some sense, by sharing information only with trusted parties. However, these policies are restrictive and may make it impossible to satisfy agents’ requests.

V. CONCLUSION

In a perfect world there would be no need to hand over sensitive data to agents that may unknowingly or maliciously leak it; and even if we had to hand over sensitive data, we could watermark each object so that we could trace its origins with absolute certainty. However, in many cases we must indeed work with agents that may not be 100% trusted, and we may not be certain whether a leaked object came from an agent or from some other source. In spite of these difficulties, we have shown it is possible to assess the likelihood that an agent is responsible for a leak, based on the overlap of his data with the leaked data and the data of other agents, and based on the probability that objects can be “guessed” by other means. Our model is relatively simple, but we believe it captures the essential trade-offs. The algorithms we have presented implement a variety of data distribution strategies that can improve the distributor’s chances of identifying a leaker. We have shown that distributing objects judiciously can make a significant difference in identifying guilty agents, especially in cases where there is large overlap in the data that agents must receive. Our future work includes the investigation of agent guilt models that capture leakage scenarios that are not studied in this paper. For example, what is the appropriate model for cases where agents can collude and identify fake tuples? A preliminary discussion of such a model is available in. Another open problem is the extension of our allocation strategies so that they can handle agent requests in an online fashion (the presented strategies assume that there is a fixed set of agents with requests known in advance).

ACKNOWLEDGMENTS
We would like to thank Prof. S. M. Rokade, Head of Department, Computer Engineering Department, SVIT, Nashik for his encouragement and motivation to write this paper. Last but not least, we would like to thank all our friends and classmates for giving us timely suggestions.

REFERENCES
[1]. Rudragouda G. Patil, “Development of Data Leakage Detection Using Data Allocation Strategies,” International Journal of Computer Applications in Engineering Sciences, Vol. I, Issue II, June 2011, ISSN: 2231-4946.
[2]. S. Czerwinski, R. Fromm, and T. Hodes, “Digital music distribution and audio watermarking.”
[3]. L. Sweeney, “Achieving k-anonymity privacy protection using generalization and suppression,” 2002.
[4]. S. U. Nabar, B. Marthi, K. Kenthapadi, N. Mishra, and R. Motwani, “Towards robustness in query auditing,” in VLDB ’06.
[5]. Stevens Le Blond, Chao Zhang, Arnaud Legout, Keith Ross, and Walid Dabbous, “Exploiting P2P Communications to Invade Users’ Privacy.”
[6]. “The best tools and methods to track down suspect IP addresses and URLs,” http://www.techrepublic.com/blog/networking/the-best-tools-and-methods-to-track-down-suspect-ipaddresses-and-urls/3456
[7]. “How to Trace an IP Address,” http://www.wikihow.com/Trace-an-IP-Address
[8]. P. Buneman, S. Khanna, and W. C. Tan, “Why and where: A characterization of data provenance,” in J. V. den Bussche and V. Vianu, editors, Database Theory – ICDT 2001, 8th International Conference, London, UK, January 4–6, 2001, Proceedings, volume 1973 of Lecture Notes in Computer Science, pages 316–330. Springer, 2001.

[9]. Y. Cui and J. Widom, “Lineage tracing for general data warehouse transformations,” in The VLDB Journal, pages 471–480, 2001.

AUTHORS

Archana S. Vaidya is currently working as an Assistant Professor in the Computer Engineering Department of Sir Visvesvaraya Institute of Technology (S.V.I.T.), Chincholi, Nasik (India). She received her Master's degree in Computer Engineering from V.J.T.I., Mumbai University (India) in 2010 and her Bachelor's degree in Computer Engineering from Walchand College of Engineering, Sangli, Shivaji University (India) in 2002. She has an academic experience of 10 years (since 2002) and has taught computer-related subjects at the undergraduate level. Her areas of interest are Data Structures, Database Management Systems, Operating Systems and Distributed Systems. She is a life member of ISTE (Indian Society of Technical Education). She has published 6 papers in national conferences and 4 papers in international conferences and journals, and has guided 15 B.E. projects.

Kiran More was born in Ahmednagar, Maharashtra, India in 1989. He is currently pursuing a degree in Computer Engineering at Sir Visvesvaraya Institute of Technology, Nashik.

Shefali Kachroo was born in Jammu & Kashmir, India in 1990. She is currently pursuing a degree in Computer Engineering at Sir Visvesvaraya Institute of Technology, Nashik.

Nivedita Pandey was born in Mumbai, Maharashtra, India in 1990. She is currently pursuing a degree in Computer Engineering at Sir Visvesvaraya Institute of Technology, Nashik.

Prakash Lahange was born in Nashik, Maharashtra, India in 1990. He is currently pursuing a degree in Computer Engineering at Sir Visvesvaraya Institute of Technology, Nashik.


EXTRACTION OF VISUAL AND ACOUSTIC FEATURES OF THE DRIVER FOR REAL-TIME DRIVER MONITORING SYSTEM
Sandeep Kotte
Department of Computer Science & Engineering, Dhanekula Institute of Engineering & Technology, Vijayawada, India.

ABSTRACT
Driving is one of the most dangerous tasks in our everyday lives. Statistics show that over the past couple of decades the majority of accidents have been due not only to poor vehicle technical condition but also to the driver's inattentiveness. The major causes of inattentiveness include drowsiness (sleepiness), fatigue (lack of energy) and emotions/stress (for example sadness, anger, joy, pleasure, despair and irritation). In order to improve driver attentiveness, the Indian Government has introduced regulations concerning, among other things, driving time and rest periods for drivers. Even though these regulations have partially helped to improve driver attentiveness, they largely ignore real-time dynamic ergonomics, which is influenced by diverse factors such as traffic control strategies, road geometry, vehicle characteristics, changing traffic scenarios, weather, etc. Different approaches have been proposed for monitoring driver states, especially drowsiness and fatigue. Fatigue is traditionally measured by observing eyelid movements. Drowsiness is generally measured by analyzing head movement patterns, eyelid movements, facial expressions, or all of these together. Concerning emotion/stress recognition, visual sensing of facial expressions is helpful but generally not sufficient on its own. Therefore, one needs additional information that can be collected in a non-intrusive manner in order to increase the robustness of the emotion/stress measurement within a non-intrusive monitoring policy. We find acoustic information emanating from the driver to be appropriate for analyzing the driver's emotions, provided the driver generates some vocal signals by speaking, shouting, crying, etc., which is not uncommon during the driving process. From these acoustic signals, this work extracts spatial and temporal acoustic features and correlates them with the emotions of the driver.
In this paper, we demonstrate how one can distinguish emotions based on acoustic features (or combinations of features) by testing them on the Berlin emotion database.

KEYWORDS: Feature Extraction, Fatigue, Classifier, LDA, Berlin Database

I. INTRODUCTION

Driver monitoring plays a major role in assessing, controlling and predicting driver behavior. Research on driver monitoring systems started around the 1980s. In the first stages of this research, researchers developed driver monitoring systems that inferred both driver behavior and state from the observed/measured vehicle performance. In a following generation, driver state and behavior were directly assessed by intrusive systems measuring physiological characteristics. But these techniques require driver cooperation, since they are intrusive, and they may also disturb the driver's behavior. More recently, significant research has focused on developing non-intrusive techniques, generally based on machine vision, that directly measure the driver state in a non-intrusive manner.

1.1. Theoretical background
Due to the increased number of speech-driven applications in recent years, the automatic assessment of emotions from the driver's speech signal has become a research interest [1][2]. Such assessment

provides information about the driver's satisfaction with the car's infotainment system and, in particular, it improves the efficiency and friendliness of human-machine interfaces. Moreover, it reflects the driver's perception of the traffic situation and thus reveals his/her stress level. The emotional state also affects driving capability, which is of utmost importance for the safety of all occupants [4][3]. It allows for monitoring of the physiological state of individuals in several demanding work environments, and can be used to augment automated medical or forensic data analysis systems [5]. The performance of such a system is studied in the acoustically demanding environment of vehicular noise while driving. Existing systems mainly focus on (a) electro-physiological data (e.g. EEG: [7][6]) and (b) behavioural expression data (gross body movement, head movement, mannerisms and facial expression; [8]) in order to characterize the user state. But these electrode-based (EOG/EEG, reaching a 15% error rate for fatigue detection; [9]) or video-based instruments still do not fulfill the demands of an everyday-life measurement system. They have shortcomings such as (a) lack of robustness against environmental and individual-specific variations (e.g. bright light, wearing correction glasses, the angle of the face, or ethnicity) and (b) lack of comfort and longevity due to electrode sensor application. In contrast to these electrode- or video-based instruments, the use of voice communication as an indicator of emotional states matches the demands of everyday-life measurement. Contact-free measurements such as voice analysis are non-obtrusive (not interfering with the primary driving task) and favorable for emotion detection, since the application of sensors would cause annoyance and additional stress, and often impairs working capabilities and mobility.
In addition, speech is easy to record even under extreme environmental conditions (temperature, high humidity and bright light), requires merely cheap, durable and maintenance-free sensors and, most importantly, it utilizes already existing communication system hardware. Furthermore, speech data is omnipresent in many professional driver settings. Given these obvious advantages, the renewed interest in computationally demanding analyses of vocal expressions has been enabled just recently by advances in computer processing speed [12][13][10][11]. The first investigations into emotion detection from speech were conducted around the mid-1980s using statistical properties of certain acoustic features [15][14]. Later, the evolution of computer architectures enabled the detection of more complicated emotions from speech. Research towards detecting human emotions is increasingly attracting the attention of the research community [15]. Nowadays, research is focused on finding powerful combinations of classifiers that increase the classification efficiency in real-life speech emotion detection applications. Some of these techniques are used to recognize the frustration of a user and change the system's response automatically. Speech-based emotion detection has many useful applications, including human-robot interfaces [16], smart call centers [18][19][17], intelligent spoken tutoring systems [20] and spoken dialog research. Emotions can be observed from information about the language: what we say and how we say it, where how we say something is often more important than what we say. In the literature [21], the main focus is on the phonetic and acoustic properties of affective spoken language. Emotions like anger, joy and sadness affect driver attentiveness and are therefore relevant for the driving process. Thus we devote particular attention in this research to those emotions that negatively affect driver behaviour [22].
The important voice features to consider for emotion classification are: fundamental frequency (F0) or pitch, intensity (energy), speaking rate and voice quality. Many other features may be extracted or calculated from the voice information, such as the formants, the vocal tract cross-section areas, the Mel Frequency Cepstral Coefficients (MFCC), Linear Frequency Cepstrum Coefficients (LFCC), Linear Predictive Coding (LPC) coefficients and the Teager energy operator-based features [15]. Certain features in the voice of a person can be used to infer the emotional state of that speaker. Extracted in real time, these voice characteristics convey emotion and attitude in a systematic manner, and they differ between males and females.
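As a toy illustration of two of these features, the sketch below estimates intensity (RMS energy) and pitch (F0 via a simple autocorrelation peak) for a synthetic voiced frame. It assumes NumPy and is far cruder than what a production emotion classifier would use; all names are illustrative.

```python
import numpy as np

def frame_features(frame, sr):
    """Intensity (RMS energy) and fundamental frequency F0 (Hz) of a frame,
    F0 estimated from the autocorrelation peak in the 60-400 Hz range."""
    intensity = float(np.sqrt(np.mean(frame ** 2)))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / 400), int(sr / 60)       # lag range for 60-400 Hz
    lag = lo + int(np.argmax(ac[lo:hi]))
    return intensity, sr / lag

# Synthetic 200 Hz voiced frame as a stand-in for driver speech
sr = 16000
t = np.arange(0, 0.04, 1 / sr)
frame = 0.5 * np.sin(2 * np.pi * 200 * t)
energy, f0 = frame_features(frame, sr)
print(round(f0))   # ≈ 200
```

Tracking how such per-frame values (and the speaking rate derived from them) shift over time is the basis for the emotion correlations summarized in Table 1.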

Table 1: Some variations of acoustic variables observed in relation to emotions [21]

Anger: pitch with high mean and wide range; increased intensity; increased speaking rate; breathy voice quality with a blaring timbre.
Joy: pitch with increased mean and range; increased intensity; increased speaking rate; voice quality sometimes breathy, with a moderately blaring timbre.
Sadness: pitch with a normal or lower-than-normal mean and a narrow range; decreased intensity; slow speaking rate; voice quality with a resonant timbre.

Feature extraction from visual and acoustic information of the driver is a basic and important task in determining the driver's behavior and emotions. Generally, a feature is a set of measurements; each measurement contains a piece of information and specifies a property or characteristic of the object present in the given input. In daily life, humans are able to guess and understand the state of other persons by observing multiple features such as body action and voice information, and by interpreting what the person is saying and how he says it. Observing and extracting multiple features is a relatively simple task for humans (in some cases it is effortless); for machines it is much more complex. This work mainly focuses on (a) identifying the useful features in the acoustic information, (b) extracting the features in real time without missing frames, and (c) correlating the features with the parameters/dimensions of the "extended ergonomics status" vector.

II. NON-INTRUSIVE FEATURE EXTRACTION APPROACHES

Feature extraction reduces large input data to a smaller representation by converting it into small feature sets or feature vectors (an n-dimensional vector of numerical features that represents an object). Feature extraction is defined as the process of extracting features from source data, where the data may come from a high-dimensional data set [23]. Different feature sets are calculated for different applications: for computer vision applications, edges and corners are calculated as image features, while features like data-to-noise ratio, length of sound and relative power are calculated for pattern recognition applications.

2.1. Feature Extraction from the Visual Information
In the 1990s, researchers introduced appearance-based linear subspace techniques (statistics-based techniques) to reduce dimensionality and to extract useful visual features. The introduction of linear subspace techniques was a milestone in visual feature extraction. The performance of appearance-based techniques depends heavily upon the quality of the features extracted from the image [23]. Appearance-based linear subspace techniques extract global features, as they use statistical properties like the mean and variance of the image [24]. The major difficulty in applying these techniques to large databases is that the computational load and memory requirements for calculating the features increase dramatically with database size [24]. To increase the performance of feature extraction, nonlinear feature extraction techniques were introduced; to improve the performance of emotion recognition systems, we have to extract both linear and nonlinear features. Many nonlinear feature extraction techniques exist, such as the Radon transform and the wavelet transform. Radon-transform-based nonlinear feature extraction gives the direction of local features: the spatial frequency components in the direction of the Radon projection are computed [25]. When features are extracted using the Radon transform, the variations in facial frequency are also boosted [25]. The wavelet transform gives the spatial and frequency components present in an image. The performance of these feature extraction approaches was systematically evaluated in our previous work on the FERET database for a face recognition application, as shown in Figure 1.

Vol. 3, Issue 1, pp. 322-333

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

Figure 1: Performance comparison of different face recognition approaches with profile right images [25].

2.2. Feature Extraction from the Acoustic Information
Speech is easy to record even under extreme environmental conditions (temperature, high humidity, bright light), requires only cheap, durable, maintenance-free sensors and, most importantly, uses already existing communication system hardware. Furthermore, speech data is omnipresent in many professional driver settings. Given these obvious advantages, renewed interest in computationally demanding analyses of vocal expressions has been enabled only recently by advances in computer processing speed. The first investigations into emotion recognition from speech were conducted around the mid-1980s. Owing to the increased number of speech-driven applications in recent years, the automatic assessment of emotions from the driver's speech signal has become a research interest [26][27]. Such assessment provides information about the driver's satisfaction with the car's infotainment system and, in particular, improves the efficiency and friendliness of human-machine interfaces. For machine-based state estimation, we do not focus on voice content but rather on voice-signal features that are relevant to inferring an emotional state. In this regard, to make the system more robust in predicting the driver state, we analyze acoustic information such as pitch, intensity, speaking rate and voice quality in order to extract appropriate features [27]. Acoustic feature extraction is challenging in many respects, as it depends heavily on the age and gender of the speaker. Acoustic features vary considerably across age groups and genders: angry males show higher energy levels than angry females, and males have been found to express anger with a slow speech rate, whereas females employ a fast speech rate under similar circumstances [27]. Acoustic information will nevertheless be precious, whenever available, for better assessing and understanding the effect of driving-process ergonomics on the driver's state and mood.
The real-time extraction of acoustic characteristics from the voice signal conveys emotion and attitude in a systematic manner and differs between male and female speakers. Acoustic features are used to recognize driver frustration (i.e., vocal signals such as shouting and crying, which are not uncommon while driving). Performance also depends on the given (input) acoustic information. In this work, input data is taken from audio sensors such as a microphone, and feature extraction algorithms are executed to extract features in real time. Features are extracted from the real-time data using time-domain and frequency-domain algorithms [23]. These algorithms extract temporal features, spectral features and cepstral coefficients, based on the amplitude and spectrum of the audio data. The basic first step in extracting acoustic features is frame blocking, in which the input audio stream is converted into a set of frames, each about 10-30 ms long. If the frame size is shorter than 10 ms, we may miss important information and sometimes cannot extract valid acoustic features; if it is longer than 30 ms, redundancy may occur and we cannot capture the time-varying characteristics of the audio signal. The most important voice features to consider for emotion classification are the fundamental frequency (F0) or pitch, intensity (energy), speaking rate and voice quality. Many other features that may be

extracted or calculated from the voice information are the formants, the vocal tract cross-section areas, Mel frequency cepstral coefficients (MFCCs), linear frequency cepstral coefficients (LFCCs), linear predictive coding (LPC) coefficients and Teager-energy-operator-based features [27]. Pitch is the fundamental frequency of the audio signal, equal to the reciprocal of the fundamental period; it can also be described as the highness or lowness of a sound. The wavelet transform is generally used for pitch estimation. The shape of the vocal tract is modified depending on the emotion [28]. The MFCC, a "spectrum of the spectrum", is used to find the number of voices in the speech and has proven beneficial in speech emotion detection and speech detection tasks. The Teager energy operator is used to find the number of harmonics caused by nonlinear air flow in the vocal tract [29][30]. LPC provides an accurate and economical representation of the envelope of the short-time power spectrum; it is one of the most powerful speech coding analysis techniques, giving very accurate estimates of speech parameters while remaining relatively efficient to compute. The LFCC is similar to the MFCC but omits the perceptually oriented transformation into the Mel frequency scale; it emphasizes changes or periodicity in the spectrum while being relatively robust against noise. These features are measured from the mean, range, variance and transmission duration between utterances [26][27]. Different techniques are used to calculate these voice features. The variation in the feature set for the emotion "happy" is shown in Figure 2 and Figure 3.

Figure 2: Extracted features from the joy emotional audio file.

Figure 3: Detected emotions over 58 seconds, represented as a pie chart.


III. EMOTION CLASSIFICATION

Emotion detection has been implemented with a variety of classifiers, including Fisher's linear discriminant analysis (FLDA), the maximum likelihood classifier (MLC), neural networks (NN), k-nearest neighbor (k-NN) and Gaussian mixture models (GMM) [27]. The main criterion for evaluating the effectiveness of a classification algorithm in this work is its scalability. With scalability in mind, the algorithms are evaluated on the open-source Berlin database; for validation purposes, we create test and training sets from the Berlin Database [31]. The training set contains:
• 535 utterance speech recordings
• 10 actors: 5 male, 5 female
• 10 utterances (in German) by each voice
• seven different emotional speech classes (anger, joy, neutral, boredom, fear, sadness and disgust) [31].
The test database contains different emotions of the persons present in the training database. The backend classifier works with a fixed-length feature vector and is used for classification as shown in Figure 4.

Figure 4: The overall feature extraction approach from acoustic information.

A linear discriminant analysis (LDA) classifier is used for feature selection (i.e., it converts variable, large-dimensional feature vectors into fixed, small-dimensional feature vectors). LDA uses information about the class [23]; here, a class contains one person with different emotions. LDA tries to maximize the between-class variance and minimize the within-class variance. In other words, it decreases the distance between files of the same class and increases the distance between files of different classes [23]. Because of this, LDA recognizes emotions reliably even over large databases. The success rates of the most popular classifiers are compared in Table 2.
Table 2: Major classification techniques and their success rates

Classification technique               Success rate
Linear discriminant analysis (LDA)     67.22%
k-nearest neighbor (k-NN)              63.33%
Neural network (NN)                    57.78%
Maximum likelihood classifier (MLC)    55.00%
Gaussian mixture model (GMM)           53.33%
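As a concrete illustration of the LDA projection step, the sketch below implements Fisher LDA from first principles. The array sizes and random features are illustrative assumptions only; the real Berlin-database feature vectors are not reproduced here.

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Fisher LDA: maximize between-class scatter over within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_feat = X.shape[1]
    Sw = np.zeros((n_feat, n_feat))  # within-class scatter
    Sb = np.zeros((n_feat, n_feat))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Leading eigenvectors of Sw^-1 Sb give the discriminant axes.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 12))    # 70 utterances, 12 features (stand-in data)
y = np.repeat(np.arange(7), 10)  # 7 emotion classes, 10 utterances each
W = lda_projection(X, y, n_components=6)
print((X @ W).shape)  # (70, 6)
```

With 7 classes, LDA yields at most 6 discriminant axes, which is why a fixed 6-dimensional feature vector results regardless of the input dimensionality.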

3.1. Proposed Classifier
LDA performs considerably better than the classifiers above on the Berlin database, but its performance is still not sufficient for real-world applications.

The scalability of a real-world emotion recognition system is limited, because the computational load and memory requirements increase dramatically with large data sets; scalability is therefore a major issue here. Up to now, LDA has used a linear metric (the Euclidean distance), but a linear metric cannot accurately compare acoustic feature vectors of different dimensions. To improve performance (success rate and processing speed), we propose an LDA based on the nonlinear Hausdorff metric: a special nonlinear distance that can be computed between matrices of different size sharing a single common dimension, like the matrices representing our acoustic feature vectors [32][33].

Figure 5: The graphical representation of the success rate for different classifiers.

The Hausdorff distance measure is often used in content-based retrieval applications [32][33]. The Hausdorff distance is a measure between two point collections A and B in a metric space S (with distance d); it can be viewed as a dissimilarity measure between two feature vectors A and B. By using the Hausdorff distance measure instead of the linear Euclidean distance, the success rate of the LDA algorithm increases by around 20%, as shown in Figure 5.
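A minimal sketch of the symmetric Hausdorff distance between two point sets of different sizes follows; the point data is illustrative, not taken from the paper.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A and B."""
    # Pairwise Euclidean distances between every point of A and every point of B.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    h_ab = D.min(axis=1).max()  # sup over A of inf over B
    h_ba = D.min(axis=0).max()  # sup over B of inf over A
    return max(h_ab, h_ba)

# Two matrices of different length sharing one common (feature) dimension,
# e.g. frame-wise feature vectors from utterances of different duration.
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 0.0]])
print(hausdorff(A, B))  # 1.0
```

Because the distance only requires a common point dimension, it compares feature matrices of unequal length directly, which is exactly the property exploited in place of the Euclidean metric.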

3.2. Summary
In this chapter, linear discriminant analysis (LDA) was explained in detail. LDA uses only second-order statistics, together with information about the class. It tries to maximize the between-class variance and minimize the within-class variance.

IV. SYSTEM ARCHITECTURE & IMPLEMENTATION RESULTS

The overall architecture of the system is shown in Figure 6 below. The architecture depicts the process of transforming the input speech signal into driver emotions.


Figure 6: The overall architecture of the emotion detection system. (a) Training

4.1. Signal Preprocessing
As the input data is recorded using audio sensors such as a microphone, it may be affected by noise from weather conditions or other disturbances. To reduce the effect of noise, we perform a filtering operation that also improves the class separability of the features. This filtering is performed with a pre-emphasis high-pass filter. The main goal of pre-emphasis is to boost the amount of energy in the high frequencies; this boosting makes more information from the higher formants available to the acoustic model and improves phone recognition performance. In voiced segments, lower frequencies carry more energy than higher frequencies, a phenomenon called spectral tilt [34]. Figure 7 plots the frequency response before and after filtering.

Figure 7: Frequency Response of Signal Preprocessing High Pass Filter (a) original
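The pre-emphasis step above is a first-order high-pass difference; a minimal sketch follows. The coefficient 0.97 is a commonly used value, assumed here rather than taken from the paper.

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    """First-order high-pass: y[n] = x[n] - alpha * x[n-1].
    Boosts high-frequency energy to counter spectral tilt."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

# A test signal dominated by a low-frequency component, which the
# filter attenuates relative to the high-frequency component.
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.1 * np.sin(2 * np.pi * 3000 * t)
y = pre_emphasis(x)
print(y.shape)  # (8000,)
```

The filter leaves the first sample unchanged and applies the difference from the second sample onward, so input and output have equal length.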

4.2. Frame Blocking and Windowing
In the windowing operation, the large input data set is divided into small segments stored as a sequence of frames. Because splitting can introduce discontinuities at frame boundaries, consecutive frames are overlapped to maintain continuity. The windowing is performed with the Hamming window to reduce spectral leakage in the input data.
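Frame blocking with overlap and Hamming windowing can be sketched as follows; the 25 ms frame and 10 ms hop are assumed typical values within the 10-30 ms range discussed earlier, not parameters stated in the paper.

```python
import numpy as np

def frame_signal(x, fs, frame_ms=25, hop_ms=10):
    """Split a signal into overlapping frames and apply a Hamming window."""
    frame_len = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    n_frames = 1 + (len(x) - frame_len) // hop
    # Index matrix: row i holds the sample indices of frame i.
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx] * np.hamming(frame_len)

fs = 16000
x = np.random.default_rng(1).normal(size=fs)  # 1 s of noise as a stand-in signal
frames = frame_signal(x, fs)
print(frames.shape)  # (98, 400): 98 overlapping 25 ms frames at 16 kHz
```

The 15 ms overlap between consecutive frames preserves continuity across frame boundaries, and the Hamming taper suppresses the spectral leakage that abrupt frame edges would cause.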

4.3. Zero Crossing Rate and Short Time Energy
In the time domain, the input data is recorded at every instant of time. Time-domain audio features, also known as temporal features, are calculated from the amplitude of the audio data and its variation. The rate at which zero crossings occur is a simple measure of a signal: the zero-crossing rate is the number of times within a given time interval or frame that the amplitude of the speech signal passes through zero. The amplitude of the speech signal varies with time; in general, the amplitude of unvoiced segments is much lower than that of voiced segments. The energy of the speech signal provides a representation that reflects these amplitude variations [35][36].
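The two temporal features can be sketched on synthetic voiced-like and unvoiced-like frames; the signals here are illustrative assumptions, not recordings from the system.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    return np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))

def short_time_energy(frame):
    """Sum of squared amplitudes within the frame."""
    return np.sum(np.asarray(frame, dtype=float) ** 2)

t = np.arange(400) / 16000.0  # one 25 ms frame at 16 kHz
voiced_like = np.sin(2 * np.pi * 150 * t)  # periodic: low ZCR, high energy
unvoiced_like = 0.1 * np.random.default_rng(2).normal(size=400)  # noisy: high ZCR, low energy
print(zero_crossing_rate(voiced_like) < zero_crossing_rate(unvoiced_like))  # True
print(short_time_energy(voiced_like) > short_time_energy(unvoiced_like))    # True
```

The pairing of low ZCR with high energy (voiced) versus high ZCR with low energy (unvoiced) is precisely the contrast these features exploit.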

4.4. Mel frequency cepstral coefficient (MFCC)
MFCC is based on human auditory perception, which does not resolve frequencies above 1 kHz linearly. In other words, MFCC is based on the known variation of the human ear's critical bandwidth with frequency: its filters are spaced linearly at low frequencies, below 1000 Hz, and logarithmically above 1000 Hz. A subjective pitch is represented on the Mel frequency scale to capture the phonetically important characteristics of speech. It turns out that humans perceive sound in a highly nonlinear way: basic parameters such as pitch and loudness depend strongly on frequency, giving more weight to components at lower frequencies. Figure 8 illustrates this behaviour, relating the perceived pitch to the physical frequency. The pitch associated with a tone is measured on the so-called mel scale (by definition, 1,000 mels correspond to the perceived pitch of a sinusoidal tone at 1000 Hz, 40 dB above the hearing threshold). The graph clearly shows that the perceived pitch increases ever more slowly as we move to higher frequencies; essentially we observe a logarithmic increase, illustrated by the almost linear curve (with respect to a logarithmic scale) at high frequencies in Figure 8. MFCCs exploit this property extensively and give more weight to lower frequencies, because more discriminative information is found there.

Figure 8: Relation between the perceived pitch and frequency [38]
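The mel-scale behaviour in Figure 8 is commonly approximated by the O'Shaughnessy formula mel(f) = 2595 log10(1 + f/700); this particular mapping is an assumption here, since the paper does not state its exact formula.

```python
import numpy as np

def hz_to_mel(f):
    """A common mel-scale mapping: approximately linear below ~1 kHz
    and logarithmic above, matching the perceptual curve described."""
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=float) / 700.0)

# Perceived pitch grows ever more slowly at high frequencies:
for f in (500, 1000, 2000, 4000):
    print(f, "Hz ->", round(float(hz_to_mel(f)), 1), "mel")
```

With this formula 1000 Hz maps to roughly 1000 mels, while 4000 Hz maps to far fewer than 4000 mels, reproducing the compression at high frequencies that MFCCs exploit.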

4.5. Pitch Extraction
Pitch is the fundamental frequency of an audio signal, equal to the reciprocal of the fundamental period. It is usually explained in terms of the highness or lowness of a sound; in practice, pitch can be defined as the repeat rate of a complex signal, i.e., the rate at which peaks in the autocorrelation function occur. The three main difficulties in pitch extraction arise from the following factors.
• Vocal cord vibration is not necessarily fully periodic, especially at the beginning and end of voiced sounds.
• The vocal cord source signal can be extracted from the speech wave, but its extraction is difficult if it has to be separated from the effects of the vocal tract.

• The fundamental frequency has a very large dynamic range.
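The repeat-rate definition above suggests a simple autocorrelation-based pitch estimator. The sketch below uses a synthetic 200 Hz frame and an assumed 50-400 Hz search range; both are illustrative choices, not values from the paper.

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=50, fmax=400):
    """Estimate pitch as the lag of the strongest autocorrelation peak
    within a plausible F0 search range."""
    frame = frame - frame.mean()
    # Autocorrelation for non-negative lags 0 .. len(frame)-1.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(int(0.03 * fs)) / fs       # one 30 ms frame
frame = np.sin(2 * np.pi * 200 * t)      # synthetic 200 Hz "voiced" frame
print(round(pitch_autocorr(frame, fs)))  # 200
```

Restricting the peak search to the 50-400 Hz lag range is one simple guard against the dynamic-range difficulty noted above, since it excludes implausible fundamental periods.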

4.6. Berlin Database of Emotional Speech (BDES)
The database was developed at the Institute for Speech and Communication, Department of Communication Science, Technical University of Berlin [37]. It has become one of the most popular databases for research on speech emotion recognition, thus facilitating performance comparisons across studies. Five actors and five actresses contributed speech samples; the database consists mainly of 10 German utterances (5 short and 5 longer ones), each recorded with 7 kinds of emotion: happiness, neutral, boredom, disgust, fear, sadness and anger. The sentences were chosen to be semantically neutral and can therefore be readily interpreted in all seven simulated emotions. Speech was recorded with 16-bit precision at a 48 kHz sampling rate (later down-sampled to 16 kHz) in an anechoic chamber.

4.7. Summary
In this chapter, pitch, zero-crossing rate, short-time energy and MFCC were explained in detail. The zero-crossing rate and short-time energy are calculated for voiced and unvoiced signals. After adding energy, delta and double-delta features to the 12 cepstral features, a total of 39 MFCC features are extracted. One of the most useful facts about MFCC features is that the cepstral coefficients tend to be uncorrelated, which makes our acoustic model much simpler.

V. CONCLUSION

Acoustic information will be precious, whenever available, for better assessing and understanding the effect of driving-process ergonomics on the driver's state and mood, since it is obtained in a non-intrusive manner. Acoustic features vary considerably across age groups and genders. For machine-based state estimation, we did not focus on voice content but rather on voice-signal features relevant to inferring an emotional state. In this regard, to make the system more robust in predicting the driver state, we analyzed acoustic information such as pitch, short-time energy, zero-crossing rate and MFCC in order to extract appropriate features. In the literature, emotion recognition based on acoustic information has been implemented with a variety of classifiers, including Fisher's linear discriminant analysis (FLDA), the maximum likelihood classifier (MLC), neural networks (NN), k-nearest neighbor (k-NN), the Bayes classifier, the support vector classifier, artificial neural network (ANN) classifiers and Gaussian mixture models (GMM). Experimental results show that the LDA classifier with a linear metric produces a 67.22% recognition rate. To improve the recognition rate, we then used a special nonlinear metric, the Hausdorff distance measure, which raised the recognition rate to approximately 85%. The overall system determines the driver's emotions, such as anger, despair, pleasure, sadness, irritation and joy.

VI. FUTURE WORK

The present system already uses a selection of features, but remaining features such as voice quality and speaking rate could also be tried. Although the current MFCC features already give very good emotion recognition performance, further exploitation may contribute to the development of even more powerful features. Moreover, the features will be tested under more complex real-world conditions (e.g. reverberant and noisy speech). To simulate emotion detection in cars, direct interfacing between MATLAB and the TMS board is required; the TMS board is a micro-controller with an embedded pipelined digital signal processor for real-time signal processing.

REFERENCES
[1]. Christian Martyn Jones and Ing-Marie Jonsson. Automatic recognition of affective cues in the speech of car drivers to allow appropriate responses. In OZCHI '05: Proceedings of the 17th Australia conference on Computer-Human Interaction, pages 1-10, Narrabundah, Australia, Australia, 2005. Computer-Human Interaction Special Interest Group (CHISIG) of Australia.

[2]. B. Schuller et al. Effects of in-car noise-conditions on the recognition of emotion within speech. In Proc. DAGA, 2007. [3]. Clifford Nass, Ing-Marie Jonsson, Helen Harris, and Ben Reaves, Jack Endo, Scott Brave, and Leila Takayama. Improving automotive safety by pairing driver emotion and car voice emotion. In CHI '05: CHI '05 extended abstracts on Human factors in computing systems, pages 1973-1976, New York, NY, USA, 2005. ACM. [4]. J. Healey and R. Picard. Smartcar: detecting driver stress. Volume 4, pages 218 -221 vol.4, 2000. [5]. J. G. Taylor, K. Scherer, and R. Cowie. Introduction: 'emotion and brain:Understanding emotions and modelling their recognition'. Neural Netw., 18(4):313-316, 2005. [6]. Sommer D. Holzbrecher M. Golz, M. and T. Schnupp. Automatic recognition of affective cues in the speech of car drivers to allow appropriate responses. In RS4C (Eds.), Proceedings 14th International Conference Road Safety on Four Continents, Bangkok, Thailand, 2007. [7]. Batliner A. Hönig, F. and E. Nöth. Fast recursive data-driven multi-resolution feature extraction for physiological signal classification. In In J. Hornegger, et al. (Eds.): 3rd Russian-Bavarian Conference on Biomedical Engineering, pages 47-52, Erlangen, 2007. [8]. Robert Horlings, Dragos Datcu, and Leon J. M. Rothkrantz. Emotion recognition using brain activity. In CompSysTech '08: Proceedings of the 9th International Conference on Computer Systems and Technologies and Workshop for PhD Students in Computing, pages II.1-1, New York, NY, USA, 2008. ACM. [9]. Martin Golz, David Sommer, Mo Chen, Udo Trutschel, and Danilo Mandic. Fusion of state space and frequency domain features for improved microsleep detection. In W. Dutch et al. (Eds.), Proceedings International Conference Artificial Neural Networks (ICANN 2005), pages 753-7592, 2005. [10]. 
Anton Batliner, Stefan Steidl, Björn Schuller, Dino Seppi, Kornel Laskowski, Thurid Vogt, Laurence Devillers, Laurence Vidrascu, Noam Amir, Loic Kessous, and Vered Aharonson. Combining efforts for improving automatic classification of emotional user states. In Proc. IS-LTC 2006, Ljubliana, pages 240-245, 2006. [11]. P Juslin and K Scherer. Vocal expression of affect. The New Handbook of Methods in Nonverbal Behavior Research, January 2005. [12]. M. J. Owren and J.-A Bachorowski. Measuring emotion-related vocal acoustics. In J. Coan and J. Allen (Eds.). Handbook of emotion elicitation and assessment, pages 239-266. New York: Oxford University Press, 2007. [13]. Seppi D. Steidl S. Vogt T. Wagner J. Devillers L. Vidrascu L. Amir N. Kessous L. Schuller B., Batliner A. and Aharonson V. The relevance of feature type for the automatic classification of emotional user states: Low level descriptors and functionals. In Proceedings of Interspeech, pages 2253-2256, 2007. [14]. R. Van Bezooyen. Characteristics and Recognizability of Vocal Expressions of Emotion. Foris Pubns, USA, June 1984. [15]. Dimitrios Ververidis and Constantine Kotropoulos. Emotional speech recognition: Resources, features, and methods. Speech Communication, 48(9):1162 -1181, 2006. [16]. T. Kanda, K. Iwase, M. Shiomi, and H. Ishiguro. A tension-moderating mechanism for promoting speech-based human-robot interaction. Pages 511-516, aug. 2005. [17]. Englert R. Stegmann J. Burleson W Burkhardt F., Ajmera J. Detecting anger in automated voice portal dialogs. In Proceedings of Interspeech, Pittsburgh, 2006. [18]. Van Ballegooy M. Englet R. Huber R Burkhart, F. An emotion aware voice portal. In Proc. Electronic Speech Signal Processing ESSP, 2005. [19]. Laurence Devillers and Laurence Vidrascu. Real-life emotion detection with lexical and paralinguistic cues on Human-Human call center dialogs. Proc. INTERSPEECH' 06. Pittsburgh, 2006. [20]. Hua Ai, Diane J. 
Litman, Kate Forbes-Riley, Mihai Rotaru, Joel Tetreault, and Amruta Pur. Using system and user performance features to improve emotion detection in spoken tutoring dialogs. In Proceedings of Interspeech, pages 797-800, 2006. [21]. I. R. Murray and J. L. Arnott. Toward the simulation of emotion in synthetic speech: A review of the literature on human vocal emotion. The Journal of the Acoustical Society of America, 93(2):1097-1108, 1993. [22]. Thurid Vogt, Elisabeth Andre, and Johannes Wagner. Automatic recognition of emotions from speech: A review of the literature and recommendations for practical realisation. Pages 75-91, 2008. [23]. O. M. E. Wahlstrom, N. Papanikolopoulos, "Vision-based methods for driver monitoring," in Proceedings of the 6th IEEE International Conference on Intelligent Transportation Systems, Shanghai, China, 2003, pp. 903-908.

[24]. A. E. M. C. Esra Vural, Gwen Littlewort, Marian Bartlett and Javier Movellan, "Drowsy Driver Detection through Facial Movement Analysis” Springer Berlin / Heidelberg, vol. 4796, pp. 618, 2007. [25]. H. D. Vankayalapati and K. Kyamakya, "Nonlinear Feature Extraction Approaches for Scalable Face Recognition Applications," ISAST transactions on computers and intelligent systems, vol. 2, 2009. [26]. R. V. Bezooijen, "The Characteristics and Recognisability of Vocal Expression of Emotions," Foris, Drodrecht, the Netherlands, 1984. [27]. J. W. E. A. e. Thurid Vogt, "Automatic Recognition of Emotions from Speech: a Review of the Literature and Recommendations for Practical Realisation," in Affect and Emotion in HumanComputer Interaction: From Theory to Applications Berlin, 2008. [28]. J. A. I. Murray, "Toward the simulation of emotion in synthetic speech: A review of the literature on human vocal emotion," Journal of the Acoustical Society of America, vol. 93 (2), pp. 1097–1108, 1993. [29]. C. K. D. Ververidis, "Emotional speech recognition: Resources, features, and methods," presented at the Speech Communication, 2006. [30]. L. G. Yongjin Wang, "An investigation of speech-based human emotion recognition," pp. 15 18, 2004. [31]. M. R. A. Paeschke, W. Sendlmeier, B. Weiss, “A Database of German Emotional Speech," Proc. Interspeech, 2005. [32]. T. Barbu, "Discrete speech recognition using a hausdorff based metric," in Proceedings of the 1st Int. Conference of E-Business and Telecommunication Networks, ICETE, Setubal, Portugal, 2004, pp. 363-368. [33]. T. Barbu, "Speech-dependent voice recognition system using a nonlinear metric," International Journal of Applied Mathematics, vol. 18, pp. 501-514, 2005. [34]. James H. Jurafsky, Daniel Martin. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall Ser (2ND), May 2008. [35]. Tobias Andersson. 
Audio classification and content description. Master's thesis, Lulea University of Technology, Multimedia Technology, Ericsson Research, Corporate unit, Lulea, Sweden, March 2004. [36]. Abdillahi Hussein Omar. Audio segmentation and classification. Master's thesis, Informatics and Mathematical Modelling, Technical University of Denmark, DTU, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby, February 2005. [37]. F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier and B. Weiss. A database of German emotional speech. In Proc. Interspeech, pages 1517-1520, 2005. Online: http://pascal.kgw.tu-berlin.de/emodb/index-1024.html.

Author

Sandeep Kotte graduated in Information Technology from JNTU in 2007 and received an M.S. in Information Technology from the University of Klagenfurt, Austria, in 2010, specializing in intelligent transportation systems, pervasive computing and business informatics. He is currently working as an Assistant Professor at Dhanekula Institute of Engineering & Technology, India.


REVIEW ON THEORY OF CONSTRAINTS
CH. Lakshmi Tulasi1, A. Ramakrishna Rao2
1 Research Scholar, Dept. of Mech. Engg., S.V.U. College of Engineering, Tirupati, A.P., India.
2 Professor, Dept. of Mech. Engg., S.V.U. College of Engineering, Tirupati, A.P., India.

ABSTRACT
The theory of constraints (TOC) is a management philosophy put forth by Eliyahu M. Goldratt, which claims that every system has at least one constraint. Although it originated as a manufacturing method, TOC has since developed into a theory of management: a powerful systemic problem-structuring and problem-solving methodology that can be used to develop solutions with both intuitive power and analytical rigour. This philosophy is applied in many functional areas of companies, from production flow management, marketing, services and project management to serving as a tool of logical reasoning. The present work highlights the extant literature on the application of TOC and summarizes important findings on its theory and practice.

KEYWORDS: Theory of constraints, production flow management, marketing, services, project management.

I. INTRODUCTION

The theory of constraints (TOC) was introduced by Dr. Eliyahu M. Goldratt, an Israeli physicist turned business guru, in his 1984 book 'The Goal' [22]. He developed a revolutionary method for production scheduling [21] that stood in stark contrast to the accepted methods of the time, such as MRP. TOC evolved from the Optimized Production Timetables (OPT) system, later known under the commercial name Optimized Production Technology (OPT). Central to the TOC philosophy is the idea that any organization or system has one constraint, or a small number of constraints, which dominates the entire system. TOC adopts the common idiom "a chain is no stronger than its weakest link" as a new management paradigm: processes, organizations and the like are vulnerable because the weakest person or part can always damage or break them, or at least adversely affect the outcome. The analytic approach of TOC rests on the contention that any manageable system is limited in achieving more of its goals by a very small number of constraints, and that there is always at least one constraint. The TOC process therefore seeks to identify the constraint and restructure the rest of the organization around it, through the use of five focusing steps. The secret to success lies in managing these constraints, and the system as it interacts with them, to get the best out of the whole system. The main motivation for the research review reported in this paper was the realization that TOC is growing very rapidly, and we simply did not know what was "out there", i.e., what had already been tackled. This review paper first outlines the background of TOC and then reports on the application of the TOC paradigms in various fields, together with the findings. TOC is a management philosophy that has been effectively applied to manufacturing processes and procedures to improve organizational effectiveness [25].
In its brief 25-year history, TOC has developed rapidly in terms of both methodology ([9], [12]) and areas of application ([33], [46]).

Most of the work applying the logistics paradigm has been carried out in the field of project management. The literature on feeding buffers is well illustrated in ([24], [44], [36], [1] and [50]); the methodology used in project management is critical chain project management, which identifies the critical chain and determines the project buffer and feeding buffers. Three TOC paradigms have evolved over the last twenty-five years: logistics, global performance measures and the thinking process ([2], [14]). More recently, [4] referred to these paradigms as decision making, the performance measurement system and the thinking process. Our review of the TOC literature spans from the logistics paradigm to the thinking-process paradigm. In Section 2 we review the logistics paradigm and its applications in various fields, whereas in Section 3 we review the global performance measures paradigm and its application to services. Section 4 deals with the thinking-process paradigm and its fact-finding tools, and finally Section 5 concludes with some directions for further research on TOC.

II. LOGISTICS PARADIGM

The logistics paradigm originally looked for system constraints in order to increase throughput. It evolved from the scheduling software called Optimized Production Technology (OPT), which in turn is based on nine rules [24]. This includes the Drum-Buffer-Rope (DBR) scheduling technique and the five focusing steps of TOC. The DBR methodology synchronizes resource and material utilization in an organization: resources and materials are used only at a level that contributes to the organization's ability to achieve throughput. Because random disruption is inevitable in any organization, the DBR methodology provides a mechanism for protecting the total throughput of the system through the use of time buffers (T-Bs).

2.1 Application of Logistics Paradigm
The Logistics paradigm has been applied in manufacturing, supply chain management, project management and services [6]. In this section we review its application in each of these areas (Table 1).
Table 1: Application areas of the Logistics paradigm and references

Main area            Sub area              References
Logistics Paradigm   Manufacturing         [34], [15], [10], [11], [18], [53], [62], [49], [19], [39], [55], [70]
                     Supply Chain          [28], [48], [71]
                     Project Management    [24], [64], [38], [40], [69], [44], [36], [1], [50], [26], [65], [47], [66], [17], [72], [61], [35], [51], [27], [52]
                     Services              [31], [6], [67], [63], [30], [46], [16], [54], [57]

The literature concerning TOC in manufacturing can be broken down into four categories: TOC, Material Requirements Planning (MRP), Manufacturing Resources Planning (MRP II), and JIT manufacturing. [34] studies buffer stock allocation in serial and assembly production lines, the key concept being the "Long Pull" (constant work-in-process inventory) kanban system. A comparison between TOC and JIT manufacturing is made in [15]. The Drum-Buffer-Rope scheduling technique can be used in services as well as in manufacturing ([10], [11], [53], and [19]). A comparison of traditional JIT and TOC manufacturing in a flow shop is given in [18]. A survey-based comparison of the performance, and change in performance, of firms using JIT and TOC suggests that the greatest performance and improvement in performance occurred for adopters of TOC [55]. [70] compares the JIT and TOC buffering philosophies and suggests that improved system performance stems from the strategic placement of buffers in DBR, which maximizes protection of the constraint from variation rather than attempting to protect each individual station. The DBR technique can also be used to manage supply chains ([28], [48], and [71]) and to improve current production operations [62].

335

Vol. 3, Issue 1, pp. 334-344

The TOC project management process is a pull process that promotes low WIP levels, time buffers to protect the critical chain of tasks and resources, and no multitasking, which is defined as splitting a worker's time between two or more priority projects to the detriment of the overall schedules. TOC in project management was first popularized by the novel The Goal [22], which applied the principles to operations management. Since 1997 it has found applications in two areas within project management. The first is the scheduling of a single project to reduce project duration and simplify project control; this is the main theme of the novel Critical Chain [24]. A practical approach for implementing critical chain management is suggested by [38]. The application of TOC to the scheduling of a single project is investigated in [64], and the Theory of Constraints management system in [40]. [69] addresses planning and controlling multiple, simultaneous, independent projects in a resource-constrained environment. The TOC project management approach schedules all non-critical activities as late as possible, but with buffers. The objective of these buffers (called "feeding buffers" because they are placed where non-critical paths feed into the critical chain) is to prevent delay of the work on the critical chain when work on a non-critical path is delayed. The literature on these feeding buffers is illustrated by ([24], [44], [36], [1], and [50]). Detailed information on how the five focusing steps provide a framework for coordinating and controlling activities in both manufacturing and project environments is given in [26], and a literature review on the application of TOC in project management in [65].
Critical chain scheduling is not only a technique for developing and tracking project schedules; it is a coherent and comprehensive approach to project management that encompasses and affects the other processes and practices associated with project management as well [47]. The TOC approach can also be applied to other areas of project management, such as project cost management and project risk management [66]. [17] describes the application of TOC multi-project management at the Boeing Commercial Airplane Company, highlighting project end-date predictability, a modified scheduling algorithm, and the balancing of resources across multiple projects. [72] compares and contrasts the advantages and disadvantages of traditional and TOC project management. [61] gives several examples of how placing buffers, or blocks of unscheduled time, to account for delays allows project managers to minimize their risks and achieve project success. [35] identifies a number of sources of bias in project performance against schedule and cost estimates, and provides recommendations for sizing buffers so that projects come in under the baseline schedule and budget. Although critical chain project management has a number of valuable concepts, it does not provide a complete solution to project management needs, and organizations should be very careful about excluding conventional project management techniques [51]. A five-step innovative approach to applying the Theory of Constraints management philosophy is suggested by [27]. [52] presents an algorithm to determine critical sets and critical clouds, applies it to a sample project, and presents the results in a condensed, project-manager-friendly graphical format. The Logistics paradigm in services applies the five-step focusing process to processes and procedures within service organizations; the focusing steps have been used to improve logistics functions within the military [67].
The five focusing steps can be used in the re-engineering of administrative functions [63]. The five-step process has been used to improve information flow ([31], [6], and [16]), to improve service times [46], to improve sales [30], and in medical settings [54]. The drum-buffer-rope scheduling technique can be used in services as well: while manufacturing uses DBR to schedule machinery, services may use DBR to schedule people within the organization, to set appointments for customers, or to predict lead times for customers. Buffer management can be used to identify problems and weaknesses that will cause disruption to the system [57].

III. GLOBAL PERFORMANCE MEASURES PARADIGM

Under TOC, all company performance measures are driven by the global goal of making money now and in the future. Throughout the methodology/philosophy, the three measurements of throughput, inventory/investment and operating expense serve to focus improvement activity so as to achieve a global optimum. Throughput is defined as the rate at which the system generates money through sales. Inventory/investment is defined as all the money the system invests in purchasing things the system intends to sell. Operating expense is defined as all the money the system spends in turning inventory into throughput [24]. These definitions provide more realistic measurements and targets for the organization's operations, accomplished by increasing throughput, reducing inventory and reducing operating expense. The measurements are also related back to general accounting methods for tax and reporting purposes [61]. All measurements and activity are linked to increasing throughput (T) first, reducing inventory/investment (I) second, and lowering operating expense (OE) third:

Net Profit = T - OE    (1)
Return on Investment = (T - OE) / I    (2)
Cash Flow = T - OE ± ΔI    (3)

These equations govern all activity and show the direct link to organizational profit ([13], [23], [60]). Net profit is an absolute measure reflecting the company's ability to make money now and in the future, return on investment is a relative measure, and cash flow is a survival measure [37]. TOC does not consider value-added costs as part of inventory valuation. Similarly, in operating expenses no distinction is made between direct and indirect, or long- and short-term, expenses.
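The three derived measures above can be sketched as a small computation. The function and variable names are illustrative, not part of the TOC literature, and the sign convention for ΔI (an increase in inventory ties up cash) is our assumption.

```python
def toc_measures(throughput, operating_expense, inventory, delta_inventory=0.0):
    """Derived TOC performance measures, following Eqs. (1)-(3)."""
    net_profit = throughput - operating_expense        # Eq. (1): NP = T - OE
    return_on_investment = net_profit / inventory      # Eq. (2): ROI = (T - OE) / I
    # Eq. (3): Cash Flow = T - OE +/- delta-I; here a rise in inventory consumes cash
    cash_flow = net_profit - delta_inventory
    return net_profit, return_on_investment, cash_flow

# Hypothetical figures in currency units: T = 500, OE = 300, I = 1000, delta-I = 50
np_, roi, cf = toc_measures(500.0, 300.0, 1000.0, 50.0)
print(np_, roi, cf)  # 200.0 0.2 150.0
```

The ordering of the three equations mirrors the priority the text describes: increase T first, reduce I second, lower OE third.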
Table 2: Application area of the Global Performance Measures paradigm and references

Main area                     Sub area    References
Global Performance Measures   Services    [58], [59], [67], [43], [41], [42], [29], [5], [69], [7]

Services share this goal, defining throughput based on sales. However, services may have to be a little more creative when it comes to defining inventory and operating expenses (OE). Applications of the Global Performance Measures paradigm and the corresponding references are given in Table 2. Under TOC, the more effective statements of corporate goals are those that lead to more effective measures of inventory and operating expenses within the service, as suggested by [41]. Most services will have a limited amount of traditional inventory ([7], [42], and [43]): the service is often produced at the time of sale and cannot be carried in inventory, and inventory will be a smaller fraction of the service firm's assets than it would be for a manufacturing firm. Even though inventory is often a smaller fraction of assets for services, these global performance measures can still be utilized ([5], [41], [69], [29], [58], [59], and [67]).

IV. THINKING PROCESS PARADIGM

The major component of TOC that underpins all the other parts of the methodology is the TOC Thinking Processes. Goldratt states that managers make three decisions when dealing with constraints: What to change? What to change to? And how to cause the change? The TOC logical Thinking Process (TP) has evolved to answer these generic questions [63]. The past ten to fifteen years have shown that managerial policies are most often the main constraint [50]; the Thinking Process also helps in these situations. The Thinking Process consists of "trees", or logic diagrams, that provide a road map for change by addressing the three basic questions of what to change, what to change to, and how to cause the change. In much the same way as the five focusing steps focus on the constraint, the Thinking Processes focus on the factors that are currently preventing the system from achieving its goals. As shown in Fig. 1, the Thinking Process uses cause-effect logic, necessary condition logic, and logic for verifying soundness. The Current Reality Tree (CRT) depicts the current state of affairs; it is designed to identify the system constraint and to link causes and effects within the current operation to reveal the root causes of problems. Future Reality Trees (FRT) are used to test potential solutions by diagramming cause-and-effect relationships for events in the future. The Transition Tree (TT), also a cause-and-effect tree, is a flow diagram describing the states of the system as it changes based on a prescribed action plan; it is an implementation plan that has been time-sequenced. Two other tools are slightly different. The Evaporating Cloud (EC) and the Prerequisite Tree (PRT) are used to identify necessary conditions. These tools complete the sentence "In order to have ... we must ..." and are used to identify and overcome obstacles to meeting an objective or implementing a solution. The PRT provides a bridge between the Future Reality Tree and the Transition Tree; as such, the PRT is also time-sequenced. Fig. 2 shows the Thinking Process procedure for solving a problem. Three of the trees (the Current and Future Reality Trees and the Transition Tree) use cause-and-effect logic. They are built up by constructing connections between observed effects and causes on the basis of "sufficient cause". Sufficiency can be of three types: "A is sufficient to cause C"; "if both A and B occur together, they will be sufficient to cause C"; or "A and B separately both contribute to C, and between them they are sufficient to cause C". The Evaporating Cloud and the Prerequisite Tree both use necessary-condition thinking: "In order to achieve A we must have B". The logic rules are called the Categories of Legitimate Reservation ([12], [45]) and have been proposed for use in validating systems dynamics models [3]. An excellent, straightforward explanation of these building blocks is given in [56]; fuller descriptions and examples of these logics are given in ([21], [45], [12], and [32]).

Fig 1: TOC Thinking Process techniques: cause-effect logic (sufficient condition logic), necessary condition logic, and verifying sound logic (Categories of Legitimate Reservation)

Fig 2: Thinking Process procedure: the problem → Current Reality Tree → cloud → cloud + injection → Future Reality Tree → the solution (decision loop)
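The sufficient-cause logic described above can be given a minimal computational sketch: a cause-effect tree encoded as a mapping from each effect to its alternative sufficient cause-sets (OR across sets, AND within a set). The tree contents below are invented purely for illustration and are not from the TOC literature.

```python
def is_entailed(effect, facts, tree):
    """True if `effect` holds given the observed `facts` under sufficiency logic.

    `tree` maps an effect to a list of cause-sets: any single set whose members
    all hold is sufficient to cause the effect (OR over sets, AND within a set).
    """
    if effect in facts:
        return True
    return any(all(is_entailed(c, facts, tree) for c in cause_set)
               for cause_set in tree.get(effect, []))

# Invented example: "late delivery" follows if overload and multitasking hold
# together, or if a supplier failure holds on its own.
crt = {
    "late delivery": [{"resource overload", "multitasking"}, {"supplier failure"}],
    "lost sales": [{"late delivery"}],
}
print(is_entailed("lost sales", {"resource overload", "multitasking"}, crt))  # True
print(is_entailed("lost sales", {"multitasking"}, crt))  # False
```

This captures the first two sufficiency types directly; the third ("A and B separately both contribute, and between them are sufficient") would be modelled as a single cause-set containing both contributors.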

4.1. Current Reality Trees (CRT)

An existing condition is called a reality [20]. The tools Goldratt designed are intended for analyzing and dealing with a system condition, or reality, with which the TOC practitioner is unhappy. A Current Reality Tree is a logic structure designed to depict the state of reality as it currently exists in a given system [12]. It is constructed top-down: from observed undesirable effects, likely causes for those effects are postulated and then tested via the CLR. One such test is to predict (and check for) other effects that would also arise if this cause existed, hence the term Effect-Cause-Effect. The CRT is designed to achieve the following objectives [12]:
• Provide the basis for understanding complex systems
• Identify undesirable effects (UDEs) exhibited by a system
• Relate UDEs through a logical chain of cause and effect to root causes
• Identify, where possible, a core problem that eventually produces 70% or more of the system's UDEs
• Determine at what points the root causes and/or core problem lie beyond one's span of control or sphere of influence
• Isolate those few causative factors (constraints) that must be addressed in order to realize the maximum improvement of the system
• Identify the one simplest change to make that will have the greatest positive impact on the system (p. 64)
Dettmer described the CRT as functional rather than organizational and, as such, blind to internal and external system boundaries. CRTs may also include positive feedback loops: generally there will be at least one feedback loop which constitutes a vicious cycle. The existence of a loop usually opens up more possibilities for remedial action: a change in or below a loop will have a significant effect.

4.2. Evaporating Clouds (EC)
Once the TOC practitioners have identified what to change, the second step in the process is the search for a plausible solution to the root cause; that is, what to change to. This task is accomplished with the aid of the Evaporating Cloud (EC) and the Future Reality Tree (FRT). Unlike the trees, the EC has a set format with five boxes. The practitioner identifies two opposing wants that represent the conflict, the need that each want is trying to satisfy, and a common objective or goal that both needs are trying to fulfill. This direct conflict is often the same as that underlying the CRT. Traditionally, in resolving such conflicts, managers have sought compromise solutions [20]. The EC starts with an objective, which is the opposite of the core problem. From the objective, a minimum of two requirements are listed, and each requirement has at least one prerequisite; it is the prerequisites that depict the tug-of-war. What is needed is a set of injections that can be used to break the validity of any one of the underlying assumptions; this is the first step in freeing ourselves from the binding controversy. In constructing the EC, one injects the ideal answer, which bursts the cloud and thereby removes the problem. The EC is intended to achieve the following purposes [12]:
• Confirm that the conflict exists
• Identify the conflict perpetuating a major problem
• Resolve conflict
• Avoid compromise
• Create solutions in which both sides win
• Create new 'breakthrough' solutions to problems
• Explain in depth why a problem exists
• Identify all assumptions underlying problems and conflicting relationships
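The five-box structure just described lends itself to a simple data representation. The sketch below is ours, not a standard TOC artifact, and the cloud contents are hypothetical; it only illustrates the format (objective, two requirements, two conflicting prerequisites, and injections that invalidate an assumption).

```python
from dataclasses import dataclass, field

@dataclass
class EvaporatingCloud:
    objective: str                  # A: the common goal both sides share
    requirements: tuple             # (B, C): the needs each want tries to satisfy
    prerequisites: tuple            # (D, D'): the two conflicting wants
    assumptions: dict = field(default_factory=dict)  # assumption -> still valid?
    injections: list = field(default_factory=list)   # ideas that break an assumption

    def is_resolved(self):
        # The cloud "evaporates" once any underlying assumption is invalidated.
        return any(not valid for valid in self.assumptions.values())

# Hypothetical cloud: protect the project date vs. cut estimates to save cost.
cloud = EvaporatingCloud(
    objective="Finish the project on time and on budget",
    requirements=("Protect the critical chain", "Keep resource costs low"),
    prerequisites=("Hold safety in every task", "Cut all task estimates"),
    assumptions={"Safety must sit inside individual tasks": True},
)
cloud.injections.append("Aggregate safety into a shared project buffer")
cloud.assumptions["Safety must sit inside individual tasks"] = False
print(cloud.is_resolved())  # True
```

The injection does not compromise between the prerequisites; it removes the assumption that made them appear mutually exclusive, which is exactly the "avoid compromise" purpose listed above.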

4.3 Future Reality Trees (FRT)
Once a solution, called an injection, has been identified via the EC method, practitioners assume for the next exercise that it has been achieved and start to build the Future Reality Tree (FRT). The tree is constructed and scrutinized to test the solution, once again using the effect-cause-effect method. The FRT identifies what to change to, as well as considering its impact on the future of the organization. The Future Reality Tree is the thinking process that enables a person to construct a solution that, when implemented, replaces the existing undesirable effects with desirable effects without creating new ones (Goldratt, 1993). Step by step the solution is created, and each step is scrutinized to show logically that once the injections are implemented, the desirable effects can be accomplished. The resulting tree originates in one or more injections and ends in desirable effects which reflect the opposite of the UDEs in the CRT. Goldratt's Categories of Legitimate Reservation (CLR) provide guidelines for communicating any reservations about the validity of the elements and connections within the trees [12, 3]. The FRT serves the following purposes:
• Enables effectiveness testing of new ideas before committing resources to implementation
• Determines whether proposed system changes will produce the desired effects without creating negative side effects
• Reveals, through negative branches, whether proposed changes will create new or collateral problems as they solve old problems, and what additional actions are necessary to prevent any such negative side effects from occurring
• Provides a means of making beneficial effects self-sustaining through deliberate incorporation of positive reinforcing loops
• Provides a means of assessing the impact of localized decisions on the entire system
• Provides an effective tool for persuading decision makers to support a desired course of action
• Serves as an initial planning tool

4.4. Prerequisite Trees (PRT)
Once the practitioners have identified what to change to, the third step in TOC deals with implementing the solution. One of the TOC principles is that "ideas are not yet solutions" [20]: an idea cannot be called a solution until implementation is complete and the system is working as intended. The PRT is intended to identify obstacles that prevent the injection from the EC from being implemented. The PRT uses a different logic from the previous trees, which use sufficiency logic (which basically asks "Is this enough?") to establish cause-and-effect relationships; the PRT uses necessary logic, as does the Evaporating Cloud. Two questions check whether a PRT is needed [12]:
• Is the objective a complex condition? If so, a PRT may be needed to sequence the intermediate steps to achieve it.
• Do I already know exactly how to achieve it? If not, then a PRT will help map out possible obstacles, the steps involved in overcoming them, and the appropriate sequence.
The PRT is used to achieve the following objectives [12]:
• To identify obstacles preventing achievement of a desired course of action, objective, or injection (a solution idea arising from the Evaporating Cloud)
• To identify the remedies or conditions necessary to overcome or otherwise neutralize obstacles to a desired course of action, objective, or injection
• To identify the required sequence of actions needed to realize a desired course of action
• To identify and depict unknown steps to a desired end when one does not know precisely how to achieve them

4.5 Transition Trees
The last tool in the TOC Thinking Process is the Transition Tree [34], which allows practitioners to determine the actions necessary to implement the solution. Practitioners use the effect-cause-effect method to construct and scrutinize the details of the action plan, called the Transition Tree. As in the construction of the FRT, each step is scrutinized using the CLR for negative branches. The FRT is a strategic tool in which major changes can be outlined [12]; implementing these changes, however, will require complex interventions needing a greater level of detail in the actions to be taken, which is the intended use of the Transition Tree. The Transition Tree is thus an operational or tactical tool.

The purpose of a Transition Tree is to implement change [12]. Dettmer says that the Transition Tree started as a four-element tree, with a fifth element added later; the use of the four- or five-element tree is situational. The five-element tree is the preferred methodology when constructing step-by-step procedures where there is a need to explain to others exactly why each step is required. [12] outlines the original four elements of the Transition Tree as:
1. A condition of existing reality
2. An unfulfilled need
3. A specific action to be taken
4. An expected effect of the integration of the preceding three
Each succeeding level of the tree is built upon the previous level, with the expected effect taking the place of the unfulfilled need. These build progressively upward to an overall objective or desired effect. The fifth element added to the Transition Tree is:
5. The rationale for a need at the next higher level of the tree.
The Transition Tree has nine basic purposes [12]:
• Provide a step-by-step method for action implementation
• Enable effective navigation through a change process
• Detect deviation in progress toward a limited objective
• Adapt or redirect effort, should plans change
• Communicate the reasons for action to others
• Execute the injections developed in the EC or FRT
• Attain the intermediate objectives identified in a PRT
• Develop tactical action plans for conceptual or strategic plans
• Preclude undesirable effects from arising out of implementation (p. 284)
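The level-by-level construction described above (each level's expected effect taking the place of the next level's unfulfilled need) can be sketched as a simple validation over the four original elements. The function name and the plan contents are ours and purely illustrative.

```python
def check_transition_tree(levels):
    """Validate that a four-element Transition Tree chains correctly.

    Each level is a dict with keys: condition, need, action, effect.
    The expected effect of level k must satisfy the need of level k + 1.
    Returns the final expected effect (the overall objective) if the chain holds.
    """
    for lower, upper in zip(levels, levels[1:]):
        if lower["effect"] != upper["need"]:
            raise ValueError(f"broken chain: {lower['effect']!r} != {upper['need']!r}")
    return levels[-1]["effect"]

# Invented two-level plan
plan = [
    {"condition": "tasks carry hidden safety", "need": "visible buffers",
     "action": "strip safety from task estimates", "effect": "explicit project buffer"},
    {"condition": "buffer exists", "need": "explicit project buffer",
     "action": "monitor buffer consumption weekly", "effect": "early warning of slippage"},
]
print(check_transition_tree(plan))  # early warning of slippage
```

A five-element variant would simply add a "rationale" key per level explaining why the need at the next higher level exists.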

4.6 Summary of the Thinking Process
The five-stage Theory of Constraints Thinking Process begins with a Current Reality Tree, which diagnoses what in the system needs to be changed. The Evaporating Cloud is then used to gain an understanding of the conflict within the system environment or, as Goldratt prefers to call it, the reality that is causing the conflict. The Evaporating Cloud also provides ideas of what can be changed to break the conflict and resolve the core problem. The Future Reality Tree takes these ideas for change and ensures that the new reality created would in fact resolve the unsatisfactory system conditions and not cause new ones. The Prerequisite Tree determines obstacles to implementation and ways to overcome them, and the Transition Tree is a means by which to create a step-by-step implementation plan. All of Goldratt's tools are designed to overcome resistance to change by creating a logical path which can be followed.

V. CONCLUSIONS

In this paper we have synthesized a large body of knowledge on the three paradigms of TOC: Logistics, Global Performance Measures, and the Thinking Process. The review shows that the vast majority of papers have concentrated on the enhancement of the TOC concept and philosophy. Several articles have also been published in the production sector. In the application category, a number of articles report the application of TOC concepts in production and management accounting. A few papers have been published comparing TOC with existing theories such as MRP and JIT. This review shows that most of the work within the Logistics paradigm has been carried out in the field of project management, with some applications in services in both the Logistics and Global Performance Measures paradigms. The Theory of Constraints provides an effective, systematic approach for identifying constraints to the overall business and developing a plan to alleviate them, and a global system methodology that promotes achieving the organizational goal of making more money both now and in the future.

From the analysis of these paradigms it is observed that there are some important elements which have not yet been adequately addressed, particularly in project management: simulation of case studies of organizations by identifying the bottleneck stations and developing a detailed schedule for them; validation of the network models against simulation; and a comparison of the project management network models with CPM.

REFERENCES
[1]. Barber, P., Tomkins, C. & Graves, A., (1999) "Decentralized site management - a case study", International Journal of Project Management, vol 17(2), 113-120.
[2]. Blackstone, J.H., (2001) "Theory of Constraints - A Status Report", International Journal of Production Research, vol 39(6), 1053-1080.
[3]. Balderstone, S.J., (1999) "Increasing User Confidence in Systems Dynamics Models Through Use of an Established Set of Logic Rules to Enhance Forrester and Senge's Validation Tests", in Systems Thinking for the Next Millennium, Proceedings of the 17th International Systems Dynamics Conference and 5th Australian and New Zealand Systems Conference, Wellington, 20-23 July.
[4]. Boyd, L. & Gupta, M., (2004) "Constraints management: what is the theory?", International Journal of Production and Operations Management, vol 24(4), 350-371.
[5]. Bramorski, T., Madan, M.S. & Motwani, J., (1996) "Application of the Theory of Constraints in Banks", The Bankers Magazine, vol 180(1), 53-59.
[6]. Coman, A. & Ronen, B., (1994) "IS Management by Constraints: Coupling IS Effort to Changes in Business Bottlenecks", Human Systems Management, vol 13(1), 65-70.
[7]. Cook, D.P., Goh, C. & Chung, C.H., (1999) "Service Typologies: A State of the Art Survey", Production and Operations Management, vol 8(3), 318-338.
[8]. Cox, Blackstone & Schleier, (2003) "Managing Operations: A Focus on Excellence".
[9]. Cox, J.F. & Spencer, M.S., (1998) "The Constraints Management Handbook", The St. Lucie Press/APICS Series on Constraints Management, Boca Raton, FL.
[10]. Demmy, W.S. & Petrini, A.B., (1992) "The Theory of Constraints: A New Weapon for Depot Maintenance Planning and Control", Air Force Journal of Logistics, vol 16(3), 6-11.
[11]. Demmy, W.S. & Demmy, B.S., (1994) "Drum-Buffer-Rope Scheduling and Pictures for the Yearbook", Production and Inventory Management Journal, vol 35(3), 45-47.
[12]. Dettmer, H.W., (1997) "Goldratt's Theory of Constraints: A Systems Approach to Continuous Improvement", ASQC Quality Press, Milwaukee, WI.
[13]. Dettmer, H.W., (1998) "Breaking the Constraints to World-Class Performance", ASQ Quality Press, Milwaukee, WI.
[14]. Draman, R.H., (1995) "A new approach to the development of business plans: A cross fundamental model using the theory of constraints philosophies", Ph.D. dissertation, University of Georgia.
[15]. Fawcett, S.E. & Pearson, J.N., (1991) "Understanding and Applying Constraint Management in Today's Manufacturing Environments", Production and Inventory Management Journal, vol 32(3), 46-55.
[16]. Feather, J.J. & Cross, K.F., (1988) "Workflow Analysis, Just-in-Time Techniques Simplify Administrative Process in Paperwork Operation", Industrial Engineering, vol 20(1), 32-40.
[17]. Fenbert, J.A. & Fleener, N.K., (2002) "Implementing TOC multi-project management in a research organization", Frontiers of Project Management Research and Application: Proceedings of PMI Research Conference 2002, 14-17 July 2002, USA: PMI.
[18]. Gardiner, S.C., Blackstone, Jr., J.H. & Gardiner, L.R., (1994) "The Evolution of the Theory of Constraints", Industrial Management, vol 36(3), 13-16.
[19]. Gillespie, M.W., Patterson, M.C. & Harmel, B., (1999) "TOC Beyond Manufacturing", Industrial Management, vol 41(6), 22-25.
[20]. Goldratt, E.M., (1990b) "What is this thing called Theory of Constraints and How Should it be Implemented?", North River Press, New York, NY.
[21]. Goldratt, E.M., "Optimized Production Timetable: A Revolutionary Program for Industry", APICS 23rd Annual Conference Proceedings; Goldratt, E.M., (1994) It's Not Luck, North River Press Publishing Corporation, Great Barrington, MA.
[22]. Goldratt, E.M. & Cox, J., (1992) The Goal, second revised edition, North River Press, Great Barrington, MA.
[23]. Goldratt, E.M. & Fox, R.E., (1986) The Race, North River Press, New York, NY.
[24]. Goldratt, E.M., (1997) Critical Chain, North River Press, Croton-on-Hudson, New York.

[25]. Goldratt, E.M. & Fox, R.E., (1986) The Race, North River Press, Great Barrington, MA.
[26]. Gray, V., Felan, J., Umble, E. & Umble, M., (2000) "A Comparison of Drum-Buffer-Rope (DBR) and Critical Chain (CC) Buffering Techniques", Project Management Research at the Turn of the Millennium: Proceedings of PMI Research Conference 2000, 21-24 June 2000, Pennsylvania, USA: Project Management Institute.
[27]. Gregory, Alan & Kearney, Gillian, (2004) "Restriction Buster", Project, vol 16(10), 20-22.
[28]. Gupta, S., (1997) "Supply Chain Management in Complex Manufacturing", IIE Solutions, vol 29(3), 18-21.
[29]. Hinneburg, P.A., Lynch, W. & Black, J., (1996) "Lean Logistics", in 1996 APICS Constraints Management Symposium Proceedings, APICS, Detroit, Michigan, 89-94.
[30]. Hodgdon, B., (1998) "Identifying and Elevating the Constraint Sales Skill", in 1998 APICS Constraints Management Symposium Proceedings, APICS, Seattle, Washington, 62-63.
[31]. Jolley, A. & Patrick, A., (1990) "The Office Factory", Management Today, July, 100-102.
[32]. Kendall, G.I., (1998) "Securing the Future: Strategies for Exponential Growth Using the Theory of Constraints", St. Lucie Press/APICS Series on Constraints Management, Boca Raton, FL.
[33]. Klein, D. & DeBruine, M., (1995) "A Thinking Process for Establishing Management Policies", Review of Business, vol 16(3), 31-37.
[34]. Lambrecht, M. & Segaert, A., (1990) "Buffer Stock Allocation in Serial and Assembly Type of Production Lines", International Journal of Operations and Production Management, vol 10(2), 47-61.
[35]. Leach, L., (2003) "Schedule and cost buffer sizing: how to account for the bias between project performance and your model", Project Management Journal, vol 34(2), 34-47.
[36]. Leach, L.P., (1999) "Critical Chain Project Management Improves Project Performance", Project Management Journal, vol 30(2), 39-51.
[37]. Lockamy, III, A. & Spencer, M.S., (1998) "Performance Measurement in a Theory of Constraints Environment", International Journal of Production Research, vol 36(8), 2045-2060.
[38]. Lynch, W.E. & Newbold, R., (1998) "A Practical Approach for Implementing Critical Chain Management", in 1998 APICS Constraints Management Symposium Proceedings, Seattle, Washington, 64-67.
[39]. Mabin, V.J. & Balderstone, S.J., (2000) "The World of the Theory of Constraints: A Review of the International Literature", St. Lucie Press, Boca Raton, FL.
[40]. McMullen, Jr., T.B., (1998) "Introduction to the Theory of Constraints Management Systems", St. Lucie Press/APICS Series on Constraints Management, Boca Raton, FL.
[41]. Motwani, J., Klein, D. & Harowitz, R., (1996a) "The Theory of Constraints in Services: Part 1 - the basics", Managing Service Quality, vol 6(1), 53-56.
[42]. Motwani, J., Klein, D. & Harowitz, R., (1996b) "The Theory of Constraints in Services: Part 2 - examples from healthcare", Managing Service Quality, vol 6(2), 30-34.
[43]. Motwani, J. & Vogelsang, K., (1996) "The Theory of Constraints in Practice - at Quality Engineering, Inc.", Managing Service Quality, vol 6(6), 443-473.
[44]. Newbold, R.C., (1998) "Project Management in the Fast Lane: Applying the Theory of Constraints", St. Lucie Press.
[45]. Noreen, E., Smith, D.A. & Mackey, J.T., (1995) "The Theory of Constraints and its Implications for Management Accounting", The North River Press Publishing Corporation, Great Barrington, MA.
[46]. Olson, C.T., (1998) "The Theory of Constraints: Application to a Service Firm", Production and Inventory Management Journal, vol 39(2), 55-59.
[47]. Patrick, F.S., (2001) "Buffering against risk - critical chain and risk management", PMI Seminar and Symposium Proceedings 2001, 1-10 Nov 2001, USA: PMI (CD-ROM).
[48]. Perez, J.L., (1997) "TOC for World Class Global Supply Chain Management", Computers and Industrial Engineering, vol 33(1-2), 289-293.
[49]. Rahman, S., (1998) "Theory of Constraints: A Review of the Philosophy and its Applications", International Journal of Operations & Production Management, vol 18(4), 336-355.
[50]. Rand, G.K., (2000) "Critical Chain: the Theory of Constraints applied to project management", International Journal of Project Management, vol 18(3), 173-177.
[51]. Raz, T., Barnes, R. & Dvir, D., (2003) "A Critical Look at Critical Chain Project Management", Project Management Journal, vol 34(4), 24-32.
[52]. Rivera, F. & Duran, A., (2004) "Critical Clouds and Critical Sets in Resource-Constrained Projects", International Journal of Project Management, vol 22(6), 489-497.
[53]. Ronen, B., Gur, R. & Pass, S., (1994) "Focused Management in Military Organizations: An Avenue for Future Industrial Engineering", Computers and Industrial Engineering, vol 27(1-4), 543-544.
[54]. Roybal, H., Baxendale, S.J. & Gupta, M., (1999) "Using Activity-Based Costing and Theory of Constraints to Guide Continuous Improvements in Managed Care", Managed Care Quarterly, vol 7(1), 1-10.


Vol. 3, Issue 1, pp. 334-344

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

Authors
She has 7 years of teaching experience at both U.G. and P.G. levels. She obtained her M.Tech degree in the field of Industrial Engineering and is currently pursuing her Ph.D. as a full-time research scholar.

A. Ramakrishna Rao is Vice-Principal and Professor in the Department of Mechanical Engineering, S.V.U. College of Engineering, S.V. University, Tirupati. He has more than 31 years of experience in teaching at both U.G. and P.G. levels and in research. He has published more than 55 papers in refereed national and international journals and has presented more than 45 research articles at national and international conferences. His current areas of research include Supply Chain Management, Operations Management, Lean Manufacturing, Graph Theory and TOC.


TURBO BRAKE ASSISTS – TURBOTOR: CONCEPTUAL DEVELOPMENT AND EVALUATION
G. Ramkumar and Amrit Om Nayak
Student, Department of Mechanical Engineering, Thiagarajar College of Engineering, Madurai, India

ABSTRACT
The ‘turbotor’ employs the principles of turbo machinery and automotive engineering in order to achieve effective braking assistance for the modern day disc as well as drum brakes. It is intended to provide braking assistance during high speed motion. The major components include a turbine or rotor, a pump, a nozzle, an enlarged brake fluid chamber and DOT 5.1 braking fluid. The functioning involves directing a jet of high velocity brake fluid (DOT 5.1 in this case) at the blades of the rotor, which is coupled with the wheel. The braking circuit is used as such, with enlargement of the brake fluid chamber. A pump is added to the setup to increase the mass flow rate of the fluid and to ensure continuity of fluid flow in the entire circuit. A speed sensor is integrated with the circuit to activate the pump, so that the braking assist comes into use only after a particular velocity is reached, as braking assists are not necessary at lower speeds.

KEYWORDS: Brake assists, brake fluid, mass flow rate, nozzle

I. INTRODUCTION

The modern era has seen unprecedented technological growth and has propelled automotives along an ever-rising curve of speed. Still, despite superchargers, turbochargers, twin turbochargers or nitrous oxide injection, there are limits which cannot be surpassed by a land-based vehicle in terms of speed. Brakes, on the other hand, are a part of automobiles that abide only by the limitations imposed on them by the ability of the human body to withstand rapid decelerations. Otherwise, it would be a lot easier to stop a car than to make it go insanely fast. The drum-based braking system can be considered the forefather of the modern day brake. The man largely credited with the development of the modern day drum brake is the French manufacturer Louis Renault, in 1902. This braking system was external, a feature which soon turned into a problem: dust, heat and even water rendered it less effective. It was time for the internal expanding shoe brake. By placing the shoes inside the brake drum, dust and water were kept out, allowing the braking process to remain effective. As vehicles spilled out of the assembly plants, they became both faster and heavier. The earlier brakes were effective, but they had a tendency to distribute heat poorly. This shortcoming made room for the creation of the disc braking system. The ‘turbotor’ takes the development process in the field of automotive braking a step further. It is an attempt to reduce our dependence on frictional braking and thereby reduce wear and tear of brake components. The ‘turbotor’ utilizes the concepts governing turbo machinery and automotive engineering in order to achieve effective braking assistance for the modern day disc as well as drum brakes. The principle involved encompasses the targeting of a jet of viscous fluid onto the blades of a turbine which is attached to the wheel. This jet of fluid retards the motion of the turbine and in the process damps or slows down the rotation of the wheels.
The ‘turbotor’ is a potential brake assist in high speed conditions, thereby reducing the work done by the disc or drum brakes. This results in decreased braking distance as well as braking time. The major components include a turbine or rotor, a pump, a nozzle, an enlarged brake fluid chamber and DOT 5.1 braking fluid. The functioning involves


Vol. 3, Issue 1, pp. 345-353

directing a jet of high velocity brake fluid (DOT 5.1 in this case) at the blades of the rotor, which is coupled with the wheel. The braking circuit is used as such, with enlargement of the brake fluid chamber. A pump is added to the setup to increase the mass flow rate of the fluid and to ensure continuity of fluid flow in the entire circuit. A speed sensor is integrated with the circuit to activate the pump, so that the braking assist comes into use only after a particular velocity is reached, as braking assists are not necessary at lower speeds. The concept development and evaluation of the ‘turbotor’ is organised into six sections. The first section describes the theory behind the functioning of the turbotor; the second discusses the construction of the Turbo Brake Assist system and the integral components that form the heart of the system. The third section explains the working principles and driving concepts of the system, and the fourth presents a detailed mathematical study of the existing braking system and of the turbotor. The fifth section, Results and Discussion, throws light on the inferences from the mathematical study of the brake assist system, and the sixth is the Conclusion, which elucidates the practicality of the concept and the performance of the turbotor as an effective brake assist.

II. THEORY

We deem the ‘turbotor’ to be a success if it provides at least 35 per cent of the braking force obtained from the disc and drum brakes. The main purpose of this system is to function as a braking assist. If the ‘turbotor’ were capable of providing excessive braking close to that obtained from the primary brake system, this would lead to locking of the wheels and very sudden deceleration, resulting in utmost discomfort for the driver. Therefore the working of this system is limited to a velocity range in order to accommodate the above mentioned requirements. The total braking force obtained from the front disc brakes and the rear drum brakes is calculated independently. The braking force obtained from the ‘turbotor’ is calculated by evaluating the force of the jet of viscous fluid (DOT 5.1 braking fluid) on the turbine blades. The weight distribution between the front and rear axles during dynamic braking is calculated; this enables us to decide the amount of braking assist required at the front and rear wheels of the vehicle. The use of DOT 5.1 braking fluid ensures that the heat generated during damping of the turbine motion will not pose a threat, as the fluid is hygroscopic in nature. The concept is explained in detail in the following sections.

III. CONSTRUCTION

The integral parts of the Brake Assists are depicted in the schematic diagram in Fig. 1, and the components are explained subsequently.

Fig.1 Turbotor schematic layout

3.1. Master cylinder chambers:
The Turbotor setup consists of a master cylinder which is partitioned into two chambers in order to provide undisturbed volumes for normal braking system as well as for Turbotor. The position of the

master cylinder chambers is the same as in the existing braking system. The master cylinder chamber for the turbotor system is provided with pipelines to carry brake fluid to and from the rotor setup. These pipelines for the turbotor are called auxiliary brake fluid lines (inlet and outlet). The other chamber of the master cylinder is provided with pipelines for the disc and drum brakes and is thus left undisturbed.

3.2. Auxiliary brake fluid pipelines:
The auxiliary brake fluid pipelines are connected to the master cylinder chamber for the turbotor at one end, while the other end consists of a nozzle to impart high velocity to the brake fluid at the striking end of the turbotor. There are four inlet and outlet brake fluid lines attached to the master cylinder chamber for the turbotor. The outlet pipelines from the turbotor carry the brake fluid that accumulates after striking the turbotor back to the master cylinder chamber. The auxiliary inlet pipelines to the turbotors at the front and rear are symmetric about a point along their longitudinal length, as shown in Fig. 1. This symmetric arrangement ensures an equal mass flow rate of fluid to both left and right turbotors at the front and rear axles.

3.3. Hydraulic pump:
The hydraulic pump used here is a diaphragm-type positive displacement reciprocating pump. A diaphragm pump is a positive displacement pump that uses a combination of the reciprocating action of a rubber, thermoplastic or Teflon diaphragm and suitable non-return check valves to pump a fluid. Sometimes this type of pump is also called a membrane pump.

Fig.2. Diaphragm pump schematic

When the volume of the pump chamber is increased (the diaphragm moving up), the pressure decreases and fluid is drawn into the chamber. When the chamber pressure later increases as the volume decreases (the diaphragm moving down), the fluid previously drawn in is forced out. Finally, the diaphragm moving up once again draws fluid into the chamber, completing the cycle. This action is similar to that of the piston in an internal combustion engine.

3.4. Turbotor:

Fig. 3. Turbotor schematic

The turbotor schematic is shown in Fig. 3. After the fluid jet hits the turbotor blades, it is allowed to pass through the orifice holes and collect in the collecting chamber. This collected brake fluid is then sent back via the turbotor outlet to MCC 2 (the master cylinder chamber for the turbotor) as shown in Fig. 1.


IV. WORKING

The overall working of the system is quite simple. The brake fluid is pumped from MCC 2 and is sent via the auxiliary inlet lines to the nozzle. High pressure fluid is sprayed on the turbotor blades which in turn decelerates the blades. Since the turbotor is coupled to the axle, it in turn causes overall deceleration of the vehicle. The brake fluid collects in the collecting chamber of the turbotor through the orifices and is sent back to the MCC 2 via the auxiliary fluid outlet lines with the help of the pump. This entire turbotor system aids the primary braking system and thereby leads to reduction in braking time and braking distance. This also reduces the wear and tear experienced by the primary braking system.

V. MATHEMATICAL STUDY OF EXISTING BRAKE SYSTEM

NOTE: The numerals beside the equations are intended to signify the number of equations only.

5.1. Drum brakes braking equations:
The maximum wheel torque is limited by wheel slip and is given by:
T_w = μ_t W R_w    (1)
Where,
T_w = maximum wheel torque in Nm
μ_t = coefficient of adhesion between tyre and road
W = load on the wheel in N
R_w = effective radius of the wheel in m

The torque produced at the drum brakes, caused by friction between the lining and the drum, which is necessary to bring the vehicle to a standstill, is given by:
T_d = N μ_L r_d    (2)
Where,
T_d = drum brake torque in Nm
N = force between lining and drum in N
μ_L = coefficient of friction between lining and drum
r_d = effective radius of the drum in m
Both wheel and drum torques must be equal up to the point of wheel slip, but they act in opposite directions to each other. Therefore they may be equated:
T_w = T_d    (3)
μ_t W R_w = N μ_L r_d    (4)
Therefore the force between lining and drum is:
N = μ_t W R_w / (μ_L r_d)  (N)    (5)
The above equation gives the braking force produced by the drum brakes in order to bring the vehicle to a stop.
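The torque balance in equations (1)-(5) can be sketched numerically. The figures used below are illustrative assumptions, not values taken from the paper:

```python
# Drum-brake force balance, equations (1)-(5), as a small sketch.
def lining_force(mu_t, W, R_w, mu_L, r_d):
    """N = mu_t*W*R_w / (mu_L*r_d): lining-to-drum force whose torque
    matches the slip-limited wheel torque."""
    T_w = mu_t * W * R_w           # eq. (1): slip-limited wheel torque (N*m)
    return T_w / (mu_L * r_d)      # eqs. (3)-(5): T_w = N*mu_L*r_d, solved for N

# Illustrative (assumed) figures: 0.8 tyre adhesion, 3 kN wheel load,
# 0.19 m tyre radius, 0.35 lining friction, 0.10 m drum radius.
N = lining_force(0.8, 3000.0, 0.19, 0.35, 0.10)
```

With these assumed figures the lining must press on the drum with roughly 13 kN, which illustrates why drum brakes need the self-servo action of the leading shoe.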

5.2. Disc brakes braking equations
The normal clamping thrust N acting on each side of the disc through the pistons, multiplied by the coefficient of friction μ generated between the disc and pad interfaces, produces a frictional force on each side of the disc:
F = μ N  (N)    (6)

If the resultant frictional force acts through the centre of the friction pad, the mean distance between the centre of pad pressure and the centre of the disc will be:
r_m = R − d/2  (m)    (7)
where R is the disc radius and d is the pad height. Accordingly, the frictional braking torque depends on twice the frictional force (one on each side of the disc) and the distance of the pad from the disc centre of rotation. That is,
T_B = 2 F r_m = 2 μ N r_m  (Nm)    (8)
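Equations (6)-(8) combine into a one-line torque estimate. The numbers below are illustrative assumptions for a small passenger-car front disc, not values from the paper:

```python
def disc_brake_torque(mu, N_clamp, R_disc, d_pad):
    """Eqs. (6)-(8): T_B = 2*mu*N*r_m, with r_m = R - d/2 the mean
    radius at which the pad friction force acts."""
    r_m = R_disc - d_pad / 2.0        # eq. (7): mean radius of pad pressure (m)
    return 2.0 * mu * N_clamp * r_m   # eq. (8): friction acts on both disc faces

# Illustrative figures: mu = 0.4 pad friction, 15 kN clamp load,
# 0.13 m disc radius, 0.04 m pad height.
T_B = disc_brake_torque(0.4, 15000.0, 0.13, 0.04)
```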

5.3. Braking dynamics of Turbotor
The maximum sustained deceleration required by the specifications is 0.65g (where g is the acceleration due to gravity, taken as 9.81 m/s2).
The braking force required to bring the car to a stop is given by:
F_b = M a  (kN)    (9)
The average power absorbed by the brakes is given by:
P_avg = F_b v / 2  (kW)    (10)
The static weight distribution of the car is given by:
Weight on the front axle: W_f = W l2 / (l1 + l2)    (11)
Weight on the rear axle: W_r = W l1 / (l1 + l2)    (12)
Assuming steady-state braking, application of Newton's law provides the equations to determine the dynamic weight distribution on the front and rear axles during braking:
W_f' = W l2 / (l1 + l2) + W (a'/g) h / (l1 + l2)    (13)
W_r' = W l1 / (l1 + l2) − W (a'/g) h / (l1 + l2)    (14)
NOTE*: All weight distributions are in kN.
The following diagram indicates the parameters noted in the above formulae: l1 and l2 are the distances of the front and rear axles from the centre of gravity, h is the height of the centre of gravity, and a' is the deceleration.

Fig. 4 Vehicle force diagram
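The weight-transfer equations (11)-(14) can be checked with a short sketch. The car mass matches the Results section, but the wheelbase split, centre-of-gravity height and the 0.65g stop are illustrative assumptions:

```python
G = 9.81  # m/s^2

def axle_loads(M, l1, l2, h, a):
    """Eqs. (11)-(14): static axle loads plus the weight-transfer term
    M*a*h/L that moves load from the rear to the front axle under braking."""
    L = l1 + l2
    W_f = M * G * l2 / L           # eq. (11): static front-axle load (N)
    W_r = M * G * l1 / L           # eq. (12): static rear-axle load (N)
    dW = M * a * h / L             # longitudinal weight-transfer term (N)
    return W_f + dW, W_r - dW      # eqs. (13)-(14): dynamic loads

# 1110 kg car (mass quoted in the Results section); 2.49 m wheelbase,
# 0.55 m CG height and the 0.65g stop are assumed for illustration.
Wf_d, Wr_d = axle_loads(1110.0, 1.00, 1.49, 0.55, 0.65 * G)
```

The transfer term cancels between the two axles, so the dynamic loads still sum to the vehicle weight, which is a quick sanity check on any implementation of (13)-(14).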

5.4. Mathematical study of proposed brake assist system
We know that the braking power obtained at the conventional brakes is the maximum braking power that can be provided at the turbotor for it to act safely as a brake assist. This power is carried to the turbotor by the brake fluid. The brake power (in kW) that needs to be imparted to the brake fluid is given as:
P = F_b × v    (15)
Where F_b = braking force in kN and v = velocity of the brake fluid in m/s.

This is the velocity of the brake fluid (in m/s) obtained at the nozzle outlet in the auxiliary inlet fluid line. The head developed in the nozzle is given by:
H = v^2 / (2g)    (16)
Where g = acceleration due to gravity in m/s2 and H = head developed in m.
The necessary power is given to the brake fluid with the help of a suitable reciprocating pump (as described in Fig. 1), which is similar to the fuel pumps in use today. The power rating (in kW) of the pump is given by:
P = ρ g (A L N / 60) H' / 1000    (17)
Where ρ = density of the brake fluid in kg/m3, A = area of cross-section of the cylinder in m2, L = stroke length in m, N = rpm of the pump, and
H' = total head = h_s + h_d
Where h_s = suction head in m and h_d = delivery head in m.
The head obtained at the nozzle outlet is the delivery head of the pump (without considering the losses in the pipeline). From the above equation we can obtain the value of the suction head of the pump. The pump is similar to a fuel pump when the head is considered in terms of pressure. Head in terms of pressure can be obtained using the formula:
H = P × 10.197 / SG    (18)
Where H = head in m, SG = specific gravity of the brake fluid, and P = pressure of the fluid in bar.
Since the turbotor spins at the same rpm as the wheel, the rpm of the turbotor can be obtained from:
vehicle speed (miles/hr) = 0.00595 × wheel radius (in inches) × n / (gear ratio × final drive ratio)    (19)
u = π D n / 60    (20)
Where u = tangential velocity of the turbotor in m/s, D = diameter of the turbotor in m, and n = rpm of the turbotor.
From the velocity triangle of the turbotor blades, Fig. 5, we get the following equations:
V_w1 = V_1    (21)
Where V_1 = absolute velocity of the brake fluid at the inlet of the blade in m/s and V_w1 = whirl velocity of the brake fluid at the inlet of the blade in m/s.
V_w2 = V_r1 cos φ − u    (22)

Where V_w2 = whirl velocity of the brake fluid at the outlet in m/s, u = tangential velocity of the turbotor in m/s, and φ = angle of deflection of the brake fluid from the blade surface.
F = ρ Q (V_w1 + V_w2)    (23)
Where F = braking force given by the jet of brake fluid in kN, and
Q = a × V_1    (24)
Where Q = discharge of the brake fluid in m3/s, a = area of cross-section of the nozzle in m2, and V_1 = velocity of the brake fluid at the nozzle outlet in m/s.
W = ρ Q (V_w1 + V_w2) × u    (25)
Where W = work done by the brake fluid in kNm.
The velocity diagram for the blade is shown below:

Fig. 5 Velocity diagram for blade

In the above velocity diagram,
V_1 = absolute velocity of the brake fluid at inlet in m/s
u_1 = turbotor wheel velocity at inlet in m/s
V_w1 = whirl velocity of the brake fluid at inlet in m/s
V_r1 = relative velocity of the brake fluid with respect to the blade at inlet in m/s
V_2 = absolute velocity of the brake fluid at outlet in m/s
u_2 = turbotor wheel velocity at outlet in m/s
V_w2 = whirl velocity of the brake fluid at outlet in m/s
V_r2 = relative velocity of the brake fluid with respect to the blade at outlet in m/s
And u_1 = u_2 = u = turbotor wheel velocity in m/s, and V_r1 = V_r2, neglecting the effect of friction between fluid and blade.
The heat developed in the turbotor during braking is given by the following set of equations:
a = 35% F_b / M    (26)
Where F_b = braking force in kN

and M = mass of the vehicle in kg.
t = (v − u') / a    (27)
Where t = time taken for deceleration in s, v = velocity of the vehicle before braking in m/s, and u' = velocity of the vehicle after braking in m/s.
E = P × t    (28)
Where E = heat energy absorbed by the brakes in kJ, P = power absorbed by the brake fluid during braking in kW, and t = time taken for deceleration in s.
Considering the energy absorbed by a single turbotor at the front axle, we have:
E_f = E × (front axle weight fraction) / 2    (29)
The temperature change in the turbotor during braking is given by:
ΔT = E_f / (m c_v)    (30)
Where E_f = heat energy generated in the turbotor in kJ, m = mass of the turbotor in kg, c_v = specific heat at constant volume of the turbotor material in kJ/kg.K, and ΔT = change in temperature of the turbotor.
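The heat-balance chain in equations (29)-(30) can be reproduced with the figures quoted in the Results section below (128.47335 kJ total, 59.7% front weight share, a 4 kg rotor and c_v = 0.475 kJ/kg.K); this is a sketch for checking the arithmetic, not new data:

```python
def temperature_rise(E_kj, m_rotor, c_v):
    """Eq. (30): temperature rise of the rotor from the absorbed heat
    (kJ divided by kg times kJ/kg.K gives kelvin)."""
    return E_kj / (m_rotor * c_v)

# Results-section figures: 128.47335 kJ total braking heat, 59.7% of it
# at the front axle shared by two turbotors; 4 kg rotor, c_v = 0.475 kJ/kg.K.
E_front = 128.47335 * 0.597 / 2.0          # eq. (29): heat per front turbotor, kJ
dT = temperature_rise(E_front, 4.0, 0.475)
```

The computed values land on the paper's quoted 38.3493 kJ per front turbotor and roughly a 20 K temperature rise.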

VI. RESULTS AND DISCUSSION

Here the Ford Fiesta hatchback edition is taken as a case study for the Turbo Brake Assists. The velocity for actuation of the turbotor is taken as 19.44 m/s. We now assume that the vehicle is moving with an initial velocity of 27.78 m/s. With the mass of the car as 1110 kg and a deceleration of 6.3765 m/s2, the braking force obtained from the conventional braking system is 7.0779 kN. Now, considering 35 per cent of the existing braking force to be provided by the Turbo Brake Assists, the braking force is calculated as 2.4773 kN. This is the braking force to be imparted by the brake fluid to the turbotor. Considering 59.7 per cent of the braking force at the front axle (as the front/rear weight distribution for the Ford Fiesta is 59.7/40.3 in percentage), the braking force at the front wheels is obtained as 1.4789 kN. At each turbotor on the front axle, the braking force is obtained as 0.7395 kN. The average power absorbed by the conventional brakes during braking is the average power to be absorbed by the brake fluid in the turbotor. Therefore the average power to be imparted to the brake fluid at each turbotor on the front axle, calculated with F_b = 0.7395 kN and v = 27.7778 m/s, is obtained as 10.27036 kW. This average power has to be given to the brake fluid in order to perform braking efficiently. The power to be absorbed by the brake fluid during braking is equal to the power given to the brake fluid before striking the turbotor, and the corresponding velocity is calculated as 13.8889 m/s. This is the velocity obtained at the nozzle outlet before striking the turbotor. The above calculations give the necessary design conditions and values for the desired performance of the Turbo Brake Assists, i.e. 35 per cent of the existing braking force. In order to achieve this, a power of 10.27036 kW is to be given to the brake fluid to obtain a velocity of 13.8889 m/s. The power is provided by the reciprocating hydraulic pump. The head developed at the nozzle outlet is calculated as 9.8319 m.
This head is the delivery head for the pump. The total head for the pump is calculated using ρ = 1050 kg/m3 (a property of DOT 5.1 brake fluid), g = 9.81 m/s2, cylinder diameter = 64 mm, L = 175 mm, N = 5000 rpm and P = 10.27036 kW, giving a total head of 20.61456 m. In terms of pressure, the total head, suction head and delivery head are calculated as 2.1227 bar, 1.11031 bar and 1.01240 bar respectively. Now considering the turbotor blade, the absolute velocity of the brake fluid at the inlet of the blade is obtained as 13.8889 m/s. The tangential velocity of the wheel is obtained as 85.426471 m/s, with the radius of the wheel = 7.5 inches, gear ratio in 5th gear = 0.756, final drive ratio = 4.07:1, vehicle speed = 62.137119 miles/hr and n = 4284.3909 rpm. The whirl velocity of the brake fluid at the blade outlet is obtained as 18.2031434 m/s. Here the angle of deflection φ is taken as 20 degrees.
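The pump sizing above can be cross-checked with a short sketch of equations (16)-(18), using the bore, stroke, speed and total head quoted in this section. This is a verification sketch only; the computed power lands close to, though not exactly at, the quoted 10.27036 kW, the gap being consistent with rounding of the intermediate heads:

```python
import math

RHO = 1050.0   # DOT 5.1 brake fluid density, kg/m^3 (value used in the paper)
G = 9.81       # m/s^2

def nozzle_head(v):
    """Eq. (16): head equivalent of the nozzle-outlet velocity, m."""
    return v ** 2 / (2.0 * G)

def pump_power_kw(D_cyl, stroke, rpm, head_total):
    """Eq. (17): P = rho*g*(A*L*N/60)*H'/1000 for a reciprocating pump."""
    A = math.pi / 4.0 * D_cyl ** 2      # piston area, m^2
    Q = A * stroke * rpm / 60.0         # swept discharge, m^3/s
    return RHO * G * Q * head_total / 1000.0

def head_from_pressure(p_bar, sg):
    """Eq. (18): head (m) from pressure (bar) and specific gravity."""
    return p_bar * 10.197 / sg

# Figures quoted in this section: 64 mm bore, 175 mm stroke, 5000 rpm,
# 20.61456 m total head, 13.8889 m/s nozzle velocity.
H_nozzle = nozzle_head(13.8889)
P_pump = pump_power_kw(0.064, 0.175, 5000.0, 20.61456)
```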

NOTE*: Since the turbotor tangential velocity is greater than the brake fluid velocity, the negative sign obtained in the values is omitted and only magnitudes are considered. The force imparted by the jet of brake fluid onto the blade surface is obtained as 0.58782 kN. The work done by the brake fluid is obtained as 50.21538792 kW. The above value is obtained for a nozzle diameter of 0.04 m. The time taken for deceleration is calculated as 3.73398 s, considering v = 27.7778 m/s and u = 19.444 m/s. This time corresponds to the braking force achieved using the turbotor system entirely, i.e. 2477.27025 N as shown previously. The heat energy absorbed by the turbotor amounts to 128.47335 kJ, and each front turbotor absorbs 38.3493 kJ. Therefore the temperature change in the turbotor is obtained as 20.18384 K, with m = 4 kg and c_v = 0.475 kJ/kg.K.
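The jet-force figure of 0.58782 kN can likewise be reproduced from equations (21)-(25), using the velocities and nozzle diameter quoted in this section; this is a verification sketch of the stated numbers:

```python
import math

RHO = 1050.0  # DOT 5.1 brake fluid density, kg/m^3

def jet_braking_force(v1, u, phi_deg, d_nozzle):
    """Eqs. (21)-(24): force of the fluid jet on the turbotor blades.
    Since u > v1 here, only magnitudes are kept, as the paper notes."""
    v_r1 = abs(u - v1)                                      # relative velocity at inlet
    v_w1 = v1                                               # eq. (21): whirl velocity at inlet
    v_w2 = abs(v_r1 * math.cos(math.radians(phi_deg)) - u)  # eq. (22), magnitude only
    a = math.pi / 4.0 * d_nozzle ** 2                       # nozzle area, m^2
    Q = a * v1                                              # eq. (24): discharge, m^3/s
    return RHO * Q * (v_w1 + v_w2)                          # eq. (23): jet force, N

# Figures from this section: v1 = 13.8889 m/s, u = 85.426471 m/s,
# 20 degree deflection, 0.04 m nozzle diameter.
F_jet = jet_braking_force(13.8889, 85.426471, 20.0, 0.04)
W_kw = F_jet * 85.426471 / 1000.0   # eq. (25): rate of work done, kW
```

Running this gives a jet force of about 588 N and roughly 50.2 kW of work done, matching the quoted values to within rounding.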

VII. CONCLUSION

From the mathematical results obtained, when a nozzle diameter of 0.04 m (which is a practical figure) is considered, the DOT 5.1 brake fluid is able to impart the necessary force on the turbotor blades, i.e. 2.35128 kN, which is close to the desired performance value of 2.47727025 kN (35 per cent of the existing braking force from the conventional braking system), thereby causing deceleration of the vehicle as a whole and providing effective braking assistance to the existing braking system. This reduces the work done by the conventional braking system at high speeds. Hence the turbotor can be practically employed alongside the existing braking system as an effective braking assist. Another important inference from the mathematical study is that the velocity of the brake fluid remains constant at any ratio of braking power to braking force. By varying the nozzle diameter and the angle of deflection, the braking force of the brake fluid can be varied accordingly.

Authors

G. Ram Kumar was born in Chennai, Tamil Nadu, India, in 1991. He is currently in the sixth semester of the Bachelor of Engineering degree with the Department of Mechanical Engineering, Thiagarajar College of Engineering, Madurai, Tamil Nadu, which is affiliated to Anna University, Tamil Nadu. His research interests include Automotive Technology and Engineering, Thermal Engineering, blended wing design in aircraft, turbo machinery and theoretical physics.

Amrit Om Nayak was born in Bhubaneswar, Odisha, India, in 1991. He is currently in the eighth semester of the Bachelor of Engineering degree with the Department of Mechanical Engineering, Thiagarajar College of Engineering, Madurai, Tamil Nadu, which is affiliated to Anna University, Tamil Nadu. His research interests include Automotive Technology and Engineering, Thermal Engineering, blended wing design in aircraft, turbo machinery and theoretical physics.


STRUCTURAL REFINEMENT BY RIETVELD METHOD AND MAGNETIC STUDY OF NANO-CRYSTALLINE CU-ZN FERRITES
K.S. Lohar1, S.M. Patange1, Sagar E. Shirsath2, V.S.Surywanshi3, S. S.Gaikwad1, Santosh S. Jadhav4 and Nilesh Kulkarni5
1 Materials Research Laboratory, Shrikrishna Mahavidyalaya, Gunjoti, MS, India
2 Department of Physics, Vivekanand College, Aurangabad, MS, India
3 Department of Chemistry, S.C.S. College, Omerga, MS, India
4 Department of Physics, D.S.M. College, Jintur, MS, India
5 Department of Condensed Matter Physics and Materials Science, Tata Institute of Fundamental Research, Mumbai, MS, India

ABSTRACT
A series of nanocrystalline spinel ferrites with chemical composition Cu1-xZnxFe2O4 (x = 0.0, 0.2, 0.4, 0.6, 0.8 and 1.0) was synthesized by the wet-chemical co-precipitation method to investigate their structural and magnetic properties. The structural parameters were measured by applying full-pattern fitting of the Rietveld method using the FullProf program. The crystallite size was calculated using the Scherrer formula and the Williamson-Hall analysis method, and was also confirmed by transmission electron microscopy (TEM). The lattice parameter is found to increase with increasing zinc ion concentration. X-ray intensity ratios were calculated in order to determine the possible cation distribution among the tetrahedral (A) and octahedral [B] sites. The saturation magnetization (Ms) and Bohr magneton per formula unit (nB) were measured at room temperature using a vibrating sample magnetometer. The Y-K angle increases gradually with increase in Zn2+ concentration 'x'. AC magnetic susceptibility measurements were carried out as a function of temperature in order to measure the Curie temperature (TC).

PACS: 75.50.Gg, 75.75.+a, 61.46.Df, 75.40.Cx
KEYWORDS: Ferrites, Nano-particle, X-ray diffraction, AC susceptibility.

I. INTRODUCTION

Nano-crystalline ferrites are widely used in many electronic devices such as antenna cores, high-frequency inductors, transformer cores, read/write heads for high-speed digital tape or disc recordings, etc. They are preferred because of their high electrical resistivity, low eddy current losses, high Curie temperature, mechanical hardness, chemical stability and reasonable cost [1-3]. The structural and magnetic properties of ferrites mainly depend upon chemical composition, method of preparation, sintering temperature and sintering time [4]. A small amount of dopant ions can change the structural and magnetic properties of ferrites [5-8]. Among the ferrites, copper ferrite (CuFe2O4) is extensively investigated because of its interesting crystallographic and magnetic properties [9]. Copper ferrite crystallizes in either a tetragonal (T) or cubic (C) symmetry, depending on the cation distribution among the sites in its spinel structure [10-11]. The Fe3+ ions occupy both the tetrahedral (A) and octahedral [B] sites. The copper ions occupy only the [B] site and cause tetragonal structural distortion as per the Jahn-Teller effect [10-11]. Cu-Zn ferrites have been studied in the literature due to their technological importance [12-15]. The present work investigates the structural and magnetic properties of the Cu1-xZnxFe2O4 (x = 0.0 to 1.0) system synthesized via the wet-chemical method.

354

Vol. 3, Issue 1, pp. 354-361

International Journal of Advances in Engineering & Technology, March 2012. ©IJAET ISSN: 2231-1963

II. EXPERIMENTAL

Nano-ferrites Cu1-xZnxFe2O4 (x = 0.0 to 1.0 in steps of 0.2) were synthesized by the wet-chemical co-precipitation method. To obtain the desired composition, stoichiometric amounts of the corresponding AR-grade metal sulphates were dissolved in distilled water, and 2 M sodium hydroxide solution was added until the pH reached 11.5, with constant stirring at a temperature of 60 ºC in an oxygen atmosphere. The precipitate was digested for 3 h at the same temperature, then filtered, thoroughly washed with distilled water until free from sodium ions, and dried in an oven under vacuum. The dried powder was mixed homogeneously and annealed at 700 ºC for 6 h. The annealed samples were examined by X-ray diffraction on a Jeol JDX-8030 diffractometer at room temperature using Cu-Kα radiation, and TEM analysis was carried out on a Philips CM-12 transmission electron microscope. The structure refinement was carried out using the WinPLOTR software (version LIB-LCSIM). The magnetization and Curie temperature were measured using a vibrating sample magnetometer and an AC susceptibility setup, respectively.

III. RESULTS AND DISCUSSION

The X-ray diffraction (XRD) patterns were refined iteratively until a goodness-of-fit factor very close to one was obtained. The values of the discrepancy factor (Rwp) and expected factor (Rexp), together with the goodness-of-fit index, are listed in Table 1.
Table 1. Discrepancy factor (Rwp), expected factor (Rexp), goodness factor (χ2), lattice constants (a, c), phase and crystallite size (t) of the Cu1-xZnxFe2O4 system.

Comp.  Rwp   Rexp  χ2    a (Å)           c (Å)   Phase  t (nm)
'x'                      Bragg   Treor                  Scherrer  W-H analysis
0.0    5.15  2.80  1.84  8.400   8.404   9.504   T      25.6      31.4
0.2    6.87  3.42  2.01  8.412   8.411   8.411   C      27.0      24.6
0.4    6.20  3.52  1.76  8.417   8.417   8.416   C      28.0      28.3
0.6    7.90  3.82  1.64  8.424   8.426   8.403   C      25.2      24.0
0.8    5.94  3.86  1.54  8.431   8.433   8.406   C      27.7      28.1
1.0    3.76  2.54  1.48  8.445   8.445   8.445   C      28.2      28.5

The Rietveld-refined XRD patterns for the typical samples (x = 0.0 and 0.2) of the Cu1-xZnxFe2O4 system are shown in Fig. 1, which illustrates the observed and calculated X-ray patterns as well as their difference.


Fig. 1. Observed (º) and calculated (-) X-ray diffraction patterns for the samples x = 0.0 and 0.2.

Analysis of the XRD patterns of all samples confirms the formation of a tetragonal structure for x = 0.0, while for x = 0.2-1.0 it reveals a single-phase cubic structure. The tetragonality decreases slowly with increasing Zn content 'x' [16]. The values of the lattice constant (a) calculated using Bragg's law and the PowderX program for Zn-substituted copper ferrite are listed in Table 1; the two sets of values are in good agreement with each other. Table 1 shows that the lattice constant increases with Zn2+ ion concentration x. The addition of Zn2+ ions to copper ferrite causes Fe3+ ions to migrate from the (A) site to the [B] site; since the Zn2+ ion has a larger ionic radius (0.81 Å) than the Fe3+ ion (0.67 Å), the replacement of Fe3+ by Zn2+ at the (A) site results in an expansion of the lattice [12]. The tetragonal distortion of copper ferrite from the cubic spinel structure arises from the absence of Fe3+ ions at the tetrahedral sites and is explained by the Jahn-Teller theorem, according to which a non-linear molecule in a degenerate ground state is unstable in the symmetric configuration and will distort itself to lower its energy. Ions with a d4 or d9 electronic configuration situated at the octahedral site cause a strong tetragonal distortion (c/a > 1); similarly, ions with a d3 or d8 configuration located at the tetrahedral site produce a tetragonal distortion with c/a > 1 [13].
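The lattice-constant determination from Bragg's law described above can be sketched as follows. This is a minimal illustration, not the paper's PowderX procedure; the (311) reflection angle used in the example is a hypothetical value, only the Cu-Kα wavelength is a standard constant.

```python
import math

CU_KALPHA = 1.5406  # Cu-Kalpha wavelength in angstroms (standard value)

def lattice_constant_cubic(two_theta_deg, hkl):
    """Cubic lattice constant from Bragg's law:
    lambda = 2*d*sin(theta), with d = a / sqrt(h^2 + k^2 + l^2)."""
    h, k, l = hkl
    theta = math.radians(two_theta_deg / 2.0)
    d = CU_KALPHA / (2.0 * math.sin(theta))  # interplanar spacing, angstroms
    return d * math.sqrt(h * h + k * k + l * l)

# Illustrative (311) reflection angle (hypothetical, not from the paper):
a = lattice_constant_cubic(35.5, (3, 1, 1))  # roughly 8.4 angstroms
```

For a spinel ferrite, the strong (311) line near 35-36° 2θ indeed yields a ≈ 8.4 Å, consistent with the range reported in Table 1.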

Fig. 2. Scanning electron micrograph for sample (x = 0.6)

Figure 2 shows a scanning electron micrograph (SEM) of the typical sample (x = 0.6). The SEM image indicates a fracture surface; the texture is a homogeneous structure consisting of polyhedral grains with well-defined grain boundaries. The average crystallite size for each sample was determined using the Scherrer formula [17] and also by Williamson-Hall analysis [18]; the crystallite sizes obtained by the two methods are in good agreement with each other (Table 1). The morphology of the synthesized samples was studied by TEM, as shown in Fig. 3 for x = 0.4. The particle sizes are observed to lie in the range of 25 to 32 nm.
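The two size estimates used above can be sketched as follows. This is a generic illustration of the Scherrer formula and a least-squares Williamson-Hall fit, under the common assumptions of a shape factor K = 0.9 and peak widths given as FWHM in radians; the peak-list format is an assumption, not the paper's data layout.

```python
import math

WAVELENGTH = 1.5406  # Cu-Kalpha, angstroms
K = 0.9              # commonly assumed Scherrer shape factor

def scherrer_size(beta_rad, two_theta_deg):
    """Crystallite size t = K*lambda / (beta*cos(theta)), in angstroms."""
    theta = math.radians(two_theta_deg / 2.0)
    return K * WAVELENGTH / (beta_rad * math.cos(theta))

def williamson_hall(peaks):
    """peaks: list of (two_theta_deg, beta_rad) for several reflections.
    Least-squares fit of beta*cos(theta) = K*lambda/t + 4*eps*sin(theta);
    returns (size t in angstroms, microstrain eps)."""
    xs, ys = [], []
    for two_theta, beta in peaks:
        theta = math.radians(two_theta / 2.0)
        xs.append(4.0 * math.sin(theta))
        ys.append(beta * math.cos(theta))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx     # = K*lambda/t
    return K * WAVELENGTH / intercept, slope
```

The intercept of the Williamson-Hall line gives the size-only broadening, which is why the two methods agree when microstrain is small, as seen in Table 1.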

Fig. 3.Transmission electron micrograph for sample (x = 0.4)

The cation distribution for the Cu1-xZnxFe2O4 system was determined by the X-ray diffraction method. In order to determine the cation distribution, the X-ray intensities were calculated using the formula suggested by Buerger [19]. The intensity ratios I(220)/I(400) and I(422)/I(440) were calculated for various possible cation configurations and compared with the observed intensity ratios. The cation combinations for which the observed and calculated intensity ratios agree most closely are taken as the correct cation distributions; these are listed in Table 2.
Table 2. Cation distribution and X-ray intensity ratios for the Cu1-xZnxFe2O4 system.

x    Cation distribution                   I220/I400        I422/I440
     A-site            B-site             Obs.    Cal.     Obs.    Cal.
0.0  (Cu0.0Fe1.0)      [Cu1.0Fe1.0]       1.1428  1.1646   0.3809  0.1960
0.2  (Cu0.0Zn0.2Fe0.8) [Cu0.8Zn0.0Fe1.2]  1.1785  1.4700   0.3260  0.2099
0.4  (Cu0.0Zn0.4Fe0.6) [Cu0.6Zn0.0Fe1.4]  1.3461  1.4981   0.3061  0.2128
0.6  (Cu0.0Zn0.6Fe0.4) [Cu0.4Zn0.0Fe1.6]  1.6500  1.7143   0.3809  0.2395
0.8  (Cu0.0Zn0.8Fe0.2) [Cu0.2Zn0.0Fe1.8]  1.3636  1.9485   0.3500  0.2560
1.0  (Cu0.0Zn0.5Fe0.5) [Cu0.0Zn0.5Fe1.5]  1.5714  1.5854   0.2916  0.2356

Table 2 indicates that the Zn ions occupy the (A) site for the samples x = 0.2-0.8. The Zn cations prefer to occupy the (A) site as the Cu concentration at the [B] site decreases; at the same time, Fe cations shift from the tetrahedral (A) site to the octahedral [B] site in order to balance the relative occupancies required by the space group [20]. For x = 0.2 to x = 0.8, zinc shows a preference for the tetrahedral (A) site, where its 4s, 4p electrons can form covalent bonds with the 2p electrons of the oxygen ions. The magnetic properties of the Cu1-xZnxFe2O4 system were studied by hysteresis-loop techniques and AC susceptibility measurements. The saturation magnetization (Ms) and magneton number (nB) were

measured at room temperature from the hysteresis loops. The observed magneton number (nB Obs.) was calculated using the relation

nB = (Mw × Ms) / 5585        (1)

where Mw is the molecular weight of the composition and Ms is the saturation magnetization. According to Neel's two-sublattice model of ferrimagnetism, the calculated magnetic moment per formula unit in µB, nB^N, is expressed as

nB^N = MB(x) − MA(x)        (2)

where MA and MB are the A- and B-sublattice magnetic moments in µB.
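Equations (1) and (2) can be sketched numerically as below. The helper names are hypothetical; the atomic weights are standard values, and the sublattice moments come from the cation distributions of Table 2 with the usual ionic moments Fe3+ = 5 µB, Cu2+ = 1 µB, Zn2+ = 0 µB.

```python
ATOMIC_WEIGHT = {"Cu": 63.55, "Zn": 65.38, "Fe": 55.85, "O": 16.00}

def mol_weight(x):
    """Molecular weight of Cu(1-x)Zn(x)Fe2O4 in g/mol."""
    return ((1 - x) * ATOMIC_WEIGHT["Cu"] + x * ATOMIC_WEIGHT["Zn"]
            + 2 * ATOMIC_WEIGHT["Fe"] + 4 * ATOMIC_WEIGHT["O"])

def observed_magneton_number(x, ms_emu_per_g):
    """Eq. (1): n_B = Mw * Ms / 5585, with Ms in emu/g."""
    return mol_weight(x) * ms_emu_per_g / 5585.0

def neel_magneton_number(m_b, m_a):
    """Eq. (2): n_B^N = M_B(x) - M_A(x), sublattice moments in Bohr magnetons."""
    return m_b - m_a
```

For example, x = 0.0 with Ms = 38.85 emu/g gives nB ≈ 1.66 µB, and the x = 0.2 distribution of Table 2 gives MB = 0.8(1) + 1.2(5) = 6.8 µB, MA = 0.8(5) = 4.0 µB, hence nB^N = 2.8 µB, both matching Table 3.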
Table 3. Compositional variation of saturation magnetization (Ms), magneton number (nB) and Y-K angle of the Cu1-xZnxFe2O4 system.

Comp.  Ms        nB (µB)          Y-K angle
'x'    (emu/g)   Obs.    Cal.     (degree)
0.0    38.85     1.66    1.0      --
0.2    62.16     2.66    2.8      --
0.4    108.78    4.67    4.6      --
0.6    62.16     2.67    6.4      56.22
0.8    38.85     1.67    8.2      73.13
1.0    23.31     1.00    5.0      62.18

Fig. 4. Variation of observed and calculated magneton number (nB) with Zn content (x).
The values of the Neel magnetic moment nB^N were calculated by taking the ionic magnetic moments of Fe, Cu and Zn. The values of the saturation magnetization Ms and magneton number nB are presented in Table 3. It is observed from Table 3 and Fig. 4 that both Ms and nB increase up to x = 0.4 and then decrease as x increases further. Since diamagnetic Zn2+ ions replace Fe3+ ions at the tetrahedral (A) site, the prominent inter-sublattice A-B interaction dominates up to x = 0.4 and the net magnetic moment increases. The behavior of the magnetic moment nB is also explained on the basis of Neel's theory [21] and the nB values calculated from the spin moments via Neel's two-sublattice model. The observed and calculated nB values are similar to each other up to x = 0.4, indicating that the spin structure is collinear; beyond x = 0.4 the observed nB values are lower than the calculated values. This decrease in magnetic moment beyond x = 0.4 can be explained on the basis of the Yafet-Kittel angle [22]. The low magnetic moments can be explained in terms of a non-collinear spin arrangement, that is,

the presence of a small canting of the [B]-site moments with respect to the (A)-site moments [23]. The values of the Yafet-Kittel angle were calculated using the relation discussed elsewhere [24], and are presented in Table 3. The temperature-dependent AC susceptibility measurements for all the samples are shown in Fig. 5. A theoretical estimate of the Curie temperature for Zn-substituted copper ferrite was obtained from Upadhya's model [25]. The Curie temperature depends upon the number of active magnetic linkages per magnetic ion per formula unit, and was calculated using the relation [26]

TC(x) = [M(x=0) × TC(x=0) × n(x)] / [n(x=0) × M(x)]        (3)

where M(x) is the relative weighted total of magnetic ions per formula unit, obtained by weighting each magnetic ion M' relative to iron as µM'/µFe, µM' and µFe being the magnetic moments of the M' and iron ions respectively, and n(x) is the number of active magnetic linkages.
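The scaling relation (3) can be sketched as a one-line helper. The function name and the example arguments are hypothetical, chosen only to show how the reference composition (x = 0) scales the Curie temperature of a substituted composition:

```python
def curie_temperature(tc0, m0, n0, m_x, n_x):
    """Eq. (3): T_C(x) = M(0) * T_C(0) * n(x) / (n(0) * M(x)),
    where M is the relative weighted total of magnetic ions per formula
    unit and n the number of active magnetic linkages."""
    return m0 * tc0 * n_x / (n0 * m_x)

# Hypothetical values: halving the linkage count at constant M halves T_C.
tc = curie_temperature(700.0, 3.0, 12.0, 3.0, 6.0)  # 350.0
```

As the relation shows, diluting the magnetic linkages (smaller n(x)) or raising the weighted ion total M(x) lowers the predicted Curie temperature.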

Fig. 5. Variation of χT/χRT versus temperature. The inset shows the variation of Curie temperature with Zn content (x).

The observed Curie temperature and Curie temperature calcula