
www.cisdijournal.net | Vol. 3, No. 2, May 2012

Computing, Information Systems & Development Informatics Journal


Vol. 3, No. 2, May 2012. An International Journal Publication of the Creative Research & Technology Evaluation Network, in conjunction with the African Institute of Development Informatics & Policy, Accra, Ghana, and the Cyber Systems Division, International Centre for Information Technology & Development, Baton Rouge, USA.

All Rights Reserved


Published in the USA by:
Trans-Atlantic Management Consultants, 2580 Fairham Drive Baton Rouge, LA, USA 70816


CONTENTS
i.    Table of Contents
ii.   Editorial Board
iii.  Preface

1-8   On the Classification of Gasoline-fuelled Engine Exhaust Fume Related Faults Using Electronic Nose and Principal Component Analysis
      Arulogun, O.T., Waheed, M. A., Fakolujo, O.A. & Olaniyi, O.M.

9-26  GIS-based Decision Support Systems Applied to Study Climate Change Impacts on Coastal Systems and Associated Ecosystems
      Iyalomhe F., Rizzi J., Torresan S., Gallina V., Critto A. & Marcomini A.

27-34 Data Mining Technique for Predicting Telecommunications Industry Customer Churn Using both Descriptive and Predictive Algorithms
      Kolajo, T. & Adeyemo, A.B.

35-42 Lecture Attendance System Using Radio Frequency Identification and Facial Recognition
      Olaniyi, O.M., Adewumi, D.O., Shoewu, O. & Sanda, O.W.

43-50 Employees' Conformity to Information Security Policies in Nigerian Business Organisations: The Case of Data Engineering Services PLC
      Adedara, O., Karatu, M.T. & Lagunju, A.

51-56 Policing the Cyber Space: Is the Peel Theory of Community Policing Applicable?
      Wada, F.J.

57-66 An Intelligent System for Detecting Irregularities in Electronic Banking Transactions
      Adeyiga, J.A., Ezike, J.O.J. & Adegbola, O.M.

67-74 Management Issues and Facilities Management in Geographic Information Systems: The Case of the Activities of the Lagos Area Metropolitan Transport Authority (LAMATA)
      Nwambuonwo, J.O. & Mughele, E.S.

75-81 Strategies for Effective Adoption of Electronic Banking in Nigeria
      Orunsolu, A.A., Bamgboye, O., Aina-David, O.O. & Alaran, M.A.

82    Call for Papers
83    CISDI Journal Publication Template



Editorial Board

Editor-in-Chief
Prof. Stella S. Chiemeke, University of Benin, Benin City, Nigeria

Co-Editor-in-Chief
Dr. Richard Boateng, University of Ghana, Legon, Ghana

Associate Editors
Dr. Friday Wada, Nelson Mandela Sch. of Public Policy, Southern University, Baton Rouge, LA, USA
Prof. M. Antonio L. Llorens Gomez, Universidad Del Este, Carolina, Puerto Rico, USA
Dr. Yetunde Folajimi, University of Ibadan, Ibadan, Nigeria
Azeez Nureni Ayofe, University of the Western Cape, Bellville, Cape Town, South Africa
Dr. John Effah, University of Ghana Business School, University of Ghana, Legon, Accra
Colin Thakur, Durban University of Technology, South Africa
Makoji Robert Stephen, Salford Business School, Greater Manchester, United Kingdom
Dr. Akeem Ayofe Akinwale, Department of Social Sciences, Landmark University, Omu-Aran, Nigeria
Dr. Abel Usoro, University of the West of Scotland, Paisley, Scotland
Dr. Onifade O.F.W., Nancy 2 University, France

Editorial Advisory Board
Prof. C.K. Ayo, Covenant University, Ota, Nigeria
Prof. Adenike Osofisan, University of Ibadan, Ibadan, Nigeria
Prof. Lynette Kvasny, Penn State University, Pennsylvania, USA
Prof. Bamidele Oluwade, Salem University, Lokoja, Nigeria
Dr. Howell Istance, De Montfort University, Leicester, United Kingdom
Prof. Victor Mbarika, The ICT University, United States of America
Prof. Maritza I. Espina, Universidad Del Este, Carolina, Puerto Rico, USA
Prof. Damien Ejigiri, Nelson Mandela Sch. of Public Policy, Southern University, USA

Managing/Production Editor
Dr. Longe Olumide, PhD, International Centre for Information Technology & Development, Cyber Systems Division, College of Business, Southern University and A & M College, Baton Rouge, USA
Nigeria Contact: Department of Computer Science, University of Ibadan, Ibadan, Nigeria



Preface to the CISDI Journal Vol 3, No. 2, May, 2012


This volume of the Computing, Information Systems and Development Informatics Journal (CISDI) provides a distinctive international perspective on theories, issues, frameworks and practice at the nexus of computing, information systems, development informatics and policy. A new wave of multidisciplinary research efforts is required to provide pragmatic solutions to most of the problems the world faces today. With Computing and Information Technology (IT) providing the needed momentum to drive growth and development in different spheres of human endeavour, there is a need to create platforms through which breakthrough research and research findings that cut across different disciplines can be reported. Such dissemination to a global audience will in turn support future discoveries, sharpen the understanding of theoretical underpinnings and improve practice. This is exactly what the CISDI Journal aims to achieve with timely publication of research, cases and findings from practice in the domains of Computing, Information Technology, Information Systems/Science and Development Informatics.

Articles in this volume cover a broad spectrum of issues that reflect on cases, practices, theories and design. A case study on using ICTs to support social intervention among asylum seekers is reported from Ireland. eCollaboration for tertiary education using mobile systems and eCollaboration for law enforcement agencies, as well as factors affecting the use and adoption of open-source software, are presented. Other papers cover topics such as social and enterprise informatics, inclusion criteria and instructional technology design, climate change, and protocols for improving transactional support in interoperable service-oriented application systems.

We encourage you to read through this volume and to consider submitting articles that report cutting-edge research in computing, as well as short communications and reviews in development informatics research that appropriate the design, localization, development, implementation and usage of information and communication technologies (ICTs) to achieve development goals. The CISDI Journal accepts articles that promote policy research by employing established (and proposed) legal and social frameworks to support the achievement of development goals through ICTs, particularly the Millennium Development Goals. We will also consider for acceptance academically robust papers, empirical research, case studies, action research and theoretical discussions which advance learning within the Journal's scope and target domains. Extended versions of papers previously presented at conferences, workshops and seminars, with approved copyright release, will also be accepted for publication.

We welcome feedback and rejoinders.

Enjoy your reading.

Thank you.

Longe Olumide Babatope, PhD
Managing Editor, Computing, Information Systems and Development Informatics Journal
Fulbright SIR Fellow, International Centre for Information Technology & Development, Southern University System, Southern University Baton Rouge, Louisiana, USA
E-mail: submissions@cisdijournal.net, longeolumide@fulbrightmail.org


Computing, Information Systems & Development Informatics Journal


Volume 3. No. 2. May, 2012

On the Classification of Gasoline-fuelled Engine Exhaust Fume Related Faults Using Electronic Nose and Principal Component Analysis
Arulogun, O.T. & Omidiora, E. O.
Computer Science and Engineering Department, Ladoke Akintola University of Technology, P.M.B. 4000, Ogbomoso, Nigeria

Waheed, M.A.
Mechanical Engineering Department, Ladoke Akintola University of Technology, P.M.B. 4000, Ogbomoso, Nigeria

Fakolujo, O.A.
Electrical and Electronic Engineering Department, University of Ibadan, Ibadan, Nigeria

Olaniyi, O.M.
Electronic and Electrical Engineering Department, Bells University of Technology, Ota, Nigeria

Reference Format: Arulogun, O.T., Waheed, M. A., Fakolujo, O.A. & Olaniyi, O.M. (2012). On the Classification of Gasoline-fuelled Engine Exhaust Fume Related Faults Using Electronic Nose and Principal Component Analysis. Computing, Information Systems & Development Informatics Journal. Vol 3, No. 2. pp 1-8.


On the Classification of Gasoline-fuelled Engine Exhaust Fume Related Faults Using Electronic Nose and Principal Component Analysis
Arulogun, O.T., Waheed, M. A., Fakolujo, O.A & Olaniyi, O.M.

ABSTRACT
The efficiency and effectiveness of equipment and systems are of paramount concern to both manufacturers and end users, which necessitates equipment condition monitoring schemes. Intelligent fault diagnosis systems using pattern recognition tools can be developed from the results of such condition monitoring. A prototype electronic nose that uses an array of broadly tuned Taguchi metal oxide sensors was used to carry out condition monitoring of an automobile engine through its exhaust fumes, with principal component analysis (PCA) as the pattern recognition tool for diagnosing some exhaust-related faults. The results showed that plug-not-firing faults and loss-of-compression faults were diagnosable from the exhaust fumes, with an average classification accuracy of 91%.
Key words: Electronic nose, Condition Monitoring, Automobile, Fault, Diagnosis, PCA.

1. INTRODUCTION
The engine is one of the most critical and complex subsystems in the automobile. It is prone to faults because of its electromechanical nature. It has been shown that early detection of malfunctions and faults in automobiles, as well as their compensation, is crucial both for maintenance and for the mission reliability of vehicles [2]. There are two major approaches to detecting or predicting faults in an automobile engine: physical observation and electronic condition monitoring. While the first approach uses human senses such as hearing, sight and smell, the second deploys electronic sensors to monitor conditions such as temperature, vibration, acoustic emission, torque, speed, voltage, current, flow rate and power. The latter approach is more desirable because, when properly implemented, it avoids human error. In addition, when employed with intelligent pattern recognition tools, it predicts the real status of the monitored system with a high level of accuracy.

Condition monitoring consists of methods by which small variations in the performance of equipment can be detected and used to indicate the need for maintenance and to predict failure [11]. It can be used to appraise the current state of equipment and to estimate its future state using real-time measurements and calculations. Reference [6] pointed out that a contributing factor in providing ongoing assurance of acceptable plant condition is the use of condition monitoring techniques. Condition monitoring technologies, such as vibration analysis, infra-red thermal imaging, oil analysis, motor current analysis and ultrasonic flow detection, along with many others, have been widely used for detecting imminent equipment failures in various industries [5], and its techniques have been applied in various fields for the purpose of fault detection and isolation. Reference [17] developed a condition-monitoring-based diesel engine cooling system model. The developed model was tested on a real-life diesel-engine-powered electricity generator to simulate the detection of fan, thermostat and pump faults using temperature measurements. Reference [1] used micro-acoustic viscosity sensors to carry out on-line condition monitoring of lubricating oils, monitoring the thermal ageing of automobile engine oils so as to predict the appropriate time for an engine oil change.

Electronic noses are technology implementations of systems used for the automated detection and classification of odours, vapours and gases [3]. An electronic nose comprises two main components: an array of electronic chemical sensors with partial specificity, and an appropriate pattern-recognition system capable of recognizing simple or complex odours [3]. The main motivation for implementing electronic noses is the development of qualitative, low-cost, real-time and portable methods to perform reliable, objective and reproducible measurements of volatile compounds and odours [16]. Reference [7] reported the use of an electronic nose for the discrimination of odours from trim plastic materials used in automobiles. Reference [9] used an electronic nose to quantify the amounts of carbon monoxide and methane in humid air. A method for determining the volatile compounds present in new and used engine lubricant oils was reported by [15]; the identification of the new and used oils was based on the abundance of volatile compounds in the headspace above the oils that were detectable by the electronic nose, whose sensor array was able to correlate and differentiate both the new and the used oils by their increased mileages. In [3], an electronic nose-based condition monitoring scheme consisting of an array of broadly tuned Taguchi metal oxide sensors (MOS) was used to acquire and characterize the exhaust fume smell prints of three gasoline-powered engines operating under induced faults. Reference [8] applied high-temperature electronic nose sensors to the exhaust gases of a modified automotive engine for the purpose of emission control.


The array included a tin-oxide-based sensor doped for nitrogen oxide (NOx) sensitivity, a SiC-based hydrocarbon (CxHy) sensor, and an oxygen (O2) sensor. The results obtained showed that the electronic nose sensors were adequate to monitor different aspects of the engine's exhaust chemical components qualitatively. In the present study, a prototype electronic nose-based condition monitoring scheme using an array of ten broadly tuned Taguchi metal oxide sensors (MOS) was used to acquire the exhaust fumes of a gasoline-powered engine operating with induced faults. The acquired exhaust fume data were analysed by PCA to diagnose some exhaust-related faults.

2. MATERIALS AND METHOD
2.1 The Automobile Engine
The automobile engine is a mechanical system in which combustion takes place internally. The parts of an engine vary depending on the engine's type and manufacturer. Fig. 1 shows some of the basic parts of an internal combustion engine. The system is a heat engine in which combustion occurs in a confined space called a combustion chamber. In a gasoline-fuelled engine, a mixture of gasoline and air is sprayed into a cylinder and the mixture is compressed by a piston. The ignition system produces a high-voltage electrical charge and transmits it to the spark plugs via the ignition wires. The hot gases contained in the cylinder are at a higher pressure than the air-fuel mixture, and this drives the piston down [13].

Fig. 1: Basic parts of an internal combustion engine [13]

In a perfectly operating engine with ideal combustion conditions, the following chemical reactions would take place in the presence of the components of basic combustion, namely air, fuel and spark:
1. Hydrocarbons (HxCy) would react with oxygen to produce water vapour (H2O) and carbon dioxide (CO2); and
2. Nitrogen (N2) would pass through the engine without being affected by the combustion process.

In any case of variation in the components of basic combustion, or of loss of compression due to worn piston rings or high operating temperature, the composition of the exhaust gases will change to H2O, CO2, N2, NOx, CO, HxCy and O2. Measurements of exhaust gases such as CO2, CO, NOx and O2 can provide information on what is going on inside the combustion chamber and in the remaining engine units. For example, CO2 is an excellent indicator of efficient combustion: the higher the CO2 measurement, the higher the efficiency of the engine. High HxCy indicates poor combustion, which can be caused by ignition misfire (ignition system failures) or insufficient cylinder compression.


The gasoline-fuelled spark-ignition automobile engine considered was a test-bed engine. Table 1 gives the specification of the engine, while Fig. 2 shows a snapshot of the test-bed engine used in this study.

Samples of the exhaust fumes of the engine operating in normal and in various induced faulty conditions were collected for analysis using an electronic nose system that consisted of an array of ten broadly tuned chemical sensors.

Fig. 2: Snapshot of the Gasoline Fuelled Engine

Table 1: The Engine Specification

S/N  Item                 Value
1    Track (rear axle)    50.6 in
2    Kerb weight          900 kg
3    Engine capacity      1.61 L
4    Number of valves     8
5    Number of cylinders  4
6    Bore/Stroke ratio    1.21
7    Displacement         96.906 cu in
8    Compression ratio    9.5:1
9    Maximum output       78.3 kW (66.1 bhp/litre)
10   Coolant              Water
11   Top gear ratio       0.86

2.2 Chemical Sensors
The chemical sensors are usually enclosed in an air-tight chamber or container with inlet and outlet valves to allow volatile odours in and out of the chamber. The most popular sensors used to develop electronic noses are semiconductor metal oxide chemoresistive sensors, quartz-resonator sensors and conducting polymers. Semiconductor metal oxide chemoresistive sensors were used in this study because they are quite sensitive to combustible materials such as alcohols, though less efficient at detecting sulphur- or nitrogen-based odours [4]. The overall sensitivity of these types of sensors is quite good. They are relatively resistant to humidity and to ageing, and are made of particularly strong metals [12]. Taguchi metal oxide semiconductor sensors (Figaro Sensor, Japan) TGS 813, TGS 822, TGS 816, TGS 2602, TGS 5042, TGS 2104 and TGS 2201 were used, based on their broad selectivity to exhaust gases such as CO2, N2, NOx, CO and uncombusted HxCy, and to some other gases such as H2, methane, ethanol and benzene.


2.3 Induced Fault Conditions
Faults may take time to develop in an automobile engine, hence the need to induce the faults to be investigated. The major fault classes under consideration in this work are plug-not-firing faults and worn piston ring (loss of compression) faults.

(a) Plug-not-firing faults: When any of the plugs is malfunctioning, the air-fuel mixture will not be properly ignited but will only be compressed by the piston, thereby producing unburnt hydrocarbon with a lean quantity of carbon dioxide and more carbon monoxide. The ignition faults considered were the one-plug-not-firing, two-plugs-not-firing and three-plugs-not-firing faults. The faults were induced in the engine by removing the cables connected to the spark plugs one after the other.

(b) Worn piston ring faults: The piston ring prevents engine oil from the oil sump from mixing with the gasoline-air mixture in the engine combustion chamber and maintains the engine compression at the optimum level. When this ring wears out, the engine oil escapes and mixes with the gasoline-air mixture, thereby increasing the amount of unburnt hydrocarbon that comes out of the combustion chamber via the exhaust valve. The worn piston ring fault was induced by mixing gasoline and engine oil in the proportional ratios 90:10, 80:20, 70:30, 60:40, 50:50 and 40:60. The following calibration was used for the loss-of-compression faults: a 90:10 fuel mixture corresponds to a 1st degree worn ring, and 80:20, 70:30, 60:40, 50:50 and 40:60 correspond to 2nd, 3rd, 4th, 5th and 6th degree worn rings respectively. The higher the percentage of engine oil mixed with the gasoline, the higher the degree of wear of the piston ring, which adversely affects the efficiency of the engine.

2.4 Data Acquisition
The required exhaust fumes of the gasoline-fuelled engine operating in the various induced fault conditions were obtained from the engine exhaust tail pipe, in the absence of a catalytic converter, as specimens in 1000 ml Intravenous Injection Bags (IIB). A drip set was used to connect each of the IIBs containing the exhaust gases to a confined chamber that contained the array of selected Taguchi MOS sensors. The static headspace analysis odour handling and sampling method was used to expose the exhaust fume samples to the plastic chamber; because the exhaust fume tends to diffuse upwards in clean air due to its lighter weight, there was no need for a more elaborate odour handling and sampling method. Readings were taken from the sensors 60 seconds after the introduction of each exhaust fume sample into the air-tight plastic chamber so as to achieve odour saturation of the headspace. The digitized data were collected continuously for 10 minutes using a Pico ADC 11/10 data acquisition system into a personal computer for storage and further analysis. 1400 x 10 data samples (1 dataset) for each of the ten (10) fault classes, making a total of 14000 x 10 data samples (10 datasets), were collected from the test-bed engine in the first instance and were designated as training datasets.

The sensors were purged with compressed air after every measurement so that they could return to their respective default states, known as the baseline. The baseline reading was taken as the unknown-fault data. These measurement procedures were repeated five more times to obtain five samples for each fault class as testing datasets. All data collection was done with the engine speed maintained at 1000 revolutions per minute, except for the 5th degree worn ring, 6th degree worn ring and three-plugs-not-firing fault conditions, which were collected at an engine speed of 2000 revolutions per minute.

2.5 Data Analysis
Principal Component Analysis (PCA) is a linear statistical technique that has been applied in various fields of science, especially in process applications [18]. The primary objectives of PCA are data summarization, classification of variables, outlier detection and early indication of abnormality in data structure. PCA has been successfully applied to reduce the dimensionality of a problem by forming a new set of variables; it seeks to find a series of new variables in the data with a minimal loss of information [5]. Let X = [x1, x2, x3, ..., xm] be an m-dimensional observation vector describing the process or machine variables under consideration. A number of observation vectors (obtained or measured at different times) constitute the data matrix X. PCA decomposes the data matrix X as

X = TP^T = t_1 p_1^T + t_2 p_2^T + ... + t_m p_m^T = \sum_{i=1}^{m} t_i p_i^T    (1)

where p_i is an eigenvector of the covariance matrix of X, P is defined as the principal component (PC) loading matrix and T is defined to be the matrix of PC scores. The loadings provide information as to which variables contribute most to individual PCs; that is, they are the coefficients in the PC model, whilst information on the clustering of the samples and on the identification of transitions between different operating conditions is obtained from the scores. PCA transforms the correlated original variables into a new set of uncorrelated variables using the covariance matrix or correlation matrix [18]. The expectation when conducting PCA is that the correlation among the original variables is large enough that the first few new variables, or PCs, account for most of the variance [5]. If this holds, no essential insight is lost by applying only the first few PCs for further analysis and decision making. If the original variables are collinear, k PCs (k ≤ m) will explain the majority of the variability [5]. In general, k will normally be much smaller than the number of variables in the original data. Consequently, it is desirable to exclude the higher-order PCs and retain a small number of PCs.


Equation (1) can then be expressed as

X = TP^T + E = \sum_{i=1}^{K} t_i p_i^T + E    (2)

where E represents the residual error matrix [19]. For instance, if the first three PCs represent a large part of the total variance, the residual error matrix will be:

E = X - t_1 p_1^T - t_2 p_2^T - t_3 p_3^T    (3)

Typically, in the literature, it is emphasized that the first few PCs contain all the important information [5]. In this study, the singular value decomposition (SVD) technique was used to implement the PCA. In SVD, the data matrix X is decomposed into a product of three matrices by the following equation

X = U \Sigma P^T    (4)

where U contains the eigenvectors, \Sigma contains the singular values (eigenvalues) and P^T is the loading matrix. The main virtue of SVD is that all three matrices are obtained in one operation, without having to compute a covariance matrix as in the conventional PCA method [5]. The loading matrices obtained by this method were used to establish the initial PCA models of the system, based on the normal-condition and faulty-condition data. PCA models of new observations (measurements) were projected onto the initial PCA models. The discrepancy, or residual, between the initial PCA models and the new-measurement PCA models was detected by calculating the Euclidean distances of the new observations' PCA models to the initial PCA models. Each new measurement was classified as belonging to the existing PCA model with the minimum Euclidean distance, or as an unknown fault. Eleven initial PCA models were created from the training datasets collected. These PCA models corresponded to the following engine conditions: 1st, 2nd, 3rd, 4th, 5th and 6th degree worn ring; one-plug, two-plug and three-plug faults; unknown fault; and normal condition. Five different PCA models were developed for each engine condition from the testing datasets.
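For illustration, the following is a minimal sketch (in Python with NumPy; not the authors' code) of the SVD-based PCA modelling and minimum-Euclidean-distance classification described above. The synthetic data, the helper names and the use of PC-score centroids as the per-class signature are assumptions made for illustration only; as in the study, the first ten PCs are retained.

    import numpy as np

    def fit_pca_model(X, k=10):
        # Build a PCA model from a (samples x sensors) data matrix via SVD,
        # keeping the first k principal components (k = 10 in this study).
        mean = X.mean(axis=0)
        Xc = X - mean                                      # centre the data
        U, s, Pt = np.linalg.svd(Xc, full_matrices=False)  # Xc = U * diag(s) * P^T
        loadings = Pt[:k].T                                # (sensors x k) loading matrix
        centroid = (Xc @ loadings).mean(axis=0)            # mean PC-score vector
        return {"mean": mean, "loadings": loadings, "centroid": centroid}

    def classify(sample_block, models, threshold=None):
        # Project a new measurement block onto each class model and assign the
        # class whose PC-score centroid is nearest in Euclidean distance; an
        # optional threshold could route distant samples to an "unknown" class.
        best, best_d = None, np.inf
        for label, m in models.items():
            scores = ((sample_block - m["mean"]) @ m["loadings"]).mean(axis=0)
            d = np.linalg.norm(scores - m["centroid"])
            if d < best_d:
                best, best_d = label, d
        if threshold is not None and best_d > threshold:
            return "unknown"
        return best

    # Usage with synthetic stand-ins for the 1400 x 10 readings per fault class:
    rng = np.random.default_rng(0)
    models = {c: fit_pca_model(rng.normal(loc=c, size=(1400, 10))) for c in range(10)}
    test = rng.normal(loc=3, size=(1400, 10))
    print(classify(test, models))   # expected output: 3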

3. RESULTS AND DISCUSSION

This study was conducted with various numbers of faulty-condition and normal datasets, with each condition having its own developed PCA model. The first ten PCs were used for fault classification, using the Euclidean distance metric to discriminate between the PCA models obtained from the training datasets and those obtained from the testing datasets. Table 2 shows the summary of the results of testing the new PCA models against the initial PCA models. In Table 2, the number in square brackets represents the fault number, and the counts on the diagonal are correct classifications. Testing of each fault class was done five times. The compression fault with 1st degree worn ring was classified correctly four out of five times and was incorrectly classified once as a compression fault with 3rd degree worn ring. The three-plugs-not-firing fault and the compression faults with 4th and 6th degree worn ring were also misclassified at least once during the testing. The compression faults with 2nd, 3rd and 5th degree worn ring, the one-plug-not-firing fault, the two-plugs-not-firing fault, the unknown fault and the normal condition were correctly classified five out of five times. Out of 55 testing samples, 5 were inaccurately classified while 50 were correctly classified, giving an average classification accuracy of 91%. Of the five inaccurate classifications, three were classified as a subset of the same fault class while the other two were truly misclassified to wrong classes, as shown in Table 2.

Table 2: Results of testing the PCA models with new data samples
(rows: actual condition of the data sample; columns: class assigned by the PCA models)

Data sample                                       [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11]
[1] Compression fault with 1st degree worn ring    4   0   1   0   0   0   0   0   0   0    0
[2] Compression fault with 2nd degree worn ring    0   5   0   0   0   0   0   0   0   0    0
[3] Compression fault with 3rd degree worn ring    0   0   5   0   0   0   0   0   0   0    0
[4] Compression fault with 4th degree worn ring    1   0   0   3   0   0   0   0   0   1    0
[5] Compression fault with 5th degree worn ring    0   0   0   0   5   0   0   0   0   0    0
[6] Compression fault with 6th degree worn ring    0   0   0   0   0   4   0   0   0   1    0
[7] One-plug-not-firing fault                      0   0   0   0   0   0   5   0   0   0    0
[8] Two-plugs-not-firing fault                     0   0   0   0   0   0   0   5   0   0    0
[9] Three-plugs-not-firing fault                   0   0   0   0   0   0   0   1   4   0    0
[10] Unknown engine fault                          0   0   0   0   0   0   0   0   0   5    0
[11] Normal engine                                 0   0   0   0   0   0   0   0   0   0    5
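As a quick arithmetic check, the short sketch below (again Python with NumPy, assuming the confusion matrix exactly as laid out in Table 2) reproduces the reported totals of 50 correct classifications out of 55 test samples and the 91% average accuracy.

    import numpy as np

    # Rows: actual fault class [1]-[11]; columns: class assigned by the PCA models.
    confusion = np.array([
        [4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 4, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 1, 4, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5],
    ])
    correct = int(np.trace(confusion))        # diagonal = correct classifications
    total = int(confusion.sum())              # 5 test runs x 11 classes = 55
    print(correct, total, round(100 * correct / total))   # 50 55 91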

4. CONCLUSION
An electronic nose-based condition monitoring scheme prototype comprising ten broadly tuned Taguchi metal oxide sensors (MOS) was used to acquire the exhaust fumes of a gasoline-powered engine operating with induced faults. The acquired exhaust fume data were analysed by PCA to diagnose the exhaust-related faults. The testing of the PCA algorithm on the exhaust fume data showed good performance with regard to automobile engine fault diagnosis. The developed system is capable of classifying the plug-not-firing faults and worn piston ring faults from the exhaust fumes very well.

REFERENCES
[1] Agoston, A., Otsch, C. and Jakoby, B. (2005): Viscosity sensors for engine oil condition monitoring: application and interpretation of results, Sensors and Actuators A, Vol. 121, pp. 327-332.
[2] Alessandri, A. A., Hawkinson, T., Healey, A. J. and Veruggio, G. (1999): Robust Model-Based Fault Diagnosis for Unmanned Underwater Vehicles Using Sliding Mode-Observers, 11th International Symposium on Unmanned Untethered Submersible Technology (UUST'99).
[3] Arulogun, O.T., Fakolujo, O.A., Olatunbosun, A., Waheed, M.A., Omidiora, E. O. and Ogunbona, P. O. (2011): Characterization of Gasoline Engine Exhaust Fumes Using Electronic Nose Based Condition Monitoring, Global Journal of Researches in Engineering (GJRE-D), Vol. 11, No. 5.

[4] Bartlett, P. N., Elliott, J. M. and Gardner, J. W. (1997): Electronic Noses and their Applications in the Food Industry, Food Technology, Vol. 51, No. 12, pp. 44-48.
[5] Baydar, N., Ball, A. and Payne, B. (2002): Detection of Incipient Gear Failure Using Statistical Techniques, IMA Journal of Management Mathematics, Vol. 13, pp. 71-79.
[6] Beebe, R. (2003): Condition monitoring of steam turbines by performance analysis, Journal of Quality in Maintenance Engineering, Vol. 9, No. 2, pp. 102-112.
[7] Guadarrama, A., Rodriguez-Mendez, M.L. and De Saja, J.A. (2002): Conducting polymer-based array for the discrimination of odours from trim plastic materials used in automobiles, Analytica Chimica Acta, Vol. 455, pp. 41-47.
[8] Hunter, G. W., Liu, C.-C. and Makel, D. B. (2002): Microfabricated Chemical Sensors for Aerospace Applications, in The MEMS Handbook, Mohamed Gad-el-Hak, ed., CRC Press, Boca Raton, FL, pp. 22-1 to 22-24.
[9] Huyberechts, G., Szecowka, P., Roggen, J. and Licznerski, B.W. (1997): Simultaneous Quantification of Carbon Monoxide and Methane in Humid Air Using a Sensor Array and an Artificial Neural Network, Sensors and Actuators B, Vol. 45, pp. 123-130.


[10] Martin, E. B., Morris, A.J. and Zhang, J. (1996): Process Performance Monitoring Using Multivariate Statistical Process Control, IEE Proceedings: Control Theory and Applications, Vol. 143, No. 2.
[11] Massuyes, T. L. and Milne, R. (1997): Gas-Turbine Condition Monitoring Using Qualitative Model-based Diagnosis, IEEE Expert, Vol. 12, pp. 22-31.
[12] Mielle, P. (1996): Electronic noses: Towards the objective instrumental characterization of food aroma, Trends in Food Science and Technology, Vol. 7, pp. 432-438.
[13] NASA (2007): Internal Combustion Engine, National Aeronautics and Space Administration website. Retrieved on 10th October, 2008 from http://www.grc.nasa.gov/WWW/K-12/airplane/combst1.html
[14] Pöyhönen, S., Jover, P. and Hyötyniemi, H. (2004): Signal Processing of Vibrations for Condition Monitoring of an Induction Motor, Proc. of the 1st IEEE-EURASIP Int. Symp. on Control, Communications, and Signal Processing, ISCCSP 2004, pp. 499-502, Hammamet, Tunisia.
[15] Sepcic, K., Josowicz, M., Janata, J. and Selby, T. (2004): Diagnosis of used engine oil based on gas phase analysis, Analyst, Vol. 129, pp. 1070-1075.

[16] Shilbayeh, N. and Iskandarani, M. (2004): Quality Control of Coffee Using an Electronic Nose System, American Journal of Applied Sciences, Vol. 1, No. 2, pp. 129-135.
[17] Twiddle, J.A. (1999): Fuzzy Model Based Fault Diagnosis of a Diesel Engine Cooling System, Department of Engineering, University of Leicester, Report No. 99-1. Retrieved 7th February 2007 from http://www.le.ac.uk/engineering/mjp9/li1.pdf
[18] Venkatasubramanian, V., Rengaswamy, R., Yin, K. and Kavuri, S.N. (2003): A Review of Process Fault Detection and Diagnosis, Part 1, Computers and Chemical Engineering, Vol. 27, pp. 293-311.
[19] Wise, B. M. and Gallagher, N.B. (1996): The Process Chemometrics Approach to Process Monitoring and Fault Detection, Journal of Process Control, Vol. 6, pp. 329-348.


Computing, Information Systems & Development Informatics Journal


Volume 3. No. 2. May, 2012

On the Application of GIS-based Decision Support Systems to study climate change impacts on coastal systems and associated ecosystems
Iyalomhe F., Rizzi J. & Critto A.
University Ca' Foscari Venice, Department of Environmental Sciences, Informatics and Statistics, Calle Larga S. Marta 2137, 30123 Venezia, Italy
felix.iyalomhe@stud.unive.it

Torresan S., Gallina V. & Marcomini A.
Euro-Mediterranean Centre for Climate Change (CMCC), Lecce, Via Augusto Imperatore 16, 73100 Lecce, Italy

Reference Format: Iyalomhe F., Rizzi J., Torresan S., Gallina V., Critto A. & Marcomini A. (2012). GIS-based Decision Support Systems applied to study climate change impacts on coastal systems and associated ecosystems. Computing, Information Systems & Development Informatics Journal. Vol 3, No. 2. pp 9-26.


On the Application of GIS-based Decision Support Systems to study climate change impacts on coastal systems and associated ecosystems
Iyalomhe F., Rizzi J., Torresan S., Gallina V., Critto A. & Marcomini A

ABSTRACT
One of the most remarkable achievements by scientists in the field of global change in recent years is the improved understanding of climate change issues. The effects of climate change on human environments, particularly coastal zones and associated water systems, now pose a huge challenge to environmental resource managers and decision makers. International and regional regulatory frameworks have been established to guide the implementation of interdisciplinary methodologies useful for analysing water-related systems issues and for supporting the definition of management strategies against the effects of climate change. As a response to these concerns, several decision support systems (DSS) have been developed and applied to address climate change through geographical information systems (GIS) and multi-criteria decision analysis (MCDA) techniques, linking DSS objectives with specific functionalities that lead to key outcomes and aspects of the decision-making process involving coastal and water resources. An analysis of existing DSS focusing on climate change impacts on coastal and related ecosystems was conducted by surveying the open literature. Twenty DSS were identified and are comparatively discussed according to their specific objectives and functionalities, using a set of criteria (general technical, specific technical, and availability and applicability) in order to better inform potential users and concerned stakeholders through the evaluation of actual DSS applications.

Key words: Climate change, Decision support, GIS, regulations, Environment

1. INTRODUCTION
One of the most remarkable achievements by scientists in the field of global change in recent years is the improved understanding of climate change issues, whose effects have been linked to the increase in global average temperature according to the IPCC emission scenarios [11]. The resulting ocean thermal expansion is expected to generate significant impacts via sea level rise, seawater intrusion into coastal aquifers, enhanced coastal erosion and storm surge flooding, while increasing population in coastal cities, especially megacities on islands and deltas, further aggravates the major impacts of climate change on marine coastal regions. The latter include transitional environments such as estuaries, lagoons, low-lying lands and lakes, which are particularly vulnerable because of their geographical location and intensive socio-economic activities [12-13]. Accordingly, several environmental resource regulations have already included the need to assess and manage negative impacts derived from climate change through their implementation. For instance, the European Commission approved the Green and White papers [14-15] and the Water Framework Directive (WFD) [16], which represent an integrated and sound approach to the protection and management of water-related resources in both inland and coastal zones. It also signed the protocol for Integrated Coastal Zone Management (ICZM) [17], useful in promoting the integrated management of coastal areas in relation to local, regional, national and international goals. Moreover, the principles of Integrated Water Resources Management (IWRM) aim to address typical water quality and quantity concerns through the optimisation of water management and sustainability, in collaboration with WFD policy declarations [18].
Likewise, relevant national legislations such as Shoreline Management Planning (SMP) in the United Kingdom [19], Hazard Emergency Management (HEM) in the United States [20] and Groundwater Resources Management (GRM) in Bangladesh and India [21] were ratified, and these further endorse the assessment and management of coastal communities in relation to climate change impacts.

A Decision Support System (DSS) is computer-based software that can assist decision makers in their decision process, supporting rather than replacing their judgment and, at length, improving effectiveness over efficiency [1]. Environmental DSS are model-based tools that cope with environmental issues and support decision makers in the sustainable management of natural resources and in the definition of possible adaptation and mitigation measures [2]. DSS have been developed and used to address complex decision-based problems in various fields of research. In environmental resource management, for instance, DSS are generally classified into two main categories: Spatial Decision Support Systems (SDSS) and Environmental Decision Support Systems (EDSS) [3-5]. SDSS provide the necessary platform for decision makers to analyse geographical information in a flexible manner, while EDSS integrate the relevant environmental models, databases and assessment tools, coupled within a Graphical User Interface (GUI), for functionality within a Geographical Information System (GIS) [1,4,6]. In some detail, GIS is a set of computer tools that can capture, manipulate, process and display spatial or geo-referenced data [7], through which enhanced spatial data integration, analysis and visualization can be conducted [8-9]. These functionalities make GIS tools useful for the efficient development and effective implementation of DSS within the management process. For this purpose they are used either as data managers (i.e. as a spatial geodatabase tool) or as an end in themselves (i.e. as media to communicate information to decision makers) [8].



At present, the increasing trends of industrialisation, urbanisation and population growth have not only resulted in numerous environmental problems but have also increased their complexity in terms of uncertainty and multiplicity of scales. Accordingly, there is a consensus on the need to consider several perspectives in order to tackle environmental problems, particularly climate change related impacts in coastal zones, which are characterised by the dynamics and interactions of socio-economic and biogeophysical phenomena. There is a need to develop and apply relevant tools and techniques capable of processing not only the numerical aspects of these problems but also knowledge from experts, to assure the stakeholder participation that is essential in the decision-making process [5], and to guarantee the overall effectiveness of the assessment and management of coastal environments, including related inland watersheds (i.e. surface water and groundwater affected by, and affecting, coastal waters).

The scientific community projects that climate change will further exacerbate environmental problems due to natural and anthropogenic impacts, with specific emphasis on coastal areas [10]. This projection, nevertheless, depends on global and regional policy measures, especially in sectors such as energy, economy and agriculture, which seem to pose a major threat to global sustainable development. In response, mitigation and adaptation measures have already been identified through intense research activities, yet these may not limit the projected effects of climate change over the next few decades. On one side there is the influence of socio-economic development and environmental response, while on the other there is the significant uncertainty still associated with present climatic predictive models. Thus, model inputs need to take into account scenarios highly affected by present and future policy measures in order to further reduce uncertainty in their predictions and thereby guarantee robust adaptation strategies.

Within this context, the development of innovative tools is needed to implement the regulatory frameworks and the decision-making processes required to cope with climate-related impacts and risks. To this end, DSS are advocated as one of the principal tools for the described purposes. This work examines GIS-based DSS resulting from an open literature survey. It highlights the major features and the applicability of each DSS in order to help the reader select a DSS tailored to specific application needs.

2. DESCRIPTION OF THE EXAMINED DECISION SUPPORT SYSTEMS (DSS)
The literature survey led to the identification of twenty DSS designed to support the decision-making process related to climate change and environmental issues in coastal environments, including inland watersheds. The identified DSS are listed in Table 1 with an indication of the developer, development years and literature reference. In order to provide a description of the major features and an evaluation of the applicability of the 20 examined DSS, the work adopted the sets of criteria reported in Table 2, grouped into three categories: general technical criteria, specific technical criteria, and availability and applicability criteria. The general technical criteria underline relevant general features of each DSS, including: the target coastal regions and ecosystems domain; the regulatory frameworks and specific legislations supported by each DSS; the considered climate change impacts and related scenarios; and the objectives of the examined systems. The specific technical aspects include the main functionalities, analytical methodologies and inference engine (i.e. structural elements) of the systems. A final set of criteria concerns applicability, i.e. the scale and study areas, flexibility, status and availability of the examined systems. In the following sections the identified DSS, listed in Table 1, are presented and discussed according to these criteria.



Table 1. List of existing DSS on coastal waters and related inland watersheds.
(Name | Developer | Year of development | Reference source)

CLIME: Climate and Lake Impacts decision support system | Helsinki University of Technology, Finland | 1998-2003 | [22] http://clime.tkk.fi
CORAL: Coastal Management Decision Support Modelling for Coral Reef Ecosystem | Within a World Bank funded project: LA3EU | 1994-1995 | [23]
COSMO: Coastal zone Simulation MOdel | Coastal Zone Management Centre, The Hague | 1992 | [24]
Coastal Simulator decision support system | Tyndall Centre for Climate Change Research, UK | 2000-2009 | [25]
CVAT: Community Vulnerability Assessment Tool | National Oceanic and Atmospheric Administration, US | 1999 | [20] www.csc.noaa.gov/products/nchaz/startup.htm
DESYCO: Decision Support SYstem for COastal climate change impact assessment | Euro-Mediterranean Centre for Climate Change (CMCC), Italy | 2005-2010 | [2]
DITTY: Information technology tool for the management of Southern European lagoons | Within the European regional project DITTY | 2002-2005 | [26]
DIVA: Dynamic Interactive Vulnerability Assessment | Potsdam Institute for Climate Impact Research, Germany | 2003-2004 | [27] http://www.dinas-coast.net
ELBE: Elbe river basin Decision Support System | Research Institute for Knowledge Systems (RIKS), Netherlands | 2000-2006 | [28] www.riks.nl/projects/Elbe-DSS
GVT: Groundwater Vulnerability Tool | University of Thrace and Water Resource Management Authority, Greece | 2003-2004 | [29]
IWRM: Integrated Water Resources Management Decision Support System | Institute of Water Modelling, Bangladesh | 2002-2010 | [21] www.iwmbd.org
KRIM decision support system | Within the KRIM Project in Germany | 2001-2004 | [30] www.krim.uni-bremen.de
MODSIM decision support system | Labadie of Colorado State University, US | 1970 | [31-32] www.modsim.engr.colostate.edu
RegIS: Regional Impact Simulator | Cranfield University, UK | 2003-2010 | [33] http://www.cranfield.ac.uk/sas/naturalresources/research/projects/regis2.html
RAMCO: Rapid Assessment Module Coastal Zone Management | Research Institute for Knowledge Systems (RIKS), Netherlands | 1996-1999 | [34-35] http://www.riks.nl/projects/RAMCO
SimLUCIA: Simulator model for St LUCIA | RIKS within the UNEP Project, Netherlands | 1988-1996 | [36] http://www.riks.nl/projects/SimLUCIA
SimCLIM: Simulator model System for Climate Change Impacts and Adaptation | University of Waikato and CLIMsystems Limited, New Zealand | 2005 | [37] www.climsystems.com
STREAM: Spatial Tools for River Basins and Environment and Analysis of Management Options | Vrije Universiteit Amsterdam and Coastal Zone Management Centre, The Hague | 1999 | [38] http://www.geo.vu.nl/users/ivmstream/
TaiWAP: Taiwan Water Resources Assessment Program to Climate Change | National Taiwan University, Taiwan | 2008 | [39]
WADBOS decision support system | Research Institute for Knowledge Systems (RIKS), Netherlands | 1996-2002 | [40-41] www.riks.nl/projects/WADBOS

Table 2. List of criteria used for the description of existing DSS.

General technical criteria:
- Coping with regulatory framework. The particular legislation or policy the DSS refers to and which phase of the decision-making process is supported at the national, regional and local level (e.g. EU WFD, ICZM, IWRM, SMP, GRM and HEM).
- Study/field of application area. The coastal zones where the DSS has been applied and tested (e.g. coastal zone, lakes, river basin, lagoon, groundwater aquifer, etc.).
- Objective. The main aims of the DSS.
- Climate change impacts. The relevant impacts of climate change on the system (e.g. sea-level rise, coastal flooding, erosion, water quality).
- Climate change scenarios. The kinds of scenarios considered by the DSS which are relevant to the system analysis and connected to climate change (e.g. emission, sea level rise, climatic scenarios).

Specific technical criteria:
- Functionalities. The relevant functionalities (key outcomes) of the system useful to the decision process: environmental status evaluation, scenario import (climate change and socio-economic scenarios) and analysis, measure identification and/or evaluation, relevant pressure identification and indicator production.
- Methodological tools (analytical tools). The methodologies included in the system, such as risk analysis, scenario construction and/or analysis, integrated vulnerability analysis, Multi-Criteria Decision Analysis (MCDA), socio-economic analysis, uncertainty analysis, ecosystem-based approaches, etc.
- Structural elements. The three major components of the DSS: dataset (i.e. the typology of data), models (e.g. economic, ecological, hydrological and morphological) and interface (i.e. whether it is user-friendly, and desktop or web-based).

Availability and applicability:
- Scale and area of application. The spatiality of the system (e.g. local, regional, national, supra-national and global) within the case study areas.
- Flexibility. The capacity of the system to be flexible in terms of changes of input parameters, additional modules or models, and functionalities. It is also linked to whether the system can be applied to different coastal regions or case study areas.
- Status and availability. Whether the system is under development or already developed and ready for use; whether it is restricted to the developer and case study areas only or accessible to the public; and the website where information about the DSS can be found.



3. GENERAL TECHNICAL CRITERIA
As far as the application domain is concerned, the considered DSS focus on coastal zones and related ecosystems (e.g. lagoons, groundwater, river basins, estuaries and lakes): specifically, thirteen DSS address coastal zones, seven concern coastal associated ecosystems and four focus on both (Table 3). As far as regulatory frameworks (i.e. ICZM, WFD and IWRM) and national legislations are concerned, the examined DSS reflect the assessment and management aspects of the related decision-making process. Within the coastal, marine and river basin environments, the assessment phase of these frameworks consists of the analysis of environmental, social, economic and regulatory conditions, while the management phase looks at the definition and implementation of management plans. Accordingly, each DSS provides support for the implementation of one or two frameworks in the assessment and/or management phase, in relation to its specific objectives and application domain. Specifically, the investigated DSS can provide the evaluation of ecosystem pressures, the assessment of climate change hazard, vulnerability and risks, the development and analysis of relevant policies, and the definition and evaluation of different management options.

Eight out of the twenty examined DSS provide support for ICZM implementation through an integrated assessment involving regional climatic, ecological and socio-economic aspects (Table 3, second column). With respect to the WFD (six DSS) and IWRM (seven DSS), the main focus is on the assessment of the environmental or ecological status of coastal regions and related ecosystems and on the consideration of anthropogenic impacts and risks on coastal resources. These two groups of DSS also consider river basin management via the evaluation of adaptation options, which is essential for the management phase of WFD and IWRM implementation.

Particularly interesting are the approaches adopted by three DSS: CLIME, STREAM and COSMO. CLIME supports both the assessment and management phases of the WFD through the analysis of present and future climate change impacts on ecosystems and of the socio-economic influence on the water quality of European lakes. STREAM evaluates climate change and land use effects on the hydrology of a specific river basin, in order to support the management phase of IWRM and WFD via the identification of water resources management measures. Lastly, COSMO provides support for ICZM through the identification and evaluation of feasible management strategies for climate change and anthropogenic impacts relevant to coastal areas. Moreover, RegIS, Coastal Simulator, CVAT and GVT specifically support the implementation of national legislations through the consideration of socio-economic and technological issues relevant to identifying suitable mitigation actions. To this purpose, these DSS promote the involvement of stakeholders through participatory processes.

The main objective of the examined DSS is the analysis of vulnerability, impacts and risks, and the identification and evaluation of related management options, in order to guarantee the robust decisions required for the sustainable management of coastal and inland water resources. Specifically, the objectives of the examined DSS are concerned with three major issues: (1) the assessment of vulnerability to natural hazards and climate change (four DSS: CVAT, GVT, SimLUCIA, TaiWAP); (2) the evaluation of present and potential climate change impacts and risks on coastal zones and linked ecosystems, in order to predict how coastal regions will respond to climate change (nine DSS); and (3) the evaluation or analysis of management options for the optimal utilisation of coastal resources and ecosystems through the identification of feasible measures and the adequate coordination of all relevant users/stakeholders (seven DSS: WADBOS, COSMO, CORAL, DITTY, ELBE, MODSIM, RAMCO).



Table 3. List of the examined DSS according to the general technical criteria (ND: Not Defined).
(Name | Application domain | Regulatory framework of reference | Objective | Climate change impacts addressed | Climate change scenarios generating impacts)

CLIME | Lakes linked to coast | WFD for environmental assessment | To explore the potential impacts of climate change on European lakes | Water quality; temperature dynamics | Emission scenarios; temperature scenarios
CORAL | Coral reef | IWRM and ICZM, both for environmental assessment and management | Sustainable management of coastal ecosystems, in particular coral reefs | ND | ND
COSMO | Coastal zones | ICZM for environmental management | To evaluate coastal management options considering anthropic (human) forcing and climate change impacts | Sea-level rise | Sea-level rise scenarios
Coastal Simulator | Coastal zones | National legislation for environmental assessment and management | Effects of climate change and management decisions on the future dynamics of the coast | Storm surge flooding; coastal erosion | Emission scenarios; sea-level rise scenarios
CVAT | Coastal zones | National legislation for environmental assessment and management | To assess hazards, vulnerability and risks related to climate change and support hazard mitigation options | Storm surge flooding; coastal erosion; cyclone; typhoon; extreme events | Past observations
DESYCO | Coastal zones; lagoons | ICZM for environmental assessment and management | To assess risks and impacts related to climate change and support the definition of adaptation measures | Sea-level rise; relative sea-level rise; storm surge flooding; coastal erosion; water quality | Emission scenarios; sea level rise scenarios
DITTY | Coastal lagoons | IWRM and WFD for environmental management | To achieve sustainable and rational utilization of resources in the southern European lagoons by taking into account major anthropogenic impacts | ND | ND
DIVA | Coastal zones | ICZM for environmental assessment and management | To explore the effects of climate change impacts on coastal regions | Sea-level rise; coastal erosion; storm surge flooding | Emission scenarios; sea-level rise scenarios
ELBE | River basin; catchment | WFD for environmental management | To improve the general status of the river basin usage and provide sustainable protection measures within the coast | Precipitation and temperature variation | Emission scenarios
GVT | Coastal zones | National legislation for environmental assessment | To describe the vulnerability of groundwater resources to pollution in a particular coastal region | Groundwater quality; saltwater intrusion | Sea-level rise scenarios
IWRM | Coastal zones; river basin | IWRM for environmental assessment and management | To explore potential risks on coastal resources due to climate and water management policies | Sea-level rise; coastal erosion | Sea-level rise scenarios
KRIM | Coastal zones | ICZM for environmental assessment | To determine how coastal systems react to climate change in order to develop modern coastal management strategies | Sea-level rise; extreme events; coastal erosion | Emission scenarios; sea-level rise scenarios; extreme events scenarios
MODSIM | River basin | IWRM for environmental management | To improve coordination and management of water resources in a typical river basin | ND | ND
RegIS | Coastal zones | SMP and Habitats regulation (UK) for environmental assessment and management | To evaluate the impacts of climate change, and adaptation options | Coastal and river flooding; sea level rise | Emission scenarios; socio-economic scenarios; sea level rise scenarios
RAMCO | River basin; coastal zones | WFD and ICZM for environmental assessment and management | For effective and sustainable management of coastal resources at the regional and local scales | ND | ND
SimLUCIA | Coastal zones | National legislation for environmental assessment | To assess the vulnerability of low lying areas in the coastal zones and islands to sea-level rise due to climate change | Sea-level rise; coastal erosion; storm surge flooding | Sea-level rise scenarios
SimCLIM | Coastal zones | ICZM for environmental assessment and management | To explore present and potential risks related to climate change and natural hazards (e.g. erosion, flood) | Sea-level rise; coastal flooding; coastal erosion | Sea-level rise scenarios
STREAM | River basin; estuaries | IWRM and WFD for environmental management | To integrate the impacts of climate change and land use on water resources management | Water quality variation; salt intrusion | Emission scenarios
TaiWAP | River basin | IWRM for environmental assessment | To assess vulnerability of water supply systems to impacts of climate change and water demand | Water quality variations | Emission scenarios
WADBOS | River basin; coastal zones | WFD and ICZM for environmental assessment and management | To support the design and analysis of policy measures in order to achieve an integrated and sustainable management | ND | ND

According to the climate change impacts considered by the examined DSS, the review highlights that fifteen out of the 20 DSS applications regard the assessment of climate change impacts and related risks (CC-DSS). These DSS consider climate change impacts related to sea level rise, coastal erosion, storm surge flooding and water quality. In particular, DESYCO also considers relative sea level rise in coastal regions where there are records of land subsidence, whereas KRIM assesses impacts related to extreme events and CVAT addresses natural hazards (e.g. typhoon, cyclone). Moreover, GVT is specifically devoted to groundwater quality variations.

The relevant climate change related scenarios considered by the examined DSS refer to emission of greenhouse gases, temperature increase, sea level rise and occurrence of extreme events. In addition, CVAT used previous observations as baseline scenarios for the assessment of natural hazards; while RegIS considered scenarios related to coastal and river flooding along with socio-economic scenarios in order to estimate their potential feedback on climate change impacts. Although most of these CC-DSS applications used sea level rise scenarios, only DIVA used global sea level rise scenarios to estimate related impacts like coastal erosion and storm surge flooding. KRIM is the only DSS considering extreme events scenarios in its analysis to support the development of robust coastal management strategies.


4. SPECIFIC TECHNICAL CRITERIA
The criteria related to the specific technical aspects are reported in Table 4. As far as the functionalities are concerned (Table 4, first column), the ones implemented by DESYCO, COSMO, SimCLIM, KRIM and RegIS include the identification and prioritisation of impacts, targets and areas at risk from climate change, the sectoral evaluation of impacts or an integrated assessment approach, and vulnerability evaluation and problem characterisation. These functionalities allow impacts and risks to be effectively differentiated and quantified at the regional scale. Moreover, these DSS also support the definition and evaluation of management options through GIS-based spatial analysis. Other DSS, i.e. DIVA, SimCLIM and KRIM, implement scenario import and generation, environmental status evaluation, impacts and vulnerability analysis, and the evaluation of adaptation strategies, in order to adequately achieve a sustainable state of coastal resources and ecosystems.

Table 4. List of the examined DSSs according to the specific technical criteria. Each entry reports the functionalities, the analytical methodologies and the structural elements.

CLIME. Functionalities: identification of pressures generated by climatic variables; environmental status evaluation; water quality evaluation related to climate change; socio-economic evaluation; spatial analysis (GIS). Methodologies: scenarios construction and analysis; probabilistic Bayesian network; uncertainty analysis. Structural elements: climatic, hydrological, chemical and geomorphological data; climate, ecological and hydrological models; web-based user interface.

CORAL. Functionalities: evaluation of management strategies; spatial analysis (GIS). Methodologies: scenarios construction and analysis; cost-effectiveness analysis; ecosystem-based. Structural elements: environmental, socio-economic, ecological and biological data; economic and ecological models; desktop user interface.

COSMO. Functionalities: problem characterization (e.g. water quality variation, coastal erosion, etc.); impact evaluation of different development and protection plans; indicator production; spatial analysis (GIS). Methodologies: scenarios construction and analysis; MCDA; ecosystem-based. Structural elements: socio-economic, climatic, environmental and hydrological data; ecological, economic and hydrological models; desktop user-friendly interface.

Coastal Simulator. Functionalities: environmental status evaluation; management strategies identification and evaluation; indicator production; spatial analysis (GIS). Methodologies: scenarios construction and analysis; uncertainty analysis; risk analysis; ecosystem-based. Structural elements: climatic, socio-economic, environmental, hydrological and geomorphological data; ecological, morphological, climatic and hydrological models; desktop user interface.

CVAT. Functionalities: environmental status evaluation; hazard identification; indicators production; mitigation options identification and evaluation; spatial analysis (GIS). Methodologies: hazard analysis; critical facilities analysis; society analysis; economic analysis; environmental analysis; mitigation options analysis. Structural elements: environmental and socio-economic data; hydrological model; desktop user-friendly interface.


DESYCO. Functionalities: prioritization of impacts, targets and areas at risk from climate change; impacts, vulnerability and risks identification; indicators production; adaptation options definition; spatial analysis (GIS). Methodologies: Regional Risk Assessment methodology; scenarios construction and analysis; MCDA; risk analysis. Structural elements: climatic, biophysical, socio-economic, geomorphological and hydrological data; desktop automated user interface.

DITTY. Functionalities: management options evaluation; indicator production; spatial analysis (GIS). Methodologies: scenarios construction and analysis; uncertainty analysis; MCDA; social cost and benefit analysis; DPSIR. Structural elements: morphological, social, hydrological and ecological data; hydrodynamic, biogeochemical and socio-economic models; desktop user interface.

DIVA. Functionalities: scenarios generation and analysis; environmental status evaluation; indicators production; adaptation options evaluation; spatial analysis (GIS). Methodologies: scenarios construction and analysis; cost-benefit analysis; ecosystem-based. Structural elements: climatic, socio-economic, geographic and morphological data; economic, ecological, geomorphological and climate models; desktop graphical user interface.

ELBE. Functionalities: environmental status evaluation; protection measures identification; end-user involvement; spatial analysis (GIS). Methodologies: scenarios construction and analysis. Structural elements: hydrological, ecological, socio-economic and morphological data; economic and hydrological models; desktop complex user interface.

GVT. Functionalities: environmental status evaluation; indicators production; impact and vulnerability evaluation; spatial analysis (GIS). Methodologies: risk analysis; fuzzy logic; MCDA. Structural elements: environmental, climatic, hydrological and socio-economic data; hydrological, socio-economic and DEM models; desktop user interface.

IWRM. Functionalities: environmental status evaluation; indicators production; adaptation measures evaluation; information for non-technical users; spatial analysis (GIS). Methodologies: scenarios construction and analysis; risk analysis; cost-benefit analysis; socio-economic analysis. Structural elements: climatic, environmental, socio-economic and geomorphological data; hydrodynamic, climate and economic models; desktop user interface.

KRIM. Functionalities: environmental status evaluation; adaptation measures evaluation; information for non-technical users; spatial analysis (GIS). Methodologies: scenarios construction and analysis; impact and risk analysis; ecosystem-based. Structural elements: climatic, socio-economic, ecological, environmental and hydrological data; economic, ecological, hydrodynamic and geomorphological models; desktop user interface.


MODSIM. Functionalities: environmental status evaluation; management measures evaluation; spatial analysis (GIS). Methodologies: statistical analysis; analysis of policies. Structural elements: administrative, hydrological, socio-economic and environmental data; socio-economic and hydrological models; web-based user interface.

RegIS. Functionalities: indicators production; management measures evaluation; information for non-technical users; sectoral evaluation; spatial analysis (GIS). Methodologies: scenarios construction and analysis; impact analysis; DPSIR; integrated assessment. Structural elements: climatic, socio-economic, geomorphological and hydrological data; climate and flood meta-models; desktop user interface.

RAMCO. Functionalities: environmental status evaluation; indicators generation; management measures evaluation; spatial analysis (GIS). Methodologies: scenarios construction and analysis; cellular automata; ecosystem-based. Structural elements: socio-economic, environmental and climatic data; biophysical, socio-economic and environmental models; web-based user interface.

SimLUCIA. Functionalities: indicators production; impact and vulnerability evaluation; management and land-use measures evaluation; spatial analysis (GIS). Methodologies: cellular automata; scenarios construction and analysis; socio-economic analysis; Bayesian probabilistic networks; ecosystem-based. Structural elements: climatic, environmental and socio-economic data; land-use, social and economic, and climate models; web-based user interface.

SimCLIM. Functionalities: environmental status evaluation; impact and vulnerability evaluation; adaptation strategies evaluation; spatial analysis (GIS). Methodologies: scenario construction and analysis; statistical analysis; risk analysis; cost/benefit analysis; ecosystem-based. Structural elements: climatic, hydrological and socio-economic data; climate, hydrological and economic models; desktop user interface.

STREAM. Functionalities: environmental status evaluation; indicators production; management measures evaluation; spatial analysis (GIS). Methodologies: scenarios construction and analysis. Structural elements: climatic, socio-economic, ecological and hydrological data; climate and hydrological models; web-based user interface.

TaiWAP. Functionalities: environmental status evaluation; indicators production; spatial analysis (GIS). Methodologies: scenarios construction and analysis; impact and vulnerability analysis. Structural elements: climatic, socio-economic and hydrological data; climate, hydrological and water system dynamics models; desktop user interface.

WADBOS. Functionalities: management measures identification and evaluation; spatial analysis (GIS). Methodologies: scenarios construction and analysis; sensitivity analysis; MCDA. Structural elements: socio-economic, hydrological, environmental and ecological data; socio-economic, ecological and landscape models; desktop user interface.


In order to effectively support the assessment and management of groundwater resources, GVT and DESYCO estimate indicators of impacts, vulnerability and risks to evaluate groundwater quality and coastal environmental quality, respectively. Similarly, STREAM, ELBE, RAMCO and DITTY employ environmental status evaluation, protection measures identification and spatial analysis to support the management of coastal ecosystems. Moreover, CLIME and CORAL specifically support the assessment and management of lakes and coral reefs via the adoption of management strategies and the identification and evaluation of pressures from climatic variables. In particular, five out of the 20 examined DSS (i.e. CVAT, GVT, Coastal Simulator, SimLUCIA and RegIS) consider hazard identification, impacts and vulnerability evaluation, mitigation/management options identification and evaluation, and sectoral evaluation, to achieve a comprehensive and integrated analysis of coastal issues at the local or regional scale. Among all the considered DSS, RegIS is the one most oriented to stakeholders.

The second column of Table 4 shows the methodologies adopted by each DSS. Seventeen out of the 20 examined DSS consider scenario analysis to enable coastal managers, decision makers and stakeholders to anticipate and visualise coastal problems in the foreseeable future, and to better understand which future scenario is most suitable for consideration in the evaluation process. A useful methodology is represented by the Multi-Criteria Decision Analysis (MCDA) technique, which is considered by five DSS (i.e. COSMO, DESYCO, DITTY, GVT and WADBOS) in order to compare, select and rank multiple alternatives that involve several attributes, based on several different criteria. Moreover, DITTY and RegIS also consider the DPSIR approach as a causal framework to describe the interactions between the coastal system, society and ecosystems and to carry out an integrated assessment, with the aim of protecting the coastal environment, guaranteeing its sustainable use and conserving its biodiversity in accordance with the Convention on Biodiversity (2003). An ecosystem-based assessment was developed by nine DSS (i.e. CORAL, COSMO, Coastal Simulator, DIVA, RegIS, KRIM, RAMCO, SimLUCIA, SimCLIM) to support the analysis of the studied region through the representation of relevant processes and their feedbacks. Furthermore, KRIM, IWRM, COSMO, SimCLIM and Coastal Simulator employ the risk analysis approach for impacts and vulnerability evaluation and also for general environmental status evaluation. A more detailed approach to risk analysis, through the Regional Risk Assessment (RRA) methodology, was adopted by DESYCO, Coastal Simulator and RegIS, with strong emphasis on the local or regional scales. Finally, CLIME and SimLUCIA consider Bayesian probability networks to highlight the causal relationships between ecosystems (e.g. lakes) and climate change effects.
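To make the MCDA idea concrete, the sketch below ranks management options with a simple weighted sum. It illustrates only the most basic MCDA variant; the option names, criteria and weights are invented for the example and are not taken from any of the reviewed DSS, whose actual methods and weightings are not reported here.

// Generic weighted-sum MCDA sketch: rank alternatives by a weighted score.
// All scores and weights below are hypothetical, for illustration only.
import java.util.LinkedHashMap;
import java.util.Map;

public class WeightedSumMcda {
    public static void main(String[] args) {
        String[] options = {"Beach nourishment", "Sea wall", "Managed retreat"};
        // Normalised performance of each option on three criteria
        // (effectiveness, cost, environmental impact), each in [0, 1].
        double[][] scores = {
            {0.8, 0.5, 0.7},
            {0.9, 0.2, 0.3},
            {0.6, 0.8, 0.9}
        };
        double[] weights = {0.5, 0.3, 0.2}; // criteria weights, summing to 1

        Map<String, Double> totals = new LinkedHashMap<>();
        for (int i = 0; i < options.length; i++) {
            double total = 0.0;
            for (int j = 0; j < weights.length; j++) {
                total += weights[j] * scores[i][j];
            }
            totals.put(options[i], total);
        }
        // Print options from best to worst overall score.
        totals.entrySet().stream()
              .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
              .forEach(e -> System.out.println(e.getKey() + ": " + e.getValue()));
    }
}

In practice the reviewed DSS combine such rankings with spatial analysis and stakeholder-defined weights; the weighted sum shown here is only the simplest way to aggregate several criteria into one ordering of alternatives.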

With regard to the structure of the examined DSS (Table 4, third column), most of them employ analytical models useful to highlight the basic features and natural processes of the examined territory, such as the landscape and ecological models used by WADBOS, the environmental model employed by RAMCO, the geomorphological model used within KRIM and the flood meta-model, which interfaces with the other models considered by RegIS. Moreover, the majority of these DSS utilise the numerical models necessary to simulate relevant circulation and geomorphological processes that may influence climate change and related risks. DSS like CLIME, DESYCO, CVAT and TaiWAP adopt models useful to represent specific climatic processes (e.g. the hydrological cycle and the fate of sediment). More importantly, ten of the twenty examined DSS (i.e. WADBOS, SimLUCIA, RAMCO, MODSIM, GVT, ELBE, DIVA, CORAL, DITTY and SimCLIM) consider relevant socio-economic model outputs in their analysis to critically support the integrated assessment of coastal zones. Finally, the majority of these DSS consider integrated assessment models in order to emphasise the basic relationships among different categories of environmental processes (physical, morphological, chemical, ecological and socio-economic) and to provide inclusive information about the environmental and socio-economic processes.

As far as the software interfaces are concerned, very few of the examined DSS are applied through web-based interfaces, in spite of the fact that web-based facilities enhance easy access to information within a large network of users. Furthermore, all the reviewed DSS consider GIS tools as the basic media to express their results or outputs, in order to provide fast and intuitive result representation to non-experts (i.e. decision makers and stakeholders) and empower them to make robust decisions. In addition to maps, the outputs produced by each DSS include graphs, charts and statistical tables.

5. APPLICABILITY CRITERIA
Table 5 shows the implementation of the criteria concerning applicability to the examined DSS. Applicability includes three aspects: scale/study areas, flexibility and status/availability (Table 2). Five spatial scales were considered: global, supranational, national, regional and local, in order of decreasing size. The study areas are those reported in the literature cited in Table 1. The flexibility derives from the capability of a given DSS to include new modules and models in its structure, and thus new input parameters, and from its suitability for use in regionally different case studies. In order to visualise the estimation of the overall flexibility of a system, highly flexible/flexible/moderately-to-no flexible systems are indicated as +++/++/+. Status and availability refer to the extent of development (e.g. research prototype, commercial software) and to public accessibility/last updated version, respectively.


Table 5. List of the examined DSSs according to the applicability criteria (+++: highly flexible; ++: flexible; +: moderately to no-flexible). Each entry reports the scale and area of application, the flexibility and the status and availability (with last updated version, year).

CLIME. Scale and area: supranational, national, local (northern, western and central Europe). Flexibility: +++ (flexible in structural modification and study area). Status: available to the public; demo; 2010.

CORAL. Scale and area: regional, local (coastal areas of Curacao, Jamaica and the Maldives). Flexibility: ++ (flexible in study area). Status: not available to the public; prototype; 1995.

COSMO. Scale and area: national, local (coast of the Netherlands). Flexibility: +. Status: commercial application; 1998.

Coastal Simulator. Scale and area: national, regional, local (coast of Norfolk in East Anglia, UK). Flexibility: +++ (flexible in study area). Status: available only to the Tyndall Research Centre; prototype; 2009.

CVAT. Scale and area: regional, local (New Hanover County, North Carolina). Flexibility: ++ (flexible in study area). Status: available to the public; prototype; 2002.

DESYCO. Scale and area: regional, local (North Adriatic Sea). Flexibility: ++ (flexible in study area). Status: not available to the public; prototype; 2010.

DITTY. Scale and area: supranational, national, regional (Ria Formosa, Portugal; Mar Menor, Spain; Etang de Thau, France; Sacca di Goro, Italy; Gera, Greece). Flexibility: +++ (flexible in study area). Status: not available to the public; 2006.

DIVA. Scale and area: global, national. Flexibility: +++ (flexible in study area). Status: available to the public; 2009.

ELBE. Scale and area: local (Elbe river basin, Germany). Flexibility: +. Status: available to the public; 2003.

GVT. Scale and area: regional, local (Eastern Macedonia and Northern Greece). Status: not available to the public; 2006.

IWRM. Scale and area: regional, local (Halti Beel, Bangladesh). Flexibility: ++ (flexible in study area). Status: not available to the public; prototype; 2009.

KRIM. Scale and area: regional (German North Sea coast, Jade-Weser area, Germany). Status: not available to the public; prototype; 2003.

MODSIM. Scale and area: national, regional (San Diego Water County; Geum river basin, Korea). Flexibility: ++ (flexible in study area). Status: available to the public online; 2006.

RegIS. Scale and area: regional, local (North-West England, East Anglia). Flexibility: ++ (flexible in study area). Status: available online to stakeholders; prototype; 2008.

RAMCO. Scale and area: regional, local (South-West Sulawesi coastal zone). Flexibility: ++ (flexible in the used dataset and concepts). Status: not available to the public; prototype; 1999.

SimLUCIA. Scale and area: local (St Lucia Island, West Indies). Status: available online to the public; demo; 1996.

SimCLIM. Scale and area: national, regional, local (Rarotonga Island; Southeast Queensland). Flexibility: ++ (flexible in structural modification and study area). Status: available to the public; demo; 2009.

STREAM. Scale and area: regional, local (Ganges/Brahmaputra, Rhine, Yangtze and Amudarya river basins). Flexibility: +++ (flexible in structural modification and study area). Status: available online to the public; demo; 1999.

TaiWAP. Scale and area: regional, local (Touchien river basin). Status: available to National Taiwan University; prototype; 2008.

WADBOS. Scale and area: regional, local (Dutch Wadden Sea). Flexibility: +. Status: available online to the public; demo; 2002.

As far as the scale of application is concerned, most of the examined DSS have been applied at the local and regional scales because they were developed for a specific geographical context, DIVA being the main exception. However, five out of the 20 examined DSS (i.e. CLIME, CORAL, DITTY, DIVA and STREAM) also considered the global, supranational or national scales during their implementation. Five of the reported DSS are highly flexible systems because they are used to address several impacts related to different case studies.

Although DIVA can be applied to any coastal area around the world, it is sometimes not considered a highly flexible tool in terms of structural modification, because its default integrated dataset cannot be changed. Finally, ELBE and WADBOS are identified as moderately-to-no flexible systems because their structure and functionalities were based on the specific needs of particular river basins.


The applicability of DSS reflects their ability to be implemented in several contexts (i.e. case study areas and structural modification), for example to include new models and functionalities, ensuring common approaches to decision making and the production of comparable results [42]. Finally, concerning availability and the status of development, Table 5 shows that nine DSS are available to the public, three are available with restricted access (i.e. only to stakeholders or to the developers), one is a commercial software product (i.e. COSMO) and seven are not available to the public. Sometimes the restriction of access is due to the fact that the results require special skills for their interpretation, so the public can use them only with the support of the developer team. Among the examined DSS, only 11 were developed or updated during the last 5 years, and 4 over the previous five years (for a total of 15 during the last 10 years), with the remaining five DSS showing a last version dating back to the 1990s. The overall content of Table 5, together with the main features of each DSS reported in Tables 3 and 4, allows the reader to undertake a screening evaluation of available DSS in relation to the specific climate change impacts to be addressed.

6. CLIMATE CHANGE & DSS FUNCTIONALITIES
Among the challenges of coastal environmental problems identified by [23, 8, 43], the paper elicits those related to climate change and categorises them into assessment and management aspects, bearing in mind that scientific solutions to climate change are often based on assessment and management procedures which are very contingent, because assessment methodologies or approaches, data and tools could determine the robustness of potential management measures. Thus, the examined DSS functionalities necessary to cope with climate change can be evaluated through an in-depth consideration of framed questions intended to reflect the significant coastal system challenges.

Assessment:
- Does the DSS consider interdisciplinary processes/modelling?
- Does the DSS support spatial and temporal dimensions of coastal issues?
- Does the DSS consider uncertainty ranges or incomplete knowledge?
- Does the DSS support sensitivity analysis?
- Does the DSS predict potential effects of proposed scenarios?

Management:
- Does the DSS consider the integration of science and policy / stakeholder involvement?
- Does the DSS support optimisation of management measures?
- Does the DSS make complex information understandable / aid visualisation of processes?

In an attempt to answer these questions, the paper synthesised the information elicited from the open literature survey in Tables 3, 4 and 5. The results reflect the fact that none of these tools possesses all the functionalities related to both the assessment and the management aspects.

However, they all appear to support the spatial and temporal dimensions of coastal processes, the prediction of scenario outcomes, the integrated analysis of issues via the inclusion of several models and approaches, and the communication of complex processes through visualisation techniques (e.g. GIS, 2D and 3D models). It should be noted that none of these DSS provides adequate sensitivity analysis of climate variables, and only three (Coastal Simulator, CLIME and RegIS) partly consider uncertainty ranges, via the application of Monte Carlo simulation and climate change projection analysis. RegIS adopts a novel 3D visualisation in order to communicate the uncertainty associated with future coastal change modelling [33]. Nine out of the twenty DSS (COSMO, CVAT, DIVA, IWRM, KRIM, RegIS, SimLUCIA, SimCLIM and STREAM) partly support the optimisation of management measures, by considering the effects of different protection plans together with cost-benefit, socio-economic and mitigation options analysis. To a large extent, stakeholder participation is not fully supported by these tools, even though there could be workshops and capacity building during the development phases. Consequently, potential users cannot use these tools effectively; for instance, four out of the twenty systems (ELBE, RegIS, KRIM and IWRM) support the provision of information for non-technical users, among which only RegIS can be used by stakeholders without the intervention of an expert.

7. CONCLUSIONS
This work should be regarded as a preliminary attempt to describe and evaluate the main features of available DSS for the assessment and management of climate change impacts on coastal areas and related inland watersheds. A further and comprehensive evaluation should be based on comparative application in selected and relevant case studies, in order to evaluate the technical performance of the DSS, especially in relation to dataset availability, which often represents the real limiting factor. Moreover, sensitivity and uncertainty analyses will provide further evidence of the reliability of the investigated DSS. This review highlighted the relevance of developing climate change impact assessment and management at the regional scale (i.e. subnational and local scales), according to the requirements of policy and regulatory frameworks and to the methodological and technical features of the described DSS. In fact, most of the available DSS show a regional to local applicability with a moderate to high flexibility. Indeed, climate change impacts are very dependent on regional geographical features, climate and socio-economic conditions, and regionally-specific information can assist coastal communities in planning adaptation measures to the effects of climate change. Although the current situation shows available DSS mainly focusing on the analysis of specific individual climate change impacts and affected sectors (15 out of the 20 examined DSS), further developments should aim at the adoption of ecosystem approaches that consider the complex dynamics and interactions between coastal systems and other systems closely related to them (e.g. coastal aquifers, surface waters, river basins, estuaries).


The adoption of multi-risk approaches, in order to consider the interactions among the different climate change impacts that affect the considered region, should also be a focus. Finally, it is important to remark on the need to involve end users and relevant stakeholders from the initial steps of the development process of these tools, in order to satisfy their actual requirements, especially in the perspective of providing useful climate services, and to avoid the frequent and frustrating situation where time- and resource-demanding DSS are not used beyond scientific testing exercises.

ACKNOWLEDGEMENTS
The authors gratefully acknowledge the Euro-Mediterranean Centre for Climate Change (CMCC; Lecce, Italy), GEMINA project, for financial support.

REFERENCES
[1] Janssen, R. (1992). Multiobjective Decision Support for Environmental Management. Kluwer Academic, Dordrecht/Boston.
[2] Torresan, S., Zabeo, A., Rizzi, J., Critto, A., Pizzol, L., Giove, S. & Marcomini, A. (2010). Risk assessment and decision support tools for the integrated evaluation of climate change impacts on coastal zones. International Congress on Environmental Modelling and Software: Modelling for Environment's Sake, Fifth Biennial Meeting, Ottawa, Canada.
[3] Matthies, M., Giupponi, C. & Ostendorf, B. (2007). Environmental decision support systems: current issues, methods and tools. Environmental Modelling & Software, 22(2):123-128.
[4] Uran, O. & Janssen, R. (2003). Why are spatial decision support systems not used? Some experiences from the Netherlands. Computers, Environment and Urban Systems, 27(5):511-526.
[5] Poch, M., Comas, J., Rodriguez-Roda, I., Sanchez-Marre, M. & Cortes, U. (2004). Designing and building real environmental decision support systems. Environmental Modelling & Software, 19:857-873.
[6] Fabbri, K.P. (1998). A methodology for supporting decision making in integrated coastal zone management. Ocean & Coastal Management, 39(1-2):51-62.
[7] Environmental Systems Research Institute (ESRI) (1992). Arc Version 6.1.2, Redlands, California, USA.
[8] Nobre, A.M. & Ferreira, J.G. (2009). Integration of ecosystem-based tools to support coastal zone management. Journal of Coastal Research, 1676-1680.
[9] Matthies, M., Giupponi, C. & Ostendorf, B. (2007). Environmental decision support systems: current issues, methods and tools. Environmental Modelling & Software, 22(2):123-128.
[10] IPCC (2007). Climate Change 2007: Impacts, Adaptation and Vulnerability. Summary for Policymakers. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Geneva.
[11] Nakicenovic, N., Alcamo, J., Davis, G., de Vries, B., Fenhann, J., Gaffin, S., Gregory, K., Grubler, A., Jung, T.Y. & Kram, T. (2000). Special Report on Emissions Scenarios: a special report of Working Group III of the Intergovernmental Panel on Climate Change. Pacific Northwest National Laboratory, Richland, WA (US), Environmental Molecular Sciences Laboratory (US).
[12] Nicholls, R.J. & Cazenave, A. (2010). Sea-level rise and its impact on coastal zones. Science, 328(5985):1517-1520.
[13] Jiang, L. & Hardee, K. (2010). How do recent population trends matter to climate change? Population Research and Policy Review, 30(2):287-312.
[14] EC, COM(2007) 354, 29.06.2007. Green Paper: Adapting to climate change in Europe: options for EU action. Brussels.
[15] EC, COM(2009) 147, 01.04.2009. White Paper: Adapting to climate change: towards a European framework for action. Brussels.
[16] EC (2000). Directive 2000/60/EC of the European Parliament and of the Council establishing a framework for the Community action in the field of water policy. Official Journal (OJ L 327), 22 December 2000.
[17] EC (2002). Recommendation of the European Parliament and of the Council of 30 May 2002 concerning the implementation of Integrated Coastal Zone Management in Europe, 2002/413/EC.
[18] EC (2003). Common Implementation Strategy for the Water Framework Directive (2000/60/EC). Guidance Document No. 11: Planning process. Office for Official Publications of the European Communities, Luxembourg.
[19] Thumerer, T., Jones, A.P. & Brown, D. (2000). A GIS based coastal management system for climate change associated flood risk assessment on the east coast of England. International Journal of Geographical Information Science, 14(3):265-281.
[20] Flax, L.K., Jackson, R.W. & Stein, D.N. (2002). Community vulnerability assessment tool methodology. Natural Hazards Review, 3:163.
[21] Zaman, A.M., Rahman, S.M.M. & Khan, M.R. (2009). Development of a DSS for integrated water resources management in Bangladesh. 18th World IMACS/MODSIM Congress, Cairns, Australia, 13-17 July.
[22] Jolma, A., Kokkonen, T., Koivusalo, H., Laine, H. & Tiits, K. (2010). Developing a decision support system for assessing the impact of climate change on lakes. In: The Impact of Climate Change on European Lakes, 411-435.
[23] Westmacott, S. (2001). Developing decision support systems for integrated coastal management in the tropics: is the ICM decision-making environment too complex for the development of a useable and useful DSS? Journal of Environmental Management, 62(1):55-74.
[24] Feenstra, J.F. et al. (1998). Handbook on Methods for Climate Change Impact Assessment and Adaptation Strategies. United Nations Environment Programme.
[25] Nicholls, R., Mokrech, M. & Hanson, S. (2009). An integrated coastal simulator for assessing the impacts of climate change. IOP Conference Series: Earth and Environmental Science, 6, 092020.
[26] Agnetis, A., Basosi, R., Caballero, K., Casini, M., Chesi, G., Ciaschetti, G., Detti, P., Federici, M., Focardi, S., Franchi, E., Garulli, A., Mocenni, C., Paoletti, S., Pranzo, M., Tiribocchi, A., Torsello, L., Vercelli, A., Verdesca, D. & Vicino, A. (2006). Development of a decision support system for the management of Southern European lagoons. Technical Report TR2006, Centre for Complex Systems Studies, University of Siena, Siena, Italy.
[27] Hinkel, J. & Klein, R.J.T. (2009). Integrating knowledge to assess coastal vulnerability to sea-level rise: the development of the DIVA tool. Global Environmental Change, 19(3):384-395.
[28] BfG (2003). Pilot phase for the design and development of a decision support system (DSS) for river basin management with the example of the Elbe. Interim Report 2002-2003 (in German). Bundesanstalt fur Gewasserkunde (German Federal Institute of Hydrology), Koblenz, Germany.
[29] Gemitzi, A. et al. (2006). Assessment of groundwater vulnerability to pollution: a combination of GIS, fuzzy logic and decision making techniques. Environmental Geology, 49(5):653-673.
[30] Schirmer, M., Schuchardt, B., Hahn, B., Bakkenist, S. & Kraft, D. (2003). KRIM: climate change, risk construct and coastal defence. DEKLIM German Climate Research Programme, Proceedings, 269-273.
[31] Salewicz, K.A. & Nakayama, M. (2004). Development of a web-based decision support system (DSS) for managing large international rivers. Global Environmental Change, 14:25-37.
[32] Labadie, J.W. (2006). MODSIM: decision support system for integrated river basin management. Summit on Environmental Modelling and Software, International Environmental Modelling and Software Society, Burlington, VT, USA.
[33] Holman, I.P., Rounsevell, M.D.A., Berry, P.M. & Nicholls, R.J. (2008). Development and application of participatory integrated assessment software to support local/regional impact and adaptation assessment. DOI: 10.1007/s10584-008-9452-7.
[34] De Kok, J.L., Engelen, G., White, R. & Wind, H.G. (2001). Modelling land-use change in a decision-support system for coastal-zone management. Environmental Modeling & Assessment, 6(2):123-132.
[35] Uljee, I., Engelen, G. & White, R. (1996). Rapid Assessment Module for Coastal Zone Management (RAMCO). Demo Guide Version 1.0, Work document CZM-C 96.08, RIKS (Research Institute for Knowledge Systems).
[36] Engelen, G., White, R., Uljee, I. & Wargnies, S. (1995). Vulnerability assessment of low-lying coastal areas and small islands to climate change and sea-level rise. Report to the United Nations Environment Programme, Caribbean Regional Co-ordinating Unit, Kingston, Jamaica (905000/9379). Maastricht: Research Institute for Knowledge Systems (RIKS).
[37] Warrick, R.A. (2009). Using SimCLIM for modelling the impacts of climate extremes in a changing climate: a preliminary case study of household water harvesting in Southeast Queensland. 18th World IMACS/MODSIM Congress, Cairns, Australia, 13-17 July 2009.
[38] Aerts, J., Kriek, M. & Schepel, M. (1999). STREAM (Spatial Tools for River basins and Environment and Analysis of Management options): set up and requirements. Physics and Chemistry of the Earth, Part B: Hydrology, Oceans and Atmosphere, 24(6):591-595.
[39] Liu, M.T., Tung, C.P., Ke, K.Y., Chuang, L.H. & Lin, C.Y. (2009). Application and development of decision support systems for assessing water shortage and allocation with climate change. Paddy and Water Environment, 7:301-311. DOI: 10.1007/s10333-009-0177-7.
[40] van Buuren, J.T., Engelen, G. & van de Ven, K. (2002). The DSS WadBOS and EU policies implementation.
[41] Engelen, G. (2000). The WADBOS Policy Support System: information technology to bridge knowledge and choice. Technical paper prepared for the National Institute for Coastal and Marine Management/RIKZ, The Hague, the Netherlands.
[42] Agostini, P., Suter, G.W. II, Gottardo, S. & Giubilato, E. (2009). Indicators and endpoints for risk-based decision processes with decision support systems. In: Marcomini, A., Suter, G.W. II & Critto, A. (Eds.), Decision Support Systems for Risk-Based Management of Contaminated Sites. Springer Verlag, New York.
[43] Van Kouwen, F., Dieperink, C., Schot, P. & Wassen, M. (2007). Applicability of decision support systems for integrated coastal zone management. Coastal Management, 36(1):19-34.


Computing, Information Systems & Development Informatics Journal

Volume 3. No. 2. May, 2012

Data Mining Technique for Predicting Telecommunications Industry Customer Churn Using both Descriptive and Predictive Algorithms

Kolajo Taiwo Computer Science Department Federal College of Education (Technical) Bichi, Kano, Nigeria taiwo_kolajo@yahoo.com

Adeyemo, A.B. Computer Science Department University of Ibadan, Ibadan, Nigeria sesan_adeyemo@yahoo.com

Reference Format:

Kolajo, T & Adeyemo, A.B. (2012). Data Mining Technique for Predicting Telecommunications Industry Customer Churn Using both Descriptive and Predictive Algorithms. Computing, Information Systems & Development Informatics Journal. Vol 3, No.2. pp 27-34


ABSTRACT
As markets have become increasingly saturated, companies have acknowledged that their business strategies need to focus on identifying those customers who are most likely to churn. It is becoming common knowledge in business that retaining existing customers is the best core marketing strategy for survival in the industry. In this research, both descriptive and predictive data mining techniques were used to determine the calling behaviour of subscribers and to recognise subscribers with a high probability of churn in a telecommunications company subscriber database. First, a data model for the input data variables obtained from the subscriber database was developed. Then the Simple K-Means and Expectation Maximization (EM) clustering algorithms were used for the clustering stage, while the DecisionStump, M5P and RepTree decision tree algorithms were used for the classification stage. The best algorithms in both the clustering and classification stages were used for the prediction process, in which customers that were likely to churn were identified.
Keywords: customer churn; prediction; clustering; classification

1. INTRODUCTION
The mobile telephony market is one of the fastest growing service segments in telecommunications; more than 75% of all potential phone calls worldwide can be made through mobile phones, and as with any other competitive market, the mode of competition has shifted from the acquisition to the retention of customers [1]. Among all industries that suffer customer churn, the telecommunications industry can be considered to be at the top of the list, with an approximate annual churn rate of 30% [2]; [3]. This results in a waste of money and effort and is like adding water to a leaking bucket [4]. Considering the fact that the cost to European and US telecommunications companies is US$4 billion per year, it seems reasonable for mature companies to invest more in churn management than in acquisition management, especially when it is noted that the cost of acquiring a new customer is eight times that of retaining an existing one [3]. On the other hand, existing subscribers tend to generate more cash flow and profit, since they are less sensitive to price and often lead to sales referrals [5]. Due to the high cost of acquiring new subscribers and the considerable benefits of retaining existing ones, building a churn prediction model to facilitate subsequent churn management and customer retention is critical for the success, or bottom-line survival, of a mobile telecommunications provider in this greatly compressed market-space.

Subscriber churning (often referred to as customer attrition in other industries) in mobile telecommunications refers to the movement of subscribers from one provider to another. Many subscribers frequently churn from one provider to another in search of better rates or services. Churning customers can be divided into two main groups: voluntary churners and non-voluntary churners. Non-voluntary churn is the type of churn in which the service is purposely withdrawn by the company. Voluntary churn is more difficult to determine: it occurs when a customer makes a conscious decision to terminate his/her service with the provider, and it has been a serious and puzzling problem for service providers. The varied behaviour of consumers has baffled researchers and market practitioners alike [6].

Voluntary churn can be divided into two sub-categories: incidental churn and deliberate churn. Incidental churn happens when changes in circumstances prevent the customer from further requiring the provided service, and it constitutes a small percentage of a company's voluntary churn [7]. Deliberate churn is the problem that most churn management solutions attempt to identify; this type of churn occurs when a customer decides to move to a competing company due to dissatisfaction [1]. Deliberate churn within the telecommunications industry was historically minimised because switching would require a change in telephone number. In 2003, customers in the United States of America were given the option to switch mobile telephone provider while keeping their existing phone number, and as soon as this law came into force 12 million customers immediately churned from their service providers, thereby intensifying the retention battle [8]. A churn management solution should not target the entire customer base because (i) not all customers are worth retaining, and (ii) customer retention costs money; attempting to retain customers that have no intention of churning is a waste of resources.

Nowadays the lack of data is no longer a problem; the problem is the inability to extract useful information from data [9]. Due to the constant increase in the amount of data made available to managers and policy makers through high-speed computers and rapid data communication, there has grown, and will continue to grow, a greater dependency on statistical methods as a means of extracting useful information from abundant data sources. To survive or maintain an advantage in an ever more competitive marketplace, many companies are turning to data mining techniques to address churn prediction and management [10]. Data mining is the analysis of (often large) observational data sets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the data owner [11]. Data mining is an interdisciplinary field bringing together techniques from machine learning, pattern recognition, statistics, databases and visualization to address the issue of information extraction from large databases. Data mining techniques include a wide range of choices from many disciplines, such as support vector machines, correlation, linear regression, nonlinear regression, genetic algorithms, neural networks,


and decision trees. The choice of a data mining technique is contingent upon the nature of the problem to be solved and the size of the database. Based on the kind of knowledge which can be discovered from databases, data mining techniques can be broadly classified into several categories, including clustering, classification, dependency analysis, data visualization and text mining [12].

Clustering analysis is a process whereby a set of instances (without a predefined class attribute) is partitioned (or grouped), according to some distance metric, into several clusters in which all instances in one cluster are similar to each other and different from the instances of other clusters. Classification induces a model that categorizes a set of pre-classified instances (called training examples) into classes; such a classification model is then used to classify future instances. Clustering is a way to segment data into groups that are not previously defined, whereas classification is a way to segment data by assigning it to groups that are already defined. Dependency analysis discovers dependency patterns (e.g. association rules, sequential patterns, temporal patterns and episode rules) embedded in data. Data visualization allows decision makers to view complex patterns in the data as visual objects in three dimensions and colour; it supports advanced manipulation capabilities to slice, rotate or zoom the objects to provide varying levels of detail of the patterns observed.

In this study both descriptive and predictive data mining techniques were used to extract information on the calling behaviour of subscribers and to recognise subscribers with a high probability of churning in the future. While some researchers have focused on the use of either descriptive or predictive algorithms, in this work both are combined: first the elements of the dataset are grouped by clustering, and then classification algorithms are applied to the clusters of interest, so that each cluster's unique "rules" for relating attributes to classes can be determined and the members of each cluster classified more accurately. The dataset used was obtained from a Nigerian telecommunications service provider.

2. MATERIALS AND METHODS
The customer churn prediction model was developed based on some selected input variables from a Nigerian telecommunications service provider customer database.

2.1 Data Selection and Preprocessing
The data used were from the call records of subscribers of one of the telecommunications service providers in Nigeria. The total number of records in the dataset is 228,520. The records were for transactions covering a period of 3 months, from October 1st to December 31st, 2010. The raw data was uploaded into a MySQL database for the extraction of the necessary features. The features selected were based on those used by [12] and [13] and on RFM (Recency, Frequency, Monetary) related features. These features were chosen due to the nature of pre-paid service providers; the focus is on constructing features that are able to reflect changes in usage behaviour. The final dataset consisted of 996 subscriber call records and contained the following data variables, which were selected from the call records in order to utilize them in building the required and targeted features:
a. Phone number of each subscriber
b. Incoming calls
c. Incoming start time
d. Incoming duration
e. Outgoing calls
f. Outgoing start time
g. Outgoing duration

2.2 Data Mining
Figure 1 presents the data mining framework developed for this work (Fig 1. Data Mining Framework). Both descriptive and predictive data mining techniques were used. In the descriptive step, the customers were clustered based on their usage behaviour (RFM) features. K-means and EM (Expectation Maximization) clustering methods were used for the clustering. K-means clustering is a partitioning method that treats observations in the data as objects having locations and distances from each other. It partitions the objects into K mutually exclusive clusters, such that objects within each cluster are as close to each other as possible, and as far from objects in other clusters as possible. Each cluster is characterized by its centroid, or centre point. EM assigns a probability distribution to each instance which indicates the probability of it belonging to each of the clusters. EM can decide how many clusters to create by cross-validation, or the number of clusters to generate may be specified a priori.

For the predictive step, classification techniques were utilized. Classification is the process of finding a model (or function) that describes and distinguishes data classes or concepts, so that the model can be used to predict the class of objects whose class label is unknown. Decision trees were chosen because they are capable of efficiently generating interpretable knowledge in an understandable form. Models from tree classifiers (DecisionStump, M5P, and RepTree) were used. DecisionStump is a model consisting of a one-level decision tree; that is, a decision tree with one internal node (the root) which is immediately connected to the terminal nodes. A decision stump makes a prediction based on the value of just a single input feature. Sometimes they


are also called 1-rules. The algorithm builds simple binary decision stumps (one-level decision trees) for both numeric and nominal classification problems. It copes with missing values by extending a third branch from the stump or by treating missing as a separate attribute value. DecisionStump is usually used in conjunction with a boosting algorithm such as LogitBoost. It does regression (based on mean-squared error) or classification (based on entropy). M5P implements the base routines for generating M5 model trees and rules; a learning technique that consistently yields the best results is M5P regression trees. RepTree is a fast decision tree learner: it builds a decision/regression tree using information gain/variance and prunes it using reduced-error pruning (with backfitting). The algorithm only sorts values for numeric attributes once; missing values are dealt with by splitting the corresponding instances into pieces.

2.3 Building the Clustering Model
Weka was leveraged with Java in building the clustering model. SimpleKMeans wrapped with MakeDensityBasedClusterer, together with EM (Expectation Maximization), was used for the clustering (a minimal code sketch follows the variable list below). The following set of 12 RFM-related variables was constructed with MySQL in order to segment the subscribers based on their calling behaviour:
1. Call Ratio: the proportion of calls made by each subscriber relative to his/her total number of calls (incoming and outgoing).
2. Max Date: the last date in the observed period on which a subscriber made a call.
3. Min Date: the first date in the observed period on which a subscriber made a call.
4. Average Call Distance: the average time distance between a subscriber's calls.
5. Life: the period of time within the observed time span during which each subscriber was active.
6. Max-Distance: the maximum time distance between two calls of a specific subscriber in the observed period.
7. No-of-days: the number of days on which a specific subscriber made or received a call.
8. Total-no-in: the total number of incoming calls for each subscriber in the observed period.
9. Total-no-out: the total number of outgoing calls for each subscriber in the observed period.
10. Total Cost: the total amount each subscriber was charged for using the services in the period under study.
11. Total-duration-in: the total duration of incoming calls (in seconds) for a specific subscriber in the observed time span.
12. Total-duration-out: the total duration of outgoing calls (in seconds) for a specific subscriber in the observed time span.
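The fragment below is a minimal sketch of how this clustering setup can be wired together with the Weka Java API. The ARFF file name and the explicit cluster count for SimpleKMeans are illustrative assumptions, since the paper does not report its exact configuration.

// Minimal sketch of the clustering step with the Weka Java API.
// Assumptions (not stated in the paper): the 12 variables live in
// "subscribers.arff" and 11 clusters are requested for SimpleKMeans.
import weka.clusterers.ClusterEvaluation;
import weka.clusterers.DensityBasedClusterer;
import weka.clusterers.EM;
import weka.clusterers.MakeDensityBasedClusterer;
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ChurnClustering {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("subscribers.arff").getDataSet();

        // SimpleKMeans wrapped in MakeDensityBasedClusterer, so that a
        // log likelihood can be computed for it just as for EM.
        SimpleKMeans kMeans = new SimpleKMeans();
        kMeans.setNumClusters(11); // illustrative value
        MakeDensityBasedClusterer densityKMeans = new MakeDensityBasedClusterer();
        densityKMeans.setClusterer(kMeans);
        densityKMeans.buildClusterer(data);

        EM em = new EM();
        em.setNumClusters(-1); // -1: let EM choose the count by cross-validation
        em.buildClusterer(data);

        // Compare the two models on the log likelihood (closer to zero is better).
        for (DensityBasedClusterer c : new DensityBasedClusterer[] {densityKMeans, em}) {
            ClusterEvaluation eval = new ClusterEvaluation();
            eval.setClusterer(c);
            eval.evaluateClusterer(data);
            System.out.println(c.getClass().getSimpleName()
                    + " log likelihood: " + eval.getLogLikelihood());
        }
    }
}

Wrapping SimpleKMeans in MakeDensityBasedClusterer is what makes the log-likelihood comparison with EM (reported in Section 3.1) possible, since plain K-means does not produce a probability density by itself.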

2.4 Building the Predictive Model
Among the call details maintained in the investigated company, three measures are commonly used to describe the call patterns of a subscriber by aggregating his/her call records:
a. Minutes of use (MOU): the total number of minutes of outgoing calls made by the subscriber over a specific period.
b. Frequency of use (FOU): the total number of outgoing calls made by the subscriber over a specific period.
c. Sphere of influence (SOI): the total number of distinctive receivers contacted by the subscriber over a specific period.

For every single cluster the following features were extracted (a sketch of the change-rate computation follows this list):
a. MOUinitial: the MOU of a subscriber in the first sub-period.
b. FOUinitial: the FOU of a subscriber in the first sub-period.
c. SOIinitial: the SOI of a subscriber in the first sub-period.
d. ΔMOUs: the change in MOU of a subscriber between sub-periods s-1 and s (for s = 2, ..., n), measured as ΔMOUs = (MOUs - MOUs-1 + ε) / (MOUs-1 + ε), where MOU1 = MOUinitial and ε is a small positive real number (e.g. 0.01) used to avoid the case when MOUs-1 is 0 (i.e. when ΔMOUs could not otherwise be calculated).
e. ΔFOUs: the change in FOU of a subscriber between sub-periods s-1 and s (for s = 2, ..., n), calculated as ΔFOUs = (FOUs - FOUs-1 + ε) / (FOUs-1 + ε).
f. ΔSOIs: the change in SOI of a subscriber between sub-periods s-1 and s (for s = 2, ..., n), calculated as ΔSOIs = (SOIs - SOIs-1 + ε) / (SOIs-1 + ε).
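The change-rate formulas above reduce to a few lines of code. The helper below is a hypothetical illustration (the paper does not show its feature-extraction code) and assumes the per-sub-period MOU values are already available as an array; the same method applies unchanged to FOU and SOI.

// Hypothetical sketch of the change-rate feature computation.
public class UsageDeltas {
    private static final double EPSILON = 0.01; // small constant from the paper

    // Change rate between consecutive sub-periods; deltas[0] corresponds to s = 2.
    // usage[0] is the value for the first sub-period (e.g. MOU_initial).
    static double[] changeRates(double[] usage) {
        double[] deltas = new double[usage.length - 1];
        for (int s = 1; s < usage.length; s++) {
            // delta_s = (usage_s - usage_{s-1} + eps) / (usage_{s-1} + eps)
            deltas[s - 1] = (usage[s] - usage[s - 1] + EPSILON) / (usage[s - 1] + EPSILON);
        }
        return deltas;
    }

    public static void main(String[] args) {
        // Example: a subscriber whose minutes of use fall over three sub-periods,
        // including a final sub-period with no outgoing calls at all.
        double[] mou = {120.0, 80.0, 0.0};
        for (double d : changeRates(mou)) {
            System.out.println(d); // negative values indicate declining usage
        }
    }
}

Using Decision Tree algorithms, the predictive models were then constructed for each of the clusters.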

3. RESULTS AND DISCUSSION
The following presents the results of the descriptive and predictive models. The descriptive model was used to describe the calling behaviour of the subscribers, while the predictive model was used to predict the subscribers who are likely to churn.

3.1 Clustering (Descriptive) Model Result
In the descriptive model, the SimpleKMeans and EM (Expectation Maximization) algorithms were used to describe customer behaviour. The performance measure used was the log likelihood, which measures how well an algorithm has performed: the closer to zero, the better the performance. From the log likelihood values of the two clusterings:
a. The log likelihood value of SimpleKMeans is -58.56228.
b. The log likelihood value of EM is -58.0476.
The EM result is better than that of SimpleKMeans; hence the EM algorithm was used for the clustering model. The analysis of each of the attributes used is presented in graphical form below.


3.1.1 Call Ratio
Call Ratio is the proportion of calls made by each subscriber relative to his/her total number of calls (incoming and outgoing). Almost all the clusters have the same call ratio, except for cluster 10 with 0.3. The more a subscriber calls, the more likely he/she is to be retained, and the greater the turnover for the telecommunications service provider. Hence the service provider should intensify efforts in deploying a strategy that will encourage subscribers in cluster 10 to make more calls.

Fig 2. Call-Ratio for each Cluster using EM (Expectation Maximization)

3.1.2 Average Call Distance
The Average Call Distance is the average time distance between a subscriber's calls. From Figure 3, a cluster with a high average call distance implies that its subscribers are not making calls regularly. Clusters 8 and 9 fell into this category. This might be due to a number of reasons, including obtaining the same service at a lower cost from another service provider and quality of service, to mention a few. Hence these subscribers are likely to churn in the near future, and the telecommunications service provider should intensify retention efforts on them so as to win them back.

Fig 3. Average-Call Distance for each Cluster using EM (Expectation Maximization)

3.1.3 Life
Life represents the period of time within the observed time span during which each subscriber was active. Those subscribers that fall into clusters 8, 9 and 10 are likely to churn, which may be due to factors such as quality of the service, coverage, price, etc. For instance, if a subscriber relocates to a location where there is no network coverage, the subscriber will need to move to another network.

Fig 4. Life for each Cluster using EM (Expectation Maximization)

3.1.4 Max Distance
Max-Distance is the maximum time distance between two calls of a specific subscriber in the observed period. Clusters with a high maximum time distance represent subscribers that have not been calling regularly. The higher the maximum time distance, the greater the tendency for the subscribers to churn. As a result, retention efforts should be focused on the subscribers that fall into clusters 8, 9 and 10.

Fig 5. Max-Distance for each Cluster using EM (Expectation Maximization)

3.1.5 No of Days
No-of-days stands for the number of days on which a specific subscriber made or received a call. The total number of days in the observed period was ninety (90). The number of days for clusters 8, 9 and 10 is far below average, which implies that these subscribers have not been active. For them to be won back, retention efforts have to be focused on them; otherwise they are likely to churn in the near future.


Fig 6. No-of-Days for each Cluster using EM (Expectation Maximization)

3.1.6 Total No In
Total-No-In is the total number of incoming calls for each subscriber in our observed period. When a subscriber stops receiving calls through a network, it suggests that the subscriber is no longer interested in that network, because a subscriber who is still making calls through the network will certainly be called back through the same network. Subscribers in clusters 2 and 6, and especially clusters 8, 9 and 10, should be tracked to find out what actually went wrong. From that investigation the telecommunications service provider will be informed of the kind of retention effort to deploy.

Fig 7. Total-No-In for each Cluster using EM (Expectation Maximization)

3.1.7 Total No Out
Total-No-Out represents the total number of outgoing calls for each subscriber in our observed period. A subscriber that stops making calls through a network will definitely not be receiving calls through that network. The subscribers in clusters 2, 3, 6 and 8-10 are likely to churn.

Fig 8. Total-No-Out for each Cluster using EM (Expectation Maximization)

3.1.8 Total Cost
Total Cost is the total money that each subscriber has been charged for using the services in the specific time period under study. The more money spent, the more likely it is that the subscriber is satisfied with the network services, and vice versa. Price is the major determinant here, because the lower the price, the more calls the subscribers make and the greater the total turnover for the service provider. From Figure 9, clusters 2, 3, 6, 8, 9 and 10 have the lowest total cost. Retention efforts should be focused on the subscribers that form those clusters.

Fig 9. Total-Cost for each Cluster using EM (Expectation Maximization)

3.1.9 Remarks
1. Subscribers in clusters 8, 9 and 10 have not been responding well and are very likely to churn in the near future. Hence they should form the focus of a targeted campaign to win them back.
2. Subscribers in clusters 2, 3 and 6 are slightly different: their call-ratio and no-of-days attributes were still fairly good. With demographic data, further investigation could be carried out; for instance, they could be students in school, so a package could be developed to encourage them.


3.2 The Classification (Predictive) Model Result
The DecisionStump, M5P and REPTree classifier algorithms implemented in WEKA were used. Five performance measures were used to assess these algorithms on the dataset:
i. Correlation Coefficient (CC): the degree of correlation between predicted and actual values. It ranges from 1 for high positive correlation to -1 for high negative correlation, with 0 indicating a purely random relationship.
ii. Mean Absolute Error (MAE): a quantity used to measure how close forecasts or predictions are to the eventual outcomes. As the name suggests, it is the average of the absolute errors. MAE ranges from 0 upwards and is a negatively-oriented score: lower values are better.
iii. Root Mean Squared Error (RMSE): a quadratic scoring rule which measures the average magnitude of the error. The differences between forecast and observed values are each squared and averaged over the sample, and the square root of the average is taken. Since the errors are squared before they are averaged, the RMSE gives a relatively high weight to large errors, so it is most useful when large errors are particularly undesirable. It ranges from 0 upwards and is a negatively-oriented score: lower values are better.
iv. Relative Absolute Error (RAE): the total absolute error normalized by the total absolute error of a simple predictor that always forecasts the mean of the actual values. Lower values are better.
v. Root Relative Squared Error (RRSE): instead of the total absolute error as in RAE, it takes the total squared error and normalizes it by the total squared error of the simple mean predictor; finally, the square root of the result is taken. Lower values are better.

Clusters 8, 9 and 10, which contain the subscribers that are likely to churn, now become the focus of the next stage, where the actual churners are determined. Table 1 presents the classification algorithm results.

Table 1: Classification Algorithm Results
Algorithm       Cluster   CC       MAE      RMSE     RAE       RRSE
DecisionStump   8         0.5292   0.1103   0.1609   88.84%    84.85%
DecisionStump   9         0.5536   0.1896   0.2632   83.10%    83.28%
DecisionStump   10        0.8739   0.1702   0.2351   77.30%    48.61%
M5P             8         0.9463   0.0476   0.0613   38.33%    32.32%
M5P             9         0.9283   0.0811   0.1261   35.57%    39.91%
M5P             10        0.9645   0.0866   0.1288   39.33%    26.64%
REPTree         8         0        0.1241   0.1897   99.96%    100.06%
REPTree         9         0.7278   0.1417   0.2171   62.14%    68.69%
REPTree         10        0.608    0.1478   0.386    67.13%    79.81%

M5P performed better than both DecisionStump and REPTree. Hence the M5P algorithm was used to build the predictive model on each cluster. The most significant features in building the predictive models for clusters 8, 9 and 10 are presented in Table 2.

Table 2: Determinant Features for Clusters 8, 9 and 10
Cluster   Determinant Features
8         ΔFOU_s, ΔSOI_s, MOU_Final, FOU_Initial, FOU_Final and SOI_Initial
9         ΔSOI_s, ΔMOU_s, FOU_Initial, FOU_Final
10        ΔMOU_s, ΔSOI_s, MOU_Final, FOU_Initial and SOI_Initial

3.2.1 M5P Result on Cluster 8
The M5 pruned model tree (using smoothed linear models) generated just one rule, which classified all 44 subscribers in cluster 8 as churners.
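The five performance measures above map directly onto WEKA's Evaluation class; the following minimal ten-fold cross-validation sketch shows how they can be obtained (the ARFF file name and the assumption that the numeric class is the last attribute are ours, not the study's):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.M5P;
    import weka.core.Instances;

    public class ChurnModelEvaluation {
        public static void main(String[] args) throws Exception {
            Instances data = new Instances(new BufferedReader(new FileReader("cluster8.arff")));
            data.setClassIndex(data.numAttributes() - 1);   // churn score as numeric class

            M5P tree = new M5P();                           // M5 pruned model tree
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));

            System.out.println("CC   = " + eval.correlationCoefficient());
            System.out.println("MAE  = " + eval.meanAbsoluteError());
            System.out.println("RMSE = " + eval.rootMeanSquaredError());
            System.out.println("RAE  = " + eval.relativeAbsoluteError() + " %");
            System.out.println("RRSE = " + eval.rootRelativeSquaredError() + " %");
        }
    }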

3.2.2 M5P Result on Cluster 9
The M5 pruned model tree (using smoothed linear models) generated the following decision tree:

ΔSOIs <= -0.527 :
| ΔSOIs <= -0.619 :
| | ΔSOIs <= -0.742 : LM1 (14/23.19%)
| | ΔSOIs > -0.742 : LM2 (55/21.703%)
| ΔSOIs > -0.619 :
| | ΔMOUs <= -0.09 : LM3 (29/32.134%)
| | ΔMOUs > -0.09 : LM4 (10/30.915%)
ΔSOIs > -0.527 :
| FOU_Initial <= 26 :
| | FOU_Final <= 9.5 :
| | | FOU_Initial <= 5 : LM5 (2/52.559%)
| | | FOU_Initial > 5 : LM6 (7/25.64%)
| | FOU_Final > 9.5 : LM7 (29/24.454%)
| FOU_Initial > 26 :
| | FOU_Final <= 24.5 : LM8 (13/2.656%)
| | FOU_Final > 24.5 :
| | | FOU_Initial <= 37.5 : LM9 (14/2.494%)
| | | FOU_Initial > 37.5 :
| | | | FOU_Initial <= 77 : LM10 (18/22.782%)
| | | | FOU_Initial > 77 : LM11 (5/22.962%)

Cluster 9 had a total of 196 instances. The 108 subscribers that fall under rules LM1-LM4 were classified as churners, while the remaining 88 subscribers under rules LM5-LM11 were classified as non-churners.

3.2.3 M5P Result on Cluster 10
The M5 pruned model tree (using smoothed linear models) generated the following decision tree:

ΔSOIs <= -0.48 : LM1 (75/15.868%)
ΔSOIs > -0.48 : LM2 (20/47.175%)

Cluster 10 had a total of 95 instances. The seventy-five (75) subscribers that fall under rule LM1 were classified as churners, while the remaining 20 subscribers under rule LM2 were classified as non-churners.

4. CONCLUSION
The inability to distinguish churners from non-churners has been a problem for telecommunications service providers. There are two alternatives: either send incentives to all customers (both churners and non-churners), which amounts to a waste of money, or focus on an acquisition programme (that is, acquiring new customers), which is more costly than retention effort. Since both alternatives have negative implications for the finances of the company, distinguishing churners from non-churners is the best approach, and the ability to make this distinction is the central idea and achievement of this research.


This work has identified the subscribers that are likely to churn in the near future in one of the Nigerian telecommunications service providers. Specifically, the churn probability of subscribers in clusters 8, 9 and 10 is very high, and a serious retention campaign should therefore commence; otherwise those subscribers will be lost to other telecommunications service providers. The work further identified the specific churners within clusters 8, 9 and 10. To improve the interpretation of the results, demographic data could be added in further research.

REFERENCES
[1] H. Kim and C. Yoon, "Determinants of subscriber churn and customer loyalty in the Korean mobile telephony market," Telecommunications Policy, vol. 28, 2004, pp. 751-765.
[2] R. Groth, Data Mining: Building Competitive Advantage. Santa Clara, CA: Prentice Hall, 1999.
[3] SAS Institute, Best Practice in Churn Prediction. A SAS Institute White Paper, 2000.
[4] P. Kotler and L. Keller, Marketing Management, 12th ed. New Jersey: Pearson Prentice Hall, 2006.
[5] A. E. Eiben, T. J. Euverman, E. Kowalczyk and F. Slisser, "Modeling customer retention with statistical techniques, rough data models, and genetic programming," in Fuzzy Sets, Rough Sets and Decision Making Processes, A. Skowron and S. K. Pal, Eds. Berlin: Springer, 1998.
[6] C. Berne, J. M. Mugica and M. J. Yague, "The effect of variety seeking on customer retention in services," Journal of Retailing and Consumer Services, vol. 8, 2001, pp. 335-345.
[7] J. Burez and D. Van Den Poel, "Separating financial from commercial customer churn: a modeling step towards resolving the conflict between the sales and credit department," Expert Systems with Applications, 2008. Retrieved Nov. 15, 2010, from http://www.feb.ugent.be/nl/Ondz/wp/Papers/wp_11_717.pdf
[8] A. Eshghi, D. Haughton and H. Topi, "Determinants of customer loyalty in the wireless telecommunications industry," Telecommunications Policy, vol. 31, 2007, pp. 93-106.
[9] S. Lee and K. Siau, "A review of data mining techniques," Industrial Management and Data Systems, vol. 101(1), 2001, pp. 41-46.
[10] A. Berson, S. Smith and K. Thearling, "Customer retention," in Building Data Mining Applications for CRM. New York: McGraw-Hill, Chapter 12, 2000.
[11] D. Hand, H. Mannila and P. Smyth, Principles of Data Mining. Cambridge, MA: MIT Press, 2001.
[12] C. Wei and I. Chiu, "Turning telecommunication call details to churn prediction: a data mining approach," Expert Systems with Applications, vol. 23, 2002, pp. 103-112.
[13] T. J. Ali, Predicting Customer Churn in Telecommunication Service Providers, 2009. Retrieved Nov. 20, 2010, from http://LTU-PB-EX-09052se.pdf (application/pdf object).


Computing, Information Systems & Development Informatics Journal

Volume 3. No. 2. May, 2012

Lecture Attendance System Using Radio Frequency Identification and Facial Recognition
Olaniyi, O.M.
Department of Electronic and Electrical Engineering
Bells University of Technology, Ota, Ogun State, Nigeria
engrolaniyi@bellsuniversity.edu.ng

Adewumi, D.O. & Sanda, O.W.
Department of Computer Science and Technology
Bells University of Technology, Ota, Ogun State, Nigeria
sandaolanrewaju@yahoo.com

Shoewu, O.
Department of Electronic and Computer Engineering
Lagos State University, Epe, Nigeria
engrshoewu@yahoo.com

Reference Format:

Olaniyi, O.M, Adewumi D.O, Shoewu O. & Sanda O.W (2012). Lecture Attendance System Using Radio Frequency Identification and Facial Recognition. Computing, Information Systems & Development Informatics Journal. Vol 3, No.2. pp 35-42


Lecture Attendance System Using Radio Frequency Identification and Facial Recognition
Olaniyi, O.M, Adewumi D.O, Shoewu O. & Sanda O.W

ABSTRACT
We propose a combination of wireless and biometric technologies as a solution to the problem of lecture attendance records in an academic environment. The conventional method of taking attendance records on paper, particularly in an environment with a high student-to-lecturer ratio, is not only laborious but eats into precious time that could be used for effective learning. We demonstrate the efficacy of the proposed method against conventional methods and show that it is capable of eliminating this time wastage.
Keywords: RFID, Facial Recognition, Lecture, Attendance, Tags, Short range reader.

1. INTRODUCTION
The monitoring of attendance in a conventional learning environment involves a number of requirements: the availability of both the learner and the lecturer, usually for not less than seventy percent of the entire lecture period, and proper record keeping of the learners by the tutor during the lecture period. In most developing countries, lecture attendance is usually noted using paper sheets, file systems, surprise quizzes, and roll calls of names and/or student identification numbers. These methods make it difficult for an academic department to regularly update and effectively assess the true attendance record of students in a learning environment [16,14].

The current lecture attendance monitoring system in academic environments in developing countries embraces the use of paper-based methods for taking, and usually for computing, students' percentage of attendance [14]. This method of attendance monitoring is time consuming and laborious because valuable lecture time that could otherwise have been used for lectures is dedicated to attendance taking. This inadequacy leads to wrong compilation of the list of students that were in the class for the entire duration of the course.

Biometric systems have been widely used for the automatic recognition of persons based on specific physiological and behavioural features [10]. Many biometric modalities can be applied in a specific system, but the key structure of a biometric system is always the same. In biometric facial recognition, the spatial geometry of the distinguishing features of the face is recorded. Because a person's face can be captured from some distance away, the technology has been used to identify card counters and other undesirables, to deter shoplifting, and to monitor criminals and terrorists in some countries with a history of terrorism. Facial recognition is one of the few biometric methods with the merit of both high accuracy and low intrusiveness: it has the accuracy of a physiological approach without being intrusive. The technology has drawn the attention of researchers in fields from security, psychology and image processing up to computer vision [6][7].

Accordingly, there has been a proliferation of Radio Frequency Identification (RFID) systems in a number of applications. Successes have been recorded in areas as diverse as healthcare monitoring [17], libraries [15], home and business security systems [4] and construction [9], to name a few in the literature. RFID systems facilitate automatic identification and tracking of remote components. Research in this field involves improving tags and readers, adapting tags to multiple substrates and to functioning under extreme conditions of temperature and humidity, and applying the latest technology to achieve objectives such as improving traceability, efficiency and real-time monitoring of system behaviour, especially in critical health care conditions [11][1].

This work seeks to combine the value-added advantages of these two electronic identity systems, RFID and facial recognition, to explore a cutting-edge wireless biometric solution to the students' academic attendance monitoring problem in developing countries.

2.0 REVIEW OF RELATED WORKS
A number of related works exist in the literature on the application of RFID and facial recognition to different areas of the attendance monitoring problem. In [12] the author proposed student tracking using RFID, involving the use of the student identification card to obtain student attendance. The author tried to solve the problem of manual computation of attendance, but the work does not eliminate the risk of student impersonation. Consequently, the authors in [1] proposed an RFID matrix-card-based automatic identity system to address the manual problem of monitoring students in boarding schools. An initial study of three boarding schools in Malaysia showed that the existing process of keeping records of students' movements in and out was not only tedious; misinformation was also common, as students tend to provide inaccurate information. The fusion of passive RFID tags, wireless local area networking and a database management system helps to ease the monitoring of the availability of boarding students, as the system's RFID reader monitors and records student identity through unique, pre-assigned RFID tags.


Also, the authors in [8] reviewed the use of RFID in an integrated-circuit (IC) packaging house to resolve inventory transaction issues. The study suggests that RFID contributes significant improvements to the wafer receiving process and the inventory transaction process, reducing labour cost and man-made errors. In [5] the author proposed the use of fingerprints to solve the attendance monitoring problem; verification was achieved by extracting the fingerprints of students. The proposed system was successful in monitoring attendance, but the proposal of [5] lacks a report generation and audit trail system. A similar attendance monitoring solution was developed in [3] to manage the context of students for classroom lecture attendance using the personal computer of each student.

The authors in [11] proposed the design and prototype implementation of a secure and portable embedded reader system for reading biometric data from an electronic passport (e-passport) using Electronic Product Code (EPC) RFID tags. The passport holder is authenticated online using the GSM network. Secure communication, through Advanced Encryption Standard (AES) encryption, between the server and the proposed e-passport reader helps to provide a comprehensive system to create, manage and monitor identity data online. In [14], the authors proposed a simplified and cost-effective model of an embedded computer-based solution to the manual method of managing student lecture attendance in higher institutions in developing countries. The developed system is capable of speeding up the process of taking students' lecture attendance and allows for error-free and faster verification of compliance with the lecture attendance policy required for writing examinations in a campus environment, but it could not provide an absolute solution to the problem of impersonation by erring students. In [2], Artificial Neural Networks and facial recognition were used to develop a security door system where authorization of the facial appearance of privileged users in the database is the only guarantee of entrance. In the system, the personal computer processes the face recognized by the system's digital camera and compares the data with the privileged users in the database. The system control program either sends a signal to open the electromechanical door upon a facial match or denies entry.

In this paper, we propose a combined wireless biometric solution to the problem of lecture attendance in an academic environment. The current process of taking student attendance, particularly in an environment with a high student-to-lecturer ratio, is not only laborious but eats into precious time that could be used for effective learning. The amalgamation of these technologies for the student attendance monitoring problem, as demonstrated in this study, is capable of eliminating the time wasted during classical/manual collection of attendance, provides a solution to the problem of impersonation to which solutions such as those proposed in [1, 14, 5, 12] are liable, and offers an avenue for proper academic monitoring of students' performance by university administrators.

3.0 MATERIALS AND METHOD
3.1 System Overview
The system was developed for the lecture attendance management scenario of Bells University of Technology, Ota, Nigeria, for each lecture period. It manages student lecture attendance using a Windows application and the developed RFID and face recognition based attendance model. The application contains a module known as the administrator module, whose function is to handle all administrator tasks: adding, editing and deleting classes, subjects and colleges/departments. Only the administrator can view, add and delete data in the attendance system. Figure 1 shows the general block diagram of the system. The developed model consists of an RFID reader incorporated with an RFID reader board, an RS232-to-USB converter cable, a programmed PC and an IP-based camera.

Fig 1: System block diagram (tagged student, read by radio by the RFID reader and by facial recognition by the IP-based camera; the reader is connected to the programmed PC over an RS232-to-USB link and the camera over USB)

3.2 Design Considerations
The proposed attendance management system involves the following considerations.

Hardware Design Considerations
In an RFID system such as that shown in Figure 2, electronic tags communicate with the reader through radio waves. RFID tags can be one of three types: active, semi-active or passive. Because passive tags do not supply their own power, communication with them needs to be short range and usually does not transmit much data, typically just an ID code; the transmission range is from 10 mm to about 5 meters. Four different kinds of tags are in use, categorized by their radio frequency: low frequency (125 to 134 kHz), high frequency (13.56 MHz), UHF (868 to 956 MHz) and microwave (2.45 GHz). Each tag carries a unique set of numbers which makes every card unique; in each case a reader must scan the tag for the data it contains, and the information is then sent to the database.
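The tag-ID-to-student lookup implied above can be sketched as follows (illustrative only: the paper's application was written in C#, and the database URL, table and column names here are hypothetical, assuming an SQLite JDBC driver on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class TagLookup {
        // Returns the matriculation number enrolled for a tag ID, or null if unregistered.
        static String studentForTag(String tagId) throws Exception {
            String url = "jdbc:sqlite:attendance.db";   // hypothetical database
            try (Connection con = DriverManager.getConnection(url);
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT matric_no FROM students WHERE tag_id = ?")) {
                ps.setString(1, tagId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("matric_no") : null;
                }
            }
        }
    }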


Fig 2: Basic RFID System

In what follows we describe the hardware considerations.

The Electronic Tag: The study exploited the popular EM4100 transponder available for the Micro RFID reader (RW) as the low-frequency electronic RFID tag. Each tag's electronic ID is mapped to the student information (name, matriculation number, level and department) held in the system database. For the lecture attendance management scenario of Bells University of Technology, Ota, Nigeria considered in this study, the RFID tags for four students and an untagged card are shown in Figure 3.

Fig 3: Electronic Tags

RFID Reader (RW): For this study, the RW RFID reader was chosen for cost reasons. It was designed to read the EM4100 transponder, used as an electronic access card, at a frequency of 134 kHz. In operation the reader continually scans for an EM4100 transponder at the pre-defined 134 kHz and responds to C# program commands via the UART receive line (Rx), serially connected through the RS232-to-USB converter to the USB port of the PC. The overall circuit of the RFID subsystem is shown in Figure 4:
Fig 4: Overall circuit diagram of the RFID subsystem (LM2931AZ-5 5 V regulator on the power input, URFID reader with RS232 level-shifting circuit to the PC's TXD/RXD lines, LED indicator and BC337-driven buzzer)
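On the PC side, polling the reader's serial line for a tag ID could look like the following (a sketch using the open-source jSerialComm library, assumed here purely for illustration; the paper's application was written in C#, and the port name is machine-specific):

    import com.fazecast.jSerialComm.SerialPort;

    public class TagReaderPoller {
        public static void main(String[] args) {
            SerialPort port = SerialPort.getCommPort("COM3");  // RS232-to-USB adapter port
            port.setBaudRate(9600);
            port.setComPortTimeouts(SerialPort.TIMEOUT_READ_BLOCKING, 2000, 0);
            if (!port.openPort()) return;

            byte[] buf = new byte[64];
            while (true) {
                int n = port.readBytes(buf, buf.length);       // reader emits the tag ID as ASCII
                if (n > 0) {
                    String tagId = new String(buf, 0, n).trim();
                    System.out.println("Tag read: " + tagId);
                    // The ID would then be validated against the student database.
                }
            }
        }
    }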


Power Supply: Most TTL (Transistor-Transistor Logic) digital circuits use 5 V to operate, so a regulated 5 V source is needed to power the RW RFID reader circuit. From a 9 V to 24 V DC unregulated supply, this part was built around an LM2931AZ-5 voltage regulator.

Facial Recognition/Comparison: For cost reasons, the face capture and comparison stage was accomplished with a simple web camera. Once a student badges his/her card in for attendance, the web camera automatically takes a picture of the person holding the tag and compares it with the image enrolled in the system database during the student's initial registration.

Fig 5: The RFID Student Attendance Monitoring Hardware Prototype

Software Design Considerations
In the development cycle of the proposed RFID system, decisions were made on which parts of the system to realize in hardware and which to implement in software. The software consists of modules that can be easily decomposed and tested as individual units; this was done to make sure the software meets the design considerations. The attendance monitoring program was written in the Microsoft Visual C# programming language in the Visual Studio development environment. Figure 6 shows the overall flowchart of the system for both the RFID and facial recognition subsystems.

Fig 6: Overall flowchart of the student RFID and facial recognition based attendance system (read tag at the RFID reader, check for a valid tag, perform facial comparison, check for a valid face, then take attendance; invalid tags or faces are rejected)
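The flowchart's two-level check can be expressed in a few lines (a sketch; isValidTag, faceMatchesEnrollment and recordAttendance are hypothetical helpers standing in for the C# application's routines):

    public class CheckIn {
        // Two-level authentication: RFID tag first, then facial comparison.
        static boolean checkIn(String tagId, byte[] capturedFace) {
            if (!isValidTag(tagId)) {
                System.out.println("Tag not registered to any student; a valid tag is required.");
                return false;
            }
            if (!faceMatchesEnrollment(tagId, capturedFace)) {
                System.out.println("Face does not match the enrolled image; attendance denied.");
                return false;
            }
            recordAttendance(tagId, System.currentTimeMillis());  // check-in time
            return true;
        }

        static boolean isValidTag(String tagId) { /* database lookup */ return true; }
        static boolean faceMatchesEnrollment(String tagId, byte[] face) { /* comparison */ return true; }
        static void recordAttendance(String tagId, long timeMillis) { /* persist record */ }
    }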


4. SYSTEM OPERATION/TESTING AND DISCUSSION
As shown in Figure 6, every student with a pre-programmed EM4100 transponder RFID tag has the privilege of attending lectures through the entrance door; the serial number of the tag is associated with the student's database entry on the programmed PC. Each time a student flips his/her card/RFID tag, the RFID reader responds wirelessly through the pre-defined commands via the UART receive line of the URFID. When an EM4100 transponder RFID tag in the selected 134 kHz range is present, its serial number is read, the LED colour changes from red to green, the buzzer sounds, and the associated data is transmitted on the UART Tx line in serial ASCII format.

This ASCII code is then decoded by the programmed PC through the RS232-to-USB converter shown in Figure 5. Since two-level authentication and verification is required to acknowledge student attendance for each lecture, facial comparison of the student at the entrance with the pre-enrolled facial appearance stored in the database by the intelligent IP camera completes the biometric verification of the student, and the programmed PC then automatically acknowledges the student's attendance for the lecture. This interplay between the wireless radio monitoring of the EM4100 RFID tag by the RFID reader and the facial comparison of the student's real-time facial appearance with the appearance in the database is illustrated in Figure 7.

Fig 7: Illustration of the RFID and Facial Recognition Operational Principle

The buzzer is activated when a valid RFID tag passes through the radio field of the RFID reader. If the tag is valid and the captured face matches the face in the system database, the system registers the student as present in the class. For cost and flexibility reasons, this RFID attendance system uses passive tags, and thus for every class students need to swipe their tags close to the reader (about 15 mm from it). The reader reads the tag and the application records the check-in time; when the student is leaving, the same process is repeated and the application records the check-out time. The facial recognition is accomplished with a web camera. If an invalid EM4100 RFID tag is used, the program gives a notification that the tag has not been registered to any student and that a valid tag is required. The database contains each student's name, matriculation number, address, e-mail, course duration and course information. Figures 8 to 14 show the graphical user interfaces (GUIs) of the system's application control program, developed with the Visual C# object-oriented programming language:

Fig 8: Home Page

Fig 9: Student Information Enrollment Interface

Fig 10: Course Registration Interface


Fig 11: Attendance Monitoring Interface

Fig 12: Attendance Facial Comparison

Fig 13: Attendance Check-in and Check-out

Fig 14: Attendance Report Page

5.0 CONCLUSION
This paper has presented a simplified, low-cost wireless biometric solution to the problem of lecture attendance records in academic environments in developing countries. The prototype implementation of RFID and facial recognition for attendance taking has been achieved, along with the objectives stated in the previous sections. The major strength of the system lies in its portability and high scalability, though with less flexibility in programming compared with the previous designs and implementations in [1, 14, 5, 12]. By careful examination, it can be inferred that the proposed system not only speeds up the process of taking attendance but also solves the problem of impersonation encountered in previous solutions.

6. FUTURE WORK
The developed system is not without limitations. The following recommendations are made for improvement in the immediate future:
(1) Incorporation of iris recognition and IP cameras for more secure facial recognition, which would further increase the efficiency and security of the system against impersonation across a distributed network of real-time lecture room monitoring.
(2) Application of an active reader for more effective RFID performance.
(3) Browser testing and extended wireless testing should be conducted for possible deployment situations.

REFERENCES
[1] Abdul Kadir H., Abdul Wahab M. and Siti Nurul A. (2009), "Boarding Students Monitoring Systems (E-ID) Using Radio Frequency Identification", Journal of Social Sciences, Volume 5(3), pp. 206-211.
[2] Arulogun O.T., Omidiora O., Olaniyi O.M. and Ipadeola A.A. (2008), "Development of Security System Using Facial Recognition", Pacific Journal of Science and Technology, 9(2): 377-386.
[3] Cheng K., Xiang L., Hirota T. and Ushijima K. (2005), "Effective Teaching for Large Classes with Rental PCs by Web System", Proceedings of the Data Engineering Workshop (DEWS 2005), ID-d3, Japan.


[4] Fujikawa M., Doi H. and Tsuijii (2006), "Proposal for a New Home Security System in terms of Friendliness and Prompt Notification", Proceedings of the IEEE International Conference on Computational Intelligence for Homeland Security and Personal Safety, October 16-17, IEEE Xplore Press, Alexandria, VA, pp. 61-66. DOI: 10.1109/CIHSPS.2006.313308.
[5] Kokumo B. (2010), "Lecture Attendance System Using Biometric Fingerprint", B.Tech Dissertation, Department of Computer Science and Technology, Bells University of Technology, Ota, Nigeria.
[6] Kelly M.D. (1970), "Visual Identification of People by Computer", Stanford AI Project, Stanford, CA, Technical Report AI-130.
[7] Lin S. (2000), "An Introduction to Face Recognition Technology", Informing Science Journal: Special Issue on Multimedia Informing Technology, Volume 3, No. 1, pp. 1-7.
[8] Liu C.M. and Chen L.S. (2009), "Application of RFID Technology for Improving Production Efficiency in an Integrated-Circuit Packaging House", International Journal of Production Research, Vol. 47, No. 8, pp. 2203-2216.
[9] Lu M., Chen W., Shen X., Lam H.C. and Liu J. (2007), "Positioning and Tracking Construction Vehicles in Highly Dense Urban Areas and Building Construction Sites", Automation in Construction, 16: 47-656. DOI: 10.1016/j.autcon.2006.11.001.
[10] Maltoni D., Maio D., Jain A.K. and Prabhakar S. (2003), Handbook of Fingerprint Recognition, Springer, New York, pp. 3-20.
[11] Mohammed A.B., Ayman A. and Karrem M. (2009), "Implementation of an Improved Secure System Detection for E-passport by using EPC RFID Tags", World Academy of Science, Engineering and Technology (WASET) Journal, Volume 60, pp. 114-118.
[12] Mahyidin M. (2008), "Student Attendance Using RFID System", B.Eng Thesis, Electrical and Electronics Engineering Department, University of Malaysia Pahang. Retrieved online at http://umpir.ump.edu.my/345/1/3275Firdaus.pdf on 21st September 2011.
[13] Pala Z. and Inanc N. (2007), "Smart Parking Applications Using RFID Technology", Proceedings of the First Annual RFID Eurasia Conference, September 5-6, IEEE Xplore Press, Istanbul, pp. 1-. DOI: 10.1109/RFIDERASIA.2007.4368108.
[14] Shoewu O., Olaniyi O.M. and Lawson A. (2011), "Embedded Computer-Based Lecture Attendance Management System", African Journal of Computing and ICT, Vol. 4, No. 3, pp. 27-36.
[15] Singh J., Brar N. and Fong C. (2006), "The State of RFID Applications in Libraries", Information Technology and Libraries, 25: 24-32. Retrieved online at http://cat.inist.fr/?aModele=afficheN&cpsidt=17860855 on 21st September 2011.
[16] Sanda O.W. (2011), "Development of an Automated Lecture Attendance Management System Using RFID and Facial Recognition", B.Tech Dissertation, Department of Information Technology, Bells University of Technology, Ota, Ogun State, Nigeria.
[17] Wang S.W., Chen W.H., Ong C.H., Liu L. and Chuang Y.W. (2006), "RFID Application in Hospitals: A Case Study on a Demonstration RFID Project in a Taiwan Hospital", Proceedings of the 39th Annual Hawaii International Conference on System Sciences, Jan. 4-7, IEEE Xplore Press, USA, pp. 184-184.


Computing, Information Systems & Development Informatics Journal

Volume 3. No. 2. May, 2012

Employees' Conformity to Information Security Policies: The Case of a Nigerian Business Organization
ADEDARA, Olusola
Department of Computer Science
The Federal Polytechnic, Ado-Ekiti, Nigeria
fafvfk@yahoo.com

KARATU, Musa Tanimu
Computer Department, Faculty of Science
University of Ibadan, Ibadan, Nigeria

OLAGUNJU, Abiodun
Department of Computer Science
University of Ibadan, Ibadan, Nigeria
abbeylag@yahoo.com

Reference Format:

Adedara, O., Karatu, M.T. & Lagunju, A. (2012). Employees' Conformity to Information Security Policies in Nigerian Business Organisations (The Case of Data Engineering Services PLC). Computing, Information Systems & Development Informatics Journal. Vol 3, No. 2. pp 43-50


Employees' Conformity to Information Security Policies: The Case of a Nigerian Business Organization
Adedara, O.; Karatu, M.T. & Lagunju, A.

ABSTRACT
We evaluated employees' conformity to information security policies using four research questions and three research hypotheses. A survey research methodology was used to evaluate the staff of the case-study organization (Skannet) using both questionnaires and interviews. The data gathered were analysed using both descriptive and inferential statistics. The findings of this study reveal that the majority of Skannet's employees have adequate knowledge of Skannet's information security policies; however, the level of employees' compliance with the policies is very low. Organizational measures put in place by Skannet to ensure information security compliance include regular training and re-training of staff on the policies, regular surveys sent to all staff to test the level of awareness, and punitive measures by the Human Resources Department. The study further revealed that Skannet's information security policy positively affects the attainment of organizational goals such as improvement in quality of services and the promotion of information sharing, transparency and accountability among staff in the organization.
Keywords: Policies, security, compliance, training and services.
1. INTRODUCTION
The Internet is a global computer network achieved through the interconnection of smaller computer networks around the world, linked together using a common protocol. People, computers and information are linked together electronically by a common protocol or set of communication rules. It should be evident why telecommunications, broadcasting and the Internet all have to be dealt with, not necessarily by a single policy, but within a single policy framework: they all use the same infrastructure to transmit messages (copper cable, optical fibre, satellites, the radio spectrum) and they can all deliver the same content (voice, data, text, pictures, videos, etc.) to the same users (Kate Wild, 2004). Information security policies generally cover three areas:
1. Telecommunication
2. Broadcasting
3. Internet (networking technologies)

A corporate policy is usually a documented set of broad guidelines, formulated by the firm's board of directors after an analysis of all internal and external factors that affect the firm's objectives, operations and plans (Folayan, 2010; Boubakar, 2008). Information security, on the other hand, is the process of protecting information: it protects its availability, privacy and integrity. Access to information stored on computer databases has increased greatly, and more companies store business and individual information on computers than ever before. Much of the information stored is highly confidential and not for public viewing (Toni & Tsubuira, 2002).

Effective information security systems incorporate a range of policies, security products, technologies and procedures. Software applications which provide firewall protection and virus scanning are not enough on their own to protect information; a set of procedures and systems needs to be applied to effectively deter unauthorized access to information (Steve, 2008; ISP US, 2004). Information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction. The terms information security, computer security and information assurance are frequently, and incorrectly, used interchangeably. These fields are often interrelated and share the common goals of protecting the confidentiality, integrity and availability of information; however, there are some subtle differences between them (Wikipedia, 2011).

1.1 Knowledge Gaps
Information and communication technology is increasingly penetrating all social and economic activities. It is a high-stakes game that involves all sectors of society and many stakeholders. Information security policies are often made out of concern for issues bordering on system vulnerabilities occasioned by unauthorized usage. An information security policy is a set of principles or a broad course of action that guides the behaviour of governments, organizations, corporations and individuals (Steve, 2008). Information security policy covers information and communication technology networks, services, markets and the relationships between the different actors involved, from the operators of submarine cable systems to the users of telecentres (Microsoft, 2010; OECD, 2012). It may be national or international in scope, and each level may have its own decision-making bodies, sometimes taking different or even contradictory decisions about how information security will develop within the same territory (David, 2009).

In order to give staff members a feeling of autonomy and a sense of belonging, they need to know the rules and allowable usage limits of organizational information systems. These go beyond what time to show up, vacation time and health benefits: a written company policy that covers information security must be produced and adhered to.


This paper identifies and evaluates the manner in which information security policies are defined, and the level of usage compliance, in a corporate setting, using General Data Engineering Services (Skannet) Nigeria as a case study. The paper attempts to ascertain employees' awareness of Skannet's information security policies; examine the extent to which employees comply with the policies; identify organizational measures instituted by Skannet to ensure policy compliance by employees; ascertain the impacts of the information security policies on Skannet's organizational goals; and make recommendations for improving Skannet's information security policies.

2. EXISTING INFORMATION SECURITY POLICY AT SKANNET
General Data Engineering Services, being an Internet service providing company, has certain rules and procedures governing what users can do on its network (Folayan, 2010). This information security policy is highlighted below.

Access Management and Control
To prevent or minimize unauthorized access to computer systems or damage, theft or loss of equipment, the following must be adhered to:

1. Physical Control
a. Access to the server room and other major ICT facilities should be adequately secured at the doors and windows, and only authorized persons may be allowed into the server room.
b. The Contact Centre is required to maintain a register (access log) where authorized staff log any activities carried out in the server room.

2. Logical Control
a. Each user (staff and client) on the network must have a username and password to access networked facilities.
b. Both staff and client usernames must not exceed eight characters. The password can be as long as desired by the user.
c. Each staff member, upon appointment, must fill a staff account creation form. The form shall be administered by the Human Resources Department.
d. The newly appointed staff member will submit the completed form, which must be signed by his supervisor, to the officer on duty at the Contact Centre.
e. The duty officer is responsible for creating new staff accounts and activating them on the network.
f. The duty officer should ensure that the entries in the form are correctly filled and signed by the staff member before creating such an account.
g. Each prospect/client will fill an account creation form at sign-up.
h. The client sign-up form will be administered by the Marketing Department.
i. The Contact Centre shall activate such an account upon verification of the client's identity.

Internet Usage
It is unacceptable to use General Data Engineering Services networks to:
a. View, make, publish or post images, text or materials that are, or might be considered, illegal, paedophilic or defamatory.
b. View, make, publish or post images, text or materials that are, or might be considered, indecent, obscene, pornographic or of a terrorist nature.
c. View, make, publish or post images, text or materials that are, or might be considered, discriminatory, offensive, abusive, racist or sexist, where the context is a personal attack or might be considered harassment.
d. Send spam or unwanted and unsolicited emails.

Network Control
a. The setup of PCs, laptops, printers, etc. for network access should be done by the Engineering Department (GDES Workshop).
b. Point-to-Point Protocol over Ethernet (PPPoE) should be set up for the user's authentication on the network during installation.
c. The company antivirus should be installed on each staff computer. The licence shall be administered by the Engineering Department.

Other policy areas include troubleshooting, repairs, maintenance and replacements; disaster recovery and contingencies; and electronic mail (email).

3. RESEARCH METHODOLOGY
This study adopted a descriptive survey research design. A sample size and sampling technique appropriate to the population of study were used to manage the research work. Questionnaires and personal interviews were the instruments used for data collection. For data analysis, descriptive statistics (simple frequency counts and percentages) were used to analyse the demography and the research questions, while inferential statistics (t-test, ANOVA and chi-square) were used to test the hypotheses.

3.1 Research Questions
Based on the foregoing, the research questions that this paper seeks to address are:
a. What is Skannet employees' awareness level of the information security policies?
b. To what extent do Skannet's employees comply with the information security policies?
c. What organizational measures has Skannet put in place to ensure employees' compliance with the information security policies?
d. What are the impacts of the information security policies on Skannet's organizational goals?


3.2 Research Hypotheses
Hypothesis 1: There will be no significant difference between Skannet employees' compliance with information security policies and the attainment of organizational goals.
Hypothesis 2: There will be no significant relationship between Skannet employees' department and employees' policy compliance.
Hypothesis 3: There will be no significant difference between Skannet employees' information security policy awareness and compliance.

3.3 Research Design
The research design adopted for this study was the descriptive survey method. It was adopted because it enabled the researcher to collect data on the concerns of this study from the population at its locations and to describe in a systematic manner the state or condition of the objects of research.

3.3.1 Population of the Study
The population of this study is made up of the staff and management of General Data Engineering Services in the South-Western and Eastern zones of Nigeria, covering Oyo, Ogun, Kwara, Ekiti and Enugu States. A total of ninety-six Skannet staff work in these states.

3.3.2 Sample Size and Sampling Technique
The entire population, a census, was used in this study because it was manageable by the researchers.

3.3.3 Research Instruments
A self-designed questionnaire was used to collect data for this study. The instrument was divided into two sections, A and B. Section A solicited the demography of the respondents, such as age, years of experience, sex, marital status, educational background and religion. Section B asked questions meant to solicit information to answer all the research questions and hypotheses. The instrument contained both open-ended and close-ended questions. In addition, an interview guide was used to interview six Heads of Departments at Skannet.

3.3.4 Validation of Instrument
In order to ensure both face and content validity of the instrument, the questionnaire was submitted to the researchers' supervisor and two other scholars for constructive criticism. Only after their corrections were effected did the researchers go to the field to administer the questionnaire.

3.3.5 Data Collection Procedure
The researcher distributed copies of the questionnaire among the staff of Skannet in the South-Western Zone division of the organization. Along with two research assistants, the researcher made sure that the respondents were given adequate time to fill the questionnaire, and to ensure a high return rate, the instrument was handed over with the assistance of authority figures in the offices.

3.3.6 Data Analysis Method
Descriptive statistical methods of simple frequency counts and percentages were used to analyse the demography and research questions, while inferential statistics (t-test, ANOVA and chi-square) were used to test the hypotheses.

4. DATA PRESENTATION AND ANALYSIS
This section is divided into two parts: Section A covers the demography, while Section B presents data to test the hypotheses raised in the study. The data are first presented in tables before they are discussed, and inferences are made from the data.

SECTION A
Table 1: Distribution of Respondents by Department
Department              Frequency   Percentage (%)
I.T                     13          14.8
Customer Care           7           8.0
Sales                   8           9.1
Finance                 3           3.4
Transmission            6           6.8
Transmission Planning   8           9.1
Switch                  3           3.4
Site Acquisition        8           9.1
Site Maintenance        6           6.8
Radio Frequency         7           8.0
HR                      13          14.8
Fleet Management        6           6.8
Total                   88          100

Table 1 shows the various departments at General Data Engineering Services; the organization has twelve (12) departments. It is evident from the data that 13 (14.8%) respondents are from the Information Technology department, 7 (8%) from Customer Care, 8 (9.1%) from Sales, 3 (3.4%) from Finance, 6 (6.8%) from Transmission, 8 (9.1%) from Transmission Planning, 3 (3.4%) from Switch, 8 (9.1%) from Site Acquisition, 6 (6.8%) from Site Maintenance, 7 (8%) from Radio Frequency, 13 (14.8%) from HR and 6 (6.8%) from Fleet Management. The data imply that the respondents cut across all departments in the organization, so their contributions give this study a broad perspective.

SECTION B
Research Question 1: What are the employees' awareness levels of Skannet's information security policies?
Organizational employees need to know about policies before they can obey them. This research question ascertains employees' awareness of the information security policies.


Table 2: Skannet Employees' Awareness of Information Security Policies
Question: Is there any policy on information security at Skannet?
Options   Frequency   (%)
Yes       71          80.7
No        17          19.3
Total     88          100

Table 2 shows the data on Skannet employees' information security awareness. The majority of the respondents, 71 (80.7%), affirm that there are policies on information security at Skannet, while 17 (19.3%) respondents decline. These data indicate that many of the employees are aware of, or rather know about, the policies guiding the use of information security at Skannet. It could therefore be inferred that Skannet's management does acquaint employees with the policies. However, these data do not indicate the degree of the employees' awareness; therefore the next question is asked and answered.

Table 3: Degree of Information Security Awareness by Skannet Employees
Question: How much of the information security policy do you know?
Options         Frequency   (%)
Much            54          61.4
Very Much       21          23.9
Not Much        3           3.4
Not Very Much   10          11.4
Total           88          100

Table 3 reveals the degree of Skannet employees' information security awareness: 54 (61.4%) respondents know much of the policy, 21 (23.9%) know very much of it, 3 (3.4%) know not much of it, and 10 (11.4%) know not very much of it. Generally speaking, employees are expected to know the details of the information security policies because of their importance to the daily operation and organizational conduct of the workers, and as could logically be expected, the data reveal that the vast majority of the workers are familiar with these details. Interviews conducted with six departmental heads reveal that an information security policy orientation, which acquaints employees with specific details of how the policies are to be applied, is a usual orientation course for all staff deployed to the departments. According to some of the interviewees, because Skannet's operations are information-security based, it is of utmost necessity for the staff to be oriented on how to and how not to use information security.

Research Question 2: To what extent do Skannet employees comply with the information security policies?

Table 4: Employees' Compliance with Information Security Policies
Question: Do you comply with the policies on information security?
Options   Frequency   (%)
Yes       61          69.3
No        27          30.7
Total     88          100

Table 4 suggests that the majority of Skannet's staff comply with the information security policies: 61 (69.3%) respondents affirm that they comply, while 27 (30.7%) claim that they do not. On this evidence the staff are not only aware of the policies but also comply with them. Meanwhile, five out of seven heads of units interviewed submit that the level of workers' compliance with the information security policies is low. For example, some of them affirm that staff secretly, and sometimes openly, flout the policies when they know that they are not being monitored or supervised. One of them points out that staff were not mindful of the policies at all until a reorientation programme was organized in 2010.

Table 5: Extent to which Skannet's Employees Comply with Information Security Policies
Question: To what extent do you comply with the information security policies?
Options         Frequency   (%)
Great Extent    10          11.4
Some Extent     21          23.9
Little Extent   51          57.9
No Extent       6           6.8
Total           88          100

Table 5 shows that 31 (35.3%) respondents comply with the information security policies to a large extent, while 57 (64.7%) comply to little or no extent. This finding indicates that most staff at Skannet comply with the information security policy only to a small extent. Therefore, there appears to be a gap between knowledge of the policies and compliance with them; in essence, there is an apparent disregard for the policy. These findings substantiate the submissions of the unit heads presented earlier, which indicated that Skannet staff give little regard to information security policy implementation.


Research Question 3: What organizational measures has Skannet put in place to ensure policy compliance by employees?
It is an organizational practice to put in place measures that ensure compliance with corporate policy. This question therefore identifies the corporate measures put in place by Skannet to ensure information security policy compliance.

Table 6: Organizational Measures Put in Place by Skannet to Ensure Information Security Policy Compliance
Measure                                                             Frequency   (%)
Regular communication from management to staff on the policies     82          93.2
Regular training/re-training of staff on the policies              61          69.3
Regular surveys sent to all staff to test the level of awareness   68          77.3
Compulsory tests on information security taken by staff            71          80.7
Punitive measures by the Human Resources Department                68          77.3

Table 6 shows the data on organizational measures put in place to ensure that staff comply with the information security policies. It is evident from the table that 82 (93.2%) respondents acknowledge regular communication from management to staff on the policies as a measure to ensure that the policies are adhered to; 61 (69.3%) respondents affirm that regular training and re-training is another measure; 68 (77.3%) affirm that a regular survey is usually sent out to test the level of staff awareness of the policies; 71 (80.7%) affirm that compulsory tests on the policies are taken by staff; and the last item on the table reveals that punitive measures are used by the Human Resources Department to ensure compliance.

Research Question 4: What are the impacts of the information security policy on the attainment of Skannet's organizational goals?

Table 7: Impact of the Information Security Policy on the Attainment of Skannet's Organizational Goals
Question: Do you think the information security policy of Skannet positively affects the attainment of organizational goals?
Options   Frequency   (%)
Yes       69          78.4
No        19          21.6
Total     88          100

In Table 7, the majority of the respondents, 69 (78.4%), affirm that the information security policy positively affects the attainment of Skannet's organizational goals. The Human Relations head of Skannet, when interviewed, affirmed this in consonance with the findings from the staff respondents. According to him, policies are put in place because both the organization and its workforce will be beneficiaries; therefore, it is self-evident that the policies are there to take care of work effectiveness in the organization.

Table 8: Impacts of the Information Security Policy on Skannet's Organizational Goals
Impact                                                                        Frequency   (%)
It improves quality of services and products in the organization             59          67.0
It promotes information sharing, transparency and accountability
among staff in the organization                                               56          63.6
It provides information and communication facilities and services
at reasonable costs                                                           62          70.5
It provides individuals and organizations with adequate information
security knowledge                                                            71          80.7

In Table 8, 59 (67%) respondents affirm that the information security policy of Skannet improves the quality of services and products in the organization; 56 (63.6%) affirm that it promotes information sharing, transparency and accountability among staff; 62 (70.5%) affirm that it provides information and communication facilities and services at reasonable costs; and 71 (80.7%) affirm that it provides individuals and organizations with adequate information security knowledge. These findings reveal that the information security policies at Skannet have impacts on the attainment of organizational goals, and that there are organizational measures in place at Skannet to ensure their implementation (Ibrahim, 2010; Davis, 2008). The Customer Care unit head, Sales unit head and Human Relations head corroborate the findings of the questionnaire: according to the trio, Skannet's attention is focused on awareness of the information security policy, the organizational belief being that when the rules go round and staff are frequently reminded of them, they will comply. However, as has been established earlier in this study, although there is awareness of the policy, adherence by staff is low; according to one of the interviewees, the reason is that Skannet as an organization cares little about enforcing adherence because, in his words, he has never seen anybody punished for flouting the policy.



Hypothesis 1: There will be no significant difference between Skannet's employees' compliance with Information Security and attainment of organizational goals.

An ANOVA test is used to test the above hypothesis. The data are represented below:

Table 9: ANOVA Test on Skannet's Employees' Information Security Policy Compliance and Attainment of Organizational Goals

F        t                Df   Mean Difference   Std. Error Difference   Sig (2-tailed)   Decision
627.291  10.067 / -6.648  86   -.6296 / -.6296   0.6254 / 0.09471        0.000            *Sig

The above table reveals that the test result, .000, is less than the 0.05 level of significance (p = .000 < 0.05 alpha level). This result indicates that there is a significant difference between the variables tested. Therefore, it is established that there is a significant difference between Skannet's employees' compliance with Information Security and the attainment of organizational goals.

Hypothesis 2: There will be no significant relationship between Skannet's employees' department and employees' policy compliance.

A chi-square test is used to test the above hypothesis, and the result is represented in Table 10, a cross-tabulation of employees' department against compliance with Information Security. The chi-square test reveals a result of .000, which is less than the 0.05 level of significance (p = .000 < 0.05 alpha level). This result indicates a significant difference between the variables tested; the hypothesis is therefore rejected. In essence, there is a significant relationship between Skannet's employees' departments and employees' policy compliance, an indication that an employee's department determines his or her compliance with Information Security policies. This finding is understandable given that some departments are more exposed to Information Security in the organization than others.
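For readers who want to see the arithmetic behind the test, the sketch below computes a chi-square statistic from a department-by-compliance cross-tabulation. Since the paper's cross-tabulation is not reproduced here, the counts, the 2x2 layout and the category labels are hypothetical placeholders, not the study's data.

```java
// Minimal chi-square illustration on a hypothetical department-by-compliance
// cross-tabulation; the counts below are invented placeholders, NOT study data.
public class ChiSquareSketch {
    public static void main(String[] args) {
        long[][] observed = {
            {30, 10},   // hypothetical: department 1 -> complies / does not comply
            {20, 28}    // hypothetical: department 2
        };
        long total = 0;
        long[] rowSums = new long[observed.length];
        long[] colSums = new long[observed[0].length];
        for (int r = 0; r < observed.length; r++)
            for (int c = 0; c < observed[0].length; c++) {
                rowSums[r] += observed[r][c];
                colSums[c] += observed[r][c];
                total += observed[r][c];
            }
        double chiSq = 0.0;
        for (int r = 0; r < observed.length; r++)
            for (int c = 0; c < observed[0].length; c++) {
                // Expected count under independence of department and compliance.
                double expected = (double) rowSums[r] * colSums[c] / total;
                chiSq += Math.pow(observed[r][c] - expected, 2) / expected;
            }
        System.out.printf("Chi-square statistic: %.3f%n", chiSq);
    }
}
```

The resulting statistic is then compared against the chi-square critical value for the table's degrees of freedom (or its p-value against the 0.05 alpha level), mirroring the decision rule used above.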

Hypothesis 3: There will be no significant difference between Skannet's employees' Information Security policy awareness and compliance.

A t-test is used to test the above hypothesis and the data are presented below.

Table 11: T-test on Skannet's Employees' Information Security Policy Awareness and Compliance

F        t                 Df   Mean Difference   Sig (2-tailed)   Decision
827.291  -10.067 / -6.648  86   -.6296 / -.6296   .000             *Sig

The above table reveals that the t-test result is .000, which is less than the 0.05 level of significance (p = .000 < 0.05 alpha level). This result indicates a significant difference between the variables tested; the hypothesis is rejected. It is therefore evident in this study that there is a significant difference between Skannet's employees' Information Security policy awareness and their compliance, an indication that employees' awareness of Information Security policies does not translate into compliance with those policies.

5. CONCLUDING REMARKS

A policy is a deliberate plan of action to guide decisions and achieve organizational goals. Though policy has lofty roles in any organization, it is doubtful whether employees adhere strictly to it, especially Information Security policies. In Skannet, observation reveals a likelihood of employees flouting Information Security policies. Our study appraised the impact of Information Security policies on the attainment of organizational goals.

Findings from the data gathered revealed that Skannet's Information Security policy positively affects the attainment of organizational goals; the impacts of the policy include improvement in the quality of services and products in the organization, promotion of information sharing, transparency and accountability among staff, provision of information and communication facilities and services at reasonable cost, and provision of individuals and the organization with adequate Information Security knowledge. Employees' compliance with the Information Security policy in the case study significantly affects the attainment of Skannet's organizational goals. Also, it is revealed that the department of an employee determines his or her compliance with Information Security policies, and lastly, it is discovered that employees' awareness of Information Security policies does not translate into compliance with the same policies.



Attainment of organizational goals is always a paramount concern of corporate managers. To ensure this attainment, organizations institute policies to guide the conduct of the workforce. This study finds that Skannet is no exception to this practice. However, though the study reveals that Information Security policies do influence the attainment of Skannet's organizational goals, it appears that little attention is paid to their implementation or to adherence by both management and the workforce.

REFERENCES

Boubakar, B. (2008). ICT Policy Development Workshop, Kigali, Kenya. www2.aau.org/ledev/new_ledev2_report.pdf
David, C. (2009). Out of the Shadows: Preventive Detention, Suspected Terrorists, and War. California Law Review. scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=1370...
David, S. (2008). Association for Progressive Communications ICT Policy Handbook. www.apc.org/en/system/files/APCHandbookWeb_EN.pdf
Folayan, S. (2010). Security Policy Handbook, General Data Engineering Services PLC (GDES). www.skannet.com
Ibrahim, H. (2010). Kenya Teachers Service Commission ICT Policy. www.tsc.go.ke/downloads/Policies/ict.pdf
ISP US (2004). Policy on Information Security Program, US Department of Health and Human Services, HHS IRM Policy 2004-002.001, published 15 December 2004. www.hhs.gov
ISP US (2004). Policy on Rules of Behavior, US Department of Health and Human Services. www.hhs.gov
Kate, W. (2012). Handout for Multimedia Training on ICT Policy. www.apc.org/english/capacity/.../mmtk_ictpol_intro_handout.doc
Microsoft ISP (2010). Information Security Policy for Microsoft Trustworthy Computing, USA. http://www.microsoft.com/twc
OECD (2002). OECD Guidelines for the Security of Information Systems and Networks: Towards a Culture of Security. Adopted as a Recommendation of the OECD Council at its 1037th Session, 25 July 2002. http://www.oecd.org
Steve, T. (2008). Information Security Handbook, University of Auckland. www.cs.auckland.ac.nz/~cthombor/
Toni, B. & Tusubira, F.F. (2002). How to Roll Out an ICT Policy in Your Organization. www.techrepublic.com/...roll-out...policy-in-yourorganization/1051...
Wikipedia (2012). ICT Policy. http://en.wikipedia.org/wiki/access-control



Computing, Information Systems & Development Informatics Journal

Volume 3. No. 2. May, 2012

Policing the Cyberspace: Is the Peel Theory of Community Policing Applicable?

WADA, Friday Josiah Nelson Mandela School of Public Policy Southern University Baton Rouge, LA, USA friwada@creativenetworks.org

Reference Format:

Wada, F.J. (2012). Policing the Cyberspace: Is the Peel Theory of Community Policing Applicable? Computing, Information Systems & Development Informatics Journal, Vol. 3, No. 2, pp. 51-56.



Policing the Cyberspace: Is the Peel Theory of Community Policing Applicable?

WADA, F.

ABSTRACT
The current model for policing and law enforcement, as proposed by Peel in 1829, is based on four characteristics that shaped how conventional offline crimes are committed. The theory is founded on the facts that criminals and victims are proximate; that there are limitations on the scale and extent of crime that can be committed at a time; that physical constraints, such as planning the crime and visiting the crime scene beforehand, pose a challenge to criminals; and that the ability of law enforcement to profile criminals or study crime patterns can aid detection and apprehension. Although cybercrime shares a few of the attributes of conventional crime, it deviates completely in its operation. For instance, cybercrime is automated, and thus has the intrinsic potential to attack multiple victims at a time in different locations; spatial confinement is therefore negated as a means of detection and apprehension. Another interesting but disturbing phenomenon on the webscape is that cybercriminals can subtly turn their victims into criminals by hijacking their systems (using anonymous proxies). Such systems are used to propagate the crime in order to reach more victims and escape detection. Crimes of this nature are committed across international boundaries, hence the sovereignty of states is violated, making prosecution extremely difficult. Building on previous work by Longe et al. (2010), this paper takes a critical look at the Peel theory of policing in the context of cybercrime. We identify the Achilles heel in the model and make recommendations that will assist in scaling up the theory to respond appropriately to the challenges of fighting crime in the information age.

Keywords: Cybercrime, Law Enforcement, Policing, Proxies, Peel Model.

1. INTRODUCTION

The definition of crime from different schools of thought varies as much as there are differing perceptions of the issue in different societies. For our discussion, we view crime as an act committed or omitted in violation of ethics, norms and/or laws forbidding or commanding it, and for which punishment is imposed upon conviction. Such acts threaten the social, economic, political and other structures of a society. Crime could be against persons, organizations, institutions or states, and may even be global; examples are theft, rape, assault, murder, fraud and arson. Crime has plagued societies from time immemorial, and new forms of crime evolve with societal advancement. Crimes such as terrorism, espionage and spying can implicate a society's relations with other societies and create international disorder and tension. Law enforcement remains a potent means of maintaining order and dealing with the crime problem in conventional society.

Cybercrimes are crimes committed in cyberspace using the computing and networking technology provided by information and communication infrastructures. In a century where everything runs on the Internet, cybercrime is a new wave of criminal activity that, if not controlled, threatens the very usefulness and survival of cyberspace as a tool for the socio-economic development of nations (Chawki, 2009; Longe et al., 2009; Longe et al., 2010).

The remaining part of the paper is organized as follows: in the next section we discuss conventional crime control. This is followed by a section on the challenges of fighting cybercrime with the Peel model of community policing. We then present the Real-Time Cybercrime Response Model. The paper ends with recommendations for research and practice, and a conclusion.

2. CRIME CONTROL

Crime control refers to a theory of criminal justice that places emphasis on reducing crime in society through increased police and prosecutorial powers. Before 1829, crime control was based on social structures that: (a) used general societal condemnation of violations and violators; (b) exacted punishment for an affront and appeased the victim; (c) deterred future violations through sanctions and new pronouncements appropriate to the instance, or to new instances or genres, of crime; and (d) reconciled violators and victim(s). The disorganization of primary societies, urbanization and the increase in the scale of crime rendered this method inadequate for dealing with crime on a large scale. In 1829, Robert Peel came up with the current conventional model of law enforcement and policing (Longe & Osofisan, 2011).

Fig. 1: The Peel Model of Community Policing (criminals, police, victims)



The bane of the Peel model of law enforcement is that it makes citizens assume minimal responsibility for crime and internal order. Citizens in the twenty-first century therefore see these as the sole responsibility of law enforcement agents and quasi-military police forces who maintain internal order by reacting to completed crimes (Brenner, 2006).

3. CHARACTERISTICS OF REAL-WORLD CRIME

Four characteristics of real-world crime shaped the way the Peel model approaches the issue of crime and criminality. These are: (a) proximity between criminals and victims; (b) the scale of the crime; (c) physical constraints that can discourage the criminal(s); and (d) patterns of crime with which investigators are familiar.

4. THE CYBERCRIME CHALLENGE

Cybercrime poses many challenges to the Peel model. These challenges lie not so much in adopting or creating new laws that criminalize certain cyber activities as in law enforcement's ability to react to cybercrime. This is because cybercrime does not share some of the characteristics of conventional crime that shaped the current Peel model of law enforcement; cyberspace invalidates the very tenets on which the Peel model is built. For instance, in cyberspace the following statements are valid:

(a) No proximity is required between victims and violators. These crimes are committed, in various forms and guises, across continents. Anonymity is a factor of cyberspace that criminals use to their advantage.
(b) Cybercrimes are faceless crimes, unless the criminal chooses to meet the victims, as is the case with fraudulent cyber transactions, pedophiles and online pornography.
(c) One-to-one victimization therefore becomes invalid, as the crime process is automated.
(d) Criminals can move from one location to another to beat the best Internet Protocol address location tools or to avoid phone call traces. They can engage a proxy server to mask or masquerade their actual locations.
(e) Criminals can commit crimes against an individual or individuals in multiple places at the same time; there is therefore multiple victimization, from multiple locations or from a single location.
(f) We thus have a one-to-many scenario, which clearly invalidates the conventional crime tenets.
(g) Criminals can use the systems of other unsuspecting people or organizations as zombies or detours without their knowledge, thus incriminating other victims in the course of committing these crimes. This means victims can be turned into criminals without their knowledge and further used to commit multiple crimes in multiple locations, creating a chain effect and more victims through a single event.

Fig. 2: Factors that shape the Peel Model of Law Enforcement (proximity, crime scale, physical activities, crime patterns)

These facts can be explained by looking at how conventional crimes occur. Violators and their victims are usually physically proximate: neither theft nor rape can occur unless the victim and the criminal come into contact. The extent of the crime is limited by the number of criminals and victims. Reality and distance also impose constraints on physical human activities, such as breaking into a safe room in a bank robbery, increasing the exertion and resources needed to commit the crime, thereby aiding the apprehension of criminals and contributing to the evidence needed for prosecution (Brenner & Clarke, 2005). For instance, criminals can unknowingly drop business cards or purchase receipts at the crime site; they can even leave DNA evidence, which can aid law enforcement in tracking them down. Demography and criminal profiling can also contribute to tracking and apprehending criminals: certain forms of crime are common to resource-poor environments or individuals, while others can only be committed by economically vibrant or advantaged individuals or organizations. Spatial limitation is thus an aid in controlling real-world crime, and the one-to-one relationship between victims and violators yields the assumption that crime is committed on a limited scale.



Fig. 3: Multiple Effect of Reach of Cybercrime (cybercriminals victimize A, B, C and D directly, and reach further victims A1, A2 and C1, C2, C3 through victims A and C)

(h) In cybercrime, the theory of pseudo-resource ownership is established, since these criminals possess the potential to defraud individuals and organizations of different kinds of resources without their knowledge, while such individuals or organizations still hold physical evidence that the resources are in their possession. This is the case when credit card information is hacked and funds are transferred out of such accounts in sequence over a period of time without the account owner suspecting any foul play, since the cards and the security or PIN numbers are still in their possession.

From the foregoing, it is obvious that the current model of law enforcement cannot effectively combat cybercrime, as cybercrime deviates radically in nature from the characteristics of conventional crime. Anonymity, jurisdiction of law, the pseudo-resource ownership theory, global reach, and the multiplication of crime and its victims across multiple locations pose a great challenge to the application of current policing strategies to cybercrime.

Fig. 4: Localized Crime Response (victims A, B and C report events, people and happenings; law enforcement reactions produce a localized response)

The response of law enforcement to conventional crime is subtly patterned after how the military responds to external aggression. Law enforcement's effectiveness is aided by the stochastic but localized occurrence of these crimes. Unfortunately, technology, especially the Internet, now necessitates a paradigm shift from policing concepts and models focused on localized crime to those that can deal with crimes that radically deviate from the localized trend. Technology has produced a new social structure that flattens the conventional crime structure, thereby eroding the boundaries between internal and external threats.



5. THE REAL-TIME RESPONSE MODEL

Our opinion is that, amongst other things, in the effort to fight cybercrime, users must be empowered as the last line of defense, in contrast to the Peel model, where most of the responsibility rests on the police and other law enforcement agents. We propose a user-centric socio-technological model that employs technology, social theory, policy and education (awareness) as tools to mitigate the cybercrime problem. This model offers an interactive, real-time challenge-demand-response platform that aids identification, reporting, apprehension and prosecution. Our model takes into consideration the fact that cybercrime does not share the common characteristics on which the Peel theory rests, and that criminals plug into the webscape through remote proxies. The model provides valves that assist users in identifying malicious intentions through a multi-level access control mechanism. We employ the cyber infrastructure as a tool that can provide synergistic interaction between users and law enforcement; this infrastructure consists of the user interface, web browsers, the ISP and so on. We propose that ISPs should send reports about suspicious traffic to law enforcement, and that e-mail interfaces should have facilities that can automatically connect users to law enforcement to report suspicious cyber invasions and criminal activities.

In fact, users should be empowered through e-mail interfaces to report phishing, scamming and other cyber violations in real time. Law enforcement should be connected through a distributed network such that cybercriminals can be tracked anywhere in the world, thus providing a global response. The reaction to information fed into the system will create an effective mechanism for the tracking, identification and apprehension of cybercriminals.
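As an illustration only, the challenge-demand-response channel described above could take a shape like the following sketch, in which user-facing software forwards reports to a distributed set of law-enforcement nodes. The type names, the record fields and the broadcast strategy are assumptions introduced for exposition; the paper prescribes the idea, not an API.

```java
import java.time.Instant;
import java.util.List;

// Hypothetical sketch of the real-time reporting channel the model calls for;
// all names and the dispatch strategy are assumptions, not a prescribed design.
record SuspiciousActivityReport(String reporterId, String sourceAddress,
                                String description, Instant observedAt) {}

interface LawEnforcementNode {
    String location();
    void receive(SuspiciousActivityReport report);
}

class GlobalResponseNetwork {
    private final List<LawEnforcementNode> nodes;

    GlobalResponseNetwork(List<LawEnforcementNode> nodes) { this.nodes = nodes; }

    // Broadcast to every jurisdiction so that tracking is not bound to the
    // victim's physical location -- the "global response" of the model.
    void report(SuspiciousActivityReport report) {
        nodes.forEach(node -> node.receive(report));
    }
}
```

Broadcasting to every node, rather than only to the victim's local jurisdiction, is what distinguishes this sketch from the localized response of Fig. 4.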

6. RECOMMENDATIONS FOR POLICY AND PRACTICE

Cybercrime has added to the dilemma of the Peel theory of crime control. Though it is another form of crime, it neither maps one-to-one onto conventional crimes nor shares the common characteristics on which the Peel theory rests. The ubiquitous nature of the web, coupled with the cloud of users, presents a new form of challenge to system security and demands a paradigm shift in the perception, design and implementation of security measures (Straub & Welke, 1998; Schlienger & Teufel, 2002).

Fig. 5: Real-Time Cybercrime Response Model (Source: Longe et al., 2010). Users (A, B, C) interact in real time with police/law enforcement at locations (A) to (D) through the cyber infrastructure, supporting identification, profiling, apprehension, prosecution and a global response.



Unfortunately, Internet security and web design have continued to toe the line of previous approaches that concentrated on technicalities and usability without scaling security to today's challenges, thereby providing fertile ground for cybercrime to breed. To secure the Internet from cybercrime and other abuses, users must not only be made aware of the existence of security flaws and vulnerabilities on the webscape; they must be empowered in a holistic manner, through design, policies, practices and technology, to mitigate these risks and to understand the criminals. The performance of such protective schemes and policies must also be measurable, as this provides the basis for enhancement and improvement. Law enforcement must also be willing to revisit its mechanisms for reporting, apprehension and prosecution in the light of emerging technologies, issues and concerns.

7. CONCLUSION

The challenge in fighting cybercrime stems from the fact that cybercrimes have been in existence for only as long as cyberspace has existed. This explains the general unpreparedness of society towards combating them. We have shown in this paper that the Peel model of community policing suffers some inadequacies with regard to the challenges posed by cybercrime. The Internet community must engage in a collective effort to cure the Internet of the demeaning crimes it is helping to fuel. We ignore these important issues at our own risk.

Acknowledgement

The author acknowledges the contributions and inputs of Longe Olumide (PhD) to the scientific contents of this work, as well as permission to use some of his previous models for the study and assessment of cybercrime and criminality.

REFERENCES

1. Brenner, S. (2004). Distributed Security: A New Model of Law Enforcement. John Marshall Journal of Computer and Information Law, Vol. XXIII (4), 2005. Retrieved December 2, 2009 from http://www.jcil.org/journal/articles/434.html
2. Chawki, M. (2009). Nigeria Tackles Advance Fee Fraud. Journal of Information Law & Technology, 2009 (1). Retrieved January 12, 2010 from http://go.warwick.ac.uk/jilt/2009_1/chawki
3. Longe, O., Osofisan, A., Kvasny, L., Jones, C. & Nchise, A. (2010). Towards A Real-Time Response (RTR) Model for Policing the Cyberspace. Information Technology in Developing Countries, Vol. 20, No. 3.
4. Longe, O.B. & Osofisan, O.A. (2011). On the Origins of Advance Fee Fraud Electronic Mails: A Technical Investigation Using Internet Protocol Address Tracers. The African Journal of Information Systems, Vol. 3, Iss. 1, Article 2. http://digitalcommons.kennesaw.edu/ajis/vol3/iss1/2
5. Longe, O., Ngwa, Wada, F., Mbarika, V. & Kvasny, L. (2009). Criminal Use of Information and Communication Technologies in Sub-Saharan Africa: Trends, Concerns and Perspectives. Journal of Information Technology Impact, Vol. 9 (3). www.jiti.net
6. Straub, D.W. & Welke, R.J. (1998). Coping with systems risk: security planning models for management decision making. MIS Quarterly, 22(4), 441-469.
7. Schlienger, T. & Teufel, S. (2002). Information security culture: the socio-cultural dimension in information security management. In: IFIP TC11 International Conference on Information Security, Cairo, Egypt, 7-9 May 2002.



Computing, Information Systems & Development Informatics Journal

Volume 3. No. 2. May, 2012

An Intelligent System for Detecting Irregularities in Electronic Banking Transactions

Adeyiga, J.A., Ezike, J.O.J. & Adegbola, O.M.
Dept. of Computer Science
Bells University of Technology
Ota, Ogun State, Nigeria
1 jadeyiga, 2 josephezike, 3 tanwaadegbola [@]yahoo.com

Reference Format:

Adeyiga, J.A., Ezike, J.O.J. & Adegbola, O.M. (2012). An Intelligent System for Detecting Irregularities in Electronic Banking Transactions. Computing, Information Systems & Development Informatics Journal, Vol. 3, No. 2, pp. 57-66.



An Intelligent System for Detecting Irregularities in Electronic Banking Transactions

Adeyiga, J.A., Ezike, J.O.J. & Adegbola, O.M.

ABSTRACT

Fraud has historically been a major cause of bank losses. It has led to the failure of some banks in the past, contributing to shareholders losing their investments. Information technology is a critical component in creating value in the banking sector: it provides decision makers with an efficient means to store, calculate and report data, and to predict bank fraud and security failures. Information systems security views this challenge as a prediction problem that attempts to detect irregular transactions in banking operations. This study applies neural network techniques to the bank fraud prediction problem. Using Nigerian banks as a point of reference, we design a neural-network-based model that employs a multilayered feed-forward artificial neural network together with a database system for collecting training data for the network. The intelligence of the system was tested on data extracted from statements of account from three different banks in Nigeria, and the results are discussed.

Keywords: Artificial neural network, transactions, bank fraud, financial institutions, cyber security

1. INTRODUCTION

The Concise Oxford Dictionary defines fraud as 'criminal deception; the use of false representations to gain an unjust advantage'. Economic growth has been a major objective of successive governments, and the banking sector, given its function of financial intermediation, is an engine of growth in any economy. As financial intermediaries, banks are expected to provide an avenue for people to save income not expended on consumption; however, many banks have failed and, as a result, were unable to contribute to the growth of the economy. Banking flaws have several causes, and one of the most common and persistent is the issue of security. To reduce the stress caused by conventional banking practice, the banking sector introduced electronic banking to give customers the opportunity to carry out their transactions without having to walk into a bank; but as good as this arrangement is, it brought with it many security issues.

Through their functions, banks facilitate capital formation and promote economic growth. However, banks' ability to foster economic growth and development depends on the health, soundness and stability of the system [2]. It is, therefore, not surprising that the banking industry is one of the most regulated sectors in any economy. It is against this background that the Central Bank of Nigeria outlined, as part of the first phase of its banking sector reforms, the assurance of a diversified, strong and reliable banking industry [15]. The main objective of the reforms is to guarantee an efficient and sound financial system. The reforms are designed to enable the banking system to develop the resilience required to support the economic development of the nation by efficiently performing its functions as the fulcrum of financial intermediation [11]. The objective is to ensure the safety of depositors' money, position banks to play active developmental roles in the Nigerian economy, and make them major players in the sub-regional, regional and global financial markets. But the banking industry started giving its staff untenable targets for mobilizing profits and deposits, and in the process they threw caution to the wind and became very careless [19].

As more financial institutions in Nigeria move towards information-technology-driven services, security issues need to be addressed before banks and customers can confidently take advantage of these platforms. Banks are afraid of losing their cash to fraudsters who can manipulate their systems and gain undue advantage, while customers are still not convinced that electronic banking is totally secure. There is a need for customers' transactions to be monitored so as to notice irregular and suspicious transactions on customer accounts and alert bank officials. Currently, transactions are usually monitored manually by bank personnel who look through customers' statements of account when unusual transactions are noticed. The process is often very tedious and inefficient, mainly because of the number of transactions and the size of the customer base. With the advent of computers and information systems, an automated approach to the analysis of customers' statements of account for detecting irregularities in banking activities becomes possible. However, considering the fast pace at which mainstream business rules change, the definition of a fraudulent transaction changes rapidly, making the design and development of such a system a rather complex process [1].
A solution paradigm is to explore automated approaches to irregularity detection using algorithmic approaches and artificial intelligence. This paper aims to create a neural-network-based system to detect irregular transactions, based on certain parameters, for the banking industry in Nigeria. The system should provide a means of automatically alerting relevant officers as soon as irregular transactions are detected; the officers can then investigate further and make a decision. A multilayered feed-forward artificial neural network is created, together with a reliable database system for collecting training data for the network. The system was tested using real-life customers' statements of account collected from three commercial banks in Nigeria.



2. RELATED WORKS

It is evident from our investigations that one of the causes of fraud in financial institutions was too much money in the possession of banks. This was a result of the banking consolidation under the former CBN governor, Prof. Soludo, which took banks from N2 billion of shareholders' funds to N25 billion, and then on to N100 billion of share capital. Moreover, an incentive was held out that any bank that reached N100 billion would be able to manage Nigeria's external reserves. The drive to meet the target made banks grow tremendously in capital base and assets without a corresponding growth in their risk management and compliance IT systems; people were given untenable targets for mobilizing profits and deposits, and in the process they threw caution to the wind and became very careless [19].

In [7][12], approaches to fraud detection based on tracking calling behaviour on an account over time, and scoring calls according to the extent to which they deviate from legitimate patterns and resemble fraud, are described. Account summaries are compared to thresholds each period, and an account whose summary exceeds a threshold can be queued for fraud analysis. Thresholding has several disadvantages: thresholds may have to vary with time of day, type of account and types of call in order to be sensitive to fraud without setting off too many false alarms for legitimate traffic [12]. Fawcett and Provost [7] developed an innovative method for choosing account-specific thresholds rather than universal thresholds that apply to all accounts or to all accounts in a segment. In their experiment, fraud detection is based on tracking account behaviour. First, fraud detection was event-driven rather than time-driven, so that fraud can be detected as it is happening. Second, fraud detection must be able to learn the calling pattern on an account and adapt to legitimate changes in calling behaviour. Lastly, fraud detection must be self-initializing so that it can be applied to new accounts that do not have enough data for training. The approach adopted probability distribution functions to track legitimate calling behaviour.

Other models developed in research settings with promising potential for real-world applications include the Customer Relationship Model, the Bankruptcy Prediction Model, the Inventory Management Model, and the Financial Market Model. In [4] it was stated that many financial institutions see the value of ANNs as a supporting mechanism for financial analysts and are actively investing in this arena; the models described provide the knowledge needed to choose the type of neural network to be used. The use of decision-tree techniques, in conjunction with the CRISP-DM management model, to help prevent bank fraud was evaluated in [5]. The study recognized that it is almost impossible to eradicate bank fraud and focused on what can be done to minimize and prevent it. The research offered a study of decision trees, an important concept in the field of artificial intelligence.

The study focused on discussing how these trees are able to assist in the decision-making process of identifying fraud by analysing information about bank transactions. This information is captured with data-mining techniques and the CRISP-DM management model in large operational databases logged from Internet banking. The Cross-Industry Standard Process for Data Mining (CRISP-DM) is a model of the data-mining process used by experts to solve problems; it identifies the different stages in implementing a data-mining project. A decision tree is both a data-representation structure and a method used for data mining and machine learning.

[3] describes the use of neural networks in analysing credit card transactions: with the great increase in such transactions, credit card fraud has become increasingly rampant in recent years, and the study investigates the efficacy of applying classification models to credit card fraud detection problems. To increase the body of knowledge on this subject, an in-depth examination of important publicly available predictors of fraudulent financial statements was offered. The authors tested the value of these suggested variables for detecting fraudulent financial statements within a matched-pairs sample. A self-organizing artificial neural network (ANN), AutoNet, was used in conjunction with standard statistical tools to investigate the usefulness of these publicly available predictors. The study resulted in a model with a high probability of detecting fraudulent financial statements on one sample [1][6], reinforcing the validity and efficiency of AutoNet as a research tool and providing additional empirical evidence on the merits of suggested red flags for fraudulent financial statements.

[10] reviews the various factors that lead to fraud in the Nigerian banking industry; the problem of fraud in the banking system must have underlying factors that have led to these fraudulent acts. [17] discusses the approaches used by fraudsters and identifies phishing as the most common means of stealing customers' authentication and account details. Social engineering is the most common method used in phishing; it usually comes in the form of e-mails trying to convince users to open attachments or directing them to some fraudulent site, and is often so well designed that many customers are led into revealing their account details. [17] also presents a framework, and the corresponding system, for online banking fraud detection in real time. It uses two complementary approaches to fraud detection: in the differential analysis approach, account usage patterns are monitored and compared with the account's usage history, which represents the user's normal behaviour; any significant deviation from normal behaviour indicates potential fraud.

In this paper, we present a model for detecting irregularities in e-banking transactions to address the above challenges in Nigeria. The proposed system, a continuation of our work in [18], gives bank personnel the opportunity to notice irregularities in transactions that would normally be carried out successfully, oblivious of abnormal patterns. The system is convenient and easy to use, increases the integrity of the bank, and gives customers some level of trust in transacting with the bank.



3. METHODOLOGY

We perceive neural networks as tools that can learn and recall patterns of behaviour, detect changes in patterns, and detect fraud in a payment-card environment. This research sets out to do the following:
- generate a unique network for each account in the bank;
- generate patterns for the network using the client's history;
- train the network for a user-defined number of epochs, or until the error value is approximately 0;
- once the network has learned, feed the current transaction into the network to determine whether or not it is fraudulent.

3.1 Components of the Irregularity Detection System
i. Neural network based detector
ii. Database
iii. Computer system

3.1.1 Neural Network Based Detector
The neural-network-based detector is a mathematical or computational model based on biological neural networks; in other words, it is an emulation of a biological neural system. A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects [8]:
1. knowledge is acquired by the network through a learning process;
2. interconnection strengths, known as synaptic weights, are used to store the knowledge.

3.1.2 Database
The data are typically organized to model relevant aspects of reality. Structured Query Language is used for the data store. This is where the account statements are lodged; details of all transactions that take place in each account are stored in the database and can be retrieved when needed. The database serves as the knowledge base for the neural network: the network trains and generates results based on the information in the knowledge base.

3.1.3 Computer System
The computer system is the platform on which the irregularity detection system operates; a neural network cannot exist on its own and is usually implemented on computer systems and related devices. The computer also serves as the interface between the neural-network-based detector and the system user, presenting the input data (user query) to the neural network.

3.2 Layout of the Irregularity Detection System
The simplified activities that take place in the irregularity detection system are as follows (a sketch of this control flow is given after the figure captions below):
1. The input data (a customer credit/debit transaction) is presented by the interface (computer) to the database (knowledge base).
2. The network is fed by the knowledge base.
3. The network trains based on the data in the knowledge base.
4. If the network-based detector detects any irregularity, the transaction is disallowed; otherwise the transaction is allowed.
5. Whatever decision the network-based detector arrives at is stored in the knowledge base (database).

Fig 1. Proposed System Components
Fig 2. Proposed System Model
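A minimal sketch of the allow/disallow control flow in steps 1 to 5 follows. NeuralDetector, KnowledgeBase and Transaction are assumed interfaces introduced for illustration, not the authors' actual classes, and the threshold anticipates the sensitivity bar described in Section 3.6.

```java
// Illustrative control flow for steps 1-5 above; the types are assumptions,
// not the authors' actual classes.
interface NeuralDetector { double score(Transaction t); }   // trained per account
interface KnowledgeBase  { void store(Transaction t, boolean allowed); }
record Transaction(String accountNo, double amount, String branch) {}

class IrregularityGate {
    private final NeuralDetector detector;
    private final KnowledgeBase db;
    private final double sensitivity;   // threshold in [-0.8, +0.8], see Section 3.6

    IrregularityGate(NeuralDetector d, KnowledgeBase kb, double sensitivity) {
        this.detector = d; this.db = kb; this.sensitivity = sensitivity;
    }

    boolean process(Transaction t) {
        boolean allowed = detector.score(t) > sensitivity; // step 4: allow or disallow
        db.store(t, allowed);                              // step 5: record the decision
        return allowed;
    }
}
```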



3.3 The Network Training Model
Modeling was done using the Unified Modeling Language (UML). The system is modeled to look through a client's transaction history and generate an intelligent pattern which is used as the standard for testing the next transaction. For example, consider:
i. a client who has made withdrawals about 1,000 times from a particular branch of a bank and is suddenly withdrawing from a farther (another) branch;
ii. a client whose transaction history shows a maximum withdrawal/deposit of 5,000 naira, who suddenly withdraws/deposits 5,000,000 naira;
iii. a client's withdrawal/deposit mode and the difference between transaction dates, which can also serve as pointers in identifying irregularities.

In training the multilayer perceptron (MLP) network, five different entries are input, as follows:
Entry 1: the difference between the current and last transaction, in number of days.
Entry 2: the percentage of the last withdrawal/deposit relative to the current transaction.
Entry 3: the branch where the transaction is taking place (location).
Entry 4: the patron, that is, self or a different person.
Entry 5: the transaction date.

The network produces reasonable outputs for inputs it has not been taught how to deal with. The various entries are presented to the network as numeric values. The transaction mode for a withdrawal can only be cheque/slip, ATM or credit card; these values are represented by numeric codes in order to generate a pattern. Fixed numbers are chosen at random, with the condition of being centred around zero, and are chosen so as to produce distinctive outputs when presented to the network. These values help reduce the learning time of the MLP, which might otherwise take much longer to learn (training a neural network involves trial and error until the result generated matches the target output). The same process is followed for branches, though each branch of a bank is uniquely identified by a code. Random values between -1 and +1 are used to represent the parameters; an MLP learns well with values centred on zero. The amount involved in the transaction is represented as the percentage of the last transaction amount relative to the current transaction amount; the date is captured by the system from the system's calendar and clock. An illustrative encoding in this spirit is sketched below.
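A hedged sketch of such an encoding follows. The concrete code values (for example -0.5 for cheque withdrawals) are invented, chosen only to be distinct and centred on zero as the text prescribes; the paper does not publish the exact values it used.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Illustrative encoding of the five entries into an input vector centred on
// zero; the concrete code values are invented, chosen only to be distinct.
class TransactionEncoder {
    static double encodeMode(String mode) {        // cheque/slip, ATM or credit card
        return switch (mode) {
            case "CHEQUE" -> -0.5;
            case "ATM" -> 0.0;
            case "CARD" -> 0.5;
            default -> throw new IllegalArgumentException(mode);
        };
    }

    static double[] encode(LocalDate lastDate, LocalDate currentDate,
                           double lastAmount, double currentAmount,
                           double branchCode, double patronCode, String mode) {
        // Entry 1: day gap between transactions, scaled to stay near zero.
        double dayGap = ChronoUnit.DAYS.between(lastDate, currentDate) / 365.0;
        // Entry 2: last amount relative to current amount, centred on zero.
        double amountRatio = lastAmount / currentAmount - 1.0;
        // Entries 3-5: branch, patron and mode, already coded in [-1, +1].
        return new double[] { dayGap, amountRatio, branchCode, patronCode,
                              encodeMode(mode) };
    }
}
```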

The training process and algorithm are described below.

3.4 The Training Algorithm
The supervised back-propagation training algorithm was used to train the neural network because of its effectiveness in pattern recognition. The training set is the collection of training samples gathered; a training sample is a pair of an input vector and a desired output value (0.8 or -0.8). The network is provided with the training set and learns by adjusting the weights of its synapses, back-propagating the error calculated as the disparity between the output neuron's value and the expected/target value.

3.5 The Actual Algorithm
1. Identify the number of input neurons (the same as the number of elements in the input vector).
2. Identify the number of hidden layers and the number of neurons in each layer. (The minimum required for back-propagation is one hidden layer, but for faster training we used two hidden layers with 10 neurons each.)
3. Identify the number of output neurons. We used one neuron in the output layer because we have one target value.
4. Initialize random weight values for the synapses connecting each neuron of the preceding layer to the next layer. With back-propagation, initial weight values should be restricted to between -0.5 and +0.5.
5. Choose a random training sample from the training set and assign its input vector to the input neurons.
6. Propagate all neurons in the forward direction to obtain the output at the output layer. The output of each neuron is a function of its inputs. In particular, the output of the jth neuron in any layer is described by the pair of equations

   Uj = Σi (Xi · wij)   ... (1)
   Yj = Fth(Uj + tj)

   For every neuron j in a layer, each of the i inputs Xi to that layer is multiplied by a previously established weight wij; these products are summed, giving the internal value Uj. This value is then biased by a previously established threshold value tj and sent through an activation function Fth (the tanh function). The resulting output Yj is an input to the next layer, or a response of the neural network if it is the last layer.
7. Evaluate the error value at the output neuron as the difference between the obtained output and the desired output of the chosen training sample.
8. Back-propagate the error all the way up to the input layer. Back-propagation starts from the output layer with the equation

   wij = w'ij + LR · ej · Xi   ... (2)



   For the input of a neuron in the output layer, the weight wij is adjusted by adding to the previous weight value w'ij a term determined by the product of a learning rate LR, an error term ej, and the value of the input Xi. The error term ej for an output neuron is the product of the actual output Yj, its complement (1 - Yj), and the difference between the desired output dj and the actual output:

   ej = Yj · (1 - Yj) · (dj - Yj)   ... (3)

9. Calculate and update the weight values for all synapses such that the sum of the squared errors is minimized.
   a. Once the error term is computed and the weights are adjusted for the output layer, the value is recorded and the next layer back is adjusted. A revised weight-adjustment process incorporating momentum is used:

   wij = w'ij + (1 - M) · LR · ej · Xj + M · Δw'ij   ... (4)

   where Δw'ij is the weight change applied in the previous adjustment cycle. Momentum (M) allows a change to the weights to persist for a number of adjustment cycles; the magnitude of the persistence is controlled by the momentum factor. If the momentum factor is set to 0, the equation reduces to the one used to adjust the weights of the output layer. If the momentum factor is increased from 0, increasingly greater persistence of previous adjustments is allowed in modifying the current adjustment. This can improve the learning rate in some situations by helping to smooth out unusual conditions in the training set.
   b. For hidden layers, the error term is generated by a slightly modified version of the equation in step 8:

   ej = Yj · (1 - Yj) · Σk (ek · wjk)   ... (5)

10. Choose another random training sample from the training set and repeat the steps above.
11. Train on all samples in the training set in a random selection order. A cycle through the training set is called an epoch.
12. The stopping criterion is that the error obtained at the output layer reaches an acceptable value (0.02).

3.6 The Transfer Functions of the Neural Network
An activation function is used to obtain the output of each neuron; the tangential (tanh) activation function was adopted: Y = tanh(x). Input nodes are presented and expected to yield a target output. If the output is not correct, the weights are adjusted according to the formula

   wnew = wold + η · (desired - output) · input   ... (6)

where η is the learning rate (Cheung and Cannons, 2002). Using the tanh activation function on all neurons of the network, the output of each neuron ranges between -1 and +1. Some training samples have a target value of 0.8 and some -0.8. Once the error at the output layer is at an acceptable level, the test data is fed into the network from the input layer and the output value is derived. The sensitivity bar ranges from -0.8 to 0.8, although it is presented as ranging from 0 to 100%. The transaction is committed if the output for the test data is greater than the value of the sensitivity bar, and otherwise rolled back.
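To make equations (1) to (6) concrete, here is a compact, self-contained sketch of one back-propagation pass for a single hidden layer with tanh activations. It is an illustration of the update rules under simplifying assumptions, not the authors' implementation (which used two hidden layers of 10 neurons and bias terms); note that with tanh the local gradient is (1 - y^2), whereas the Y(1 - Y) form in equations (3) and (5) corresponds to a logistic activation.

```java
import java.util.Random;

// One-hidden-layer back-propagation sketch with tanh activations, following
// the shape of equations (1)-(5); a toy illustration, biases omitted.
public class BackpropSketch {
    static final int IN = 5, HID = 10, OUT = 1;
    static final double LR = 0.1, MOMENTUM = 0.3;

    double[][] wIH = new double[IN][HID], wHO = new double[HID][OUT];
    double[][] dIH = new double[IN][HID], dHO = new double[HID][OUT]; // last deltas
    double[] hidden = new double[HID], output = new double[OUT];

    BackpropSketch(long seed) {                        // weights in [-0.5, +0.5]
        Random r = new Random(seed);
        for (double[] row : wIH) for (int j = 0; j < HID; j++) row[j] = r.nextDouble() - 0.5;
        for (double[] row : wHO) for (int k = 0; k < OUT; k++) row[k] = r.nextDouble() - 0.5;
    }

    double forward(double[] x) {                       // equation (1), tanh activation
        for (int j = 0; j < HID; j++) {
            double u = 0;
            for (int i = 0; i < IN; i++) u += x[i] * wIH[i][j];
            hidden[j] = Math.tanh(u);
        }
        for (int k = 0; k < OUT; k++) {
            double u = 0;
            for (int j = 0; j < HID; j++) u += hidden[j] * wHO[j][k];
            output[k] = Math.tanh(u);
        }
        return output[0];
    }

    double trainOnce(double[] x, double target) {      // one sample, one weight update
        double y = forward(x);
        double eOut = (1 - y * y) * (target - y);      // cf. equation (3), tanh gradient
        for (int j = 0; j < HID; j++) {
            // Hidden-layer error term, cf. equation (5), using pre-update weight.
            double eHid = (1 - hidden[j] * hidden[j]) * eOut * wHO[j][0];
            for (int i = 0; i < IN; i++) {             // momentum update, cf. equation (4)
                double d = (1 - MOMENTUM) * LR * eHid * x[i] + MOMENTUM * dIH[i][j];
                wIH[i][j] += d; dIH[i][j] = d;
            }
            double d = (1 - MOMENTUM) * LR * eOut * hidden[j] + MOMENTUM * dHO[j][0];
            wHO[j][0] += d; dHO[j][0] = d;
        }
        return Math.abs(target - y);                   // error fed to the stopping test
    }
}
```

Repeating trainOnce over the whole training set constitutes one epoch; training stops when the returned error falls to the acceptable level (0.02 above).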

Fig 3. An illustration of the training sets for the multilayer perceptron network (Werbos [16]; Rumelhart [14])

3.7 Training and Verification
The training set is not fixed; it is based on all previous transactions with the bank. The initial training set is 70% of the previous transactions, while the testing set (for cross-validation in the training process) is 30% of the transactions. Cross-validation patterns are the fraction of the client history not used for training but used to check whether the network has learned; the error ratio (cross-validation error) is shown before training. The current transaction is then tested against the neural network to determine its performance. A sensitivity bar is included so that transactions with values heading towards the accepted level, which should be treated as suspicious, will not be successful. The set of all known samples is broken into two orthogonal (independent) sets:
1. Training set: a group of samples used to train the neural network.
2. Testing set: a group of samples used to test the performance of the neural network; it is also used to estimate the error rate.

Verification:
i. provides an unbiased test of the quality of the network;
ii. a common error is to test the neural network using the same samples that were used to train it.



Based on the testing set:
1. the network is optimized on the training samples, and will obviously perform well on them;
2. verification alone therefore gives no indication of how well the network will classify inputs that were not in the training set.

Epoch
An epoch is one iteration through the process of providing the network with its inputs and updating the network's weights; the epoch count is the number of times the neural network should train for a particular transaction. The number of epochs needed is not fixed; typically many epochs are required to train the neural network.

Irregularities detected by the system, based on the training set provided to it, are made known to the bank officials; it then becomes the duty of the bank officials to decide whether such transactions should continue or be discontinued.

4.1 SYSTEM IMPLEMENTATION
This section discusses the implementation of the system developed. It describes the functionality of the various interfaces provided and how they are used to achieve the aim of the system. The Java programming language was used to implement the system because of its strong data-manipulation control structures and its great online support. MySQL Server is the relational database management system (RDBMS), since it is a robust RDBMS that provides multi-user access to a number of databases; it can manage the large amount of data generated by the system and can be integrated with the development platform used for this project. The system is flexible and easy to use: the interfaces, menu arrangement and language used ensure ease of operation without supervision or training. A sketch of the knowledge-base access is given below, followed by the system interfaces.
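As an illustration of how the MySQL knowledge base might be queried for an account's history before training, consider the JDBC sketch below; the `transactions` table and its column names are assumptions, since the paper does not publish its schema.

```java
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

// Hypothetical JDBC access to the training data; the `transactions` table and
// its columns are assumed names, not the schema used by the authors.
class TransactionStore {
    private final Connection conn;

    TransactionStore(String url, String user, String pass) throws SQLException {
        this.conn = DriverManager.getConnection(url, user, pass);
    }

    // Fetch one account's transaction history in date order, as numeric rows
    // ready to be turned into training patterns for that account's network.
    List<double[]> historyFor(String accountNo) throws SQLException {
        String sql = "SELECT amount, branch_code, patron_code, mode_code, txn_date "
                   + "FROM transactions WHERE account_no = ? ORDER BY txn_date";
        List<double[]> rows = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, accountNo);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(new double[] {
                        rs.getDouble("amount"), rs.getDouble("branch_code"),
                        rs.getDouble("patron_code"), rs.getDouble("mode_code"),
                        rs.getDate("txn_date").getTime()
                    });
                }
            }
        }
        return rows;
    }
}
```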

Fig 4.1: Home Page of the System
Fig 4.2: The Transactions Page
Fig 4.3: Training and Cross Validation Screen
Fig 4.4: Transaction Screen (Success)



Fig 4.5: Value of a successful transaction
Fig 4.6: Unsuccessful transaction
Fig 4.7: Sensitivity bar
Fig 4.9: Inclusion of epoch value

DISCUSSION OF RESULTS
Figures 4.1 to 4.9 show the results of the tested system. Fig 4.1, the Home Page of the system, contains a menu strip which provides navigation to all other interfaces on the system. The menu strip includes New Account and Transactions; each is explained below.

(i) New Account: the New Account menu strip provides navigation to a page containing forms which are filled in when creating new bank accounts. The form requires certain bio-information about the customer, as well as the account type, account number and branch. The irregularity detection system can only work when there are valid statements of account; that is, the customer must have an account with the bank.

(ii) Transactions: the Transactions menu strip provides navigation to a page containing all the transactions (credit/debit) carried out by customers, including account numbers, transaction dates, amounts debited or credited, ledger balance, withdrawal or deposit mode, branch and creditor.

Fig 4.3 shows the percentages used for MLP training and cross-validation: of the 100% of training patterns presented to the network during withdrawals, 70% is used for the actual network training, while 30% is the test pattern used to check the accuracy of the results generated from the training set. Figure 4.4 signifies the success of a debit transaction: once the training and cross-validation pattern has been specified, the neural network resumes training, based on the information it has been fed and previous experience, to determine the continuity or discontinuity of a transaction.



Fig 4.5 shows the description of a successful transaction. After training, the neural network describes the success or failure of transactions by numerical values; the screen shows a successful transaction with a numerical value heading strongly towards +0.8, the value specified by the network as successful (very low or complete absence of error). Fig 4.6 shows an unsuccessful transaction, due to a change in some parameters and the inclusion of a maximal sensitivity bar. With the inclusion of a maximal sensitivity bar (Fig 4.7) and a change in some parameters, the same amount debited successfully in Fig 4.5 will not be successful. The reason is that, after training, the network might allow a transaction to go through simply because of its seemingly high level of accuracy (0.7 is very close to +0.8); the sensitivity bar then steps in, notices the changes in location, patron and withdrawal mode, and stops the transaction. Fig 4.9 shows the inclusion of an epoch value, which specifies the number of times the ANN should train. In situations where the time required to train the network for a particular transaction is prolonged, the number of epochs can be specified. The limitation of specifying the number of epochs is that suspicious transactions might succeed because the specified number of epochs may not be enough for the network training process.

CONCLUSION
The irregularity detection system implemented seeks to reduce the number of irregular transactions that take place in the Nigerian banking industry, thereby helping to decrease bank fraud. The system gives bank personnel the opportunity to notice irregularities in transactions that would normally be carried out successfully, oblivious of abnormal patterns. The system is convenient and easy to use and increases the integrity of the bank. In the course of implementing this project, the current (manual) method of detecting irregular transactions was reviewed, its flaws were shown, and ways to improve it by introducing the detection system were outlined. Areas of future research include the application of biometric techniques to the system to handle authentication problems.

REFERENCES
[1] Idowu, A. (2009). An Assessment of Fraud and its Management in Nigeria Commercial Banks. European Journal of Social Sciences, Vol. 10, No. 4, pp. 628-640.
[2] Adeyemi, K. (2005). Banking Sector Consolidation in Nigeria: Issues and Challenges. Retrieved from www.unionbankng.com
[3] Shen, A., Tong, R. & Deng, Y. (2007). Application of Classification Models on Credit Card Fraud Detection. International Conference on Service Systems and Service Management, 07/2007. DOI: 10.1109/ICSSSM.2007.4280163

[4] Amit Khajanchi (2003). Artificial Neural Networks: The Next Intelligence. Available at www.globalriskguard.com/resources/market/NN.pdf
[5] Bruno Carneiro & Rafael Sousa (2010). Identifying Banks' Fraud Using CRISP-DM and Decision Trees. International Journal of Computer Science and Information Technology (IJCSIT), Vol. 2, No. 5.
[6] Fanning, K., Cogger, K.O. & Srivastava, R. (1995). Detection of Management Fraud: A Neural Network Approach. International Journal of Intelligent Systems in Accounting, Finance and Management, 4, pp. 113-126.
[7] Fawcett, T. & Provost, F. (1997). Adaptive Fraud Detection. Data Mining and Knowledge Discovery, 1, pp. 291-316.
[8] Haykin, S. (1999). Neural Networks: A Comprehensive Foundation, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall.
[9] Idolor Eseoghene Joseph (2010). Bank Frauds in Nigeria: Underlying Causes, Effects and Possible Remedies. African Journal of Accounting, Economics, Finance and Banking Research, Vol. 6, No. 6.
[10] Ivor Ogidefa (2008). Fraud in Banking System in Nigeria, November 19. Available at http://socyberty.com/law/fraud-in-banking-system-in-nigeria/
[11] Lemo, T. (2005). Regulatory Oversight and Stakeholder Protection. Paper presented at the BGL Mergers and Acquisition Interactive Seminar, Eko Hotels & Suites, V.I., June 24.
[12] Micheal, H., Diane, L., Jose, C.P. & Don, X. (2001). Detecting Fraud in the Real World, pp. 7-12. Available at stat.bell-labs.com/cm/ms/departments/sia/doc/HMDS.pdf
[13] Cheung & Cannon (2002). An Introduction to Neural Networks. Signal and Data Compression Laboratory Group Meeting, University of Manitoba, May 27.
[14] Rumelhart, D.E., Hinton, G.E. & Williams, R.J. (1986). Learning Internal Representations by Error Propagation. In D.E. Rumelhart & J.L. McClelland (eds.), Vol. 1, Chapter 8, pp. 318-362. Cambridge, MA: MIT Press.
[15] Soludo, C. (2004). Consolidating the Nigerian Banking Industry to Meet the Development Challenges of the 21st Century. Address delivered to the special meeting of the Bankers' Committee, July 6, CBN Headquarters, Abuja.
[16] Werbos, P.J. (1974). Beyond Regression: New Tools for Prediction and Analysis in the Behavioural Sciences. Ph.D. Thesis, Harvard University, Cambridge, MA.


[17] Stephan Kovach & Wilson Vicente Ruggiero (2011). Online Banking Fraud Detection Based on Local and Global Behavior. Fifth International Conference on Digital Society.
[18] Adeyiga, J.A., Ezike, J.O.J., Omotosho, A. & Amakulor, W. (2011). A Neural Network Based Model for Detecting Irregularities in e-Banking Transactions. African Journal of Computing & ICTs, Vol. 4, No. 3, Issue 2, pp. 14-22.
[19] Chioke (2009). Season of Bank Logs. Thisday, Vol. 14, No. 5319, November.


Computing, Information Systems & Development Informatics Journal


Volume 3. No. 2. May, 2012

Management Issues and Facilities Management In Geographic Information System The Case of the Activities of the Lagos Area Metropolitan Transport Authority (LAMATA)
Nwambuonwo, J.O. & Mughele, E.S.
Department of Computer Science
Delta State School of Marine Technology
Burutu, Delta State, Nigeria
nwambuonwo@yahoo.com, prettysophie@yahoo.com

Reference Format:

Nwambuonwo, J.O. & Mughele, E.S. (2012). Management Issues and Facilities Management In Geographic Information System - The Case of the Activities of the Lagos Area Metropolitan Transport Authority (LAMATA). Computing, Information Systems & Development Informatics Journal, Vol. 3, No. 2, pp. 67-74.


Management Issues and Facilities Management In Geographic Information System The Case of the Activities of the Lagos Area Metropolitan Transport Authority (LAMATA)
Nwambuonwo. J.O & Mughele.E.S

ABSTRACT
This paper reviews the work and progress made so far on the megacity project of the Lagos State government, with special emphasis on the role of Geographic Information Systems (GIS) and their application to the project. The milestones reached as well as the challenges ahead for the project are identified. Our review showed that the project is still a work in progress and that a great deal of planning is required if Lagos State, Nigeria is to attain world-class megacity status. The paper elucidates the production of thematic maps by the Lagos Metropolitan Area Transport Authority (LAMATA) that display diverse scenarios, ranging from land use patterns to transport route patterns, leading to the production of land use maps and route maps respectively. Challenges facing the project are discussed, such as the high cost of creating spatial data through largely manual conversion from paper maps or survey information, especially when high precision (metre or sub-metre) is required, as is the case for utility and cadastral applications. The paper concludes by recommending that the success recorded in using GIS on transport routes in parts of Lagos State be extended to other parts of the state and its environs.
Keywords: GIS, Lagos State, Mega City, Project, Maps, LAMATA

1. INTRODUCTION

A Geographic Information System (GIS) is a computer system that records, stores, and analyzes information about the features that make up the earth's surface. A GIS can generate two- or three-dimensional images of natural features such as hills and rivers and of artificial features such as roads and power lines. Scientists use GIS images as models, making precise measurements, gathering data, and testing ideas with the help of the computer (Encarta, 2008). GIS is an acronym sometimes used to mean geographical information science or geospatial information studies; these latter terms refer to the academic discipline or career of working with geographical information systems (Wikipedia, 2011).

For some organizations, such as local governments, a GIS may represent a data and operational framework that affects and ties together most activities of the organization (Somers, 2008). Organizations utilize GIS according to the volume of tasks they have to handle, ranging from using a single tool to complete a single task to using various tools to accomplish multiple tasks. GIS operations are widely varied and are characterized by diverse levels of operational tasks, user numbers, technology and organizational structure. GIS requires high expertise in its organizational and management approach, which differentiates it from other technologies. The distinctive attributes that separate GIS from other technologies include, but are not limited to: the character and role of geographic data in organizational business administration; the present and future status and direction of GIS; the fusion, workability and compatibility of GIS with other technologies in an organization; and the multifarious applicability of GIS data.

The introduction of a GIS into an organization has organizational and technological impacts and implications that should be addressed simultaneously. A GIS can become the driving force that ties an organization together, especially for organizations in which single departments are responsible for providing data (GIS Primer, 2008). A combination of a GIS organizational model and specific GIS management strategies characterizes an organization's particular GIS management approach (Somers, 1998). The organizational role of a GIS is highly varied: in some organizations, a new technological input redefines operations, with corresponding organizational impacts. A business operating environment is greatly influenced by a GIS, particularly in applications classed as business GIS. A GIS is limited by function, which necessitates the application of different relevant models. The relevance of a GIS to an organization can be either subtle or prominent. Although GIS plays an extensive variety of roles in an organization, few GIS implementation types exist, each characterized by varied impacts and implications (Journal of Housing Research, 1998).

The Lagos Metropolitan Area Transport Authority (LAMATA) is charged with the responsibility of providing a quality transport system that will address the persistent problem of traffic congestion associated with a megacity. This is to be achieved by redefining the Lagos transport system, which includes road, light-rail and water transport systems.


2. THE LAMATA INITIATIVE

The Lagos Metropolitan Area Transport Authority (LAMATA), charged with the responsibility of providing policy direction for the advancement of the transport system of metropolitan Lagos, was established in 2002 and is governed by a 13-member board inaugurated in December 2003. LAMATA aims to deliver improved and cost-efficient transport services for the teeming population of Lagos. To achieve this, it works closely with the Ministry of Works (MOW), the Ministry of Transportation, the Lagos State Government and the World Bank (Lagos State Govt., 2011).

The ever-increasing population of Lagos, estimated at between 12.5 and 15 million with a growth rate of 6 percent per year, is continually accompanied by severe traffic congestion (Lagos State Govt., 2011). The gradual over-population of the Lagos metropolis is leading to expansion into neighbouring Ogun State, with an increased volume of commuter trips. In recent times, Lagos State has been battling varying socio-economic challenges: traffic congestion; bad roads; a poor and unattractive road-based public transport system; sky-rocketing transport fares; ineffective rail and water mass transit; unprecedented levels of road accidents; environmentally unacceptable levels of traffic-related emissions and atmospheric pollution; and a growing menace of anti-social behaviour among transport operators, all of which is attributable to rapid and unplanned urbanization.

The economic, commercial and industrial status of Lagos has further increased the population and aggravated the public transportation challenges that are directly proportional to the sprawling urban growth of Lagos. Lagos is socio-economically characterized by an active and vibrant local trading tradition, which dominates Nigeria's commercial sector. Lagos is home to a high proportion of Nigeria's manufacturing industry, with 45% of Nigeria's skilled workforce residing in Lagos. Consequently, Lagos still maintains its long-term status as Nigeria's economic gateway by virtue of housing Nigeria's principal commercial sea and airports (Lagos State Govt., 2011).

Existing inadequate infrastructure in Lagos has been pressurized to breaking point by this foremost status. Outdated and dilapidated social and physical infrastructure, required to support the escalating population and productive sectors, has led to abysmal levels of inefficiency and unproductivity. For instance, in 1985, production costs in Lagos appreciated by 30%, largely to offset inefficiencies in public sector services and infrastructure, transportation included. The relocation of the federal capital of Nigeria from Lagos to Abuja has not been in the best interest of Lagos, leading to the loss of some good revenue sources for public services. Inadequacies in the transportation system of Lagos have further worsened the plight of the urban poor, whose domestic transport expenditure is estimated at about 20% of the household budget, second only to expenditure on food (Lagos Metropolitan Area Transport Authority, 2008).

3. GEOGRAPHIC INFORMATION SYSTEM (GIS) IN THE LAGOS METROPOLITAN AREA TRANSPORT AUTHORITY (LAMATA)

The LAMATA GIS consists of raster and vector data, with metropolitan Lagos as the extent of coverage.
Vector data include:
- Lagos State road network: federal and state highways totalling about 860 kilometres (attributes: road name, road classification, length, ownership, local government area, etc.)
- Railway lines
- Lagos State facilities, e.g. markets, schools, hospitals, bus stops, petrol stations, hotels, churches, mosques, etc.
- Lagos State water bodies (canals, lagoons) and the coastline
- Lagos State land use
- Lagos State jetties and landings (functional, non-functional)

Raster data: satellite imagery - Digital Globe QuickBird and Space Imaging IKONOS satellite imagery, with spatial resolutions of about 0.7 metre and 1 metre respectively.
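To make the layer structure concrete, the short sketch below shows how such a vector road-network layer could be queried. It is illustrative only: the file name and attribute field names are hypothetical, modelled on the attribute list above, and the geopandas library stands in for whatever GIS software LAMATA actually uses.

import geopandas as gpd

# Hypothetical shapefile holding the declared road network described above
roads = gpd.read_file("lagos_road_network.shp")

# Attribute queries on fields such as road name, classification, length and LGA
federal = roads[roads["Classification"] == "Federal"]
print(federal[["Road_Name", "Length_km", "LGA"]].head())

# Total length of the declared network (about 860 km according to the text)
print("Total length:", roads["Length_km"].sum(), "km")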


Fig. 1: Map and attributes of road network in Ikeja area of Lagos metropolis. SOURCE: Geographic Information System Department (2009), Lagos Area Metropolitan Transport Authority, Lagos, Nigeria

Fig. 2: Satellite imagery of Surulere area of Lagos metropolis. SOURCE: Geographic Information System Department (2009), Lagos Area Metropolitan Transport Authority, Lagos, Nigeria

Fig. 3: Lagos Island LGA road network. SOURCE: Geographic Information System Department (2009), Lagos Area Metropolitan Transport Authority, Lagos, Nigeria


Fig. 4: DEM image of Nigeria (Lagos State) land mass and coastline. SOURCE: Geographic Information System Department (2009), Lagos Area Metropolitan Transport Authority, Lagos, Nigeria

Fig. 5: Lagos Mainland LGA road network. SOURCE: Geographic Information System Department (2009), Lagos Area Metropolitan Transport Authority, Lagos, Nigeria

3.1 The Mandate of the Lagos Area Metropolitan Transport Authority (LAMATA) GIS

The mandate includes:

Developing a robust online database structure for LAMATA declared road networks: this includes spatial and attribute data along the declared road network, to be reviewed and updated regularly, in line with prevailing changes in land use occasioned by infrastructural development.

GIS user (in-house) training: getting staff to keep up to date with current trends and functionality of GIS through the organization of seminars, workshops and inter-agency interactions with GIS firms.

Producing a roads furniture map: the roads furniture map reflects important sign posts on the road, such as zebra crossings, stop signs and lane-diversion signs; it is a vital GIS tool for efficient traffic management.

Producing a population density map: a population density map will help LAMATA visualize the degree of population distribution in the Lagos metropolis, with a view to bringing services where they are needed and ensuring returns on investment.


Producing a simulation map of the bus rapid transit routes: simulation is a method of viewing the existing transportation performance in a city and seeing how this performance may change with growing demands. It is not possible to accurately predict the spatial processes on specific streets or intersections in detail, due to varied factors; looking at the performance of individual nodes at a citywide, geographically aggregated level is a better option. First, a model is built to simulate existing conditions, and then performance can be assessed as traffic demands or controls change (Daganzo et al., 2007).

Producing local government area maps: needed to know the areal extent that will be serviced by LAMATA.

Producing a traffic management map (route planner): traffic is a major challenge in Lagos State; a traffic management map is to assist LAMATA in planning transport infrastructure for traffic management.

Producing a nearest facility map: a nearest facility map captures the geographic location and spatial distribution of service facilities, such as petrol stations, banks and car parks, along or in proximity to transport routes. A sketch of this kind of query follows.
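The following is a minimal sketch of the nearest-facility query underlying such a map. The coordinates and facility names are assumed for illustration, and shapely's planar distance stands in for the geodesic or road-network distance a production GIS would use.

from shapely.geometry import Point

# Assumed facilities along a transport route (longitude, latitude)
facilities = {
    "petrol station": Point(3.3792, 6.5244),
    "bank": Point(3.3915, 6.4550),
    "car park": Point(3.3612, 6.5355),
}
commuter = Point(3.3750, 6.5200)  # assumed commuter location

# Planar nearest-neighbour search; projected coordinates or network
# distance would replace this in a real deployment
nearest = min(facilities, key=lambda name: commuter.distance(facilities[name]))
print("Nearest facility:", nearest)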

3.2 Objectives of the LAMATA GIS Mandate

1) To develop a database containing data on all elements of the LAMATA declared road network: deploying GIS in transportation at an enterprise level presents an opportunity to eliminate the traditional application-specific development pattern of information systems by providing a common data structure centred on transportation features. The solution is to embrace the diversity of applications and data requirements within a unifying enterprise data model for GIS that allows each application group to meet its established needs while enabling the enterprise to integrate and share data. The primary objective of this model is to allow frequent transaction-based data exchanges and updates, the type that an interactive organization is likely to need. As transportation agencies move toward a more integrated manner of doing business, such as involving design units earlier in the project planning cycle, the need for data to cross former institutional or jurisdictional barriers will become greater (Butler and Dueker, 2000).

2) To train end users on the use of GIS in their departments: department-specific professional staff positions that require GIS usage need to be addressed separately.

3) Once GIS has been effectively integrated into the business processes of individual departments, employees with GIS skills become more valuable.

4) Mapping traffic furniture in Lagos, e.g. traffic lights and zebra crossings: the concept of highway asset management is becoming increasingly important for those responsible for attaining best value in managing highway networks. Asset management is not a new concept, and most highway authorities already practice elements of it; the service-wide application of asset management is, however, relatively new. Key elements of asset management that need to be defined for each asset are: specification and location; performance, condition and criticality; and planned maintenance. Information on these elements is required to develop and refine financial and risk models, as well as to contribute to the asset valuation process. More recently, with geo-referenced data available within GIS systems, the concept of associative spatial maintenance has become a reality. This concept aims to identify, for any location where an asset requires maintenance, any adjacent assets in close proximity that do not necessarily require immediate attention, but where maintenance can be carried out simultaneously within a budget-efficient time frame (www.roadcodes.org, 2008).

5) Mapping the population density of Lagos State by local government area: population density is a measurement of population per unit area or unit volume. It is frequently applied to living organisms, particularly to humans, and is a key term in geography.

6) Producing a directional and locational guide map of local government areas in Lagos State: a map gives a miniature "picture" of a very large space and serves as a guide to a space not encountered before, showing distances, mountains, rivers, and the shapes of places or destinations.

7) Producing a map that represents traffic situations on Lagos roads: interaction between road users during road crossing involves decision-making and prediction of future states of the scene. To predict cooperation and competition for the temporarily limited road space, a dynamic driver decision model is needed (Wieringa, 2004).

8) Capturing the data of agencies/organizations with a sizeable number of branches on a map, for easy identification and location: government agencies use public funds to collect and maintain data, including location-based or spatial data (geodata), in pursuit of their missions. The value of this important investment is often best realized through the widespread distribution and use of government-held data.


3.3 Mandate Implementation Strategies

A critical element of a GIS operation is data, which informs LAMATA's vigorous policy quest for accurate and current spatial data. This should result in the build-up, continual populating and maintenance of a reliable spatial database. The current infrastructural projects in Lagos are redefining the metropolitan landscape, with a dire need for current and accurate spatial data that will be highly instrumental to planning, executing and delivering a world-class transport system befitting a megacity. The data required by LAMATA is sourced from reliable GIS vendors with proven track records of accuracy. It is worthy of note that GIS data is usually characterized by accuracy issues: it is very unlikely for two GIS operators to go to the field and obtain perfectly the same spatial accuracy in their data, for reasons ranging from human to natural factors.

With a current and accurate spatial database, LAMATA is saddled with the responsibility of locating, identifying and collaborating with the end users of the GIS so as to bring about an effective transport system. These end users include, but are not limited to, transport system developers (government and private agencies), construction companies, estate developers and land registries. Based on an efficient GIS, LAMATA acts as moderator to every institution whose activities hinge on transport development; this is done by training the end users of the GIS. The end product of a GIS is usually a map reflecting the spatial dimension of acquired data, and a detailed map, including plotted and overlaid data on the road network, is required for effective transport planning.

4. POSITIVE IMPACT OF GIS ON LAMATA

The utilization of GIS in the realization of a world-class transport system is a remarkable work in progress in metropolitan Lagos. Major successes have been recorded at the pre-planning and post-planning stages of the light rail and bus rapid transit (BRT) schemes. This has led to easier vehicular mobility, reducing the previously high incidence of traffic congestion and its socio-economic consequences. In further elucidating the successes recorded in the transport system of metropolitan Lagos through the use of GIS, it is noteworthy that the most organized places or regions of the world are the best mapped. It is in recognition of this fact that LAMATA has produced thematic maps displaying diverse scenarios, ranging from land use patterns to transport route patterns, leading to the production of land use maps and route maps respectively. Abler et al. (1971) postulated that spatial patterns and spatial processes are circularly causal.

This means that spatial pattern leads to spatial process and vice versa. Understanding spatial patterns and spatial processes is very important in transport planning, and it is on the strength of this that LAMATA has used satellite imagery to monitor human and vehicular traffic, which has positively influenced transport planning. The expansion of its influence in the Lagos metropolis is being felt with the parallel creation of bus rapid transit routes alongside new road constructions and expansions.

4.1 Challenges

Given the high cost of data from GIS vendors, the creation of spatial data has been a very high-cost process involving largely manual data conversion from paper maps or survey information, especially when high precision (metre or sub-metre) is required, as is the case for utility and cadastral applications.

It is highly challenging to get end users interested in learning GIS software. Most GIS software is written from the vendor's perspective: a company selling GIS software often has to 'idealize' a vast array of municipal GIS needs, and the result is complex and overwhelming software that does more than necessary for any given task. To the average municipal employee, GIS is just too complex. In addition, there can be growing scepticism towards learning GIS technology, since most municipal employees may have already developed some computer skills using non-spatial 'data storage software' and are not interested in investing more time in learning new skills; this is exacerbated by the fact that GIS is often so different from the software they are accustomed to using (Cardenas, 2007).

Another challenge is inaccurate data from different GIS vendors. The problem with this business process is that getting field-collected information back into the system of record can take anywhere from three days to three weeks to three months. All of the maps produced and printed during that time have the potential to be inaccurate or incomplete, and all of the collaboration and business decisions made during that time are based on inaccurate or incomplete information. So not only is the current process time-consuming and prone to human error, it also exposes organizations to greater risks.

Keeping the GIS database up to date is a further challenge. Database development is a crucial requirement for the adoption and integrated use of GIS in any organization or establishment, and it is central to GIS modelling, analysis and mapping, as it allows all possible features in the specific map area to be captured and described, and allows basic query languages to be run for geospatial evaluation of the area of study (Gumos, 2005). In LAMATA, the high cost of data, inaccurate data from different vendors and inadequate GIS-proficient manpower make maintaining an up-to-date GIS database an uphill task.


5. CONCLUDING REMARKS

The megacity project of Lagos State is still a work in progress, and a great deal of planning is required if Lagos State is to attain world-class megacity status. The success recorded in using GIS on transport routes in parts of Lagos should be extended to other parts of the state and its environs, including Ogun State. GIS should also find relevance in every structural and infrastructural development of Lagos State. The implication of this is that the harsh effect of every form of migration to Lagos, which results in congestion, will be reduced to the barest minimum. Environmental issues prevalent in Lagos, such as flooding, can also be mitigated; the land registry documentation system improved; and facilities mapping and inventory improved and kept up to date for effective and efficient taxation. In this 21st century, the importance of GIS in the structural and infrastructural development of any nation cannot be overemphasised.

References:
1. Daganzo, C.F. (1997). Fundamentals of Transportation and Traffic Operations. Oxford, UK: Elsevier Science Ltd.
2. Encarta Encyclopaedia (2008).
3. Fannie Mae Foundation (1998). Journal of Housing Research, Volume 9, Issue 1, p. 157.
4. Geographic Information System Department (2009). Lagos Area Metropolitan Transport Authority, Lagos, Nigeria.
5. Butler, J.A. & Dueker, K.J. (2000). Implementing the Enterprise GIS in Transportation Database Design.
6. Wieringa, P.A. (2004). Fuzzy Model for Decision Making Support Maximum Transport. 2004 IEEE International Conference on Systems, Man and Cybernetics.
7. www.lagosstate.gov.ng
8. www.wikianswers.com
9. www.roadcodes.org
10. ANZLIC (Australia and New Zealand Land Information Committee) (1996). Spatial Data Infrastructure for Australia and New Zealand. http://www.anzlic.org.au/spdianz.html


Computing, Information Systems & Development Informatics Journal


Volume 3. No. 2. May, 2012

Strategies for Effective Adoption of Electronic Banking In Nigeria


Orunsolu A.A., Bamgboye O. & Alaran M.A.
Department of Computer Science
Moshood Abiola Polytechnic, Abeokuta, Nigeria
orunsoluabdul@yahoo.com

Aina-David O.O.
Department of Business Studies
Moshood Abiola Polytechnic, Abeokuta, Nigeria

Reference Format:

Orunsolu A.A., Bamgboye O., Aina-David O.O. & Alaran M.A. (2012). Strategies for Effective Adoption of Electronic Banking In Nigeria. Computing, Information Systems & Development Informatics Journal, Vol. 3, No. 2, pp. 75-81.


Strategies for Effective Adoption of Electronic Banking In Nigeria


Orunsolu A.A, Bamgboye O., Aina-David O.O & Alaran M.A

ABSTRACT
A major trend in modern industrial and commercial systems is the integration of electronic services into different levels of business operations. This has resulted in an explosion in the global electronic exchange of the monetary value of products and services. The continued upward trend in this electronic revolution depends on a number of factors. The aim of this research is to investigate the factors that influence the adoption of e-banking in Nigeria. The research is particularly timely and urgent considering the new cashless regime of the Nigerian apex bank. Key factors impacting e-banking adoption are identified from the literature review. The research is structured around a survey using a questionnaire, online responses from social websites and face-to-face interviews. The results provide a foundation for an enhanced e-banking adoption strategy and for the practical development of e-banking products.
Keywords: E-Commerce, Adoption factors, E-payment systems, E-banking, Drivers

1. INTRODUCTION

The advent of the internet has caused profound technological innovations in the delivery of personal and business services. Electronic commerce was born of the development of the internet, and electronic banking is built upon efficient e-commerce and e-payment services. Globally, the introduction of e-commerce and e-payment services has injected over US$7 trillion into the monetary value of products and services (Sanders, 2000). As e-commerce plays a prominent role in the implementation of most business operations, the financial sectors in both developed economies and emerging economies like Nigeria enjoy unparalleled patronage of e-commerce products and services. On the management side, e-commerce enables business and financial houses to reduce telecommunication costs, minimize warehousing expenses, encourage extensive market penetration and global presence, and cut down the distribution chain (Mallat, 2007). On the customers' side, the attractions include convenience, queue avoidance, time compression, flexibility, personalized services and 24/7 service availability. These benefits are a great attraction, especially for the banking sector, which forms the cornerstone of all business transactions.

It is evident that banks and other financial institutions are embracing e-banking. In developed economies, the rate of adoption shows geometric growth, due to adequate infrastructural support, legal legislation and the large representation of the formal sector in the economy. In Africa, e-banking penetration presents a promising future (Au and Kauffman, 2008). For example, in Nigeria the value of card transactions stands at US$1.2 million a week (www.punchng.com). In addition, the Central Bank of Nigeria projected the deployment of 150,000 POS terminals and ATM points, from the current number of 5,300 across the country, to boost the penetration of financial e-services. Since Nigeria is one of the fastest-growing IT markets in Sub-Saharan Africa, the deployment of extensive e-banking is urgent, to facilitate the maximum benefit of the telecommunication revolution and to reduce the handling cost of cash-based transactions.

However, findings from previous studies suggest that the economic distribution of the Nigerian business environment is largely informal. In addition, an investigation by one of the electronic payment initiatives in Africa (Card Technology Today, 2008) indicates that only 20% of families in Africa have bank accounts. These figures present a challenging platform for e-transaction initiatives, as the majority of the population is unbanked.

The objective of this study is to explore customer adoption of e-banking by detecting the adoption determinants and strategies that are relevant for the effectiveness of retail banking in Nigeria's cashless regime. The adoption strategies are based on the results and implications of data obtained through the administration of a questionnaire, online feedback from a social network website and face-to-face interviews. In addition, the study identifies some emerging issues which provide useful information for all the stakeholders in e-banking initiatives.

2. RELATED WORKS

The wide patronage of e-commerce services has prompted extensive research, ranging from security of transactions to acceptability and effectiveness of e-commerce solutions. Electronic banking has emerged as one of the most vibrant aspects of e-commerce, with banks turning to IT to improve business efficiency and service quality (Kannabiran and Narayan, 2005), support growth, promote innovation and enhance competitiveness (Gupta, 2008). E-banking has become an important channel to sell products and services and is perceived to be a necessity in order to stay profitable and successful in the new financial world (Christopher et al., 2006).

Gikandi and Bloor (2010) investigated the adoption and effectiveness of retail banking in Kenya. The authors noted explosive growth of e-banking adoption, especially in the area of ATMs, and identified the drivers and emerging issues necessary for increased adoption of e-banking solutions. Ayo (2006) examined the prospects of e-commerce based on an ability, motivation and opportunities model.


The author submitted that virtually all companies have an online presence, and that motivation and opportunities for e-commerce are low owing to the lack of e-payment infrastructure and access to ICT facilities. Mahdi and Mehrdad (2010) used chi-square tests to determine the impact of e-banking from the viewpoint of customers in an emerging economy, specifically considering Iranian banks to determine the level of customer satisfaction with the use of e-banking. Agboola (2006) studied the impact of electronic payment systems and tele-banking services in Nigeria and revealed enormous potential for tele-banking if the attendant barriers identified in the research were taken care of. Similarly, Nwaolisa and Kasie (2011) investigated user acceptability and payment problems in electronic retail payment systems in Nigeria, examining the contribution of electronic retail payment to the elimination of inherent problems with the payment process, using secondary and primary sources of data. In related research, Zulu (2006) examined the challenges to e-payment systems in Africa, identifying inadequate communication infrastructure, low internet bandwidth, frequent power interruption, lack of a proper legal and regulatory framework and low access to credit as some of the challenges militating against the survival of e-payment solutions in Africa. Adeoti and Oshotimehin (2011) investigated the factors influencing customers' decisions to adopt point of sale terminals, identifying ease of use, availability, convenience and nativity as motivating factors, and the security of transactions and the complexity of the PoS technology as areas needing improvement to drive customer interest.

On the security side, Ayo and Ukpere (2010) proposed the design of a unified e-payment system to reduce the number of ATM cards carried by customers with more than one account, submitting that such a unified e-payment solution would reduce identity theft if coupled with a biometric-based cash dispenser. Similarly, Yang (2009) investigated online payment and e-commerce security as the foundation on which e-commerce can develop smoothly, identifying validity of information, non-repudiation of information, authenticity of transaction status, reliability of the system and integrity of information as key e-commerce security elements. In addition, Kim et al. (2010) carried out an empirical study of customers' perceptions of security and trust in e-payment systems, proposing a conceptual model that delineates the determinants of customers' perceived security and perceived trust, as well as their effects on the use of e-payment systems; the work provides a theoretical foundation for the security aspects of e-payment systems.

2.1 PAYMENT SYSTEMS IN E-BANKING

E-banking is built on efficient e-payment systems, and its survival is a function of the usability, convenience, complexity and security of those systems. As e-banking becomes a major driver of financial industry operations, different e-payment methods have been devised. In general, five e-payment methods can be identified (Guan and Hua, 2003; Dai and Grundy, 2007; Schneider, 2007):

1. Debit cards: one of the most widely accepted forms of e-banking payment. A customer who maintains a valid account is issued a debit card, which makes an automatic deduction from the account when a debit transaction is performed. Owing to convenience, queue avoidance and ease of use, it is the most common e-banking product used by Nigerians.

2. Credit cards: these involve an irreducibly complex transaction structure (Hsieh, 2001) which is inappropriate for small-value transactions. A server authenticates the credit card holder and verifies with the creditor (bank) whether adequate funds are available for the transaction. Complexity, transaction charges and privacy are key issues in credit card transactions.

3. Pre-paid cards: issued for a particular value by a particular merchant and frequently used in in-store transactions (Kim et al., 2010). This payment method is characterized by ease of use and convenience.

4. Electronic cash: a method of payment where a unique identification is associated with a specific amount of money and transactions are settled via the exchange of electronic currency. This method is most popular for internet purchases of goods and services.

5. Electronic cheques: this method makes it possible for an institution to settle a transaction between the buyer's bank and the seller's bank electronically.

Debit cards, otherwise called ATM cards, are still the most common e-banking product used by most Nigerians. Credit cards are gaining popularity, especially among core formal sector employees and employers, for settling internet payments for goods and services. However, most Nigerians have low awareness of the difference between credit and debit cards. The complexity of the credit card system, electronic cash and electronic cheques demands a lot of attention and awareness in order to give Nigerians a wider platform of e-banking solutions. A simplified sketch of the debit-card flow appears below.
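The sketch below illustrates the debit-card flow described in item 1; it is not a real banking API, only a toy illustration of automatic deduction when the account holds sufficient funds.

class Account:
    """Hypothetical customer account backing a debit card."""
    def __init__(self, balance):
        self.balance = balance

def debit_transaction(account, amount):
    # Automatic deduction, as with a debit card: decline if funds are short
    if amount > account.balance:
        return "declined: insufficient funds"
    account.balance -= amount
    return f"approved: new balance {account.balance}"

acct = Account(balance=10_000)  # assumed opening balance in naira
print(debit_transaction(acct, 2_500))   # approved: new balance 7500
print(debit_transaction(acct, 50_000))  # declined: insufficient funds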

3.0 MATERIALS AND METHODS

The objective of this study is to explore the adoption factors for extensive penetration of e-banking in Nigeria and also to identify the barriers to such adoption. A three-stage measurement assessment was used to investigate the research question. The first stage was conducted through a review of relevant literature (Gefen et al., 2000). In stage two, a set of sampled variables from the population, covering gender differences, education level, types of organization and other related items, was identified. Stage three proceeded to data collection using a questionnaire, online responses from social websites and face-to-face question sessions. The questionnaire has two sections, covering personal details, type of organization, perceived security in e-commerce, e-commerce use and technical support. A pilot survey was conducted to ascertain that the questionnaire was adequate in content, and the efficiency of the questionnaire method was further investigated with face-to-face interviews of the sample population.


The research questions were further investigated using an online social network (Figure 1). Questions bordering on e-banking security, awareness level and technical support were posted on the social network for visitors to answer. The questions were structured in a simple manner to motivate and engage visitors' interest. To improve the response rate, comments were made on some social website users' walls about their postings, while soliciting their support in voting on the research questions. In terms of age, 56.1% of participants were under 25 years, 24.4% between 25 and 35 years, 16.3% between 35 and 45 years and 3.2% above 45 years.

The composition of the sample could limit the generalization of the results. However, Lin and Lu (2000) argued that results obtained from the analysis of this type of sample can still reflect true phenomena and provide significant outcomes, because the young and middle-aged population is the most important stratum for technology-driven research such as e-commerce. Hence, the sample can be regarded as representative of the whole population of prospective e-banking target users, now or in the future.

Fig 1: Sample online responses from a social network website

All the respondents in the survey considered queue avoidance, convenience and time compression as drivers of extreme importance for e-banking initiatives, based on their present experiences with the automatic teller machine. This is consistent with the findings of Mohamed and Didi (2005) and Adeoti and Oshotimehin (2011). The desire to carry out transactions at any time of the day has attracted the attention of both the formal and the informal sector to e-services. In addition, the high risk associated with cash-based transactions on interstate business trips has reduced significantly, because more Nigerians are patronizing the electronic service option. However, more than 75% of the core informal sector, comprising market women, small store owners and petty traders, do not consider these advantages, because of low awareness and the perceived complexity associated with electronic services. The submissions of more than 91% of the respondents showed that this trend could be reversed with the availability of local languages on PoS and ATM facilities. Poor internet penetration, a low level of computer education, high illiteracy and technological factors were identified by more than 60% of the respondents as likely barriers to e-banking initiatives. This is consistent with the work of Gikandi and Bloor (2010) and Zulu (2006), who identified internet infrastructure as a major challenge for banks and customers in e-banking adoption.

4. DISCUSSION OF FINDINGS & IMPLICATIONS

Table 1 shows the results of the data analysis on drivers of and barriers to e-banking adoption in Nigeria, based on participants' responses. The items were ranked in order of importance, from 1 (least importance) to 5 (extreme importance), as contributor (+) or inhibitor (-).

Table 1: Ranking of e-banking contributing items

Contributing item                          Contributor (+)   Inhibitor (-)
Queue avoidance                                  5                 1
Convenience                                      5                 1
Automatic availability                           4                 1
Customers' trust                                 3                 3
Complexity of e-transaction                      2                 3
Security of e-transaction                        2                 4
E-banking legislation support                    3                 4
Internet infrastructure in Nigeria               2                 4
Local dialect based e-services                   4                 1
Exactness of payment bills                       5                 1
Transaction error and e-service failure          2                 5


Lack of legal regulation is seen by 67% of the sample population as a barrier to e-banking initiatives. This is expected in a country such as Nigeria, where prolonged legal litigation is common. The question of who takes the liability in the case of lost funds should form the core of the legislation, as the majority of the respondents hold the reservation that the bankruptcy rate will be high in an e-banking regime. The submission of one of the social network participants provides a good direction for a solution in this regard: the participant commented that e-banking will work in Nigeria if banks bear the liability for any transaction lost through e-banking, with the customer bearing liability only in the case of cash loss. This is an important motivation for e-banking adoption in Nigeria.

Security of transactions on the e-banking platform features prominently, in the submissions of more than 95% of the respondents, as a contributing factor. More than 60% of the respondents submitted that they have had a negative experience with ATMs. This is a negative indicator for the future of e-banking initiatives, as most of the respondents see the current password regime on debit cards as largely inadequate.

This is supported by Gupta (2001), Aladwani (2001) and Hwang et al. (2003), who cited security and customer-related issues as contributors of extreme importance. In addition, the Gartner Group reports that 95% of customers on e-banking platforms in developed economies are somewhat concerned with the security and privacy arising from the use of e-payment services (Kim et al., 2010). Similarly, previous research proposes that perceived security and trust contribute significantly to electronic commerce success (Siau et al., 2004; Xu and Gutierrez, 2006).

From the discussion and Table 1, a survival quotient S, modelled as a function of the composite contributor score (C) and inhibitor score (I), is interpreted as follows:

S > 1: e-banking growth indicator
S = 0: equilibrium of e-banking and cash-based transactions
S < 0: e-banking collapse indicator

Therefore, the success or failure indicated by S is largely determined by the ability to minimize the composite effect of the inhibitors (I) and to maximize the contributors (C). The implications of this survival quotient are given in Table 2, considering the descriptions and expectations of the major contributing factors from the survey.
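Since the paper presents the quotient only through its interpretation thresholds, the sketch below aggregates the Table 1 rankings into composite scores and applies one assumed form of S; the normalization is illustrative, not the authors' formula.

# Rankings copied from Table 1 (contributor and inhibitor columns)
contributor = [5, 5, 4, 3, 2, 2, 3, 2, 4, 5, 2]
inhibitor   = [1, 1, 1, 3, 3, 4, 4, 4, 1, 1, 5]

C, I = sum(contributor), sum(inhibitor)

# Assumed form of the survival quotient: net score scaled by the inhibitors.
# Any S = f(C, I) that grows with C and shrinks with I fits the interpretation.
S = (C - I) / I

if S > 1:
    print(f"S = {S:.2f}: e-banking growth indicator")
elif S == 0:
    print(f"S = {S:.2f}: equilibrium of e-banking and cash-based transactions")
else:
    print(f"S = {S:.2f}: below the growth threshold")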

Table 2: Emerging implications of the e-banking initiative in Nigeria

1. Vagueness of transaction
Description: Lack of transaction records, receipts, documentation, and errors in payment transactions. Device and network reliability is a common concern because of likely failure in the middle of a transaction.
Expectation: Offer customers sufficient payment documentation; communicate the implications of errors in transaction procedures and possible solutions; upgrade service and network connectivity; provide toll-free calls for customers' complaints.

2. Security of transaction
Description: Increases customers' trust in electronic banking services. Authentication, confidentiality and privacy are issues of great concern in e-banking adoption.
Expectation: Customers will continue to patronize e-banking services that guarantee security. This will continue to be a challenge of extreme importance.

3. E-banking awareness
Description: The benefits of e-banking in terms of safety, efficient documentation of transactions, simplicity and openness. The implications of errors in transaction procedures need to be explicitly stated.
Expectation: Awareness will progressively continue as a driver of extreme importance, especially among the core informal sector.

4. Legal regulation
Description: Regulations on e-banking with emphasis on financial operators as the chief custodians of the security of e-based transactions, so that customers need only fear cash loss in the case of non-e-banking transactions. Regulations for B2B and B2C need to be explicitly implemented.
Expectation: The government is expected to provide the legal and constitutional framework for banks and customers.

5. Technological complexity
Description: The degree to which an innovation is perceived as difficult to understand and use. Sophisticated modes of operation and machine design will continue to be a major challenge to e-banking adoption in Nigeria, with its large core informal sector.
Expectation: Availability of local dialects on e-banking solutions, including ATM programming and PoS; simpler and faster transaction procedures; the use of colour codes on machines to aid identification and usability for prospective uneducated users.


5. CONCLUSIONS

This paper evaluates adoption strategies for the effectiveness of e-banking initiatives in Nigeria towards a sustainable cashless regime. Past studies were reviewed in developing the questionnaire used in the research. The study captures the drivers of and barriers to e-banking adoption, as well as the implications of the findings for the future fortunes of e-banking initiatives. Although the research has produced some significant findings from the viewpoint of customer adoption, it does not consider factors such as competitive forces within the financial sector, specific e-payment functions, and intra- and inter-bank transactions. Despite this limitation, the research constitutes an important stepping stone, as it unearths customers' perceptions of the cashless regime. It also provides a foundation for future work by extending the findings of the present study to more general research questions on the adoption of e-banking initiatives, services, solutions, products and technologies.

With the exponential expansion of internet technology, extensive investment in infrastructural development and the increasing popularity of e-commerce, e-banking may become an important part of an individual's life. Banks should therefore design efficient strategies with great emphasis on enhanced transaction security, increased customer awareness, simplification of e-transactions and indigenization of e-banking solutions to reflect local content, in order to keep and win more customers in an increasingly competitive financial business environment.

REFERENCES

Adeoti, O. & Oshotimehin, K. (2011). Factors Influencing Customers' Adoption of Point of Sale Terminals in Nigeria. Journal of Emerging Trends in Economics and Management Sciences.

Agboola, A.A. (2006). Electronic Payment Systems and Tele-banking Services in Nigeria. Journal of Internet Banking and Commerce, Vol. 11, No. 3. Online: http://www.arraydev.com/commerce/jibc/

Aladwani, A.M. (2001). Online Banking: A Field Study of Drivers, Development Challenges and Expectations. International Journal of Information and Management, pp. 213-225.

Au, Y. & Kauffman, R. (2008). The Economics of Mobile Payments: Understanding Stakeholder Issues for an Emerging Financial Technology Application. Electronic Commerce Research and Applications, pp. 141-164.

Ayo, C.K. & Ukpere, W.I. (2010). Design of a Secure Unified E-payment System in Nigeria: A Case Study. African Journal of Business Management, Vol. 4, No. 9, pp. 1753-1760.

Ayo, C.K. (2006). The Prospects of e-Commerce Implementation in Nigeria. Journal of Internet Banking and Commerce, Vol. 11, No. 3. Online: http://www.arraydev.com/commerce/jibc/

Card Technology Today (2008). ePayment: Powering West Africa, pp. 10-11.

Christopher, G.C., Mike, L., Visit and Amy, W. (2006). A Logit Analysis of Electronic Banking in New Zealand. International Journal of Bank Marketing, pp. 360-383.

Dai, X. & Grundy, J. (2007). NetPay: An Off-line Decentralized Micro-payment System for Thin-client Applications. Electronic Commerce Research and Applications.

Gefen, D., Straub, D. & Boudreau, M. (2000). Structural Equation Modeling and Regression: Guidelines for Research Practice. Communications of the Association for Information Systems, Article 7, pp. 1-30.

Gikandi, J. & Bloor, C. (2010). Adoption and Effectiveness of Electronic Banking in Kenya. Electronic Commerce Research and Applications, pp. 277-284.


Guan, S. & Hua, F. (2003). A Multi-agent Architecture for Electronic Payment. International Journal of Information Technology and Decision Making, pp. 497-522.

Gupta, U. (2001). Information Systems: Success in the 21st Century. Prentice Hall.

Gupta, P.K. (2008). Internet Banking in India: Consumer Concern and Bank Strategies. Global Journal of Business Research, Vol. 2, No. 1, pp. 43-51.

Hsieh, C. (2001). E-Commerce System: Critical Issues and Management Strategies. Human Systems Management, pp. 131-138.

Hwang, J., Yeh, T. & Li, J. (2003). Securing On-line Credit Payment without Disclosing Privacy Information. Computer Standards and Interfaces, pp. 119-129.

Kamel, S. (2005). The Use of Information Technology to Transform the Banking Sector in Developing Nations. Information Technology for Development, Vol. 11, No. 4, pp. 305-312.

Kannabiran, G. & Narayan, H. (2005). Deploying Internet Banking and e-Commerce: Case Study of a Private Sector Bank in India. Information Technology for Development, Vol. 11, No. 4, pp. 363-379.

Kim, C., Tao, W., Shin, N. & Kim, K. (2010). An Empirical Study of Customers' Perceptions of Security and Trust in E-payment Systems. Electronic Commerce Research and Applications, pp. 84-95.

Lin, J. & Lu, T. (2000). Towards an Understanding of the Behavioural Intention to Use a Website. International Journal of Information Management.

Mahdi, S. & Mehrdad, A. (2010). E-Banking in Emerging Economy: Empirical Evidence of Iran. International Journal of Economics and Finance, Vol. 2, No. 1, pp. 201-209.

Mallat, N. (2007). Exploring Customer Adoption of Mobile Payments: A Qualitative Study. Journal of Strategic Information Systems, pp. 413-432.

Muhammed, Q. & Didi, A. (2005). A Model of Electronic Commerce Success. Telecommunications Policy, pp. 127-152.

Nwaolisa, E.F. & Kasie, E.G. (2011). Electronic Retail Payment Systems: User Acceptability and Payment Problems in Nigeria. Arabian Journal of Business and Management Review, Vol. 5.

Sanders, M. (2000). Global e-Commerce Approaches Hyper Growth. Forrester Research, Cambridge, MA.

Schneider (2007). Electronic Commerce. Thomson Course Technology, Canada.

Siau, K., Sheng, H., Nah, F. & Davis, S. (2004). A Qualitative Investigation on Consumer Trust in Mobile Commerce. International Journal of Electronic Business.

www.punchng.com, accessed in 2011.

Xu, G. & Gutierrez, J. (2006). An Exploratory Study of Killer Applications and Critical Success Factors in M-Commerce. Journal of Electronic Commerce in Organizations, pp. 63-79.

Yang, J. (2009). Online Payment and Security of E-commerce. Proceedings of the International Symposium on Web Information Systems and Applications, China.

Zulu, B. (2006). E-Payment a Challenge for Africa. Available at: http://brendait.blogspot.com/2006/03/epayment-challenge-for-africa.html

81

Computing, Information Systems & Development Informatics Journal Vol 3. No. 2, May , 2012

Computing, Information Systems & Development Informatics Journal


The Computing, Information Systems and Development Informatics Journal (CISDI) provides a distinctive international perspective on theories, issues, frameworks and practice at the nexus of computing, information systems, development informatics and policy. A new wave of multidisciplinary research effort is required to provide pragmatic solutions to many of the problems the world faces today. With computing and information technology (IT) providing the momentum needed to drive growth and development in different spheres of human endeavour, there is a need to create platforms through which breakthrough research findings that cut across disciplines can be reported. Such dissemination to a global audience will in turn support future discoveries, sharpen the understanding of theoretical underpinnings and improve practice.

The CISDI Journal publishes cutting-edge research in computing, short communications and reviews, and development informatics work on the appropriate design, localization, development, implementation and usage of information and communication technologies (ICTs) to achieve development goals. We also promote policy research that seeks to employ established (and proposed) legal and social frameworks to support the achievement of development goals through ICTs, particularly the Millennium Development Goals.

The CISDI Journal is published four times a year. Special issues are also published periodically from papers presented at conferences or other academic meetings. Technical reports are welcome and are published when available. Authors should submit manuscripts for consideration as e-mail attachments to the Managing Editor at info@cisdijournal.net or longeolumide@fulbrightmail.org. Submissions should not be longer than 5,000 words, including abstract, keywords and references.

The CISDI Journal publishes research articles, short communications, empirical research, case studies, conference proceedings and reviews (including book reviews) in the following focus areas (and other allied themes):

* General Computing
* Hardware Technology
* Software Engineering
* Web Technologies
* Information Technology
* Information Systems
* Information Science
* Data & Information Management
* Information Security
* Business Computing
* Business Information Systems
* Computer Networks
* Artificial Intelligence
* Theory of Computation & Automata
* Software Metrics and Measurements
* Knowledge-based Systems
* Database Management
* Data Mining & Data Warehousing
* Knowledge Management
* e-Government Portals
* Computer Forensics & Data Privacy
* E-Systems (Webocracy, e-Democracy, e-Learning, e-Commerce, e-Government, e-Health, e-Agriculture)
* Citizen-centric Information Systems
* Web-enabled Knowledge Management
* ICT-enabled Systems in the Public and Private Sectors
* Internet Governance
* Information Systems Policy
* TeleHealth, Telemedicine & Telemarketing
* Design Structures & Annotations
* Computer Graphics & Games
* Multimedia & Mixed Media Systems
* E-Library and Virtual Library Systems
* Wireless Networks & Applications
* Economic Intelligence Systems
* Development Informatics
* Mobile Applications & Technologies
* Information Technology Policies
* Web Usage Ethics & Policies
* Enterprise Informatics and Policies
* Social Informatics, Social Media and Policies

Only electronic submissions are accepted. We welcome submissions on a rolling basis. Authors of accepted papers will be required to pay pagination fees.


CISDI JOURNAL AUTHORS' PUBLICATION TEMPLATE

Computing, Information Systems & Development Informatics Paper Title (Font Size 24)
Space (14 points font size)
Authors' Names (12 Points Font Size, Bold)
Affiliations (10 Points Font Size)
E-mail Address and Phone Numbers (10 Points Font Size, Italicized)

(Single-column format for the section above, please)

1. INTRODUCTION
All manuscripts must be prepared on A4-size paper, with one-inch top and bottom margins and 0.75-inch left and right margins, and must be written in English. These guidelines include complete descriptions of the fonts, spacing, and related information for producing your manuscript for publication. Follow the style in this template when preparing your articles/papers/manuscripts for submission.

2. TYPE STYLE AND FONTS
Times New Roman, 10 points, is the acceptable font for typesetting the entire body of the manuscript. If this font is not available on your word processor, please use the font closest in appearance to Times. Pictures and graphics, preferably in JPEG or Bitmap format, can be embedded, as can mathematical and scientific symbols. Equations should be numbered, and acronyms/abbreviations should be explained, defined or written in full on first use.

3. CITATIONS IN THE BODY OF THE WORK
Referencing in the body of the work should use square braces, e.g. [7]. Works cited should be listed in the bibliography/references section at the end of the paper chronologically. Footnotes are not encouraged.

3.2 Main and Sub-Headings
Main headings should be in UPPERCASE and preceded by a point number. For sub-headings, use toggle case or capitalize the first letter of each sub-heading. Sub-headings should be bold and italicized. Provide a space between each section and its sub-sections.

3.3 Columns
Submissions MUST follow the two-column format depicted in this document, with equal column widths of 3.38 cm and column spacing of 2.5 cm. Lines are not allowed between columns. Where necessary or unavoidable, the two-column format can be merged into a single column to accommodate pictures, tables or graphics. (A minimal LaTeX approximation of this page set-up is sketched after Fig. 1 below.)

3.4 Paragraphing
The block paragraph structure is adopted for this journal.

4. TABLES AND FIGURES
Table headings/titles should be written above the tables; figure headings should be written below the figures. These titles should be numbered sequentially within the body of the text: Fig. 1 for the first figure, Fig. 2 for the next, and so on. Figures should not be numbered according to subsections of the manuscript. Tables should be labelled in the same sequence. Centre figures; align tables to the left, as in the illustrative sketch that follows Table 1.

Table 1: How To Set Tables

Table Head    Table Column Head
              Table column subhead    Subhead    Subhead
copy          More table copy (a)

Source: (Adefolake, 2012)
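A hedged LaTeX rendering of this convention (caption above the table, table set flush left) might look as follows; this is only an illustrative sketch, since the journal's own template is prepared in MS Word:

% Illustrative sketch only: Section 4 conventions in LaTeX terms.
% The caption is placed above the tabular material, and the table
% is aligned to the left rather than centred.
\begin{table}[ht]
  \raggedright
  \caption{How To Set Tables}
  \begin{tabular}{|l|l|l|l|}
    \hline
    Table Head & \multicolumn{3}{l|}{Table Column Head} \\ \hline
               & Table column subhead & Subhead & Subhead \\ \hline
    copy       & More table copy$^{a}$ & & \\ \hline
  \end{tabular}
\end{table}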

Fig. 1. Example of a figure caption
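For authors who prefer to draft in LaTeX before converting to the required format, the page set-up described in Sections 1 to 3 might be approximated as below. This is a minimal sketch under assumptions of our own: the article class, geometry and mathptmx packages are illustrative choices, not journal requirements, and the journal's official template is Word-based.

% Minimal, illustrative approximation of the CISDI page set-up.
% Class and package choices are assumptions, not journal requirements.
\documentclass[10pt,a4paper,twocolumn]{article}
\usepackage[top=1in,bottom=1in,left=0.75in,right=0.75in]{geometry} % Sec. 1 margins
\usepackage{mathptmx}          % Times-like text and maths font (Sec. 2)
\setlength{\columnsep}{2.5cm}  % column spacing as stated in Sec. 3.3

\begin{document}
\title{Paper Title}
\author{Author Names \\ Affiliations \\ \textit{E-mail and phone}}
\maketitle                     % title block spans both columns
\section{INTRODUCTION}         % main headings uppercase, point-numbered
Body text in 10 pt Times, set in two columns with block paragraphs.
\end{document}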


4.1 Units
SI units are the acceptable units for specifying scientific measurements in this journal.

4.2 Equations
Authors are encouraged to use the equation editor facilities provided in MS Word to prepare equations for the manuscript. In extreme situations, you may use either the Times New Roman or the Symbol font (please, no other fonts) to prepare equations for publication. To create multilevel equations, it may be necessary to treat the equation as a graphic and insert it into the text after your paper is styled. Number equations consecutively and align them to the left. Equation numbers, within parentheses, are to be positioned flush right, as in (1), using a right tab stop. To make your equations more compact, you may use the solidus ( / ), the exp function, or appropriate exponents. Italicize Roman symbols for quantities and variables, but not Greek symbols. Use a long dash rather than a hyphen for a minus sign. Punctuate equations with commas or periods when they are part of a sentence.
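The worked example that originally followed this paragraph did not survive the page extraction; as an illustrative stand-in, a numbered equation observing these conventions might be typeset as follows in LaTeX (a sketch only, since the journal template itself is Word-based):

% Illustrative sketch: flush-right equation number, solidus and
% \exp for compactness, and a trailing comma because the equation
% is read as part of the surrounding sentence.
\begin{equation}
  y = \exp(-t/\tau) \, / \, (1 + \alpha t),
\end{equation}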

Acknowledgement
A section acknowledging sponsored research, collaborations, funds, grants and other sources of research material, as well as individuals or organisations that contributed to the success of the research.

REFERENCES
List and number all bibliographical references in 10-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation as (Charles, 2009); for two authors, as (Laud & Laud, 2011); and for more than three authors, as (Segun et al., 2010). Where appropriate, include the name(s) of editors of referenced books. Square-braced referencing [3] is also allowed for authors who are more comfortable with that style. Unless there are more than three authors, give all authors' names; do not use "et al.". Papers that have not been published, even if they have been submitted for publication, should not be cited [4]. Papers that have been accepted for publication should be cited as "in press". Capitalize only the first word in a paper title, except for proper nouns and element symbols. For papers published in translation journals, please give the English citation first, followed by the original foreign-language citation. APA style referencing, a guide to which is available online at http://www.apastyle.org/manual/index.aspx, is also acceptable.

Example references:

[1] Eason, B., Noble, G. and Sneddon, K. (2011) On certain integrals of Lipschitz-Hankel type involving products of Bessel functions, Phil. Trans. Roy. Soc. London, vol. A247, pp. 529-551, April 1955.
[2] Clerk, K. and Maxwell, U. (2010) A Treatise on Electricity and Magnetism, 3rd ed., vol. 2. Oxford: Clarendon, 1892, pp. 68-73.
[3] Olumared, J., Kay, L., and Bean, O. (2004) Fine particles, thin films and exchange anisotropy, in Magnetism, vol. III, G. T. Rado and H. Suhl, Eds. New York: Academic, 1963, pp. 271-350.
[4] Jauna, E. (2003). Title of paper if known, unpublished.
[5] Electronic Publication: Digital Object Identifiers (DOIs): Article in a journal:

Template adapted from http://www.computer.org/portal/web/cscps/frmatting

Kindly visit the journal website at www.cisdijournal.net or contact the Managing Editor for additional information.
