Vol1 No1 2013
Corresponding Author:
National Dairy Research Institute, Karnal, India
Email- thesumitgoyal@gmail.com, gkg5878@yahoo.com
1. INTRODUCTION
Artificial Neural Networks (ANN), also known as “artificial neural nets” or “neural nets”, are
computational tools modeled on the interconnection of neurons in the nervous system of the human brain
and that of other organisms. The term “neural net” refers to both the biological and artificial variants,
although it is typically used to refer to artificial systems only. Mathematically, neural nets are
nonlinear: each layer represents a non-linear combination of non-linear functions from the previous layer.
Each neuron is a multiple-input multiple-output (MIMO) system that receives signals from its inputs,
produces a resultant signal, and transmits that signal to all its outputs. In practice, neurons in an ANN are
arranged into layers. The first layer, which interacts with the environment to receive input, is known as the input
layer. The final layer, which presents the processed data, is known as the output layer.
Layers between the input and output layers that have no interaction with the environment are
known as hidden layers. Increasing the complexity of an ANN, and thus its computational capacity, requires
adding more hidden layers and more neurons per layer [1]. Processed cheese is highly nutritious and is
manufactured from ripened Cheddar cheese. This variety of cheese has several advantages over raw cheese,
such as better taste and longer shelf life. It is a nourishing high-protein food, i.e., a welcome supplement to meat
protein [2].
w w w . i a e s j o u r n a l . c o m
2 ISSN: 2252-8776
2. MATERIALS AND METHOD
Soluble nitrogen, pH, standard plate count, yeast & mould count, and spore count were taken as
input parameters and sensory score as the output parameter for developing the feedforward soft computing
models (Fig. 1).
[Figure 1. Feedforward model: inputs are soluble nitrogen, pH, standard plate count, yeast & mould count and spore count; the output is the sensory score.]
The data set consisted of 36 observations, which were divided into two subsets: 30 observations were used for
training the network and 6 for validating the feedforward neural network.
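The 30/6 split described above can be sketched as follows. This is a minimal illustration: the random seed and the placeholder data are our assumptions, since the paper does not publish the raw observations.

```python
import numpy as np

rng = np.random.default_rng(0)  # the seed is our assumption; the paper states none

# 36 observations x (5 input parameters + 1 sensory score); random placeholders
# stand in for the unpublished experimental data.
data = rng.random((36, 6))

# Shuffle once, then take 30 rows for training and the remaining 6 for validation.
indices = rng.permutation(36)
train, valid = data[indices[:30]], data[indices[30:]]
```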
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(Q_{\exp} - Q_{\mathrm{cal}}\right)^{2} \qquad (1)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{Q_{\exp} - Q_{\mathrm{cal}}}{Q_{\exp}}\right)^{2}} \qquad (2)$$

$$R^{2} = 1 - \frac{\sum_{i=1}^{n}\left(Q_{\exp} - Q_{\mathrm{cal}}\right)^{2}}{\sum_{i=1}^{n} Q_{\exp}^{2}} \qquad (3)$$

$$E^{2} = 1 - \frac{\sum_{i=1}^{n}\left(Q_{\exp} - Q_{\mathrm{cal}}\right)^{2}}{\sum_{i=1}^{n}\left(Q_{\exp} - \bar{Q}_{\exp}\right)^{2}} \qquad (4)$$
where Q_exp is the observed (experimental) value, Q_cal the value calculated by the model, and n the number
of observations. The Mean Square Error, MSE (1), Root Mean Square Error, RMSE (2), Coefficient of
Determination, R2 (3), and Nash-Sutcliffe Coefficient, E2 (4), were used to compare the prediction ability of
the developed models.
Soft Computing Methodology for Shelf Life Prediction of Processed Cheese (Sumit Goyal)
3. RESULTS AND DISCUSSION
The performance metrics of the soft computing feedforward models for predicting sensory scores are
presented in Table 1.
The comparison of the Actual Sensory Score (ASS) and Predicted Sensory Score (PSS) for the soft
computing feedforward multilayer models is illustrated in Fig. 2.
[Figure 2. Actual Sensory Score (ASS) vs. Predicted Sensory Score (PSS) over the validation data; sensory scores range from about 15.8 to 17.4.]
Several training algorithms were tried: the Fletcher-Reeves and Polak-Ribiére update conjugate
gradient algorithms, the Levenberg-Marquardt algorithm, gradient descent with adaptive learning rate,
Bayesian regularization, the Powell-Beale restarts conjugate gradient algorithm and the BFGS quasi-Newton
algorithm. The backpropagation algorithm based on the Bayesian regularization mechanism was finally
selected for training the feedforward models, as it gave the best results. Several combinations were tried and
tested, since there is no defined rule for obtaining good results other than trial and error. As the number of
neurons increased, the training time also increased. The network was trained for up to 100 epochs with
multiple hidden layers; the transfer function for the hidden layers was the tangent sigmoid, while the output
layer used a pure linear function. The Neural Network Toolbox in MATLAB was used to develop the models.
The multilayer feedforward model with a 5-7-7-1 topology (MSE: 2.2053E-05, RMSE: 0.004696063,
R2: 0.995303937, E2: 0.999977947) gave the best fit, indicating that the developed models are excellent for
estimating the shelf life of processed cheese stored at 30 °C.
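The forward pass of the selected topology (tangent-sigmoid hidden layers, pure linear output) can be sketched as below. The weights here are random placeholders, not the trained MATLAB model, and the reading of the topology as 5-7-7-1 is our interpretation.

```python
import numpy as np

def tansig(x):
    """MATLAB's tangent-sigmoid transfer function is tanh."""
    return np.tanh(x)

rng = np.random.default_rng(1)

# 5-7-7-1: 5 inputs, two hidden layers of 7 neurons each, 1 output (our reading
# of the "5771 topology"); weights are random placeholders, not the trained model.
sizes = [5, 7, 7, 1]
weights = [0.1 * rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass: tansig on the hidden layers, purelin on the output layer."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = tansig(a @ w + b)
    return a @ weights[-1] + biases[-1]

predicted_scores = forward(rng.random((6, 5)))  # e.g. the 6 validation samples
```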
4. CONCLUSION
Soft computing feedforward multilayer models were developed for predicting the shelf life of processed
cheese stored at 30 °C. Soluble nitrogen, pH, standard plate count, yeast & mould count, and spore count were
taken as input parameters; the output parameter was sensory score. Several experiments were conducted, and
the 5-7-7-1 topology performed best, suggesting that the developed models can analyze non-linear
multivariate data with excellent performance, fewer parameters, and shorter calculation time. Therefore, it
can be concluded from this investigation that feedforward multilayer models are good for predicting the shelf
life of processed cheese stored at 30 °C.
REFERENCES
Corresponding Author:
Samah Osamah M. Kamel,
Computers & Systems dept., Electronics Research Institute,
El Behoth Str.- National Research Center - Doki – Giza – Egypt.
Email: samah_n2003@yahoo.com
1. INTRODUCTION
IP Telephony is the transport of telephone calls over the Internet, and it has been rapidly replacing the public
switched telephone network (PSTN). IP Telephony involves three groups of protocols: signaling protocols
(H.323 and SIP); media transport (RTP and RTCP), which transmits the voice samples; and supporting services
(DNS, ENUM, TRIP, RSVP and STUN), which improve performance and ease of use. The Real-time Transport
Protocol (RTP) is used to transport the voice media: it carries the encoded voice message between two callers.
The RTP packet must be protected from many attacks in the network. This paper discusses the implementation of
SRTP in minimum time by using a novel TEA encryption algorithm that takes minimum processing time.
First, we used key derivation to implement SRTP: the key derivation function derives the different keys
used in a crypto context (SRTP encryption keys and salts, SRTP authentication
keys) from one single master key in a cryptographically secure way. Thus, the key management protocol
needs to exchange only one master key; all the necessary session keys are generated by applying the key
derivation function. The master key and master salt are provided by an external key management protocol as
input to a PRF to derive a set of session keys. The session encryption and salting keys are
used to generate the keystreams for encrypting/decrypting the SRTP packet, and the session authentication
key is used to compute and verify the MAC of the SRTP and SRTCP packets. We discuss this in section
3.1.
The scenario of the SRTP implementation consists of three steps. In the first step, at the SRTP sender,
the SRTP encryption and salting keys are used to encrypt the RTP payload and produce the encrypted portion
of the packet, using the novel TEA encryption algorithm that will be discussed in detail in section 2.6.
The second step is the authentication process, which authenticates the encrypted SRTP packet. Message
authentication is used to compute and verify the HMAC of the SRTP packets. The sender side computes an
authentication tag over the authenticated portion of the packet. The SRTP receiver side generates its own HMAC and
compares it with the sender's authentication tag: if the two tags are equal, then the message ||
authentication tag pair is valid; otherwise, it is invalid and the error audit message “AUTHENTICATION
FAILURE” must be returned, as will be discussed in detail in section 3.3.
The final step takes place at the SRTP receiver, which decrypts the encrypted portion of the packet
using the novel TEA encryption algorithm of section 2.6. Fig. 1 shows the SRTP implementation in short
time.
[Figure 1. SRTP implementation: the master key and master salt feed the encryption algorithm, which encrypts the original RTP packet data; an HMAC authentication tag is computed over the encrypted SRTP packet; the receiver recomputes the HMAC and compares the tags (discard on mismatch, accept on match) before the decryption algorithm recovers the original data. Measured elapsed time: 18.1548 ms.]
To select the algorithm that takes minimum time, we evaluated and compared six encryption
algorithms with respect to processing time and selected the fastest one. Examples of encryption algorithms
are AES, Blowfish, IDEA, RC5, CAST-128 and TEA. The strength of symmetric key encryption depends on
the key size, the number of rounds and the round function; for example, a longer key is harder to break or
attack. The comparison examines the processing time of the six encryption algorithms. This paper examines a
novel TEA algorithm that takes less time.
There are two categories of encryption algorithms: symmetric and asymmetric key algorithms. A
symmetric key algorithm is based on a shared secret, while an asymmetric key algorithm is based on pairs of
two types of keys, private and public. Symmetric encryption algorithms are divided into stream ciphers and
block ciphers: stream ciphers encrypt a single bit of plaintext at a time, whereas block ciphers take a number
of bits and encrypt them as a single unit.
AES is a symmetric key encryption technique that has replaced the commonly used Data
Encryption Standard (DES). AES is a block cipher that uses three key sizes: 128, 192 or 256 bits. Each key
size causes the algorithm to behave slightly differently, so increasing the key size not only offers a larger
number of bits with which to scramble the data, but also increases the complexity of the cipher algorithm.
AES encrypts data blocks of 128 bits in 10, 12 or 14 rounds, depending on the key size. AES encryption is
fast and flexible. AES is based on a design principle known as a substitution-permutation network and does
not use a Feistel network. AES has a fixed block size of 128 bits and key sizes in any multiple of 32 bits with
a minimum of 128 bits; in the underlying Rijndael design, the block size has a maximum of 256 bits while the
key size has no theoretical maximum. AES operates on a 4×4 column-major-order matrix of bytes termed the
state, and most AES calculations are done in a special finite field. The AES cipher is specified as a number of
repetitions of transformation rounds that convert the input plaintext into the final ciphertext output. Each
round consists of several processing steps, including one that depends on the encryption key. A set of reverse
rounds is applied to transform the ciphertext back into the original plaintext using the same encryption
key [12].
Blowfish is a symmetric block cipher, just like DES or IDEA, designed by Bruce Schneier in 1993.
It takes a variable-length key, from 32 to 448 bits. The algorithm consists of two parts: a key expansion part
and a data encryption part. Key expansion converts a key of at most 448 bits into several subkey arrays
totaling 4168 bytes. Blowfish encrypts 64-bit blocks of plaintext into 64-bit blocks of ciphertext. Blowfish is
based on Feistel rounds, and the design of its F function builds on principles used in DES to provide the same
security with greater speed and efficiency in software [8].
IDEA was developed by Dr. X. Lai and Prof. J. Massey in Switzerland in the early 1990s to replace
the DES standard. IDEA uses the same key for encryption and decryption. It encrypts a 64-bit block of
plaintext to a 64-bit block of ciphertext and uses a 128-bit key. The algorithm consists of eight identical
rounds and a half-round final transformation. It is a fast algorithm and has been implemented in hardware
chipsets, making it even faster [8], [11], [16], [34].
RC5, developed by Ron Rivest, is a fast block cipher: it is a simple, word-oriented algorithm whose
basic operations work on a full word of data at a time. It encrypts blocks of plaintext of length 64 bits into
blocks of ciphertext of the same length, with a key length ranging from 0 to 2040 bits [6], [8].
CAST-128 was developed by Carlisle Adams and Stafford Tavares; its key size varies from 40 to
128 bits in 8-bit increments. CAST-128 has the structure of a classical Feistel network consisting of 16
rounds, operating on 64-bit blocks of plaintext to produce 64-bit blocks of ciphertext. CAST-128 uses two
subkeys in each round (a 32-bit km(i) and a 5-bit kr(i)), and the function F depends on the round [3], [4], [5], [8].
The Tiny Encryption Algorithm (TEA) was designed by David Wheeler and Roger Needham of the
Cambridge Computer Laboratory. It is a symmetric private key encryption algorithm and one of the fastest
and most efficient cryptographic algorithms in existence. TEA operates on 64-bit blocks and uses a 128-bit
key. It has a Feistel structure with a suggested 64 rounds, typically implemented in pairs termed cycles, and
an extremely simple key schedule, mixing all of the key material in exactly the same way for each cycle. The
Feistel network uses a group of bit-shifting, XOR and addition operations to create the diffusion and
confusion of the data [19], [20], [25].
LE(0) = A + S[0]
RE(0) = B + S[1]
LE(i) = ((LE(i-1) XOR RE(i-1)) <<< RE(i-1)) + S[2i]
RE(i) = ((RE(i-1) XOR LE(i)) <<< LE(i)) + S[2i+1]
For decryption, the 2w bits of ciphertext are initially assigned to the two one-word variables LD(r) and RD(r).
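A minimal sketch of one such round with 32-bit words and a data-dependent rotation follows. The function names are ours, and `s_2i`, `s_2i1` are assumed to come from an expanded key schedule S, which is omitted here.

```python
MASK = 0xFFFFFFFF  # w = 32-bit words

def rotl(x, r, w=32):
    """Left rotation by r; only the low log2(w) bits of r matter."""
    r %= w
    return ((x << r) | (x >> (w - r))) & MASK

def rc5_round(le_prev, re_prev, s_2i, s_2i1):
    """One round per the equations above: XOR, data-dependent rotate, add subkey."""
    le = (rotl(le_prev ^ re_prev, re_prev) + s_2i) & MASK
    re = (rotl(re_prev ^ le, le) + s_2i1) & MASK
    return le, re
```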
When we ran the simulation, we noticed that in rounds 1, 2, 14 and 16 the intermediate value contains long
runs of zeros, so a padding operation is applied to it. We also noticed that the function sometimes yields a
negative value, which must be converted to a positive one. Whether these cases occur depends on the type of
voice file being converted to bits.
For decryption, we implemented a novel decryption algorithm to recover the original data; because the
encryption algorithm is long and complicated, the decryption algorithm was modified and determined accordingly.
The Novel TEA Algorithm: we implemented the novel TEA encryption and decryption algorithm after
modifying the TEA algorithm to reduce processing time. For encryption, the plaintext is divided into two
parts (y and z), and the constant delta = 2654435769 (0x9E3779B9) is used. K is the key of four
words (k0, k1, k2, k3), and n is the number of cycles, equal to 16 cycles, i.e., 32 Feistel
rounds. The process:
1. The plaintext is divided into a right and a left part (Y, Z).
2. The left part Z is shifted left by 4 and added to k0.
3. The left part Z is added to the sum.
4. The left part Z is shifted right by 5 and added to k1.
5. The results of steps 2, 3 and 4 are combined with bitwise XOR.
6. The result of step 5 is added to the right part Y to produce Yi, which is swapped.
7. Yi is shifted left by 4 and added to k2.
8. Yi is added to the sum.
9. Yi is shifted right by 5 and added to k3.
10. The results of steps 7, 8 and 9 are combined with bitwise XOR.
11. The result of step 10 is added to the left part Z to produce Zi, which is then swapped, and so on.
For decryption, the procedure is the inverse of encryption, with sum initialized to delta shifted left by
5. Note that in the final decryption step padding is used to reproduce the original message.
The TEA decryption equations, following the figure:
y1(i) = [(y(i) <<< 4) + k2] XOR [y(i) + sum] XOR [(y(i) >>> 5) + k3]
z(i-1) = y1(i) + z(i)
z1(i) = [(z(i-1) <<< 4) + k2] XOR [z(i-1) + sum] XOR [(z(i-1) >>> 5) + k3]
y(i-1) = z1(i) + y(i)
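For reference, the published TEA of Wheeler and Needham can be sketched as below. The paper's novel variant modifies this baseline (16 cycles, padding, sign handling), so this is the standard algorithm only, not the modified one.

```python
MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9  # = 2654435769

def tea_encrypt(y, z, key, cycles=32):
    """Standard TEA on one 64-bit block split into 32-bit halves y, z.

    32 cycles = 64 Feistel rounds, the published recommendation; the
    paper's novel variant instead uses 16 cycles plus extra steps.
    """
    k0, k1, k2, k3 = key
    s = 0
    for _ in range(cycles):
        s = (s + DELTA) & MASK
        y = (y + ((((z << 4) + k0) & MASK) ^ ((z + s) & MASK) ^ (((z >> 5) + k1) & MASK))) & MASK
        z = (z + ((((y << 4) + k2) & MASK) ^ ((y + s) & MASK) ^ (((y >> 5) + k3) & MASK))) & MASK
    return y, z

def tea_decrypt(y, z, key, cycles=32):
    """Inverse of tea_encrypt; the running sum starts at DELTA * cycles."""
    k0, k1, k2, k3 = key
    s = (DELTA * cycles) & MASK
    for _ in range(cycles):
        z = (z - ((((y << 4) + k2) & MASK) ^ ((y + s) & MASK) ^ (((y >> 5) + k3) & MASK))) & MASK
        y = (y - ((((z << 4) + k0) & MASK) ^ ((z + s) & MASK) ^ (((z >> 5) + k1) & MASK))) & MASK
        s = (s - DELTA) & MASK
    return y, z
```

Decryption simply replays the rounds in reverse order, which is why the sum starts at its final encryption value and is decremented.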
Algorithm                   Processing time
AES                         92.06 ms
CAST-128                    42.057 ms
Blowfish                    7.359 ms
IDEA                        8.609 ms
RC5                         7.152 ms
The Novel TEA algorithm     1.744 ms
Figure 10 shows the time taken by each encryption and decryption algorithm. The results
show that the novel TEA algorithm (our modification of the TEA algorithm) takes less time than the other
algorithms; in addition, it is faster, saves bandwidth and end-to-end delay, and is a powerful algorithm that
offers the best compromise between security and efficiency. It provides the diffusion and confusion properties.
3. SRTP IMPLEMENTATION
3.1. KEY DERIVATION
SRTP uses the master key and the master salt, which are provided by an external key management
protocol, as input to a PRF to derive a set of session keys consisting of an SRTP encryption key, an
SRTP salting key and an SRTP authentication key. For encryption, these session keys are used to generate the
keystreams used by the SRTP and SRTCP packet encryption and decryption algorithms [15].
For authentication, the SRTP authentication keys are used to compute and verify the MAC of the SRTP and
SRTCP packets. The PRF used for session key derivation is based on the AES-CTR encryption
algorithm: the master key is used as the AES encryption key, and the initial value is generated using
concatenation, shift and XOR operations. There are several families of KDFs; we use a KDF in feedback mode,
where the output of the PRF is calculated from the result of the previous iteration and, optionally, a
counter as the iteration variable [28].
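The feedback-mode derivation can be sketched as follows. SRTP's real PRF is AES-CTR keyed by the master key; since the Python standard library has no AES, HMAC stands in as the PRF here, so this illustrates only the feedback structure. The function name and label bytes are our assumptions.

```python
import hashlib
import hmac

def kdf_feedback(master_key, master_salt, label, out_len):
    """Feedback-mode KDF: each PRF call chains on the previous PRF output
    plus a counter (NIST SP 800-108 style). HMAC-SHA-256 stands in for
    SRTP's AES-CTR PRF so the sketch runs on the standard library alone."""
    out, prev, counter = b"", b"", 1
    while len(out) < out_len:
        prev = hmac.new(master_key,
                        prev + bytes([counter]) + label + master_salt,
                        hashlib.sha256).digest()
        out += prev
        counter += 1
    return out[:out_len]

# One master key/salt pair yields all the session keys (labels are our choice).
enc_key  = kdf_feedback(b"master-key", b"master-salt", b"\x00", 16)  # encryption key
auth_key = kdf_feedback(b"master-key", b"master-salt", b"\x01", 20)  # authentication key
salt_key = kdf_feedback(b"master-key", b"master-salt", b"\x02", 14)  # salting key
```

The point of the construction is that the key management protocol exchanges only the master key and salt; everything else is derived deterministically.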
3.3. AUTHENTICATION
Message authentication is the next process after encryption; it protects the entire RTP
packet by using the session authentication key to compute and verify the HMAC of the SRTP packets.
The sender side computes an authentication tag over the authenticated portion of the packet. This step uses
the current rollover counter, the authentication algorithm (SHA-1) and the session authentication key. The
authentication tag carries the message authentication data. The authenticated portion of an SRTP packet
consists of the RTP header followed by the encrypted portion of the RTP packet [13].
HMAC is used between two parties that share a secret key in order to authenticate information transmitted
between them [13]. The standard defines a MAC that uses a cryptographic hash function in
conjunction with a secret key; this mechanism is called HMAC [13]. HMAC should be used in combination
with an approved cryptographic hash function [13]. Candidate hash functions include SHA-1 and MD5; we
use SHA-1 because it is more secure than MD5.
The SRTP receiver verifies a message || authentication tag pair by computing a new authentication tag
over the data associated with the received message; if the two tags are equal, the pair is valid, otherwise it is
invalid and the error audit message “AUTHENTICATION FAILURE” must be returned.
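The tag computation and the receiver-side comparison can be sketched with the standard library's HMAC-SHA-1. The function names are ours; in a real packet, the authenticated portion would be the RTP header followed by the encrypted payload.

```python
import hashlib
import hmac

def make_tag(session_auth_key, authenticated_portion):
    """Sender side: HMAC-SHA-1 authentication tag over the authenticated portion."""
    return hmac.new(session_auth_key, authenticated_portion, hashlib.sha1).digest()

def verify(session_auth_key, authenticated_portion, tag):
    """Receiver side: recompute the tag and compare in constant time."""
    if hmac.compare_digest(make_tag(session_auth_key, authenticated_portion), tag):
        return True
    raise ValueError("AUTHENTICATION FAILURE")
```

`hmac.compare_digest` avoids the timing side channel a naive `==` comparison of tags would introduce.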
hfirst = E0E4D8DC767970736064585C2024181C36363732
hash = 7BDDCBFD736363732
Elapsed time is 7.5343 ms.
4. CONCLUSIONS
We selected six symmetric cryptographic algorithms (AES, Blowfish, IDEA, RC5, CAST-128, and
TEA) and compared them. The AES algorithm is stronger than the others, but it takes 92.06 ms. The
Blowfish decryption algorithm proved disadvantageous compared with the other algorithms in terms of time
consumption and the serial nature of its output. IDEA uses a 128-bit key whose length makes it impossible to
break by simply trying every key. RC5 uses a pseudorandom initialization sequence followed by a complex
set of operations involving variable-length rotations and mod-2 additions, so it is difficult to say which of
these approaches is superior; for large key sizes, the security of RC5 is strong. For CAST-128, we
implemented a novel decryption algorithm to recover the original data; CAST-128 is a complicated algorithm
and takes 42.057 ms for the encryption and decryption algorithms. TEA processing takes 200 ms, so we
modified the TEA algorithm to produce a novel algorithm that takes 1.744 ms. We selected this novel TEA
algorithm to implement encryption and decryption for the SRTP implementation in minimum time, making it
suitable for IP Telephony traffic.
We implemented SRTP in three phases. The first phase is the encryption process, using the novel
TEA algorithm (the TEA encryption algorithm after our modification), which takes less time. The second
phase is the authentication process, which authenticates the data by generating an authentication tag and
sending it to the receiver, which generates another authentication tag to verify it. If the two tags are equal,
the packet is valid; otherwise, it is invalid and the error audit message “AUTHENTICATION FAILURE”
must be returned. The third phase is the decryption process, which recovers the original data using the novel
TEA decryption algorithm.
Acknowledgment
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions,
which improved the presentation of this paper.
REFERENCES
[1] M. Bellare, R. Guerin and P. Rogaway, “XOR MACs: New Methods for Message Authentication Using Finite
Pseudorandom Functions”, IBM T.J. Watson Research Center and Dept. of Computer Science, University of
California, 1995.
[2] Y. Xue, “Lecture 10: Message Authentication Code”, October 1995.
[3] H. M. Heys and S. E. Tavares, Department of Electrical and Computer Engineering, Queen’s University, Kingston,
BIOGRAPHY OF AUTHORS
Samah Osamah M. Kamel received the B.S. degree in electronics and communications from the Faculty of
Engineering, Zagazig University, in 2001. Her research interests include secure IP Telephony attack sensors. She has
been a network engineer at the Computers & Systems Dept., Electronics Research Institute, since 2002.
M. Saad El Sherif received his M.Sc. and Ph.D. degrees from the electronics & communications dept. and computers
dept., Faculty of Engineering, Cairo University, in 1978 and 1981, respectively. Dr. M. Saad El Sherif is a professor at
the Computers & Systems Dept., Electronics Research Institute (ERI). He is supervising 3 Ph.D. students and 5 M.Sc.
students, and has published more than 26 papers in the communication and computer area. He works on many hot
topics in communication and computer systems, such as a novel representation of artificial neural networks with
dynamic synapses, automatic processing of bank checks, improving the performance of the random early detection
algorithm for interactive voice applications, a new IP multicast QoS model for real-time audio/video traffic on
IP-based networks, neural networks in forecasting models (a Nile River application), and a new Internet
videoconference transport protocol. He is also a technical reviewer for many international journals. He headed the
Electronics Research Institute from 2009 to 2011.
Adly S. Tag Eldien received the B.S. degree in electronics and communication from Benha University in 1984; the
M.Sc. (computer-based speed control of a single-phase induction motor using three-level PWM with harmonic
elimination) from Benha University in 1989; and the Ph.D. (optimal robot path control) from Benha University in
1993. He is currently an associate professor at the Shoubra Faculty of Engineering and manager of the Benha
University network and information center. His research interests include robotics, networks and communication.
Sahr Abd El_Rahman Ismail Hassan received her M.Sc. degree in electronics and communication from Benha
University on an AI technique applied to machine-aided translation in 2003, and her Ph.D. from Benha University on
reconstruction of a high-resolution image from a set of low-resolution images in January 2008. She supervises
Ph.D./M.Sc. work on secure IP Telephony attack sensors, an intelligent zone wireless fire alarm system, and an
investigation and reduction of PAPR in MIMO-OFDM systems. She supervises graduation projects on smart home
automation with J2ME, sentry robot navigation using the Internet Protocol, store humanized robot navigation, touch
screens, and mobile robot guidance using GPS.
Corresponding Author:
First Author,
Research and Development Department,
University of Computer Studies, Yangon.
Email: myintucsy@gmail.com
1. INTRODUCTION
The reconstruction of the complete view of a virtual environment has received substantial attention
from researchers. This paper presents recent results of our research on constructing the whole view of an
object from sampled viewpoint images of a scene; in particular, a full-view image of a large-scale object is
reconstructed. The whole view of a large-scale object cannot be captured at once without sufficient distance
between the camera and the object. The whole-view synthesizing system is developed based on an image
registration approach. Image registration is very useful for computing the transformation and constructing a
panoramic image from the sampled images of a scene. Multiple images are grabbed by a camera system from
different camera viewpoints. The corresponding pairs among the successive image pairs are determined from
the relation of the camera positions. The perspective projection geometry method is applied to detect the
relation between the camera and image coordinate systems from the image pairs. With this approach, it is
possible to produce many intermediate views using only a few cameras, which can reduce the cost and
number of cameras in a security system by synthesizing arbitrary views of images.
Some reports have already been presented on view synthesis of an object and a scene [1-6].
N. Chiba [1] proposed a feature-based image mosaicing technique for arbitrary-depth scenes. Yamamoto
et al. [2] proposed a trinocular stereo system for searching the optimal correspondence among three images.
S. E. Chen and Williams [3] proposed a view interpolation method for image synthesis using an image
morphing technique. M. M. Sein et al. [4] presented an approach for reconstructing an arbitrary view of a
large-scale object: a new approach for synthesizing the arbitrary view based on the image morphing
technique. T. Takahashi et al. [5] proposed a method for
rendering views of large-scale scenes; an omnidirectional camera running along a straight line is used to
capture the panoramic image. J. Mulligan et al. [6] presented a technique to create virtual worlds using
densely distributed stereo views. In their system, a third camera is located at the center of a pair of stereo
cameras, and the computation of dense trinocular disparity maps is explored for the non-planar camera
configurations that arise when cameras are set surrounding the object to be modeled. We developed a robust
method for reconstructing the whole view of a large-scale object. Unlike other virtualized reality systems, we
not only synthesize virtual stereo images but also reconstruct arbitrary and virtual views of an unspecifically
configured object. It is also possible to create a new scene by merging the synthesized views of multiple
objects.
[Figure: perspective projection of a world point P(X, Y, Z) through the camera center C (origin O, optical axis Z) to the image point p(x, y).]
3. IMAGE REGISTRATION
Image registration is one of the fundamental tasks in image matching. The accurate transformation
matrix is the key to the registration approach. Registration is the process of matching two images: a reference
image and an operated image. At least three matching pairs are needed to estimate the initial transformation
matrix. Let us consider the registration between two parts of an object. The first part is taken as the main
(destination) part, and the second part as the current (transformed) part. Let P’ and P denote points on the
main part and current part of the object; their relation can be expressed as:
P’ = T P + D ,     (6)
where D is the translation parameter and T contains the scaling and rotation parameters. In the 2D case, T and
D can be defined as:
$$T = a\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad D = [\, d_1 \;\; d_2 \,]^{t} , \qquad (7)$$
where a, θ and d1, d2 are the scaling, rotation and translation parameters, respectively. The initial
transformation matrix can be estimated from the above relation. The accurate transformation matrix is
computed iteratively by minimizing the distances between the control points of the two parts, which can be
expressed as:
$$D_i^{2} = \sum_{i} \mathrm{distance}(T_i P_i,\; P_i') \qquad (8)$$
where $T_i = T \circ T_{i-1}$. The difference δi from the (k-1)th to the kth iteration is defined as
$$\delta_i = \frac{1}{N}\left(D_i^{k} - D_i^{k-1}\right) \qquad (9)$$
where N is the number of control points on the curve. This process is continued until the difference δi
becomes sufficiently small.
[Flowchart: successive images → image enhancing → camera calibration → image registration.]
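The estimation of T and D from matched point pairs (Eq. 6) can be sketched as a linear least-squares problem. The function name `estimate_similarity` is ours, and the iterative refinement over control points is omitted; this only recovers the initial transformation.

```python
import numpy as np

def estimate_similarity(P, P_prime):
    """Least-squares fit of P' = T P + D (Eq. 6) from matched point pairs.

    Writing p = a*cos(theta), q = a*sin(theta) makes the model linear in
    (p, q, d1, d2), so three or more correspondences determine the transform.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(np.asarray(P, float), np.asarray(P_prime, float)):
        # x' = p*x - q*y + d1 ;  y' = q*x + p*y + d2
        A += [[x, -y, 1.0, 0.0], [y, x, 0.0, 1.0]]
        b += [xp, yp]
    p, q, d1, d2 = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
    a, theta = np.hypot(p, q), np.arctan2(q, p)   # recover scale and rotation
    T = a * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    return T, np.array([d1, d2])
```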
The overlapping region between two images is extracted and the maximum displacement of the
overlapping region is computed. After the operated image is transformed, it is integrated into the reference
image. This integrated image is then defined as the new reference image, and a new image is picked up as the
operated image. Finally, the whole view of a large-scale object is reconstructed by integrating the
transformed object parts.
The corresponding feature points on the two input images are detected and the distance in the
overlapping region is calculated. The non-overlapping region of the operated image is cut off and then
merged into the reference image. Figure 3(b) shows the synthesized panoramic view of a scene.
Another experiment is the whole-view reconstruction of a large-scale object. The successive image
sequence is grabbed by a camera moving with horizontal and vertical displacement, so both the translation
and the rotation parameters take effect in this experiment. There are 45 successive images of a large object;
some are shown in figure 4.
[Figure 4. Some of the successive images and the extracted foreground images.]
(a) Background elimination (b) Integrated part (c) The whole view
Figure 5. The whole view reconstruction of a large scale object
6. CONCLUSION
The proposed system reconstructs the full view of a large-scale object from successive images.
The camera calibration and the absolute coordinate transform are computed for each image. The
transformation parameters between successive images are calculated and then used in the blending operation.
For each image pair we obtained very good results, although using these parameters in blending can cause
some artifacts. A satisfactory image was obtained when blending the multiple images into one. This process
is useful, for example, for exhibiting immovable objects from one place to another in a museum.
REFERENCES
[1] N. Chiba, “Image mosaicing for arbitrary depth”, Image Labo, 11(8):220-230, August 2000.
[2] T. Yamamoto et al., “Correspondence Search of Trinocular Stereo Using Dynamic Programming”, IPSJ SIGNotes
Computer Vision, Vol. 046, 1986.
[3] S. E. Chen and L. Williams, “View Interpolation for Image Synthesis”, Proc. SIGGRAPH 93, pp. 279-299, 1993.
[4] M. M. Sein et al., “Reconstruction of the Arbitrary View of an Object Using the Multiple Camera System”, IEEE
International Symposium on Micromechatronics and Human Science (MHS 2003), Nagoya, Japan, pp. 83-88,
Oct. 19-22, 2003.
[5] T. Takahashi et al., “Arbitrary View Position and Direction Rendering for Large-Scale Scenes”, IEEE International
Conference on Computer Vision and Pattern Recognition, vol. 2, 2000.
[6] Y. Chen and G. Medioni, “Object modeling by registration of multiple range images”, Image and Vision
Computing, 10(3):145-155, Apr. 1992.
BIOGRAPHY OF AUTHORS
Myint Myint Sein received the Ph.D. in Electrical Engineering from the Graduate School of Engineering,
Osaka City University, Osaka, Japan, in 2001. She has been serving as a professor in the Research and
Development Department, University of Computer Studies, Yangon, Myanmar, since 2005. She is a member
of the program and organizing committees of many national and international conferences, works as a referee
for many international journals, and is a member of the technical committee for a couple of international
conferences. She has published more than 70 research articles in peer-reviewed journals, book chapters and
international conferences. Her research interests are pattern recognition, image processing, soft computing,
3D reconstruction and 3D image retrieval.
Thin Lai Lai Thein received her Ph.D. (IT) in 2007 from the University of Computer Studies, Yangon,
Myanmar. She currently works at the University of Computer Studies, Yangon, Myanmar, as an associate
professor. Her interests include pattern recognition, image processing, 3D virtual view reconstruction, etc.
Corresponding Author:
Owolabi B Abimbola
National Open University of Nigeria, Ibadan Study Centre
Email: aowolabi@nou.edu.ng
1. INTRODUCTION
Globally, literacy, special education, formal and non-formal education instructors are constantly challenged by new educational technology inventions, tools and resource materials, as well as by Information and Communication Technologies (ICT) that aid training, learning, skill acquisition and application by individuals for independent living, employment, community integration and other postsecondary options in society. Both gifted and exceptional citizens worldwide require modern ICT knowledge and skills to grow, survive at home (whether in an urban or rural community), become productive in the workplace and achieve community development goals. Information and Communication Technology (ICT) innovations have come to stay in the 21st century and beyond; every nation and people are constantly faced with the challenges of ICT in different sectors of human development, community improvement and nation building.
The global quest for ICT for development matters to both urban and rural communities because ICT skills are critical to enhancing national development in a globalised era (World Bank, 2006). In this regard, governments in developed and developing societies strive to create opportunities for citizens to participate in ICT training, creative knowledge, skills acquisition, and the general application and usage of ICT tools to solve problems, promote their wellbeing and enhance national growth.
Furthermore, the rudimentary and intermediate-level ICT skills necessary to function optimally in basic computer-related environments are crucial to national competitiveness in a developing context. The supply of these skills, provided predominantly by private, non-state institutions in most developing contexts, is considerably under-researched, argue Atchoarena and Esquieu (2002). Several attributes have been ascribed to Information and Communication Technology (ICT) and Information Technology (IT), but they all point in one direction, i.e. aiding human development and growth and facilitating an effective standard of living.
Bialobrzeska & Cohen (2005) regarded ICTs as technologies that generally support an individual's
ability to manage and communicate information electronically, and include hardware such as computers,
printers, scanners, video recorders, television, radio, and digital cameras; as well as the software and systems
needed for communication, such as the Internet and e-mail. Information technology (IT) is "the study,
design, development, application, implementation, support or management of computer-based information
systems, particularly software applications and computer hardware", according to the Information
Technology Association of America (ITAA, 2008). Today's information technology professionals obtain training and certification for roles ranging from installing applications to designing complex computer networks and information databases, covering multimedia applications, processes, computer software, computer hardware, programming and data constructs, among others.
Public and private enterprises, educational systems and non-governmental institutions are not left out of the quest for ICT development, application and usage in different facilities and environments.
Private/public enterprises, non-governmental agencies and industrial concerns have embraced ICT to solve
problems, earn revenue and improve work and productivity in the workplace. A few of the duties that IT
professionals perform may include data management, networking, engineering computer hardware, database
and software design, as well as management and administration of entire systems. Technology can help an
organization improve its competitive advantage within the industry in which it resides and generate superior
performance at a greater value (Bird, 2010). The personnel of these establishments integrate technologies,
such as the use of personal computers, assistive technologies, cell phones, televisions, automobiles, specific
electronic gadgets, and many others; to provide services, attend to problems, handle work demands and
increase productivity.
Communication involves the interactive exchange of information, ideas, feelings, needs and desires, states Heward (2009), adding that communication involves a message, a sender who expresses the message, and a receiver who responds to the message. In this regard, communication serves to facilitate narrating, explaining, informing, requesting and expressing the information, materials and items that human beings encounter daily in life. A sender transmits a message to a receiver through some medium, whether word of mouth, telephone, text messaging, fax, telegraph, written expression or other multimedia channels; the receiver then decodes the message and gives the sender feedback. As far as literacy, non-formal education and special education are concerned, individuals acquire life skills and use various modes of communication, both verbal and non-verbal: auditory means, such as speech, song and tone of voice, and visual/physical means, such as sign language, body language, eye contact and touch, conveyed through different media such as graphics, pictures, writing and sound.
This paper attempts to identify the development, usage and challenges of ICT in Africa, with
particular reference to ICT for literacy, formal and non-formal education, and special education development
of rural and urban Nigeria.
Discovery of ICT for the Growth and Development in Sub Sahara Africa (Owolabi B Abimbola)
broadcasting (radio, television, cable networks), law enforcement (police, military, firemen, whose signals and communication gadgets need to be improved and made available to officers for the efficient and effective protection of lives and property in urban and rural communities), postal services, etc., all use ICT facilities.
Furthermore, several unemployed Nigerian youths rely on ICT training and facilities for their daily livelihood. For instance, upon graduation from an institution of learning (both high schools and universities) and after staying at home for years without employment, many of these unemployed youths enroll for formal or non-formal training (depending on affordability) in computer programming, web design and word processing, cell phone and computer repairs, software installation and repairs, data entry, e-learning, e-marketing, e-trade, e-commerce, etc. The acquisition and application of such ICT skills have opened up jobs and engagement for young adults in urban centres. Today, business centres and cyber cafes open daily in many Nigerian cities, such as Lagos, Abuja, Onitsha, Ibadan, Kano, Sokoto, Kaduna, Owerri, Aba, Enugu, Benin, Port Harcourt and Calabar, among other towns, to meet the ICT needs of citizens and business organizations. These business centres and cyber cafes provide services such as typesetting of documents, printing, production of projects, company brochures, reports and proposals; use of internet and web facilities; online blogging and social networking; making local and international phone calls; and selling telephone cards for MTN, Glo, Etisalat, Airtel and Zain, among other telecom firms.
The movie industry and the telecom business are now multi-billion naira businesses in Nigeria, with citizens from urban and rural Nigeria participating in acting, performing and enjoying mobile phone services. The home movie industry, for instance, has permeated the mainstream Nigerian market and overseas, cutting across various multimedia levels, as has the use of relatively affordable prepaid phone cards. It should be noted that with the privatization of the telecommunication industry and networks in Nigeria, several young Nigerians are being employed by the multinational corporations operating in the country, while self-employed youths set up makeshift outlets selling prepaid cards in marketplaces, at bus stops and on street corners in urban and rural areas.
Furthermore, aside from the establishment of several under-funded universities of technology across Nigeria, the National Information Technology Development Agency (NITDA) of Nigeria, among other ICT groups and organizations, is poised to promote ICT advancement in Nigeria alongside the government,
provide information and support, and contribute to ICT policy making at all levels for national growth. New ICT-related tools have been known to make institutions and markets more productive, enhance skills and learning, improve governance at all levels, and make services easier to access (Opara & Ituen, 2009). The supply of ICT skills is an integral component of the overall national development trajectory of countries in a globalised world, opines the World Bank, which in 2009 expressed its readiness to kick-start a $2m (N300m) investment in facilities to promote growth and employment projects that would further strengthen ICT development, as well as the entertainment (music, movies and films) industry in Nigeria, as Opara & Ituen reported.
The Nigerian government should be commended for recognizing the need for wide usage and exploitation of ICT tools in the country, and for identifying capacity building as a paramount focus of government efforts to enhance career progression and development among Nigerians, particularly those in ICT and industry, for better and higher-quality output with multiplier effects on the nation's economy (Opara & Ituen, 2009). In this regard, the government should ensure that all school children are able to utilize ICTs by the year 2020 (in line with the national vision for an educated and industrially developed society). Meanwhile, some Nigerian universities are gradually soliciting ICT support from foreign agencies, in the form of computers, internet/web-based facilities and e-learning instructional resources; examples include the University of Lagos, Covenant University, the University of Ibadan, the University of Nigeria and Nnamdi Azikiwe University. Most of their libraries have internet facilities for students, faculty and staff to access information, but these are not adequate.
Nigeria is indeed developing in the area of ICT, but there are still loopholes affecting its advancement, lament Opara and Ituen (2009), referring to the International Telecommunication Union (ITU) report that lists indices for ICT compliance and benchmarking globally, regionally and at the country level. "These are related to ICT access, use and skills, such as households with a computer as well as the number of Internet users; and literacy levels," says the ITU. People in rural Nigeria are willing to learn new things through formal and non-formal education settings if such opportunities are provided. Nigerians are smart and resourceful no matter where they reside; they are always ready to learn new things, open to change, and quick to adapt to new environments and technologies. The absence of non-formal education centers in rural communities of Nigeria, however, discourages ICT training and knowledge acquisition by the citizens. ICT resource availability and affordability are major handicaps: citizens cannot fully utilize the potential of ICT while their access to Internet facilities and computer education remains limited and poor. Even in urban areas, access to personal computers and the internet is largely limited and too expensive for most Nigerians.
All these issues notwithstanding, the Federal Executive Council of Nigeria approved a national IT policy in March 2001 with a view to solving ICT problems in the country, and the government established the National Information Technology Development Agency (NITDA), charged with implementing ICT policies. The policy recognized the private sector as the driving engine of the IT sector. There are calls for ICT policy reform in Nigeria so that individual citizens can access training and services. In this light, the government set up the Nigerian National ICT for Development (ICT4D) Strategic Action Plan committee to develop a new ICT-for-development policy as the ICT action plan and roadmap for the nation. The Nigerian government should ensure that this agency provides the services needed for sustainable intervention and an environment that makes ICT gadgets and training affordable and accessible.
4. CONCLUSION
1. Universities, national research centres and the National Information Technology Development Agency
(NITDA) should collaborate with international agencies to review and establish the needed special ICT
courses/programs and projects that would provide skill training opportunities for people in the field of
research, creativity and innovations on ICT infrastructure; content development, law, policy and
regulatory affairs, industrialization, governance, online distance services, including telemedicine,
distance education, and Internet marketing.
2. The federal government should allocate more funds to ICT education nationwide and provide adequate ICT resources and training opportunities for people in rural and urban communities. In addition, power supply must be steadily available for such programs to be successful.
3. Special ICT personnel and special educators should be trained in the use of ICT facilities and assistive devices so that individual learners and exceptional adults can benefit from such training, whether in formal or non-formal settings in urban and rural Nigeria. Schools must be equipped with ICT gadgets and tools, including assistive technologies such as Braille devices for the visually impaired, mobile wheelchairs, cochlear implants and other hearing aids, among other devices for students and adults with disabilities, for the national ICT-compliant dream to be accomplished.
4. Access to technological tools and the internet in urban and rural communities should be improved. Advantage should be taken of organizations like the Free and Open Source Software Foundation for Africa (FOSSFA) and LinuxChix Africa, which promote the use and development of Free/Libre Open Source Software (FLOSS).
5. Government and private firms should develop and promote the emergence of rural telecommunication operators, and encourage telecommunication operators, whether incumbent or new, to provide services in rural and remote communities, with appropriate private investment incentives and a pro-active regulatory environment.
REFERENCES
[1] Akoojee, S. & Arends, F. (2009). Intermediate-level ICT skills and development in South Africa: Private provision form suited to national purpose? Education for Information Technology, 14, 189–204.
[2] Atchoarena, David, and Paul Esquieu (2002, January). "Private technical and vocational education in sub-Saharan
Africa (SSA): Provision patterns and policy issues," revised final report, prepared by the International Institute for
Educational Planning for The World Bank, Paris.
[3] Bialobrzeska, M. & Cohen, S. (2005). Managing ICTs in South African schools: A guide for school principals. Braamfontein: South African Institute for Distance Education.
[4] Bellis, M ( 2010) The history of communication. Retrieved October 21, 2010, from
http://inventors.about.com/library/inventors/bl_history_of_communication.htm
[5] Bird, M. (2010). Modern management guide to information technology. Retrieved October 21, 2010, from
http://harvardbookstore.biz
[6] Chitiyo, R & Harmon, W.W (2009). An analysis of the integration of instructional technology in pre-service teacher
education in Zimbabwe. Education Tech Research Development. 57: 807–830
[7] Department of Education (2004). Draft white paper on education: transforming education and learning through information and communication technologies (ICTs). Government Gazette, 470:26734:1-44. [Online]: http://www.info.gov.za/gazette/whitepaper/2004/26734.pdf Accessed 26 March 2007.
[8] Heward, W (2009). Exceptional children: An introduction to special education. Upper Saddle River. Merrill
Pearson.
[9] Hodgkinson-Williams, C (2006, March). Revisiting the concept of ICTs as 'tools': Exploring the epistemological
and ontological underpinnings of a conceptual framework. A Hodgkinson-Williams Paper for ITFORUM 13-17.
Retrieved October 28, 2010, from http://it.coe.uga.edu/itforum/paper88/Hodgkinson-Williams-2006.pdf
[10] Holcroft, E. (2004). SchoolNet South Africa, in James, T., ed. (2004). Information and communication technologies for development in Africa: 3: Networking Institutions of Learning - SchoolNet. [Online]: http://www.idre.ea/en/ev-33006-20l-l-
[11] Information Technology Association of America, ITAA (2008). Information technology. Retrieved March 3, 2008, from www.ITAA.org.
[12] Kinuthia, W (2008, July/August). ICT international: Another spotlight on the continent. TechTrends in Africa. 52, 4
[13] Moleke, P., Paterson, A., & Roodt, J. (2003). ICT and associated professionals. In HSRC Human resources
development review: Education, employment and skills in South Africa. Cape Town and East Lansing: HSRC Press
and Michigan State University Press.
[14] Mostert, J. & Nthetha, M. (2007). Information and Communication Technologies (ICTs) in secondary educational institutions in the uMhlathuze municipality, South Africa: an insight into their utilisation, impact, and the challenges faced. South African Journal of Library & Information Science, 74(1), 23-40. Retrieved from Education Research Complete database.
[15] Opara, S. & Ituen, I. (2009). Nigeria's ICT sector: Growth, gains and challenges. The Punch Newspaper, Sunday, 5 Apr 2009. Retrieved on October 28, 2010, from http://www.punchng.com/Articl.aspx?theartic=Art2009040422503735
[16] Paterson, A., McGrath, S., & Badroodien, A. (2005). The national skills survey 2003. Unpublished client report for
the Department of Labour. Pretoria: Human Sciences Research Council.
[17] World Bank. (2006). Information and communications for development. Washington: World Bank.
Sourabh Shrivastava
Maulana Azad National Institute of Technology Bhopal (M.P.) India
Corresponding Author:
Sourabh Shrivastava
Centre for Remote Sensing and GIS, Department of Civil Engineering,
Maulana Azad National Institute of Technology Bhopal (M.P.) India.
Email: sourabh.vds@gmail.com
1. INTRODUCTION
Wetlands represent the interface between land and water (Dugan, P.J., 1990). They have great
significance in terms of ecological, economical and social benefits. Wetlands in India are among the least
protected ecosystems and are threatened and fast disappearing. They are subjected to both natural and human
forces. The alarming loss of wetlands all over the globe had initiated an inter-governmental treaty which
provides the framework for national action and international cooperation for the conservation and wise use of
wetlands and their resources. This treaty was signed in Ramsar, Iran, in 1971 and is known as ‘Ramsar
Convention’. The total area of India so far designated under wetland ecosystems is 4050536 ha (1461 ha natural and 2589265 ha man-made) (Jain et al., 2008). In India wetlands are distributed in different geographical
regions ranging from cold arid zone of Ladakh to wet Imphal, from the warm and arid zone of Gujarat –
Rajasthan to the tropical monsoon based regions of Central India and the wet and humid regions of southern
peninsula (Parikh, et al, 2003).
Wetlands are found throughout the world under many names and descriptions. They take many forms
including marshes, estuaries, mudflats, mires, ponds, fens, pocosins, swamps, deltas, coral reefs, billabongs,
lagoons, shallow seas, bogs, lakes, and floodplains. There are also human-made wetlands such as fish and
shrimp ponds, farm ponds, irrigated agricultural land, salt pans, reservoirs, gravel pits, sewage farms and
canals. Wetlands cover about 6 per cent of the earth's land surface (Bazilevich et al., 1971) and are distributed in all climatic zones of the earth except Antarctica. The rapidly expanding human population, large-scale changes in land use/land cover, burgeoning development projects and improper use of watersheds have all caused a substantial decline in the wetland resources of the country. For long-term conservation planning of wetlands, spatial data and information are required for any intervention. India has lost more than 80% of its original wetlands (G. Manju et al., 2005). Thus prevention of degradation of
wetland is becoming a major issue. Realizing the importance of wetlands, the present study was taken up with the specific objectives of:
To delineate the wetland in the extracted study area.
To characterize the wetland at different turbidity levels.
To characterize the wetland at different aquatic vegetation levels.
To delineate the land use classes of the study area.
To obtain a land use classification image with major land use classes and wetland classes.
2.3. Methodology
The toposheet is geo-corrected by selecting image geometric correction option from the data preparation
tool by using minimum 20 GCP’s. The toposheet is geo referenced by using UTM WGS 84, zone 43
projection. Once the RMS error is less than half a pixel, the transformation is saved and the toposheet is
resampled. Now the geo-referenced toposheet is used to geo-reference the LISS 4 image by interactively
examining the toposheet and LISS 4 image. Thus geo-referenced LISS 4 image is then used for further
process. As the LISS 4 image available to us carries a large area which is not required in the thesis therefore
the image is subsetted for extracting the study area. The boundary of study area from the Bhopal district map
is then extracted by using AOI tool. The extracted boundary which is the vector layer is now used to extract
the study area from the whole dataset by using the subset option, the extracted study area is shown in figure
No. (1). The obtained subset image is unsupervised classified in six land use classes. Obtained unsupervised
classified image in then regrouped.
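The unsupervised grouping step can be sketched as a minimal k-means clustering loop in NumPy. This is a hedged illustration, not the exact algorithm of the image-processing software used in the study: the synthetic band stack, the class count and the deterministic seeding below are all assumptions made for the example.

```python
import numpy as np

def kmeans_classify(bands, k=6, iters=20):
    """Cluster an (H, W, B) band stack into k spectral classes.

    Minimal k-means: assign each pixel to its nearest centroid, then
    recompute centroids, repeating for `iters` rounds.  Centroids are
    seeded with evenly spaced pixels so the run is deterministic.
    """
    h, w, b = bands.shape
    pixels = bands.reshape(-1, b).astype(float)
    seed_idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centroids = pixels[seed_idx].copy()
    for _ in range(iters):
        # (N, k) distances from every pixel to every centroid.
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels.reshape(h, w)

# Synthetic 3-band scene: left half dark, right half bright.
scene = np.zeros((10, 10, 3))
scene[:, 5:] = 100.0
classes = kmeans_classify(scene, k=2)
```

In the study itself the six clusters are then regrouped and recoded by visual comparison with the FCC image; a clustering step like the one above only produces the raw class labels.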
For this, both images, i.e. the unsupervised classified image and the extracted FCC image of the study area, are loaded in two different viewers and examined interactively to regroup the land use classes. All values belonging to the same class are changed to a single colour. The unsupervised classified image is then recoded by selecting GIS analysis from the Image Interpreter, using the values sparse vegetation-1, healthy vegetation-2, wetland-3, waste land-4, water body-5. The extracted FCC image of the study area is loaded in a new viewer, and a vector layer of the major settlements present in the area is vectorised using the AOI tool. The generated AOI is then recoded with the value 6, which differs from those used in the land use classification. The recoded classified image and the recoded settlement image are then overlayed to obtain one LULC classified image. For extracting the water mask, BAND 3 of the LISS 4 image is used, as water bodies are clearly visible in BAND 3; to extract BAND 3, the LISS 4 image is layer stacked. The obtained BAND 3 image is then density sliced by interactively examining it against the FCC image of the study area. The DN values of the water bodies present in the study area are observed and a threshold value for water is noted. After obtaining the threshold value, the BAND 3 image is recoded by assigning the value ‘one’ to all pixels belonging to water bodies and setting the rest to ‘zero’. The water mask image obtained above is then masked with the extracted FCC study area image to obtain the water body image, by selecting the ‘mask’ option from ‘utilities’.
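The band-3 water-mask recode can be sketched in NumPy. The DN values and the threshold below are invented for illustration; in the study the threshold is read off interactively from the density-sliced image.

```python
import numpy as np

# Hypothetical BAND 3 DN values for a small tile: water absorbs
# strongly in this band, so water pixels have low DN values.
band3 = np.array([[12,  8, 95],
                  [10, 90, 88],
                  [ 9, 11, 92]])

WATER_THRESHOLD = 20  # illustrative cut-off noted during density slicing

# Recode: value 'one' for water-body pixels, 'zero' for the rest.
water_mask = np.where(band3 < WATER_THRESHOLD, 1, 0)

# Masking another band with the water mask keeps only water pixels.
fcc_band = np.full_like(band3, 50)
water_only = fcc_band * water_mask
```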
For turbidity classification, BAND 2 of the extracted water body image is extracted using the ‘layer stack’ option from ‘utilities’. The extracted BAND 2 image is then loaded in a viewer for interactive density slicing to obtain the threshold values for three turbidity levels, i.e. low, medium and high turbidity; the lower the pixel value, the higher the turbidity. After noting the threshold values for all three levels, the BAND 2 image is recoded, using the values 10 for low turbidity, 20 for medium turbidity and 30 for high turbidity. The recoded image is loaded in a new viewer to assign different colours to the three turbidity levels. For aquatic vegetation classification, the NDVI of the water body image is computed, as the NDVI is a parameter that gives a measure of vegetation density. The indices option is selected from ‘spectral enhancement’ to obtain the NDVI image of the extracted water body image. The NDVI image is then interactively density sliced to obtain the threshold values for the four aquatic vegetation levels, and these values are used to recode the NDVI image to new values.
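The turbidity density-slice-and-recode step can be sketched with `np.digitize`; the DN values and the two cut-offs below are illustrative stand-ins for the thresholds noted interactively in the study.

```python
import numpy as np

# Hypothetical BAND 2 DN values inside the water mask.  Lower DN
# means higher turbidity.
band2 = np.array([15, 35, 60, 22, 48, 70])
cuts = [30, 55]  # illustrative DN thresholds between the levels

# Bin index 0 -> high, 1 -> medium, 2 -> low turbidity.
level = np.digitize(band2, cuts)

# Recode to the values used in the paper: 30 high, 20 medium, 10 low.
recode = np.array([30, 20, 10])
turbidity = recode[level]
```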
Negative NDVI values represent negligible vegetation, while the positive values are subdivided into poor, moderate and high aquatic vegetation classes. The values used for recoding are 1, 2, 3 and 4 respectively for negligible, poor, moderate and high aquatic vegetation. The turbidity classified image and the aquatic vegetation classified image are then added together, by selecting the operator function from ‘utilities’, to obtain an image with 12 combined aquatic vegetation and turbidity classes. These 12 classes are recoded into 12 new values. For the final output, the classified and recoded image with 6 land use classes is added to the image classified and recoded according to the vegetation classes within the three turbidity levels. The final output is a land use classification image with 6 major land use classes and 12 classes within the wetlands.
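The NDVI slicing, the recode to values 1–4 and the addition that yields the 12 combined wetland classes can be sketched as follows; the reflectance values and slicing thresholds are illustrative assumptions, not figures from the study.

```python
import numpy as np

# Illustrative NIR and red reflectances for four water-body pixels.
nir = np.array([0.05, 0.20, 0.40, 0.60])
red = np.array([0.10, 0.10, 0.10, 0.10])
ndvi = (nir - red) / (nir + red)

# Negative NDVI -> negligible vegetation (1); positive values are
# split into poor (2), moderate (3) and high (4).  The 0.3 / 0.6
# cut-offs stand in for the interactively sliced thresholds.
veg_class = np.digitize(ndvi, [0.0, 0.3, 0.6]) + 1

# Adding the turbidity codes (10/20/30) gives 3 x 4 = 12 combined
# classes, e.g. 23 = medium turbidity with moderate vegetation.
turbidity = np.array([10, 20, 30, 10])
combined = turbidity + veg_class
```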
Analysis of Inland Wetland using advance Remote Sensing and GIS on IRS P6 (Sourabh Shrivastava)
3.2. Classification
The geometrically corrected LISS 4 image is then used for land use classification to separate the study area into various land use categories. Unsupervised classification is adopted and classes are merged on the basis of spectral behaviour, so that the entire study area is classified into 5 classes, namely water body, waste land, settlement, healthy vegetation and sparse vegetation. Particular attention is given to the ‘marshy’ and ‘swampy’ areas, as these constitute important components of the wetlands but could not be separated together with the water bodies, which are delineated independently. Topographic maps were consulted interactively for locating swampy areas; the classified map of the study area is shown in figure No. (2) and the area covered under each category is summarized in table No. 2.
into three groups showing low, moderate and high turbidity regions. The extracted band 2 image is then recoded to obtain the three turbidity levels.
The area covered by each turbidity level is shown below in table No. 4, and the turbidity classified image is shown below in figure No. 4.
Table 4. Area of Turbidity in Study Area
recoded and thus an image classified into 4 aquatic vegetation classes is obtained, which is shown in figure No. (5), and the area covered by all four aquatic vegetation levels is shown in table No. (5).
Used recode values are:
Nil/negligible coverage: 1
Poor vegetation: 2
Moderate vegetation: 3
High vegetation: 4
3.7. Integration of land use classification with turbidity and aquatic vegetation
The classified and recoded land use image is then combined with the recoded aquatic vegetation and turbidity image to obtain a final land use classification image containing six land use classes and 12 classes within the wetland. The combined image of turbidity and aquatic vegetation is further recoded, and the output obtained is shown below in figure No. (7).
4. CONCLUSION
From the study it can be concluded that wetlands are adversely affected by the absence of reliable and updated information and data on their extent, and that their conservation value and socioeconomic importance have been greatly underappreciated as a result. Remote sensing data in combination with Geographic Information System (GIS) methods have been found to be effective tools for wetland conservation and management: by using digital remote sensing data for wetland mapping and analysis, information on all wetlands becomes available at any scale, according to management and conservation requirements. The development of remote sensing technology yields abundant information about wetlands, especially with the appearance of high resolution satellite imagery, which extends the visual field of wetland studies. Multiple GIS layers can easily be applied for zoning potential wetland restoration and enhancement sites using GIS packages such as ArcGIS. The zonation map prepared proves highly useful for wetland ecologists and geologists.
REFERENCES
[1] Archana Sarkar and Sanjay K Jain,(2008), Using Remote Sensing Data to Study Wetland Dynamics – A Case Study
of Harike Wetland, Proceedings of Taal 2007: The 12th World Lake Conference, 680-684.
[2] Aselman, I. and P.J. Crutzen, (1989), Global distribution of natural fresh water wetlands and rice paddies, their net primary productivity, seasonality and possible methane emissions, Journal of Atmospheric Chemistry, 8, 307-358.
[3] Brian A. Clarke, (2005), Determination of turbidity in Kourris dam in Cyprus utilizing Landsat TM remotely sensed data, Water Resources Management, 20, 449-465.
[4] Bazilevich, et al, (1971), Geophysical aspects of biological productivity, Soviet Geogr., 12, 293-317.
[5] Dugan P.J., (1988), The Importance of rural communities in wetlands conservation and development, the ecology
and management of wetlands, 2, 3-11.
[6] G. Manju et al, (2005), Mapping and characterization of inland wetlands using Remote Sensing and GIS, Journal of the Indian Society of Remote Sensing, 33, (1).
[7] Gorham, E., 1991. Northern peatlands: Role in the carbon cycle and probable responses to climatic warming. Ecological Applications, 1, 182-195.
[8] Huan Yu and Shu-Qing Zhang, (2008), Application of high resolution satellite imagery for wetlands cover classification using object-oriented method, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 36, 521-526.
[9] Maltby, E., and R.E. Turner, (1983), Wetlands of the world, Geographical Magazine, 55, 12-17.
[10] Matthews, E. and I. Fung, (1987), Methane emissions from natural wetlands: Global distribution, area and environmental characteristics of sources, Global Biogeochemical Cycles, 5, 3-24.
[11] Turner, R.E.,(1988), Secondary production in riparian ecosystems, Proceedings, 53rd North American Wildlife
Conference, 391-501.
[12] Whittaker, R.H. and G.E. Likens,(1973), Carbon and the Biosphere, USAEC symp. Series no. 30, Washington, DC,
281-300.
[13] http://www.wetlandsofindia.org/wetlands/introduction.jsp.
[14] http://csshome.com/wetlands.html.
Corresponding Author:
A.Z.Yonis,
Department of Communication Engineering,
College of Electronics Engineering, University of Mosul
Iraq, Mosul, 00964
Email: aws_zuher@yahoo.com
1. INTRODUCTION
The LTE-A downlink transmission scheme is based on orthogonal frequency division multiple
access (OFDMA), which is a multiuser version of the OFDM modulation scheme. In the uplink, single
carrier frequency division multiple access (SC-FDMA) is used, which can be also viewed as a linearly pre-
coded OFDM scheme known as discrete Fourier transform (DFT)-spread OFDM.
However, SC-FDMA has been selected for the uplink due to the lower peak-to-average power ratio
(PAPR) of the transmitted signal compared to OFDM. Low PAPR values benefit the terminal in terms of
transmit power efficiency, which also translates into increased coverage. The processing sequence in signal
generation is quite similar in the downlink and uplink; the main difference comes from the elimination of the
antenna mapping process and the addition of a DFT-spread block, which is the key process for the PAPR
reduction [1].
OFDMA and SC-FDMA are the multiple-access versions of OFDM and a similar modulation
scheme, Single-Carrier Frequency-Domain Equalization (SC-FDE). In order to compare the differences
between the multiple-access methods, it is important to first cover the differences between their underlying
modulation schemes. Section 2 of this paper discusses the current development of LTE-Advanced and
research directions related to the development of communication systems, while section 3 describes the
Long Term Evolution-Advanced downlink, which is based on OFDMA. The Long Term Evolution-Advanced
uplink, with a block diagram for SC-FDMA, is covered in section 4. Section 5 explains downlink data
transmission for OFDMA, and section 6 considers uplink data transmission in LTE-Advanced. Sections 7
and 8 address the capacity of OFDMA and SC-FDMA respectively, while section 9 summarizes and
discusses the main points of the paper, giving the reader a view of the current state of uplink and downlink
performance in wireless communication technology. Finally, section 10 concludes with some general
observations and recommendations.
services. This is due to the lack of inter-symbol interference from multipath channels and the absence of
intra-cell interference because users are orthogonal (i.e. they do not interfere with each other) in the
frequency domain. In addition, the OFDMA transmission technique scales easily to different bandwidths, so
multiple system bandwidth configurations can be efficiently supported. In addition, low-complexity receivers
can be used with OFDMA.
Furthermore, frequency-domain scheduling and MIMO processing techniques can be used. An
example of frequency-domain scheduling techniques is frequency-selective scheduling. In frequency-
selective scheduling, users are assigned data only on good frequency bands (i.e. bands with large gain),
which are determined on the basis of channel quality feedback from the UE. For broadcast services, single-
frequency broadcast networks can be supported. In this case, multiple base stations transmit the same
broadcast signals. The signals are coherently combined at the user, thus improving performance at the cell
edge substantially.
A basic block diagram illustrating OFDMA signal generation for one OFDM symbol is shown in
Figure 1. Data symbols from different users are mapped to different subcarriers depending on the frequency
bands assigned to those users. This is done in the frequency domain. The information is then subjected to an
inverse fast Fourier transform (IFFT) to convert the frequency-domain subcarriers into time-domain signals.
A cyclic prefix is then added, and the signal is ready for transmission.
Note that the basic transmission unit for data is a subframe that spans multiple OFDM symbols. At the
receiver, the reverse operation is performed. The cyclic prefix is removed, and then the time-domain signal is
subjected to a fast Fourier transform (FFT) so that the modulation symbols on each subcarrier can be
extracted. Each user then extracts the frequency resource units corresponding to his assigned subcarriers.
Equalization is performed and the data is passed onward for decoding.
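The transmit and receive chains described above can be sketched end-to-end with NumPy. This is a minimal sketch: the FFT size, cyclic-prefix length and subcarrier allocation below are illustrative choices, not LTE-A numerology.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FFT, N_CP = 64, 16            # illustrative sizes, not LTE-A numerology
n_used = 48                     # subcarriers carrying this user's data

# Transmitter: map QPSK symbols to assigned subcarriers, IFFT, prepend cyclic prefix.
bits = rng.integers(0, 2, 2 * n_used)
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
freq = np.zeros(N_FFT, dtype=complex)
freq[1:n_used + 1] = qpsk       # a contiguous block of subcarriers (DC left unused)
time = np.fft.ifft(freq)
tx = np.concatenate([time[-N_CP:], time])   # cyclic prefix = last N_CP samples

# Receiver: remove cyclic prefix, FFT, extract the assigned subcarriers.
rx_freq = np.fft.fft(tx[N_CP:])
rx_symbols = rx_freq[1:n_used + 1]
assert np.allclose(rx_symbols, qpsk)        # symbols recovered exactly (no channel)
```

With an ideal channel the round trip is exact; a real receiver would equalize each subcarrier before symbol detection.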
In LTE-A, frequency resource is assigned in units of resource blocks. Several factors must be
considered in the selection of the resource block size in frequency. First, it should be small enough that the
frequency selective scheduling (i.e. scheduling data transmission on good frequency subcarriers) gain is
large. Small resource-block size ensures that the frequency response within each resource block is similar,
thus enabling the scheduler to assign only good resource blocks. However, since the eNB does not know
which resource blocks are experiencing good channel conditions, the UE must report this information back to
the eNB. Thus, the resource-block size must be sufficiently large that the feedback overhead is not too high.
It also should be sufficiently large to minimize downlink control signaling, which must be used to inform the
UE of its resource allocation. In [6], performance analysis of frequency selective scheduling was performed.
It was found that a resource block of size 200–900 kHz provides good performance. Since, in LTE-A, a
subframe size of 1 ms is used to ensure low latency, the resource block size in frequency should be small so
that small data packets can be efficiently supported. As a result, 180 kHz (12 subcarriers) was chosen as the
resource-block bandwidth.
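The chosen resource-block bandwidth follows directly from LTE's 15 kHz subcarrier spacing, as a quick check confirms:

```python
subcarrier_spacing_khz = 15        # LTE/LTE-A subcarrier spacing
rb_bandwidth_khz = 180             # resource-block bandwidth chosen in LTE-A
subcarriers_per_rb = rb_bandwidth_khz // subcarrier_spacing_khz
print(subcarriers_per_rb)          # 12, matching the text
```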
A cyclic prefix is needed for OFDMA transmission in order to prevent inter-symbol interference
from previously transmitted OFDM symbols. The OFDM symbol with cyclic prefix and data is shown in
Figure 3. Note that the cyclic prefix does not carry useful data and is removed at the receiver prior to
processing. As a result, it is desirable to have as small a cyclic prefix as possible in order to minimize the
overhead. In general, the length is chosen on the basis of the expected delay spread of the propagation
channel plus some margin to allow for imperfect timing alignment.
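As a numeric illustration of this overhead trade-off, assuming typical LTE numerology (a 66.7 µs useful symbol from the 15 kHz spacing and roughly 4.69 µs for the normal cyclic prefix), the cyclic prefix costs about 6.6% of the symbol time:

```python
symbol_us = 1e6 / 15000          # useful OFDM symbol duration = 1/Δf ≈ 66.7 µs
cp_us = 4.69                     # approximate normal cyclic-prefix length in LTE
overhead = cp_us / (symbol_us + cp_us)
print(f"CP overhead ≈ {overhead:.1%}")
```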
From Table 1, it can be seen that SC-FDMA has a significantly lower cubic metric than that for
OFDMA. For cell-edge users, where QPSK modulation is generally used, SC-FDMA enjoys a cubic-metric
advantage of 2.4 dB over OFDMA. This means that cell-edge users can transmit at 1.74 times higher average
power with SC-FDMA than with OFDMA for the same maximum-power limitation. As a result, for the same
uplink cell edge data rate, SC-FDMA can provide greater coverage. For example, at a distance of 0.8 km
from the cell, SC-FDMA can deliver a data rate of 200 kbit/s, compared with 70 kbit/s for OFDMA. This is
the primary reason why SC-FDMA is selected for the uplink. The low power back-off property is
accomplished by transmitting the data symbols serially rather than in parallel like in OFDMA, which results
in substantially reduced signal fluctuations. This helps conserve battery life or extend the range by reducing
the back-off due to non-linearity in the power amplifier. The performance of SC-FDMA, however, is not as
good as that of OFDMA given the same type of receiver. The performance for QPSK modulation is
approximately the same, while OFDMA outperforms SC-FDMA by 0.5–1 dB for 16-QAM [8]. Although this
negates the benefits of SC-FDMA somewhat, especially for indoor users, coverage and cell-edge data rate
were seen as the most important criteria in the uplink.
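The 2.4 dB figure converts to the quoted 1.74x average-power factor by the usual decibel relation:

```python
def db_to_linear(db):
    """Convert a power ratio in dB to a linear factor."""
    return 10 ** (db / 10)

# 2.4 dB cubic-metric advantage -> ~1.74x higher average transmit power
print(round(db_to_linear(2.4), 2))   # 1.74
```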
In LTE-A, discrete Fourier transform–spread–OFDM (DFT-S-OFDM) is used to generate the SC-
FDMA signal in the frequency domain as shown in Figure 4. Note that generation of the SC-FDMA signal
using DFT-S-OFDM is almost identical to that of OFDM, with the exception of the additional M-point
discrete Fourier transform (DFT). Although DFT processing is more computationally intensive than the FFT,
efficient implementations for certain DFT sizes are available. Specifically, DFT lengths that factor into small
primes (e.g. 2, 3 and 5) can be calculated using efficient mixed-radix FFT algorithms. The method shown in
Figure 4 generates the SC-FDMA signal in the frequency domain.
This allows frequency-domain pulse shaping to be applied prior to the IFFT to further reduce the cubic
metric.
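The PAPR benefit of the DFT-spread step can be checked numerically. The sketch below uses illustrative sizes (12 occupied subcarriers, a 64-point IFFT), not LTE-A numerology, and compares average PAPR with and without the M-point DFT precoding:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N_FFT = 12, 64                  # illustrative sizes, not LTE-A numerology

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def one_symbol(spread):
    """Generate one OFDMA (spread=False) or DFT-spread (spread=True) symbol."""
    bits = rng.integers(0, 2, 2 * M)
    d = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # QPSK
    if spread:
        d = np.fft.fft(d)          # the M-point DFT "spread" step
    freq = np.zeros(N_FFT, dtype=complex)
    freq[1:M + 1] = d              # localized mapping to contiguous subcarriers
    return np.fft.ifft(freq)

ofdma = np.mean([papr_db(one_symbol(False)) for _ in range(200)])
scfdma = np.mean([papr_db(one_symbol(True)) for _ in range(200)])
assert scfdma < ofdma              # DFT-spreading lowers the PAPR
```

Averaged over many symbols, the DFT-spread signal shows a clearly lower PAPR, which is the property the text attributes to SC-FDMA.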
The first M-point DFT is used to provide frequency-domain precoding, which is mapped to M
contiguous-frequency subcarriers prior to the IFFT. To preserve the single-carrier property, transmission
from a user within an SC-FDMA symbol must be either contiguous or evenly spaced in the frequency
domain. Two different types of single-carrier transmission can be generated using DFT-S-OFDM, depending
on how the resource-element mapping is done. The mapping may be done such that a distributed or localized
frequency allocation is generated as shown in Figure 5. Localized mapping means that the entire allocation is
contiguous in frequency. This allows good channel-estimation performance since the pilots are contiguous,
thus interpolation techniques can be used in channel estimation.
In addition, it will be easy to multiplex different users together in the spectrum. However, frequency
diversity is poor. Distributed mapping means that the allocated bandwidth is evenly distributed in frequency.
This provides very good frequency diversity. However, the pilots must be distributed, and thus channel-
estimation performance suffers.
It can also be difficult to multiplex all the users together in the spectrum. In addition, frequency-
selective scheduling where a user is assigned only a selected portion of the spectrum (generally one that is
providing good radio conditions) cannot be taken advantage of. Performance comparisons of localized versus
distributed mapping using realistic channel estimation have been published in [9]. The results showed that the
two methods provide similar performance. The gain in frequency diversity from distributed transmission is
lost through poorer channel-estimation performance. Given these performance results and other difficulties
with scheduling of users, only localized mapping is supported in LTE-A. However, to provide frequency
diversity, hopping, whereby the user hops from one localized frequency assignment to a different frequency,
can be used.
At the receiver, the reverse operation of the transmitter functions is performed for data
demodulation. The received signal first undergoes RF processing and analog-to-digital conversion. Then the
cyclic prefix is removed and an FFT is performed. Channel estimation is performed on the basis of the pilots
that have been embedded into the transmission packet. In addition to channel estimation, frequency and
timing estimation and correction may also be performed. Subcarrier demapping and equalization are done
next, followed finally by an M-point IDFT that de-spreads the data symbols.
Unlike in conventional FDMA, the addition of an M-point DFT/IDFT is used to spread out each
modulated data symbol onto all of the subcarriers used. This lowers the peak-to-average power of the
transmission signal, resulting in higher maximum transmission power. However, because of the M-point
IDFT, all the transmitted modulated symbols within the SC-FDMA symbol have the same SINR. The
performance of the receiver depends on the type of receivers as well as channel estimation, frequency and
time tracking, and decoding algorithms. Several types of receivers can be used for SC-FDMA; in practice, a
minimum-mean-squared-error (MMSE) or interference-rejection combining receiver is usually used
because of its good performance and manageable complexity.
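A per-subcarrier frequency-domain MMSE equalizer of the kind mentioned here has a simple closed form. A minimal sketch, using toy channel and symbol values:

```python
import numpy as np

def mmse_equalize(rx_freq, h_freq, snr_linear):
    """Per-subcarrier MMSE equalizer: H* / (|H|^2 + 1/SNR)."""
    return rx_freq * np.conj(h_freq) / (np.abs(h_freq) ** 2 + 1.0 / snr_linear)

# Toy check: at high SNR the equalizer output approaches the transmitted symbols.
h = np.array([1.0 + 0j, 0.5 - 0.5j, 0.8 + 0.2j])      # per-subcarrier channel gains
tx = np.array([1 + 1j, -1 + 1j, 1 - 1j]) / np.sqrt(2)  # QPSK symbols
rx = h * tx                                            # noiseless received subcarriers
eq = mmse_equalize(rx, h, snr_linear=1e6)
assert np.allclose(eq, tx, atol=1e-3)
```

At low SNR the regularization term 1/SNR keeps the equalizer from amplifying noise on deeply faded subcarriers, which is why MMSE is preferred over simple zero-forcing.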
Figure 6 shows how the input bits are first grouped and assigned for transmission over different
frequencies (subcarriers). In the example, 4 bits (representing 16-QAM modulation) are sent per
transmission step per subcarrier. A transmission step is also referred to as a symbol. With 64-QAM
modulation, 6 bits are encoded in a single symbol, raising the data rate further. On the other hand, encoding
more bits in a single symbol makes it harder for the receiver to decode the symbol if it was altered by
interference. This is the reason why different modulation schemes are used depending on transmission
conditions.
In theory, each subcarrier signal could be generated by a separate transmission chain hardware
block. The output of these blocks would then have to be summed up and the resulting signal could then be
sent over the air. Because of the high number of subcarriers used, this approach is not feasible. Instead, a
mathematical approach is taken as follows. As each subcarrier is transmitted on a different frequency, a graph
which shows the frequency on the x-axis and the amplitude of each subcarrier on the y-axis can be
constructed. Then, a mathematical function called Inverse Fast Fourier Transformation (IFFT) is applied,
which transforms the diagram from the frequency domain to the time domain. This diagram has the time on
the x-axis and represents the same signal as would have been generated by the separate transmission chains
for each subcarrier when summed up. The IFFT thus does exactly the same job as the separate transmission
chains for each subcarrier would do, including summing up the individual results.
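This equivalence between the IFFT and a bank of per-subcarrier modulators summed together is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
amplitudes = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # one per subcarrier

# "Separate transmission chains": one complex exponential per subcarrier, summed.
n = np.arange(N)
manual = sum(amplitudes[k] * np.exp(2j * np.pi * k * n / N) for k in range(N)) / N

# The IFFT produces the identical time-domain signal in one step.
assert np.allclose(manual, np.fft.ifft(amplitudes))
```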
On the receiver side, the signal is first demodulated and amplified. The result is then treated by a
fast Fourier transformation function which converts the time signal back into the frequency domain. This
reconstructs the frequency/amplitude diagram created at the transmitter. At the center frequency of each
subcarrier a detector function is then used to generate the bits originally used to create the subcarrier.
The explanation has so far covered the Orthogonal Frequency Division aspect of OFDMA
transmissions. The Multiple Access (MA) part of the abbreviation refers to the fact that the data sent in the
downlink is received by several users simultaneously. Control messages inform mobile devices waiting for
data which part of the transmission is addressed to them and which part they can ignore. This is, however,
just a logical separation. On the physical layer, this only requires that modulation schemes ranging from
QPSK through 16-QAM to 64-QAM can be quickly changed for different subcarriers in order to accommodate
the different reception conditions of subscribers [10].
The amplifier must be able to amplify the highest peak value of the wave. Due to silicon constraints,
the peak value determines the power consumption of the amplifier.
The peaks of the wave, however, do not transport any more information than the average power of
the signal over time. The transmission speed therefore does not depend on the power output required
for the peak values of the wave but rather on the average power level.
As both power consumption and transmission speed are of importance for designers of mobile
devices, the power amplifier should consume as little energy as possible. Thus, the lower the PAPR, the
longer the operating time of a mobile device at a given transmission speed compared with devices that use a
modulation scheme with a higher PAPR.
A modulation scheme similar to basic OFDMA, but with a much better PAPR, is SC-FDMA (Single
Carrier-Frequency Division Multiple Access). Due to its better PAPR, it was chosen by 3GPP for
transmitting data in the uplink direction. Despite its name, SC-FDMA also transmits data over the air
interface in many subcarriers, but adds an additional processing step as shown in Figure 7. Instead of putting
2, 4 or 6 bits together as in the OFDM example to form the signal for one subcarrier, the additional
processing block in SC-FDMA spreads the information of each bit over all the subcarriers. This is done as
follows: again, a number of bits (e.g. 4 representing a 16 QAM modulation) are grouped together. In OFDM,
these groups of bits would have been the input for the IDFT.
In SC-FDMA, however, these bits are now piped into a Fast Fourier Transformation (FFT) function
first. The output of the process is the basis for the creation of the subcarriers for the following IFFT. As not
all subcarriers are used by the mobile station, many of them are set to zero in the diagram. These may or may
not be used by other mobile stations.
On the receiver side the signal is demodulated, amplified and treated by the fast Fourier
transformation function in the same way as in OFDMA. The resulting amplitude diagram, however, is not
analyzed straight away to get the original data stream, but fed to the inverse fast Fourier transformation
function to remove the effect of the additional signal processing originally done at the transmitter side. The
result of the IFFT is again a time domain signal. The time domain signal is now fed to a single detector block
which recreates the original bits. Therefore, instead of detecting the bits on many different subcarriers, only a
single detector is used on a single carrier. The differences between OFDM and SC-FDMA can be
summarized as follows: OFDM takes groups of input bits (0s and 1s) to assemble the subcarriers which are
then processed by the IDFT to get a time signal. SC-FDMA in contrast first runs an FFT over the groups of
input bits to spread them over all subcarriers and then uses the result for the IDFT which creates the time
signal. This is why SC-FDMA is sometimes also referred to as FFT spread OFDM [10].
(1)
where βi is the fraction of bandwidth allocated to user i. For the case where the bandwidth is equally divided
among the K users transmitting simultaneously, the above formula can be simplified as below:
(2)
(3)
where Ts is the OFDM symbol duration and Δ is the cyclic prefix duration.
64-QAM modulation is typically used at higher SINR [12]. Therefore, the uplink capacity limit for an SC-
FDMA system is given as:
(4)
where LSC-FDMA represents the SC-FDMA link loss in dB relative to OFDMA. This loss occurs at higher
SINR when frequency-domain linear equalization is used. It should be noted that some or all of this loss can
be recovered by using a more advanced receiver [13] at the Node-B, at the expense of additional complexity.
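Capacity expressions of this kind conventionally take a Shannon-capacity form. As a hedged sketch only, using the symbols defined in the surrounding text (βi, K, W, Ts, Δ, LSC-FDMA; the paper's exact equations (1)-(4) may differ from these):

```latex
% Hedged sketch only: not the paper's exact equations (1)-(4).
C_i = \beta_i \, W \log_2\!\bigl(1 + \mathrm{SINR}_i\bigr)
\qquad \text{(cf. (1))}

C = \frac{W}{K} \log_2\!\bigl(1 + \mathrm{SINR}\bigr)
\qquad \text{(cf. (2), equal bandwidth split)}

C_{\text{eff}} = \frac{T_s}{T_s + \Delta}\, C
\qquad \text{(cf. (3), cyclic-prefix overhead)}

C_{\text{SC-FDMA}} = W \log_2\!\Bigl(1 + \mathrm{SINR}\cdot 10^{-L_{\text{SC-FDMA}}/10}\Bigr)
\qquad \text{(cf. (4), loss term in dB)}
```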
10. CONCLUSION
The main area where the flexible spectrum allocation of the downlink (OFDMA) and uplink (SC-
FDMA) systems is exploited is the enabling of wideband transmission. LTE-Advanced supports transmission
bandwidths of up to 100 MHz in the downlink and 40 MHz in the uplink; achieving this while remaining
compatible with 3G networks is possible through carrier aggregation. Carrier aggregation refers to the
possibility of concatenating several basic (legacy) carrier components into a larger one that can be viewed
and managed as a single band. It involves multiple carriers being combined at the PHY layer to provide the
user with the necessary bandwidth. The guard band can be utilized for actual data transmission, and utilizing
basic (legacy) carrier components preserves backward compatibility. Single-carrier transmission provides the
key techniques for the LTE-Advanced uplink as well as its baseline performance. Radio access technology is
the key aspect of the LTE-Advanced uplink, and two radio access schemes, SC-FDMA and OFDMA, are
explained. The performance results are obtained from a detailed LTE-Advanced uplink link-level design. The
diagrams show that both SC-FDMA and OFDMA can achieve high spectral efficiency; however, OFDMA
performs better with high-order modulations, while SC-FDMA performs better with low-order modulation,
specifically QPSK. Hence, OFDMA can offer higher cell throughput, while SC-FDMA can provide larger
cell coverage.
ACKNOWLEDGEMENTS
This research is funded by the Postgraduate Incentive Grant, University Tun Hussein Onn Malaysia
(Vot 0894). The authors are grateful to the Faculty of Electrical and Electronic Engineering for the technical
support in carrying out this study.
REFERENCES
[1] B. Furht, S. A. Ahson, “Long Term Evolution: 3GPP LTE Radio and Cellular Technology,” CRC Press/Taylor &
Francis, USA, p. 5, 2009.
[2] 3GPP Technical Report 36.913, Requirements for further advancements for Evolved Universal Terrestrial Radio
Access (E-UTRA) (LTE-Advanced).
[3] A. M. Taha, H. S. Hassanein, N. Abu Ali, “LTE, LTE-Advanced and WiMAX towards IMT-Advanced networks,”
John Wiley, UK, pp.13, 2012.
[4] 3GPP TS 25.912, Feasibility study for evolved Universal Terrestrial Radio Access (UTRA) and Universal
Terrestrial Radio Access Network (UTRAN), v7.2.0, July 2006.
[5] H. G. Myung, J. Lim, and D. J. Goodman, "Single Carrier FDMA for uplink wireless transmission," IEEE
Vehicular Technology Magazine, vol. 1, no. 3, pp. 30-38, Sep. 2006.
[6] R1-050720, “Frequency selective scheduling resource block size for EUTRA downlink,” Motorola, RAN1#42, San
Diego, CA, Oct. 2005.
[7] R1-060385, “Cubic metric in 3GPP-LTE”, Motorola, RAN1#44, Denver, CO, February 2006.
[8] R1-051088, “Coverage comparison between UL OFDMA and SC-FDMA,” Nokia, RAN1#42, San Diego, CA, Oct.
2005.
[9] R1-051033, “Further topics on uplink DFT-S-OFDM for E-UTRA”, Motorola, RAN1#42, San Diego, CA, Oct.
2005.
[10] M. Sauter, “Beyond 3G – Bringing Networks, Terminals and the Web Together LTE, WiMAX, IMS, 4G Devices
and the Mobile Web 2.0,” John Wiley, UK, pp.51-54, 2009.
[11] F. Khan, “LTE for 4G Mobile Broadband: Air Interface Technologies and Performance,” Cambridge University
Press, New York, pp. 79-80, 2009.
[12] B. E. Priyanto, H. Codina, S. Rene, T. B. Sorensen, and P. Mogensen, “Initial performance evaluation of DFT-
spread OFDM based SC-FDMA for UTRA LTE uplink,” Proceedings of IEEE 65th Vehicular Technology
Conference, VTC2007-Spring, April 2007, pp. 3175–3179.
[13] D. Falconer, S. L. Ariyavisitakul, A. Benyamin-Seeyar, and B. Eidson, “Frequency domain equalization for single-
carrier broadband wireless systems,” IEEE Communications Magazine, vol. 40, no. 4, pp. 58–66, Apr. 2002.
BIOGRAPHY OF AUTHORS
Aws Zuheer Yonis graduated from the Department of Computer Engineering at the Technical
College in 2003 and completed his Master's in Electrical and Electronics-Telecommunication
Engineering at University Tun Hussein Onn Malaysia (UTHM) in 2011. He has been pursuing a
doctorate in Telecommunication Engineering at UTHM since 2011, and has been an engineer at
the University of Mosul, Iraq, since 2006. He has published about 10 refereed international
journal and conference papers. He is a member of IAENG, SCIEI, SIE, CBEES, SDIWC,
IACSIT, and the Syndicate of Iraqi Engineers.
Corresponding Author:
I. A. K. Sita Laksmita,
Department of Electrical and Computer Engineering,
Udayana University,
Kampus Bukit Road, Jimbaran, Bali – 80361, Indonesia.
Email: sita.nita.mita@gmail.com
1. INTRODUCTION
Rabies is a major public health problem in Bali. The government has declared Bali Island a rabies
outbreak area; the declaration was contained in a regulation issued by the Minister of Agriculture, Anton
Apriyantono, on December 1, 2008 [1]. The threat of rabies is not only the death of humans or pets; it can
also cause local residents to lose their sense of safety. The rabies outbreak has caused the deaths of thousands
of dogs that tested positive for rabies, or were suspected of carrying it because they showed rabies symptoms
or had close contact with rabies-positive dogs [2].
Human fatalities occur because of a lack of knowledge regarding rabies risk, the poor management
of dog bites, and the limited availability of Rabies immune globulin (RIG) [3]. Increasing public awareness
of dog bite management, increasing the availability of anti-rabies vaccine (ARV) and RIG, and implementing
an island-wide dog vaccination campaign will help prevent human rabies cases.
The Government of Bali has been collecting data on rabies cases every period. However, the current
data management system still uses spreadsheet technology oriented to attribute data, which is inadequate for
efficiently managing and controlling the rabies outbreak.
Geographic Information System (GIS) is a computer-based tool or system for mapping and
analyzing spatial data [4]. GIS is an organized collection of hardware, software and geographic data designed
to efficiently capture, store, update, and manipulate all forms of geographically referenced information [5].
One benefit of GIS is that it can be used across various fields of science, such as the health and medical fields.
With this in mind, a web-based GIS application of rabies spread was created using GeoServer to
record the spread of rabies, a PostGIS database to store and manage the map data, and OpenLayers as a
client application to display map data in a web browser. This application provides a new resource for rapid
mapping and displays rabies case data in order to show information about any rabies-infected area.
The rabies cases are shown as points (markers) on the map based on the coordinates of their
locations. Each region is displayed in a different color according to the number of rabies cases that occurred.
The point radius shows the roaming radius of the dog involved, to illustrate the extent of the rabies outbreak.
Details of rabies case submissions can also be viewed via a popup that appears automatically when a rabies
case location is selected on the map. This application is useful in determining the vaccination schedule, so
that the Bali Animal Husbandry Department can easily track and prioritize control in the areas with the
widest spread. It also shows the data historically, which means the spread of rabies can be tracked not only
by region but also by time.
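The roaming-radius display described above implies a point-in-radius test. A minimal sketch using the haversine great-circle distance; the coordinates and the 2 km radius below are hypothetical, not values from the paper:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_outbreak_radius(case, point, radius_km):
    """True if `point` lies within `radius_km` of the `case` location."""
    return haversine_km(case[0], case[1], point[0], point[1]) <= radius_km

# Hypothetical coordinates near Ungasan, Bali; 2 km is an assumed roaming radius.
case = (-8.83, 115.17)
print(within_outbreak_radius(case, (-8.84, 115.18), 2.0))
```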
2. RESEARCH METHOD
2.2. Rabies
Rabies (hydrophobia) is a fatal neuropathogenic disease caused by the rabies virus, which is an
enveloped RNA virus of the Lyssavirus genus, Rhabdoviridae family [7]. Transmission of rabies usually
occurs through the bite of an infected animal, contamination of fresh wounds or mucous membranes with
saliva or brain tissue of animals that have been infected. The main route of infection is the bite of rabid dogs.
In specific cases, transmission through the air may also occur [8].
Figure 1. Number of human rabies cases (persons) from November 2008 to November 2010 by month
Rabies reappeared in Bali Province on 17 November 2008, when the first human death from rabies
virus infection occurred in Ungasan, Bali, following a dog bite [9]. By 27 December 2009, 27 human cases
had been reported occurring in widely separated parts of the island, but predominantly in the south, in
Tabanan and Ungasan [9]. As of 13 March 2010, the number of human deaths reported had surpassed 40
Design Web-based GIS Application for Rabies Spread in Bali Province (I. A. K. Sita Laksmita)
[10]. Prior to 2008, Bali was considered rabies-free. With the current outbreak and the current estimated dog
population on the island being 500 000, the Bali Veterinary Agency is attempting to control the canine rabies
outbreak through a mass vaccination programme and the destruction of animals [10]. Figure 1 shows the
number of human rabies cases by month from November 2008 to November 2010 of the outbreak [3].
The increase in the number of clinically diagnosed cases in the 12 months after the initial case report
can be seen. Cases were reported in eight districts (Figure 2), with most coming from rural districts, including
Karangasem (28.8%), Buleleng (19.2%) and Tabanan (17.3%) [3]. The proportion from the Badung District,
where the index case occurred, was 13.5% [3].
Figure 2. Map of Bali Province and distribution of human rabies during November 2008-November 2010
There were 104 human rabies cases in Bali during November 2008-November 2010 [3]. Patients’
average age was 36.6 years (range 3-84 years; SD 20.7), and most were male (56.7%). Almost all (92%) cases
had a history of dog bite [3]. Only 5.8% had their wounds treated and received an anti-rabies vaccine (ARV)
after the bite incident [3]. No patients received rabies immune globulin (RIG). The rabies virus genome was
detected in 50 of 101 patients (49.5%) with the highest detection rate from post-mortem CSF samples [3].
2.3. GeoServer
GeoServer is open source software, built in Java, that allows users to display and manipulate
geospatial data [11]. GeoServer is designed for interoperability: it publishes data from any major source of
spatial data using open standards. As a community-based project, GeoServer is developed, tested, and
supported by a diverse group of individuals and organizations from around the world. GeoServer is the
reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) and Web
Coverage Service (WCS) standards, as well as a high-performance Web Map Service (WMS).
2.4. PostGIS
PostGIS is an open source spatial database extension of the PostgreSQL database management
system [12]. By adding PostGIS to PostgreSQL, the database gains the ability to store spatial data such as
points, lines, polygons and regions as geometry data types, and in particular to record the location of an
object on the map. PostGIS functionality is similar to that of several other databases that support spatial
data, such as SQL Server 2008, ESRI ArcSDE, Oracle Spatial and the DB2 Spatial Extender [13]. The latest
release of PostGIS is packaged with the PostgreSQL DBMS as an optional add-on. As of this writing, the
latest release is 1.4, with PostGIS 1.5 in beta.
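As an illustration of how such spatially enabled storage can be queried, a sketch of a query finding villages within a given distance of a case location. The table and column names are hypothetical; ST_DWithin and ST_GeomFromText are standard PostGIS functions, and the geography cast makes the distance argument metres:

```python
# Hypothetical table and column names; ST_DWithin and ST_GeomFromText are
# standard PostGIS functions, and the ::geography cast makes the distance metres.
case_lon, case_lat, radius_m = 115.17, -8.83, 2000

query = f"""
SELECT v.village_name
FROM village_table AS v
WHERE ST_DWithin(
    v.geom::geography,
    ST_GeomFromText('POINT({case_lon} {case_lat})', 4326)::geography,
    {radius_m});
"""
print(query)
```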
2.5. OpenLayers
OpenLayers is a JavaScript-based client library for displaying map data in a web browser,
independent of the web server used [14]. OpenLayers implements a JavaScript API used to build web-based
GIS applications. OpenLayers is similar to the Google Maps and MSN Virtual Earth APIs, with one main
difference: OpenLayers is free software, developed for and by the open source software community.
Each entity has specific duties and rights in the application. The administrator has full access rights
to master data processing, as well as data manipulation that is not possible for other users. The head of the
Animal Husbandry Department has the duty of validating the rabies case submission data received from
reports made by locals, and then makes decisions in determining the priority of the response to the infected
area. A user is an employee of the Animal Husbandry Department who accesses this geographic information
system application, is allowed to perform queries and region selection in accordance with the spatial query
and navigation options, and is assigned to enter report data resulting from public rabies submissions. Rabies
cases are reported by the public, whom we call submission senders, and each report is recorded as
submission data. Animal owners who have pets suspected of contracting rabies are questioned about the
condition and behavior of their pets.
The processes that occur between entities in the system and database involved in each process are
described in the Data Flow Diagram (DFD) level 0 systems as shown in Figure 4.
(Figure 4. DFD level 0: the User and Animal Husbandry Department entities interact with processes for user
verification, master data manipulation, submission data manipulation, and area determination, backed by data
stores including Master User Data, District, Sub-district, Village, Species, Breed, Preservative, Specimen,
Submission and Stratification Area data.)
There are five processes in the Data Flow Diagram (DFD) at this level: user verification, master data manipulation, submission data manipulation, determining the vaccination and elimination area, and determining the stratification area.
[Figure 5 (residue). Relationships between the spatial data (village, sub-district, and district shapefile layers) and the non-spatial attribute data stored in the PostGIS database: village_table (PK village_ID, village name, subdistrict_ID), subdistrict_table, and district_table (PK district_ID, district name).]
Figure 5 shows the relationships between the spatial data and the non-spatial data (attributes) stored in the PostGIS database. These relationships link the village table and village layer, the sub-district table and sub-district layer, and the district table and district layer.
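The key columns in Figure 5 (each village row referencing its sub-district, each sub-district referencing its district) can be illustrated with plain objects mirroring the three attribute tables. The sample IDs and names below are invented for the sketch; in the real application this lookup is the join PostGIS performs against the corresponding map layers:

```javascript
// Minimal in-memory mirror of the three attribute tables from Figure 5.
// Sample rows are invented for illustration.
const districts = { D01: { district_ID: "D01", district_name: "Badung" } };
const subdistricts = {
  S01: { subdistrict_ID: "S01", subdistrict_name: "Kuta", district_ID: "D01" }
};
const villages = {
  V01: { village_ID: "V01", village_name: "Legian", subdistrict_ID: "S01" }
};

// Resolve a village to its full administrative hierarchy by following the
// foreign keys, as the PK/FK columns in Figure 5 suggest.
function resolveVillage(villageID) {
  const v = villages[villageID];
  if (!v) return null;
  const s = subdistricts[v.subdistrict_ID];
  const d = s ? districts[s.district_ID] : null;
  return {
    village: v.village_name,
    subdistrict: s && s.subdistrict_name,
    district: d && d.district_name
  };
}
```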
The application comes with the OpenLayers GetFeatureInfo feature, which allows users to obtain detailed information about an area by selecting it on the map. This detailed information includes the rabies cases spread across the selected areas. In addition to showing a map of rabies spread, the application is also equipped with data storage capabilities for reports of bites by suspected rabies-carrying animals, specimen reports, specimen test reports, and specimen test results for suspected rabies carriers, as shown in Figure 7.
If the result of a specimen test is positive, the system automatically colors the place on the map where the rabies case happened. Red indicates a high number of rabies cases, yellow an intermediate number, and green a low number.
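The color coding described above can be sketched as a small threshold function. The numeric cut-offs (10 or more cases counts as high, 3 or more as intermediate) are assumptions made for illustration, since the paper does not state the actual thresholds:

```javascript
// Map a village's rabies case count to a display color as described in the
// text: red for high, yellow for intermediate, green for low case counts.
// The threshold values are illustrative assumptions.
function caseColor(caseCount, high = 10, intermediate = 3) {
  if (caseCount >= high) return "red";
  if (caseCount >= intermediate) return "yellow";
  return "green";
}
```

The returned string can then be applied as the fill color of the corresponding village polygon on the map.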
4. CONCLUSION
In this paper we conclude that the rabies GIS application can store in its database, and report on, submission data about dog bites in Bali Province by animals suspected of carrying rabies. The GIS application can also show the spread of rabies on the map where the cases happened, so users can determine the condition of rabies spread in one area, compare it to other areas, and be assisted in prioritizing decisions to focus on the areas with the highest rabies outbreak.
ACKNOWLEDGEMENTS
The authors thank the Animal Husbandry Department for their cooperation in the study.
REFERENCES
[1] Soeharsono, "Mengatasi Wabah Rabies di Bali," Kompas.com, 12 December 2008, http://kompas.com. Accessed 6 October 2011.
[2] Suardana, "Bali Target Bebas Rabies Tahun 2012," detik.news.com, 21 September 2010, http://news.detik.com. Accessed 6 October 2011.
[3] Susilawathi et al., "Epidemiological and clinical features of human rabies cases in Bali 2008-2010," BMC Infectious Diseases, 12:81, 2012.
[4] "Overview of GIS," geography.about.com, 2012, http://geography.about.com/od/geographyintern/a/gisoverview.htm. Accessed 6 October 2011.
[5] Saumini Kar, Ajanta De Sarkar, and Nandini Mukherje, "An Integrated Framework in Geographic Information System using Wireless Sensor Network," IJCA Proceedings on International Conference on Recent Advances and Future Trends in Information Technology (iRAFIT 2012), iRAFIT(2):13-18, April 2012.
[6] Prahasta, Eddy, "Konsep-konsep Dasar Sistem Informasi Geografis," Bandung: Informatika, 2005.
[7] De Mattos CA, De Mattos CC, Rupprecht CE, "Rhabdoviruses," in Fields Virology, vol. 1, 4th edition, edited by Knipe DM, Howley PM, Griffin DE, Martin MA, Lamb RA, Roizman B, Straus SE, Philadelphia: Lippincott Williams, 2001, pp. 1245-1277.
[8] Schnurrenberger, R. Paul, "An Outline of the Zoonoses," Alabama: The Iowa State University Press, 1991, pp. 60-63.
[9] Clifton M., "Rabies, canine, human – Indonesia (21): Bali," ProMED-mail, 29 December 2009, 20091229.4373, http://www.promedmail.org. Accessed 15 March 2010.
[10] Clifton M., "Rabies, canine, human – Indonesia," ProMED-mail, 13 March 2009, 20100313.0816, http://www.promedmail.org. Accessed 15 March 2010.
[11] "GeoServer," geoserver.org, http://geoserver.org/display/GEOS. Accessed 6 October 2011.
[12] Momjian B., "PostgreSQL Introduction and Concepts," zidluxinst.uibk.ac.at, http://zidluxinst.uibk.ac.at/postgresql/aw_pgsql_book.pdf.
[13] "PostGIS," postgis.refractions.net, http://postgis.refractions.net. Accessed 6 October 2011.
[14] "OpenLayers," openlayers.org, http://openlayers.org. Accessed 6 October 2011.
[15] Jesse D. Blanton et al., "Development of a GIS-based, real-time Internet mapping tool for rabies surveillance," International Journal of Health Geographics, 5:47, 2006.
BIOGRAPHY OF AUTHORS
I. A. K. Sita Laksmita
Has studied Informatics and Computer Systems in the Department of Electrical and Computer Engineering, Udayana University, since August 2008, and is now working on her research for the S.T. degree in Informatics and Computer Systems.