
Proceedings of

2-day National Conference on

Mathematical Sciences in
Engineering Applications
(NCMSEA - 2018)
18th – 19th April, 2018

Sponsored by

National
Mathematical
Society of Pakistan

Organized by
Department of Basic Sciences & Islamiat
University of Engineering & Technology, Peshawar
1st National Conference on Mathematical Sciences in Engineering Applications (NCMSEA - 18), April 18 - 19, 2018

Vice Chancellor Message

Science & Technology are distinct yet complementary fields of endeavor,
and their development is closely interrelated. Science attempts to predict
and explain the intricate relationships that exist between different
variables of the physical world. Technology applies the discoveries of
science, while contributing to its development by providing new
tools and instruments as well as new challenges and topics for research. The
coupled evolution of science and technology is imperative for the real
growth and development of society. University of Engineering and
Technology, Peshawar, is committed to the development of both Science
and Technology through Education, Research and Innovation. The 1st
National Conference on Mathematical Sciences in Engineering
Applications (NCMSEA-2018) will provide an opportunity for researchers,
academicians and technologists to present, discuss, and find solutions to
challenging problems. I believe that this conference will produce high-impact
theoretical and applied research in the areas of Mathematics,
Physics, Statistics, Computing and allied Engineering Technologies.


Preface

We feel honored to publish this Book of Proceedings of the first two-day national
conference on "Mathematical Sciences in Engineering Applications" (NCMSEA-2018), which
was held on 18-19 April 2018 at the Department of Basic Sciences and Islamiat, Main Campus,
University of Engineering and Technology Peshawar, Pakistan, in collaboration with Sarhad
University of Science and Information Technology, Peshawar.

The scope of the conference was to address new developments and research results in
the fields of Mathematics, Computational Physics, Computer Science and Mathematical Statistics, and
their applications in different engineering disciplines. The primary focus of this conference was
to bring together academicians, researchers and scientists for knowledge sharing in various areas
of Mathematics, Computational Physics, Computer Science and Statistics. NCMSEA-2018
provided an appropriate platform for the scientific community, where almost 150 participants met to
exchange ideas. During the two days of the conference, the researchers presented their most recent
discoveries in Mathematics, Physics, Computer Science and Statistics, apart from establishing
networks for possible joint collaboration.

An overwhelming response and interest was shown by scholars from various
universities and research organizations across Pakistan. Around 80 technical submissions were
received. After a rigorous double-blind peer review, only 52% of the total
submissions qualified for technical oral presentation; these constitute this conference
proceedings book.
The conference secretariat is highly indebted to the worthy Vice Chancellor, University of
Engineering and Technology Peshawar, Prof. Dr. Iftikhar Hussain, for his vision, motivation and
continuous support for this conference. We also express our gratitude to the national
keynote/invited speakers, technical participants, reviewers, session chairs, session co-chairs,
and University Administration. We also appreciate the sincere efforts of the Faculty and Staff members of
the Department of Basic Sciences & Islamiat, without whose support this conference
would not have been possible. Lastly, we are most indebted for the generous support given by the
HEC, Sarhad University of Science and Information Technology Peshawar, National
Mathematical Society Lahore, Punjab, and Habib Bank Limited Pakistan. We are grateful to them
and expect further cooperation in the promotion of Science & Technology in KP Province.
The compilation and editing of this compendium was a laborious task, accomplished
over a relatively short period of time. This uphill task was completed
with the help of dedicated and committed PhD scholars working under the supervision of Prof. Dr.
Siraj-ul-Islam and Dr. Noor Badshah.

Conference Secretariat
Prof. Dr. Siraj-ul-Islam (Conference Chair)
Dr. Noor Badshah (Conference Secretary)


Organizational Structure of National Conference -2018

I. Organizing Committees:
Steering Committee:

1. Prof. Dr. Iftikhar Hussain, Vice Chancellor (Patron)


2. Prof. Dr. Noor Muhammad, Dean of Engineering
3. Chairman of the Department, (Conference Chair)
4. Dr. Khizar Azam, Registrar
5. Mr. Nek Muhammad, Treasurer
6. Prof. Dr. Qaiser Ali, (Sec. BOASAR)
7. Dr. Abdul Shakoor, Dir. ORIC
8. Dr. Shumaila Farooq, Dir. Media
9. Dr. Noor Badshah, Conference Secretary
10. Prof. Dr. Afzal Khan, Chief Proctor
11. Dr Misbah Ullah, Dir. Admissions
12. Prof Dr Iftikhar Ahmed Khan, SUIT Peshawar
13. Engr Dr Javed Iqbal, SUIT Peshawar
14. Mr. Shahjahan, Administrative Officer

Proceedings Book Editorial Committee:

1. Prof. Dr. Siraj-ul-Islam


2. Dr. Noor Badshah
3. Mr. Fahim Ullah
4. Mr. Aurang Zeb

Scientific Committee:

1. Prof. Dr. Siraj ul Islam, UET Peshawar


2. Prof. Dr. Amjad Ali, UET Peshawar
3. Prof. Dr. Ali Muhammad, UET Peshawar
4. Prof. Dr. Zawwar Hussain, University of Punjab
5. Prof. Dr. Asif Ullah, PIEAS Islamabad
6. Prof. Dr. Shamsul Qamar, CIIT Islamabad
7. Dr. Muhammad Younis, UET Peshawar
8. Dr. Baseer Ullah, NESCOM Islamabad
9. Dr. Imran Aziz, University of Peshawar
10. Dr. Marjan Uddin, UET Peshawar
11. Dr. Noor Badshah, UET Peshawar
12. Dr. Iltaf Hussain, UET Peshawar
13. Dr. Rehan Ali Shah, UET Peshawar
14. Dr. Tufail Ahmad, UET Peshawar


15. Dr. Farooq, University of Peshawar


16. Dr. Rubi Bilal, Shaheed Benazir Bhutto Women University Peshawar
17. Dr. Nudrat Aamir, Shaheed Benazir Bhutto Women University Peshawar
18. Dr. Saboor Khan, Kohat University of Science and Technology Kohat
19. Dr. Wali Khan Mashwani, Kohat University of Science and Technology Kohat
20. Dr. Arshad Ali, Islamia College Peshawar
21. Dr. Rashida Adeeb Khanum, Jinnah College for Women Peshawar
22. Dr. M. Idress, Islamia College University Peshawar
23. Dr. Shafiq, UET Peshawar
24. Dr. Karim Akhtar, UET Peshawar
25. Dr. Khan Shahzada, UET Peshawar
26. Dr. Nasru Minallah, UET Peshawar
27. Dr Zeeshan, SUIT Peshawar
28. Mr Tariq Abbas, SUIT Peshawar
29. Dr. Nasir Ahmad, UET Peshawar
30. Dr. Sakhi Zaman, UET Peshawar
31. Dr. Asmat ullah, ESE KP
32. Dr. Javid Iqbal, SUIT Peshawar
33. Dr. Ihtisham ul Islam, SUIT Peshawar
34. Dr. Murad Khan, SUIT Peshawar
35. Dr. Mahmood Khan, SUIT Peshawar
36. Dr. Shahid Mahmood, SUIT Peshawar
37. Mr. Imran Khan, SUIT Peshawar
38. Dr. Affaq Qamar, USPCASE UET Peshawar
39. Miss Hadia Atta, Islamia College Peshawar

Conference Publicity and Promotion Committee:

1. Dr. Tufail Ahmad, Convener


2. Dr. Shumaila Farooq, Member
3. Mr. Jamal Nasir, Member

Event Management:

1. Dr. Noor Badshah


2. Dr. Misbah Ullah
3. Mr. Anwar Shah
4. Mr. Gul Shed
5. Mr. Atta ur Rehman
6. Mr. Saddam Hussain
7. Mr. Iqbal Uddin
8. Ms. Shaista
9. Ms. Gul Andam
10. Mr. Ali Akbar Shinwari
11. Mr. Waseem Khattak
12. Mr. Kashif Jan, Asst. Computer Programmer


Conference Treasurers:

1. Mr. Nek Muhammad


2. Dr. Noor Badshah

Sponsorship Management:

1. Prof. Dr. Siraj-ul-Islam


2. Dr. Noor Badshah
3. Dr. Abdul Shakoor
4. Dr. Iltaf Hussain
5. Mr. Anwar Shah

Logistics and Transportation Committee:

1. Dr. Shafiq
2. Mr. Ihtram ul Haq
3. Mr. Qadeem, (ATO)
4. Mr. Anwar Shah

Invited Speakers:

1. Dr. Baseer Ullah, National Engineering and Scientific Commission Islamabad


2. Prof. Dr. Malik Zawwar Hussain, University of Punjab Lahore
3. Prof. Dr. Muhammad Arshad, International Islamic University Islamabad
4. Prof. Dr. Shamsul Qamar, CIIT Islamabad
5. Prof. Dr. Asifullah Khan, PIEAS Islamabad
6. Prof. Dr. Wali Khan Mashwani, KUST Kohat
7. Prof. Dr. Abdus Saboor, KUST Kohat
8. Prof. Dr. Salim ur Rehman, VC SUIT Peshawar
9. Prof. Dr. Iftikhar Ahmad, VC, Abbottabad University of Science and Technology Abbottabad
10. Prof. Dr. Rubi Bilal, SBBWU Peshawar
11. Prof. Dr. Anisa Qamar, University of Peshawar
12. Prof. Dr. Abdullah Shah, CIIT Islamabad


Table of Contents
Vice Chancellor Message ......................................................................................................................................i

Preface................................................................................................................................................................. ii

Organizational Structure of National Conference -2018 .................................................................................... iii

Suliman, Sakhi Zaman, Siraj-ul-Islam, Meshless collocation method for one-dimensional highly oscillatory
integrals ................................................................................................................................................................ 1

Syed Zulfiqar Ali Shah, Iftikhar Ahmad, Fault Tolerant Suffix Trees ................................................................ 10

Kausar Ghawas Khan Yousafzai, Laiq Hasan, FPGA Implementation of single object tracking using mean shift
algorithm ............................................................................................................................................................ 16

M. Haroon, L. Hasan, Performance Analysis of Single Object Tracking Algorithms .......................................23

Zahid Ali Khan, Suhail Yousaf, Performance Evaluation of Database Technologies for Internet of Things
Device ................................................................................................................................................................ 28

Aysha Nayab, Naina Said, Syed Saddam Hussain Shah, Waleed Khan, Zaryab Ali Shinwari, Nasru Minallah,
Hierarchical Comparison and Classification of P2P Systems Based on Architectural Design ..........................37

Amir Khan, Gul Zaman, On Non-Associative Flexible Loops ..........................................................................47

Muhammad Ahsan, Iltaf Hussain, A Haar wavelet collocation method for recovering time space dependent heat
source .................................................................................................................................................................52

Farooq Khan, Masood Ahmad, Siraj-ul-Islam, A Comparative analysis of meshless and sinc-collocation
method for some PDEs.......................................................................................................................................57

Iqrar Hussain, Sakhi Zaman, Siraj-ul-Islam, On numerical evaluation of the oscillatory integrals of Bessel
type ...................................................................................63

Manzoor Ahmad, On magnetohydrodynamic stagnation point flow of third order fluid over a lubricated surface
with heat transfer ................................................................................................................................................70

Mehwish Saleem, Siraj-ul-Islam, Sakhi Zaman, Numerical evaluation of two-dimensional highly oscillatory
integrals .............................................................................................................................................................. 80

Khawaja Shams-ul-Haq, Muhammad Ahsan, Siraj-ul-Islam, Identification of unknown heat source in inverse
problem by Haar wavelet collocation method ..................................................................................................95

Shomaila Mazhar, Siraj-ul-Islam, Sakhi Zaman, On numerical computation of three dimensional highly
oscillatory integrals ..........................................................................................................................................101

Wajid Khan, Siraj-ul-Islam, Baseer Ullah, A global weak form meshless method for the numerical solution of
elasto-static problems .......................................................................................................................................113


Yahya, Siraj-ul-Islam, Sakhi Zaman, A Comparative study of the approximations of singular and hyper singular
integrals ............................................................................................................................................................ 123

Mati-ur-Rahman, Zaheer-ud-Din, Siraj-ul-Islam, Meshfree methods of 1D Fredholm integral equation having
oscillatory discontinuous kernel .......................................................................130

Ali Ahmad, Noor Badshah, Fuzzy Selective Image Segmentation Model Hybrid with Local Image Data and
Target Region Energy ......................................................................................................................................134

Muhammad Taj, Orthotropic – Winkler Like Model for Buckling of Microtubules Due to Bending and Torsion.
.........................................................................................................................................................................141

Noor Badshah, Fazli Rehman, Ali Ahmad, An Efficient Hybrid Distance Variational Image Segmentation
Model ............................................................................................................................................................... 156

Awal Sher, Maryum, Haider Ali, Joint Image De-hazing and Segmentation ..................................................172

Hassan Shah, Noor Badshah, Fahim Ullah, Segmentation model for texture images by piecewise smooth
approximation ..................................................................................................................................................175

Noor Badshah, Ijaz Ullah, Ali Ahmad, Efficient Variational Model for Image Segmentation Based on Multi-
Scale lowpass filtering .....................................................................................................................................188

Mushtaq Ahmad Khan, Wen Chen, Asmat Ullah, Muhammad Sadiq, and Sajad Ali, Total variation
regularization via radial basis function approximation for speckle noise removal ..........................................199

Noor Badshah, Muhammad Naveed Khan, and Hadia Atta, Selective segmentation of images via local gaussian
distribution .......................................................................................................................................................212

Lubna Rafiq, Sapna Tajbar, Summaya, Sidra Malik, Nabia Gul, Digital Image Processing Applications for
Monitoring & Mapping Soil Salinity ...............................................................................................................221

Tahir Zaman, Noor Badshah, Hassan Shah, Fahim Ullah, A New Variational Model for Segmentation of Texture
Images via L0 norm .........................................................................................................................................233

Fatima Shoaib, Noor Badshah, Ali Ahmed, An Efficient Hybrid Kernel Metric Energy Segmentation Model
.........................................................................................................................................................................253

Fazal Ghaffar, Noor Badshah, Higher order scheme for the solving 1-D fractional diffusion equation ..........263

S. Ullah, G. M. Gusev, A. K. Bakarov, F. G. G. Hernandez, Optically detected long-lived spin coherence in
multilayer systems: Double and Triple quantum wells ....................................................275

Shahzad Ali Jamshid, Iftikhar Hussain, Hamidullah, Muhammad Nafees Khan, Outcome Based Education
System: A Pilot Study in Industrial Engineering ............................................................................................. 283

Arfa Ali, Iftikhar Ahmad, Hussain Rahman, Sami Ur Rahman, A Hybrid Approach For Automatic Aorta
Segmentation In Abdominal 3d CT Scan Images ............................................................................................ 290

Faisal Imtiaz, Nasru Minallah, Muniba Ashfaq, Waleed Khan, M. Jawwad, On the performance of digital image
processing technique for modeling human actions .......................................................................................... 297


Muhammad Arsalan Khattak, Muhammad Fahad, Muhammad Adeel Arshad, Mohammad Adil, Adil Rafiq,
Calibration Of Numerical Model Of RCC TEE Beam Bridge Deck With Scaled Physical Model For Fatigue
Analysis............................................................................................................................................................ 303

Muhammad Kashif Nawaz, Iftikhar Ahmad, Risk Tolerant K-Min Search Algorithm ...................................319

Mustafa Ayub, Ihsan Ullah Khalil, Introduction to Civilmatics ......................................................................325

Khalid khan, Misbah Ullah, Optimization of Lot size and Backorder Quantity Considering Learning Phenomena
with Random Defect Rate ................................................................................................................................ 328

Muhammad Adil, Adeed Khan, Muhammad Irshad Hussain, Analytical Modelling of Ultimate Flexural and
Shear Strengths of Lightly Reinforced Ferrocement Beams ............................................................................341

Muhammad Shoaib, Usman Khan Khalil, Syed Zain Kazmi, and Javed Iqbal, Performance evaluation of Moving
Objects Detection Algorithms .......................................................................................................................... 352

Umar Mahboob, Syed Zain Kazmi, Usman Khalil, Javed Iqbal, Real Time Eye Detection and Tracking Method
for Driver Assistance system............................................................................................................................ 361


Meshless collocation method for one-dimensional highly oscillatory integrals

Suliman a,∗, Sakhi Zaman a, Siraj-ul-Islam a

a Department of Basic Sciences, University of Engineering and Technology Peshawar, Pakistan.

Abstract
To evaluate highly oscillatory integrals efficiently and accurately is very demanding in fields such
as electromagnetics, optics, quantum mechanics and acoustics [5, 11]. Due to the oscillatory behavior
of the integrands, it is very challenging for traditional methods to evaluate these
integrals numerically for large values of the frequency parameter. The present paper evaluates highly oscillatory integrals
by Gaussian radial basis functions (GRBFs) based on the Levin procedure. The characteristic of the
GRBF with the Levin approach is its ability to tackle the oscillation of the integrand. The present method is
compared with the cited literature. Some test problems are included to demonstrate the accuracy of
the method.
Keywords: Highly oscillatory integrals (HOIs), Levin's method, Gaussian radial basis function (GRBF), shape parameter ε, LU-factorization.

1 Introduction
Evaluating highly oscillatory integrals numerically is a challenging problem in different fields
such as electromagnetics, optics, acoustics and quantum mechanics [5, 11]. Generally, these integrals
can be represented as:

I = ∫_a^b f(x) e^{iωg(x)} dx,    (1)

where f(x) and g(x) are smooth functions, called the amplitude and phase functions respectively.
The parameter ω, a positive real number, is the frequency of oscillation. A large
value of ω implies that the factor e^{iωg(x)} of the integrand is highly oscillatory.
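As a brief illustration (our addition, not part of the paper), consider integral (1) with f(x) = x² and g(x) = x, which is Test Problem 1 below. The integral admits a closed form by repeated integration by parts, which a naive adaptive quadrature must match by resolving every oscillation of the integrand:

```python
import numpy as np
from scipy.integrate import quad

def exact(omega):
    # Closed form of I = ∫_0^1 x^2 e^{iωx} dx, from the antiderivative
    # e^{iωx} (x^2/(iω) − 2x/(iω)^2 + 2/(iω)^3).
    a = 1j * omega
    return np.exp(a) * (1 / a - 2 / a**2 + 2 / a**3) - 2 / a**3

def naive(omega, limit=500):
    # Adaptive quadrature on the real and imaginary parts separately;
    # the subdivision budget must grow with ω to resolve the oscillations.
    re, _ = quad(lambda x: x**2 * np.cos(omega * x), 0.0, 1.0, limit=limit)
    im, _ = quad(lambda x: x**2 * np.sin(omega * x), 0.0, 1.0, limit=limit)
    return re + 1j * im

omega = 200.0
print(abs(naive(omega) - exact(omega)))  # small here, but the cost grows with ω
```

The methods discussed below aim at the opposite behavior: accuracy that improves, rather than degrades, as ω grows, at a cost independent of ω.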

In the last two decades, a number of accurate methods have been developed for the com-
putation of one-dimensional HOIs, including the asymptotic expansion method [4], the numerical
steepest descent method [3], Filon(-type) methods [1, 10, 16], and Levin(-type) methods [6–8, 12–15].

The asymptotic expansion theory considered by Filon [1] yields an accurate method for
the evaluation of HOIs. Later, the author of [10] extended Filon's method [1]. The asymptotic
order of the method in [10] is high as the frequency ω −→ ∞, but one of its limitations
is that it handles only HOIs with linear phase functions.


∗ The author to whom all correspondence should be addressed. Email: salman geyes@yahoo.com

Table 1: Nomenclature Box:

Symbol      Description
RBF         Radial basis functions
GRBF        Gaussian radial basis functions
HOIs        Highly oscillatory integrals
QG[f]       Meshless method with Gaussian RBF
ε           Shape parameter of GRBF
ω           Frequency parameter

On the other hand, the Levin method has received much attention, as it can compute HOIs with compli-
cated phase functions. This method converts an oscillatory integral into an ordinary or partial
differential equation and then solves that equation. Levin [6] used monomial basis functions.

The present paper uses the Gaussian radial basis function (RBF) to replace the monomial basis
in the Levin approach. Examples are evaluated numerically and compared with the cited
literature.
The remainder of the paper is organized as follows. The meshless collocation method is
briefly discussed in Section 2. The numerical computation of the given examples is presented in
Section 3. The paper is concluded with a few remarks in Section 4.

2 Meshless collocation procedure


To compute an oscillatory integral of the form (1) with no stationary point, Levin's procedure
seeks an approximate function S̃(x) that satisfies the ordinary differential equation (ODE):

S'(x) + iω g'(x) S(x) = f(x).    (2)

Substituting the value of f(x) from (2) into (1), we get

QG[f] = ∫_a^b [S̃'(x) + iω g'(x) S̃(x)] e^{iωg(x)} dx
      = ∫_a^b d[S̃(x) e^{iωg(x)}]
      = S̃(b) e^{iωg(b)} − S̃(a) e^{iωg(a)}.

In this procedure, we assume that

S̃(x) = Σ_{k=1}^{m} δ_k φ_k(r, ε)

is an approximate solution of (2). The unknown coefficients of S̃(x) can be determined from the following interpolation condition:

S̃'(x_i) + iω g'(x_i) S̃(x_i) = f(x_i),  i = 1, 2, ..., m.    (3)

Equation (3) gives a system of m linear equations in m unknowns, which can be written in matrix
notation as

Aδ = F,

where

δ = [δ_1, δ_2, ..., δ_m]^T,  F = [f_1, f_2, ..., f_m]^T,
and A is a square matrix of order m × m with entries:

a_ij = φ'_ij(r, ε) + iω g'(x_i) φ_ij(r, ε),  i, j = 1, 2, ..., m.

The system of linear equations (3) can be solved for δ using LU-factorization or the Gauss
elimination method.
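As an illustration of this step (our sketch; the matrix below is a random stand-in, not the actual collocation matrix), a complex system Aδ = F of this kind can be solved with an LU factorization as follows:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
m = 5
# Stand-in for the complex m×m collocation matrix A and right-hand side F.
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
F = rng.standard_normal(m) + 1j * rng.standard_normal(m)

lu, piv = lu_factor(A)            # LU factorization with partial pivoting
delta = lu_solve((lu, piv), F)    # forward and back substitution for δ

residual = np.linalg.norm(A @ delta - F)
print(residual)  # essentially zero for a well-conditioned system
```

`lu_factor` performs the factorization once; `lu_solve` then costs only a forward and a backward substitution, which pays off if several right-hand sides share the same matrix A.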
In the proposed work, a Gaussian RBF φ(r, ε) is used as the basis function, defined as

φ(r, ε) = e^{−r²/ε²},  r = |x − x_c|,

where x_c denotes the m centers of the RBF interpolation and ε is the shape parameter. The
derivative of φ(r, ε) is given by

φ'(r, ε) = (−2r/ε²) e^{−r²/ε²}.
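The basis function and its derivative translate directly into code; the short check below (our addition, with arbitrarily chosen values of r and ε) verifies the stated derivative formula against a central finite difference:

```python
import numpy as np

def phi(r, eps):
    """Gaussian RBF: φ(r, ε) = exp(−r²/ε²)."""
    return np.exp(-r**2 / eps**2)

def dphi(r, eps):
    """Analytic derivative: φ'(r, ε) = (−2r/ε²) exp(−r²/ε²)."""
    return (-2.0 * r / eps**2) * np.exp(-r**2 / eps**2)

# Central finite-difference check of the derivative formula.
r, eps, h = 0.3, 0.5, 1e-6
fd = (phi(r + h, eps) - phi(r - h, eps)) / (2 * h)
print(abs(fd - dphi(r, eps)))  # negligible (O(h²) truncation error)
```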
2
The accuracy of the meshless procedure depends on selecting an optimal value of the
shape parameter, and many researchers have developed algorithms for this purpose.
In this work, the following algorithm is used. Its main advantage is that the value
of the shape parameter adapts to changes in the nodal points and in the frequency
parameter ω.

Programme:
i. function ceps = CostEpsilon(x, ε, ω, F)
ii. A = φ'_ij(r, ε) + iω g'(x_i) φ_ij(r, ε), i, j = 1, 2, ..., m;
iii. invA = pinv(A);
iv. κ = diag(invA);
v. λ = invA*F';
vi. EF = λ./κ;
vii. ceps = norm(EF(:));
The calling sequence for CostEpsilon is given by
ε = fminbnd(@(ε) CostEpsilon(x, ε, ω, F), mine, maxe);
where mine and maxe delimit the interval in which the optimal value of ε is sought.
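Putting the pieces of this section together, the rule QG[f] might be sketched in Python as follows (our reconstruction: the uniform node layout and the fixed shape parameter are assumptions, with the fixed ε standing in for the CostEpsilon search above):

```python
import numpy as np

def levin_grbf(f, g, dg, a, b, omega, m=14, eps=0.15):
    """Levin-type rule with Gaussian RBFs: collocate condition (3) at m
    points, then return Q_G[f] = S(b) e^{iωg(b)} − S(a) e^{iωg(a)}."""
    x = np.linspace(a, b, m)                 # collocation points = RBF centers
    R = x[:, None] - x[None, :]              # signed distances x_i − x_c
    Phi = np.exp(-(R / eps)**2)              # φ(r, ε) evaluated at the nodes
    dPhi = (-2.0 * R / eps**2) * Phi         # d/dx φ, chain rule on r = x − x_c
    A = dPhi + 1j * omega * dg(x)[:, None] * Phi   # rows of condition (3)
    delta = np.linalg.solve(A, f(x))         # LU-based solve for δ
    S = lambda t: np.exp(-((t - x) / eps)**2) @ delta
    return S(b) * np.exp(1j * omega * g(b)) - S(a) * np.exp(1j * omega * g(a))

# Test problem 1: f(x) = x^2, g(x) = x on [0, 1], checked against the closed form.
omega = 100.0
Q = levin_grbf(lambda x: x**2, lambda x: x, lambda x: np.ones_like(x), 0.0, 1.0, omega)
aa = 1j * omega
I_exact = np.exp(aa) * (1 / aa - 2 / aa**2 + 2 / aa**3) - 2 / aa**3
print(abs(Q - I_exact))
```

The last lines compare the computed value against the closed form of the integral of test problem 1; note that only m = 14 nodes are used regardless of ω.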

3 Numerical Examples and Discussion


In this section the proposed method is tested on some benchmark problems
from [2, 9]. The reference solutions are obtained with MAPLE. The
absolute error norm Labs and the percentage relative error norm Lre are computed for the test
problems. Results of the proposed method are compared with the results of the methods
in [2, 9]. The computations are performed in MATLAB 2017.


Test Problem 1. Compute the integral [9]:

I[f] = ∫_0^1 x² e^{iωx} dx.    (4)

The integral is highly oscillatory and is evaluated with the method QG[f]. Results in
the form of the absolute error norm are presented in Fig. 1 and table 2. In
Fig. 1, the absolute errors of QG[f] are calculated and compared with the results of the method
reported in [10]. From Fig. 1, it is concluded that the performance of the method QG[f] is
much better than that of the method reported in [10].
The method QG[f] is tested for higher frequency parameters and numbers of nodes in table 2. For the
results of table 2, the shape parameter ε lies in [0, 0.5]. The shape parameter was kept fixed
after several experiments to find the optimal value. One may try different val-
ues and fix the optimal one according to the desired accuracy. It is clear from
table 2 that as the value of ω or the number of nodal points increases, the accuracy of the method
increases.

The method QG[f] is also efficient; its behavior for test problem 1 is shown in
Fig. 2 (left). The integral (4) is highly oscillatory: even for a very low frequency the integrand
shows high oscillations, as illustrated in Fig. 2 (right). An
advantage of the method QG[f] is that its accuracy improves with increasing ω even
for a very small number of nodal points.


Figure 1: (left) Labs of QG[f] for m = 10; (right) Labs of the Filon method in [10], for test
problem 1.

Table 2: Labs produced by QG[f] for ε ∈ [0, 0.5] for test problem 1.

ω       m = 10         m = 20         m = 30
10^2    1.7176e − 07   7.4221e − 10   1.7749e − 09
10^3    6.0955e − 10   8.4980e − 11   3.8688e − 12
10^4    2.5888e − 11   6.1223e − 14   8.5053e − 13
10^5    2.6943e − 13   4.5111e − 15   6.9706e − 15
10^6    1.5070e − 15   1.2994e − 14   1.5936e − 15
10^7    2.4956e − 17   7.9096e − 17   9.0589e − 16


Figure 2: (left) CPU time of the proposed method for m = 10; (right) oscillatory behavior for
ω = 200, for test problem 1.

Test Problem 2. Compute the integral [2]:

I[f] = ∫_0^1 √(x(1 + x)) e^{iω√(x² + 2x + 2)} dx.    (5)

The integral (5) is highly oscillatory; this behavior of the integrand is
shown in Fig. 3 (right), which shows that even for the low frequency ω = 100 the integrand
oscillates rapidly.
As the frequency increases, it becomes difficult for traditional methods to evaluate the given integral
accurately because of the oscillations. Therefore, the integral is evaluated by
the proposed method QG[f]. Results in terms of the percentage relative error norm Lre are
given in table 3. Results in terms of the absolute error norm for
large frequencies are presented in Fig. 3 (left). From Fig. 3 it is evident
that as the frequency parameter ω increases, the accuracy of QG[f] also increases for

Table 3: Percentage Lre produced by QG[f] for ε ∈ [0, 0.5] and by the methods of [2] for test
problem 2.

ω       QG[f]          K. Chen        Gauss
100 9.0568e − 06 2.44e − 03 2.40e + 03
200 1.7780e − 06 2.71e + 05 8.59e + 02
300 9.1901e − 06 5.59e − 05 2.35e + 03
400 2.7894e − 06 2.12e − 05 5.5e + 03
500 4.2396e − 05 9.11e − 06 3.65e + 03
600 3.4915e − 06 7.51e − 06 8.01e + 03
700 5.9758e − 06 9.04e − 06 1.59e + 04
800 3.9706e − 05 6.47e − 06 1.38e + 04
900 4.2679e − 07 8.63e − 07 1.00e + 04
1000 6.0068e − 07 4.00e − 06 1.89e + 04

fixed nodal points.

The proposed method QG[f] is compared with the numerical results of the
methods reported in [2]; this comparison is shown in table 3. From
table 3 it is concluded that QG[f] is more accurate than the methods reported in [2].



Figure 3: (left) Labs of QG [f ] for m = 20, (right) Oscillatory behavior of the real part of problem
2 for ω = 100.

4 Conclusion
In this paper, we presented a meshless collocation method QG[f] to evaluate highly
oscillatory integrals numerically. The proposed method evaluates HOIs for large values of the
frequency parameter, which is a special strength of the method. The other cited methods
do not achieve comparable accuracy for large values of ω. Numerical results confirm the
performance of the method.

References
[1] L. N. G. Filon. On a quadrature formula for trigonometric integrals. Proc. Roy. Soc.
Edinburgh, 49:38–47, 1928.
[2] P. J. Harris and K. Chen. An efficient method for evaluating the integral of a class of
highly oscillatory functions. J. Comp. Appl. Math., 230:433–442, 2009.
[3] A. Iserles and S. P. Norsett. Efficient quadrature of highly oscillatory integrals. Proc.
Roy. Soc., 461:1383–1399, 2005.
[4] A. Iserles and S. P. Nørsett. On the computation of highly oscillatory multivariate
integrals with stationary points. BIT, Numer. Math., 46(3):549–566, 2006.
[5] A. Ishimaru. Wave propagation and scattering in random media, volume 2. Academic
press New York, 1978.
[6] D. Levin. Procedures for computing one and two-dimensional integrals of functions
with rapid irregular oscillations. Math. Comp., 158:531–538, 1982.
[7] J. Li., X. Wang, T. Wang, and S. Xiao. An improved levin quadrature method for
highly oscillatory integrals. J. Appl. Numer. Math., 60:833–842, 2010.
[8] Y. Liu. Fast evaluation of canonical oscillatory integrals. Appl. Math., 6(2):245–251,
2012.
[9] S. Olver. Numerical approximation of highly oscillatory integrals. PhD thesis, 2008.
[10] S. Olver. Fast and numerically stable computation of oscillatory integrals with sta-
tionary points. BIT, 50:149–171, 2010.
[11] K. Shariff and A. Wray. Analysis of the radar reflectivity of aircraft vortex wakes. J.
Fluid. Mech., 2002.
[12] Siraj-ul-Islam, A. S. Al-Fhaid, and S. Zaman. Meshless and wavelets based complex
quadrature of highly oscillatory integrals and the integrals with stationary points.
Eng. Anal. Bound. Elemt., 37:1136–1144, 2013.
[13] Siraj-ul-Islam, I. Aziz, and W. Khan. Numerical integration of multi-dimensional
highly oscillatory, gentle oscillatory and non-oscillatory integrands based on wavelets
and radial basis functions. Eng. Anal. Bound. Elemt., 36:1284–1295, 2012.

8
1st National Conference on Mathematical Sciences in Engineering Applications (NCMSEA - 18), April 18 - 19, 2018
[14] Siraj-ul-Islam and S. Zaman. New quadrature rules for highly oscillatory integrals
with stationary points. J. Compt. Appl. Math., 278:75–89, 2015.
[15] S. Xiang. Efficient quadrature for highly oscillatory integrals involving critical points.
J. Comp. Appl. Math., 206:688–698, 2007.
[16] S. Xiang. Mr2276763 (2008k: 65051) 65d30. Numer. Math., 105(4):633–658, 2007.


FAULT TOLERANT SUFFIX TREES


Syed Zulfiqar Ali Shah1, Iftikhar Ahmad2
Department of Computer Science and Information Technology,
University of Engineering and Technology, Peshawar

ABSTRACT

Classical algorithms and data structures assume that the underlying memory is reliable and that data remain safe during and after processing. Several studies, however, have shown that large, inexpensive memories are vulnerable to bit flips, and even a few memory faults can threaten the overall output of a classical algorithm.
Fault-tolerant data structures and resilient algorithms are designed to tolerate a limited number of faults and to produce correct output based on the uncorrupted part of the data. The suffix tree is an important data structure for string-matching applications. A fault-tolerant suffix tree was presented by Christiano and Demaine, but it relies on complex techniques: encodable and decodable error-correcting codes, blocked data structures, and fault-resistant tries.
In this paper we present a fault-tolerant suffix tree (FTST) based on the natural technique of data replication, using the faulty-memory random access machine model of Finocchi and Italiano. A resilient version of Ukkonen's suffix tree algorithm is also presented for constructing the FTST. Our duplication function is √σ (where σ is the upper bound on the number of corruptions): √σ + 1 copies of each start index are stored to sustain at most σ memory faults injected by an adversary. The time and space complexity of our FTST is O(mσ), where m is the size of the input string, and the cost of searching a pattern of size n is O(nσ). We also prove that the upper bound on corrupt suffixes is ⌊2σ/(√σ + 1)⌋.

Index Terms: resilient data structures, suffix trees, computing with unreliable information.

1. INTRODUCTION
Memory plays a pivotal role in all Turing-based computational platforms, and the correctness of computational results largely depends on the correctness of the underlying memory system. Even flips of a few memory bits can alter stored data and, consequently, the results obtained. Large-scale computations and applications require correspondingly large amounts of memory, which are exposed to a large number of bit flips.
An uncontrolled bit flip is called a memory fault. Memory faults are categorised as either physical errors or transient errors [5]. Physical errors result from the failure of a hardware component; transient errors are faults such as bit flips in semiconductor devices like memories. Physical errors can be corrected by replacing the defective devices. Transient errors can be contained by error detection and correction circuitry, but this is an expensive solution in both price and performance, as it adds computational overhead. Another option is high-performance, reliable memory such as registers, but this is too expensive for large applications. Hence these faults need to be handled at the application level instead of the hardware level.
Classical data structures and algorithms are not capable of dealing with such memory faults: a few faults can make an algorithm take wrong steps [5]. Special algorithms, known as "resilient algorithms", can be used in such situations; a resilient algorithm is one that computes a correct output based on the uncorrupted values [4].
The suffix tree is an important classical data structure used in various string applications. Like other classical data structures, it is vulnerable to memory faults, and no simple fault-tolerant algorithm is available for constructing a fault-tolerant suffix tree that can search for a substring despite the presence of some memory faults. It is therefore desirable to have a fault-tolerant suffix tree, built with the simple Ukkonen algorithm, that can find a pattern even when the tree is affected by memory faults.

1 zulfikar_pk@yahoo.com
2 ia@uetpeshawar.edu.pk


2. LITERATURE REVIEW

A lot of work has been done in the field of fault-tolerant data structures and resilient algorithms. Large-scale applications
process massive data which requires large amount of low-cost memory. Hence fault-tolerance is an important consideration in
large systems, safety critical systems and financial institutions’ systems etc. Computing with unreliable information is a new
and interesting area of research. Investigations and explorations have been carried out for coping with the problem of computing
with unreliable information in a variety of different settings. For example, liar model [14], fault-tolerant sorting networks [13],
resiliency of pointer-based data structures [12] and parallel models of computation with faulty memories [15].
Finocchi and Italiano [4] introduced a faulty-memory random access machine model in which storage locations may suffer memory faults: an adversary can alter up to σ storage locations throughout the execution of an algorithm. This model has been used to produce correct output based on uncorrupted values [6-10,16]. Aumann and Bender [12] proposed fault-tolerant stacks, linked lists and general trees using a reconstruction technique: when faults are detected, the data structure is rebuilt. Finocchi, Grandoni and Italiano [6] presented resilient search trees that achieve optimal time and space bounds while tolerating up to O(√log n) memory faults, where n is the size of the search tree. Jørgensen, Moruz and Mølhave [16] presented a resilient priority queue, tolerating O(log n) corruptions, in which the deletemin operation returns either the minimum uncorrupted element or some corrupted element.
Two popular techniques, data replication and error-correcting codes, have been used in the design of fault-tolerant
data structures. Data replication technique is successfully used by Finocchi, Grandoni, Italiano, Caminiti in [5-10, 18]. The
technique of Error-correcting codes is used by Christiano, Demaine, and Kishore in [11] and by Chen, Grigorescu, and Ronald
de Wolf in [17]. Christiano, Demaine, and Kishore have used this technique to design lossless fault-tolerant data structures.
Their data structures consist of "fault-tolerant blocks", constructed using linear-time encodable and decodable error-correcting codes. They construct the fault-tolerant suffix tree by first computing the Euler tour of the suffix tree of string S with a fault-tolerant algorithm in O(n log n + σ) time, storing the result in fault-tolerant memory, and then building a fault-tolerant version of this trie. This setup is used to perform fault-resistant queries on the compressed suffix trie of S.

Our fault-tolerant suffix tree model is based on the natural technique of data replication. We use the faulty-memory random access machine model introduced by Finocchi and Italiano [4]. In this work √σ is used as the duplication function, where σ is an upper bound on the number of memory faults. Instead of duplicating the actual characters of the string, only the starting index of each edge is duplicated (√σ + 1 times). Our FTST can tolerate ⌊(√σ + 1)/2⌋ − 1 faults on each edge of the suffix tree.
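As an illustration only (the function and variable names are ours, not the paper's), the write side of this replication scheme can be sketched in a few lines, assuming σ is a perfect square so that DF = √σ is an integer:

```python
import math

def make_resilient_index(start_index: int, sigma: int) -> list[int]:
    """Store NOC = DF + 1 copies of an edge's start index, DF = sqrt(sigma).

    sigma is the adversary's fault budget; the copies live in faulty
    memory, while the string itself stays in safe memory.
    """
    noc = math.isqrt(sigma) + 1   # duplication factor sqrt(sigma), plus the original
    return [start_index] * noc

copies = make_resilient_index(42, sigma=25)   # DF = 5, so 6 copies are kept
```

With 6 copies, up to ⌊(√σ + 1)/2⌋ − 1 = 2 of them can be corrupted while a strict majority still recovers the true index.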

3. PROPOSED ALGORITHM AND ANALYSIS

3.1 Proposed Algorithm

Our Fault-Tolerant Suffix Tree (FTST) model consists of two parts. First part, described in Figure-1, consists of the resilient
version of Ukkonen’s algorithm to construct a resilient suffix tree data structure for a given string S. The resilience capability
is added to this data structure by duplication of starting index of the edge of a node which links it to its parent node. The string
S and pattern P are placed in safe memory while the nodes of fault-tolerant suffix tree are constructed in faulty memory and
can be corrupted by an adversary.


Figure-1
Lemma-1: The maximum number of edges NOE in a suffix tree satisfies NOE ≤ 2|LS| − |LCS|.
Theorem-2: For DF = √σ, we require σ < (1/8)(4·NOE + NOE²) + (1/8)√(8·NOE³ + NOE⁴); otherwise an adversary can corrupt all edges beyond repair.

3.1.1. FaultTolerantSuffixTree()
This is the main part of FTST, which constructs a fault-tolerant suffix tree for a given string S of size LS. The symbol $, appended as the last character, makes the suffix tree explicit. LCS is the number of unique characters that constitute S, and NOE is the number of possible edges of the suffix tree. Max, calculated by Theorem-2, represents the maximum number of memory faults beyond which the adversary can corrupt the whole suffix tree; σ is selected randomly between 1 and Max. DF is our duplication function, and NOC (one more than DF, to account for the original value) is the number of stored copies of each edge's starting index. FTSTi is the fault-tolerant suffix tree of the first i characters of S. Lines 10-15 construct the FTST using the extension rules of Ukkonen's algorithm; FTST uses all the heuristics of Ukkonen's algorithm to achieve the same time and space bounds with an overhead of O(σ). All nodes of the FTST reside in faulty memory and can be corrupted. The START index of each node is stored in an array of size NOC.
An adversary can use two worst-case strategies to inject corruptions: first, distribute the total bounded corruptions over the maximum number of edges; second, fully corrupt a few edges by concentrating the maximum corruptions on them. Our FTST model can sustain both types of attack and can fault-tolerantly find a given pattern (if it is indeed a substring).


The second part of FTST, described in Figures-2 to 4, consists of three functions, TraverseNode, TraverseEdge and CheckIndex, which fault-tolerantly search for a substring P in the given string S using the fault-tolerant suffix tree built in the first part.

Figure-2

Figure-3

Figure-4

3.1.2. TraverseNode()
It receives two values: a pointer to a node and an integer value, the index into the substring being searched for within string S. It calls TraverseEdge() to traverse the edges of child nodes, and recursively calls itself to continue the search in the grandchild node for the remaining part of the substring.

3.1.3. TraverseEdge()
This function plays the pivotal role in the search operation. It receives a node pointer and a reference to the index into the substring P. CheckIndex() is called to fault-tolerantly check whether the START index of the edge is corrupted. A return value of 0 indicates a successful match on the current edge; a return value of 1 indicates that the fault-tolerant search of the substring P in the string S has succeeded.

3.1.4. CheckIndex()
It receives an integer array and checks whether any value holds a majority among the NOC stored integers; if so, it returns that majority value, which is considered safe.
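A majority check of this kind can be sketched as follows (our own illustration, not the paper's Figure-4 code; it uses the Boyer-Moore majority-vote algorithm followed by a verification pass):

```python
def check_index(copies):
    """Return the strict-majority value among the NOC stored copies,
    or None when no majority survives (edge corrupted beyond repair)."""
    candidate, count = None, 0
    for value in copies:             # Boyer-Moore majority vote
        if count == 0:
            candidate = value
        count += 1 if value == candidate else -1
    # verify: a candidate is only guaranteed to be correct IF a majority exists
    if candidate is not None and copies.count(candidate) > len(copies) // 2:
        return candidate
    return None

assert check_index([7, 7, 99, 7, 7, 3]) == 7    # two corrupted copies tolerated
assert check_index([7, 1, 2, 3, 4, 5]) is None  # majority destroyed
```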

3.2 Analysis
Our FTST employs all the techniques of Ukkonen's algorithm. The time and space complexity of FTST is O(mσ), and the cost of searching for a substring of size n is O(nσ).
Theorem 1. If DF = √σ, then the maximum number of edges that can be corrupted by an adversary is no more than ⌊2σ/(√σ + 1)⌋.


Proof: A single edge is corrupted beyond repair if (DF + 1)/2 of its copies are corrupted. In our case DF = √σ, so corruption of (√σ + 1)/2 copies leaves an edge value irreparable. Equivalently, (√σ + 1)/2 corruptions cause one edge corruption; therefore, by basic arithmetic, σ corruptions render at most ⌊2σ/(√σ + 1)⌋ edge values irreparable.

We know that in a suffix tree the number of children of any node cannot be greater than LCS (the size of the character set constituting the given string); hence the number of root-originating edges is at most LCS. So if the root-originating edges are among the ⌊2σ/(√σ + 1)⌋ corrupted edges, then all the suffixes of the suffix tree are corrupted.
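As a numerical illustration of the bound (our own example, assuming σ is a perfect square): with σ = 25 each edge stores √σ + 1 = 6 copies, 3 corruptions destroy one edge, so the adversary can ruin at most ⌊2·25/6⌋ = 8 edges.

```python
import math

def max_corrupt_edges(sigma: int) -> int:
    """Upper bound on edges corrupted beyond repair: floor(2*sigma / (sqrt(sigma) + 1))."""
    return (2 * sigma) // (math.isqrt(sigma) + 1)

assert max_corrupt_edges(25) == 8     # 50 // 6
assert max_corrupt_edges(100) == 18   # 200 // 11
```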

4. CONCLUSION
In this paper we have presented a Fault-Tolerant Suffix Tree which can search substrings in a given string in the
presence of a limited number of memory faults. Classical suffix tree algorithms cannot respond to substring queries
in the presence of even a few memory faults.
We have shown that the upper bound on corrupt suffixes is ⌊2σ/(√σ + 1)⌋. An experimental study of the performance of our FTST is our next step.

REFERENCES

[1] Tezzaron Semiconductor. "Soft errors in electronic memory" - a white paper. http://www.tezzaron.com/about/papers/papers.html, 2004.
[2] R. C. Baumann. "Radiation-induced soft errors in advanced semiconductor technologies". IEEE Transactions on Device and Materials Reliability, 5(3):305-316, 2005.
[3] T. C. May and M. H. Woods. "Alpha-particle-induced soft errors in dynamic memories". IEEE Transactions on Electron Devices, 26(2), 1979.
[4] I. Finocchi and G. F. Italiano. "Sorting and searching in the presence of memory faults (without redundancy)". Proc. 36th ACM Symposium on Theory of Computing (STOC'04), 101-110, 2004.
[5] U. Ferraro-Petrillo, I. Finocchi, and G. F. Italiano. "The price of resiliency: a case study on sorting with memory faults". Algorithmica, 53(4):597-620, 2009.
[6] I. Finocchi, F. Grandoni, and G. F. Italiano. "Resilient search trees". SODA, 547-553, 2007.
[7] I. Finocchi, F. Grandoni, and G. F. Italiano. "Designing reliable algorithms in unreliable memories". Computer Science Review, 1(2):77-87, 2007.
[8] U. Ferraro-Petrillo, F. Grandoni, and G. F. Italiano. "Data structures resilient to memory faults: an experimental study of dictionaries". In P. Festa (ed.), SEA 2010, LNCS vol. 6049, 398-410. Springer, Heidelberg, 2010.
[9] I. Finocchi, F. Grandoni, and G. F. Italiano. "Optimal resilient sorting and searching in the presence of memory faults". Manuscript, 2005.
[10] I. Finocchi and G. F. Italiano. "Sorting and searching in faulty memories". Algorithmica, 52(3):309-332, 2008.
[11] P. Christiano, E. D. Demaine, and S. Kishore. "Lossless fault tolerant data structures with additive overhead". In Proceedings of WADS, 243-254, 2011.
[12] Y. Aumann and M. A. Bender. "Fault-tolerant data structures". Proc. 37th IEEE Symposium on Foundations of Computer Science (FOCS'96), 580-589, 1996.
[13] S. Assaf and E. Upfal. "Fault-tolerant sorting networks". SIAM J. Discrete Math., 4(4):472-480, 1991.
[14] A. Dhagat, P. Gacs, and P. Winkler. "On playing twenty questions with a liar". In Proc. 3rd ACM-SIAM Symposium on Discrete Algorithms (SODA'92), 16-22, 1992.
[15] P. Indyk. "On word-level parallelism in fault-tolerant computing". In Proc. 13th Annual Symposium on Theoretical Aspects of Computer Science (STACS'96), 193-204, 1996.
[16] A. G. Jørgensen, G. Moruz, and T. Mølhave. "Priority queues resilient to memory faults". In Proc. 10th International Workshop on Algorithms and Data Structures, 2007.
[17] V. Chen, E. Grigorescu, and R. de Wolf. "Error-correcting data structures". SIAM Journal on Computing, 2013.
[18] S. Caminiti, I. Finocchi, E. G. Fusco, and F. Silvestri. "Resilient dynamic programming". Algorithmica, 2017.


FPGA Implementation of Single Object Tracking Using Mean Shift Algorithm
Kausar Ghawas Khan Yousafzai, Laiq Hasan
Department of Computer Systems Engineering, UET Peshawar
kausarusufzai@gmail.com

Abstract

FPGA is a technology leap resulting in disruptive innovation in every engineering field in general
and the field of Image Processing and artificial intelligence in particular. In parallel, development
of multi-threaded, high-level languages including LabVIEW seems to be the dawn of a new era of
extremely fast development and prototyping cycles. Object tracking has numerous applications in
the new security environment including but not limited to perimeter monitoring using CCTV
cameras, intelligent robots used in hazardous environments, target tracking in military weapon
guidance systems and automated batch inspections in numerous industrial applications. This work
proposes an FPGA based architecture to implement mean shift tracking with a focus on efficient
resource management and faster throughput. Our system operates at 76 fps for 720x512 pixel
images. A major distinguishing factor in our research work is the use of LabVIEW for
implementation of the entire tracking loop using a high-speed FPGA; thus reducing the prolonged
design times usually associated with FPGA based systems. Our research is a founding work in the
area of LabVIEW based software-hardware co-design using FPGAs for image processing.

Index Terms— FPGA, Image processing, object tracking, prototyping.

1. Introduction

FPGAs (Field Programmable Gate Arrays) are an enabling technology for parallel processing at the embedded level. Over the past 5-10 years, this technology has brought about paradigm changes in the architecture of embedded systems; the computational power and parallelism provided by FPGAs are unmatched by any other embedded technology. The integration of FPGA technology into LabVIEW, a multi-threaded graphical programming language by National Instruments, has massively reduced embedded-systems development time. The processing power of the latest Xilinx FPGAs, namely the Zynq-7000 family, has enabled numerous image processing tasks not previously possible on an embedded architecture.

The mean shift algorithm uses non-parametric density estimation and has been used in object tracking applications with high-speed, real-time processing requirements [Bravo et al, 2010]. The concept of mean shift was first presented by [Fukunaga et al, 1975] and was further matured by [Cheng, 1995], who introduced a kernel function and broadened the scope of the algorithm to multiple domains. The use of mean shift for tracking was introduced by [Comaniciu et al, 2003], extending the algorithm's optimization to the tracking of non-rigid objects.


The work by [Ostadzadeh et al, 2012] used a heterogeneous Virtex-5 FPGA platform for their application, designed with the Delft Workbench (DWB) and run on the Molen processor; however, the Delft Workbench does not support the Zynq platform as a target board. [Trieu et al, 2011] proposed an implementation of the mean shift filter on a Virtex-4 FPGA, but no details regarding execution time and memory usage were presented in the paper. [Kiran et al, 2014] implemented a Gaussian mean shift algorithm on a Spartan-3 FPGA using Xilinx ISE; however, their paper lacks an analysis of the application from the standpoint of profiling and HW/SW design partitioning.

2. Mean Shift Algorithm

Mean shift locates the maxima of a density function given discrete data sampled from that function. It is an iterative method in which an initial estimate is chosen as the starting point. For a kernel function K(xᵢ − x) and a neighborhood N(x) of x, the weighted mean m(x) is given below:

m(x) = ( Σ_{xᵢ ∈ N(x)} K(xᵢ − x) · xᵢ ) / ( Σ_{xᵢ ∈ N(x)} K(xᵢ − x) )

The difference m(x) − x is the mean shift vector. To use mean shift for tracking, we split the video sequence into frames. To track a particular object, we create a color histogram of that object in the initial frame. A confidence map is then created in each subsequent frame based on the color histogram of the object in the previous frame, and the mean shift algorithm is used to find the centroid of the confidence map near the object's previous location. The confidence map is a probability density function on the new image, assigning each pixel the probability that its color matches the color of the object in the previous frame. Since the standard mean shift algorithm is sensitive to both scale and geometric shift, tracking an object whose size or shape keeps changing requires the EM-based mean shift algorithm.
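The fixed-point iteration x ← m(x) described above can be sketched generically as follows (a 2-D sketch with a flat kernel over a fixed radius; illustrative only, not the authors' FPGA pipeline):

```python
import numpy as np

def mean_shift(points, x0, radius, tol=1e-3, max_iter=100):
    """Repeatedly move x to the weighted mean m(x) of the samples in its
    neighborhood N(x) until the mean shift vector m(x) - x is tiny."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        neighbours = points[np.linalg.norm(points - x, axis=1) < radius]
        if len(neighbours) == 0:
            break
        m = neighbours.mean(axis=0)          # flat kernel: K = 1 inside N(x)
        if np.linalg.norm(m - x) < tol:      # convergence of the shift
            return m
        x = m
    return x

# the estimate converges onto the densest cluster of samples
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
mode = mean_shift(pts, x0=[0.5, 0.5], radius=1.0)
```

In tracking, the same loop runs over the confidence map: the samples are pixel locations weighted by their object probability.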

The EM-based mean shift, or shape-adapted mean shift, algorithm is an extension of the standard mean shift algorithm. It simultaneously calculates the position of the local mode and the covariance matrix that describes the approximate shape of the local mode. To address changes in the shape and scale of the target, the covariance matrix, which encodes the shape and scale of the target region, is updated in each frame so that these changes are tracked conveniently. Conceptually, the EM-based mean shift algorithm consists of the following three stages:
2.1 Choice of Target model. The target object in the given frame is chosen first. This translates
into representation of the target model using color histogram with a kernel.
2.2 Mean shift convergence. In the next frame, the algorithm searches the current histogram and spatial data for the best target-match candidate using the similarity function. This yields a new center of mass, and the object center shifts to this new location, as depicted in the figure below. The mean shift vector encodes the magnitude and direction of the move. The process is repeated until the similarity function converges.

Fig 1 – Mean shift Data points distribution

2.3 Update location and model. In the last step, the scale and shape representing the target model are updated. The blending parameter and the maximum acceptable scale and shape determine the change in the location and model of the target.

Frame 1 Frame 2
Fig 2 – Location Update between Frame 1 & Frame 2
2.4 Kalman Prediction. EM-based mean shift also encapsulates a Kalman filter. Based on the history of measurements of the target, the Kalman filter builds a model of the state of the system and is used to accurately predict the location of the target.
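A minimal 1-D constant-velocity Kalman predict/update cycle, sketching the prediction step described above (the model matrices and noise values are our illustrative choices, not the authors' tuning):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for state x = (position, velocity),
    covariance P, and a new position measurement z."""
    x = F @ x                                # predict the next state
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # correct with the measurement
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

F = np.array([[1.0, 1.0], [0.0, 1.0]])      # constant velocity, dt = 1
H = np.array([[1.0, 0.0]])                  # only position is measured
Q, R = 0.01 * np.eye(2), np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2]:              # noisy target positions
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
# x now holds the filtered position and an estimated velocity near +1/frame
```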

3. Hardware Architecture

The hardware architecture has been highly optimized because of the following paradigm changes
from the conventional interfaces generated earlier:

3.1 Hardware/software co-design. Since Zynq combines a Cortex ARM A9 processor with the
FPGA, some of the tasks are performed by the ARM core while rest of the burden goes to the FPGA

which creates a very efficient hardware/software co-design topology. It also enables optimal utilization of onboard resources.
3.2 LabVIEW Multi-threaded Environment. LabVIEW has inherent multi-threading capability, which extends the idea of multitasking into our application: a single application can be divided into specific operations running in individual threads, each of which runs in parallel. The OS allocates processing time not only to different applications but also to individual threads within an application. In a multithreaded National Instruments LabVIEW program, an example application might consist of four threads: a user input thread, a voltage measurement thread, a serial communication thread, and a real-time data storage thread. Each of these threads can be prioritized so that they operate independently. We exploit this feature to run multiple tasks in parallel without affecting other tasks running on the system.

Fig 3 - LabVIEW Multi-threaded Language

3.3 Superscalar architecture. The parallel-processing architecture of the FPGA allows multiple functional units to execute in parallel, harnessing the built-in parallelism. Careful pipelining of the entire architecture increases throughput, which directly results in a high pixel rate.

To realize this implementation, we use LabVIEW FPGA 2017 in conjunction with the Xilinx ISE FPGA development environment. In the course of development, the authors observed that the LabVIEW environment ideally suits the design of complex, modular FPGA applications. For physical prototyping, we used the myRIO FPGA-based smart board from National Instruments. The hardware resources on the board include a Zynq-7010 FPGA combining a dual-core Cortex-A9 and an Artix-7 FPGA fabric in a single die, a WiFi transceiver, a three-axis accelerometer, four LEDs, two USB ports, 10 analog inputs and 6 analog outputs.

Fig 4 – Block Diagram of Zynq-7000 FPGA smart prototyping system

Fig 5 – PS and PL Parts available in a Zynq-7000 SOC

4. Results

In this section we assess the performance and accuracy of the proposed system. The highly optimized architecture makes possible a proper blend of co-design techniques and description languages with restricted resource utilization. Fig 7 shows several frames in which the mini-car is successfully tracked. Table 1 compares our proposed approach with other FPGA implementations; the high frame rate of [Trieu et al, 2011] was achieved on a relatively small image size of 360x288 pixels, compared to our 720x512. Resource utilization is also in the optimized range.

Fig 7 – Tracking (Frame 50,100,150,200)

S NO | Algorithm       | Device          | Frames/Sec | Reference
-----|-----------------|-----------------|------------|--------------------------
1    | Mean Shift      | Altera EPIC6    | 50         | [Trieu et al, 2011]
2    | Particle Filter | Xilinx Virtex-4 | 25         | [Wojcikowski et al, 2012]
3    | Mean Shift      | Zynq-7000       | 76         | Proposed

Table 1 – Performance Comparison

S NO | Resource Type | Available | Used    | Percentage
-----|---------------|-----------|---------|-----------
1    | LUT           | 17600     | 8500    | 48.29
2    | Block RAM     | 2.1 MB    | 0.8 MB  | 38.09
3    | DSP Slices    | 80        | 30      | 37.5
4    | FF            | 35200     | 16600   | 47.15

Table 2 – Resource Utilization

5. Conclusions

In this work, we proposed an FPGA-based architecture for single object tracking in video sequences, a major step toward implementing object tracking algorithms on the Zynq FPGA platform. Our architecture hosts the EM-based mean shift algorithm. The goal was to design a system that works in hardware implementations where resources are highly constrained. The non-real-time tasks were successfully handled by the Cortex-A9 processor, while time-critical signals were allotted to the Artix-7 FPGA; the resulting co-design strategy presents an optimal blend of the software and hardware domains. Moreover, the multi-threaded LabVIEW environment reduced the implementation intricacies, yielding optimized code at critical stages and interfaces.

6. Relation To Prior Work (New)

The most fundamental work on the mean shift algorithm was put forward by [Fukunaga et al, 1975] and further matured by Cheng with the concept of a kernel function. Our work is a physical implementation of this algorithm on a high-end FPGA platform capable of hosting heterogeneous IP cores, and will pave the way for future researchers to explore optimal utilization of the hardware platform. Since hardware-software co-design is a relatively new idea and not enough research has been done in this category, our work will prove to be an important step in this domain.

7. References

[1] I. Bravo, M. Mazo, J. L. Lázaro, A. Gardel, P. Jiménez, and D. Pizarro. "An intelligent architecture based on field programmable gate arrays designed to detect moving objects by using principal component analysis". Sensors, 2010, pp. 9232-9251.
[2] K. Fukunaga and L. D. Hostetler. "The estimation of the gradient of a density function, with applications in pattern recognition". IEEE Transactions on Information Theory, 1975, pp. 32-40.
[3] Y. Cheng. "Mean shift, mode seeking and clustering". IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995, pp. 790-799.
[4] D. Comaniciu, V. Ramesh, and P. Meer. "Kernel-based object tracking". IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, pp. 564-577.
[5] S. A. Ostadzadeh, R. Meeuws, I. Ashraf, C. Galuzzi, and K. Bertels. "The Q2 profiling framework: driving application mapping for heterogeneous reconfigurable platforms". In International Symposium on Applied Reconfigurable Computing, 2012, pp. 76-88.
[6] D. B. Trieu. "An implementation of the mean shift filter on FPGA". 21st International Conference on Field Programmable Logic and Applications, 2011, pp. 219-224.
[7] B. V. Kiran and S. Kumar. "Hardware efficient mean shift clustering algorithm implementation on FPGA". International Journal of Application or Innovation in Engineering & Management, 2014, pp. 460-464.
[8] X. Lu, D. Ren, and S. Yu. "FPGA-based real-time object tracking for mobile robot". In Audio Language and Image Processing (ICALIP), International Conference, Shanghai University, Nov 2010, pp. 1657-1662.
[9] M. Wojcikowski, R. Zaglewski, and B. Pankiewicz. "FPGA-based real-time implementation of detection algorithm for automatic traffic surveillance sensor network". Journal of Signal Processing Systems, Jul 2012, pp. 1-8.
[10] U. Ali and M. B. Malik. "Hardware/software co-design of a real-time kernel based tracking system". Journal of Systems Architecture, Aug 2010, pp. 317-326.


Performance Analysis of Single Object Tracking Algorithms

Muhammad Haroon, Laiq Hasan

Department of Computer Systems Engineering, UET Peshawar


Abstract

We present a qualitative comparison of state-of-the-art object tracking algorithms based on their performance, with the aim of identifying the best tracking algorithm. To test these algorithms, we carried out the analysis using various occlusion- and background-clutter-based scenarios. The dataset used for testing consists of 50 video frames and introduces a wide variety of difficulties related to object tracking, including low resolution and camera motion, as well as challenges in object detection such as background clutter and object occlusion. Overlap accuracy and Euclidean distance are taken as the yardsticks for gauging the performance of these algorithms. The uniqueness of our research work lies in its innovative standpoint on the performance gains of various tracking algorithms, simplifying the choice of the best tracking algorithm for future researchers.

Index Terms— Adaptive, Covariance, Kalman Filter, Mean Shift, Occlusion, Tracking

1. Introduction

In the modern era, video tracking has emerged as a premier research field, with active research and development driven by its diversified applications and by innovations in processing technology. Application fields of video tracking include closed-circuit television (CCTV) networks for security surveillance, military weapon systems in defense, and robotics in industry and academia, among other interesting applications. Advances in processing technology have given birth to new digital image processing algorithms, and object tracking is one of the fastest-growing research domains, with a multitude of algorithms developed for a variety of problems.

Several contributory factors affect the performance of a tracking algorithm on a video, including the extent of prior knowledge about the object and the number and nature of the parameters included for tracking, i.e. scale, location, and object contours. Three key components form the backbone of a tracking system: a motion model, which depicts the motion of the object over time; an appearance model, which dictates the likelihood of a particular location for the object; and a search scheme to narrow down the most likely location in the current image. For our comparison we have chosen six recent tracking algorithms and a low-resolution wide-area video. The selected video offers various challenges and thus forms a good testing platform for comparing the selected algorithms.

2. Tracking Algorithms Under Study

2.1 Incremental Video Tracking (IVT): [Ross et al., 2008] proposed the IVT tracking algorithm. A single point denotes the object to be tracked in the current frame. A particle filter then provides a dynamic model that proposes candidate points in the next frame, and an observation model assigns each point a likelihood by computing a weight for a window around the point. The window corresponding to the most likely point is chosen as the location of the object in the next frame. This is followed by an incremental update of the model every few frames.
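The propagate-weight-select loop described above can be sketched generically in Python (an illustrative sketch only, not Ross et al.'s implementation; the distance-based `score` function below is a toy stand-in for IVT's learned appearance model):

```python
import random

def particle_filter_step(particles, likelihood, motion_std=2.0):
    """One generic particle-filter iteration: propagate, weight, resample.

    particles  -- list of (x, y) hypotheses for the object location
    likelihood -- function mapping a location to an appearance score
    """
    # Propagate each particle with a simple Gaussian motion model.
    moved = [(x + random.gauss(0, motion_std), y + random.gauss(0, motion_std))
             for x, y in particles]
    # Weight every hypothesis with the observation (appearance) model.
    weights = [likelihood(p) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample proportionally to the weights.
    resampled = random.choices(moved, weights=weights, k=len(moved))
    # Report the most likely particle as the new object location.
    best = max(zip(moved, weights), key=lambda pw: pw[1])[0]
    return resampled, best

# Toy usage: the "object" sits at (10, 10); likelihood decays with distance.
random.seed(0)
target = (10.0, 10.0)
def score(p):
    return 1.0 / (1.0 + (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2)

particles = [(8.0, 8.0)] * 100
for _ in range(5):
    particles, estimate = particle_filter_step(particles, score)
```

After a few iterations the estimate concentrates near the high-likelihood region, which is the behaviour the observation model is meant to induce.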

2.2 Multiple Instance Learning (MIL) Tracker: This algorithm, proposed by [Babenko et al., 2011], computes an object patch, i.e. a search area, around the object to be tracked in the current frame. The search area is divided into numerous small patches of identical size, which are collected into two bags: patches where parts of the object are visible go into the “positive bag”, whereas patches where the object is not visible go into the “negative bag”. A classifier is trained online on the two bags so as to determine the object's position in the subsequent frame and is then applied to the patches selected from the search area of that frame. Both the bags and the classifier are periodically updated in the next frame.
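The bag construction can be illustrated as follows (a minimal sketch; the radius-based rule and all names are illustrative, and the online classifier itself is omitted):

```python
import math

def make_bags(candidates, center, radius=4.0):
    """Split candidate patch centers into MIL-style bags.

    Patches within `radius` of the current object center form the
    positive bag; the rest form the negative bag (names illustrative).
    """
    positive_bag, negative_bag = [], []
    for c in candidates:
        d = math.dist(c, center)
        (positive_bag if d <= radius else negative_bag).append(c)
    return positive_bag, negative_bag

# Toy usage: a 9x9 grid of patch centers around the object at (0, 0).
grid = [(dx, dy) for dx in range(-4, 5) for dy in range(-4, 5)]
pos, neg = make_bags(grid, (0, 0), radius=2.0)
```

An online classifier would then be trained on `pos` and `neg` and scored over the next frame's search area.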


2.3 L1 Tracker: [Mei et al., 2011] originally gave the idea of the L1 tracker, which was later improved by [Bao et al., 2012]. The algorithm proposes that an object may be tracked if each candidate patch in the search area is sparsely represented in the space spanned by the target templates and trivial templates from earlier frames, and this representation is used to find the object in the new frame. The candidate patch with the smallest projection error is selected, after which a Bayesian state inference framework carries out the tracking.
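The template-reconstruction idea can be sketched with ordinary least squares (a deliberate simplification: the actual L1 tracker augments the templates with trivial templates and solves an l1-regularized problem, both omitted here):

```python
import numpy as np

def best_candidate(candidates, templates):
    """Pick the candidate patch with the smallest reconstruction error.

    candidates -- array of shape (n_candidates, d), vectorized patches
    templates  -- array of shape (d, n_templates), target templates
    """
    errors = []
    for y in candidates:
        # Least-squares coefficients of y in the template subspace.
        c, *_ = np.linalg.lstsq(templates, y, rcond=None)
        errors.append(np.linalg.norm(templates @ c - y))
    return int(np.argmin(errors))

# Toy usage: the second candidate lies in the template span, so it
# has zero reconstruction error and is selected.
T = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # templates span the x-y plane
cands = np.array([[0.0, 0.0, 1.0],   # orthogonal to the span: large error
                  [0.3, 0.7, 0.0]])  # inside the span: zero error
idx = best_candidate(cands, T)
```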

2.4 Partial Least Squares (PLS) Tracker: This algorithm was proposed by [Wang et al., 2012]. The main hypothesis is that tracking can be treated as a binary classification problem, in which PLS analysis generates a low-dimensional subspace modeling the correlation between the appearance of the object and the class labels of foreground and background. Since object appearance depends on the object's temporal relationship with the background and is likely to repeat over time, this approach results in robust tracking.

2.5 Sparse Online Tracker (SOT): This algorithm was developed by [Wang et al., 2013]. To learn an effective appearance model of the target object, it combines principal component analysis (PCA) with a sparse representation scheme. The PCA reconstruction is augmented with an l1 regularization, resulting in a novel algorithm that represents the object by sparse prototypes accounting for both data and noise; these sparse prototypes are then used to represent the object during tracking.

2.6 Spatio-Temporal Context (STC) Tracker: This generative, model-based object tracker was developed by [Zhang et al., 2013] and exploits spatio-temporal context. The approach is based on the spatio-temporal relationships, within a Bayesian framework, between the object of interest and its local context. The algorithm models the statistical correlation between the target and its surroundings in a low-level feature space (i.e., image intensity and position). A confidence map is built, and the best target location obtained from it is used to track the object.

3. Results and Discussion

The dataset was created with a low-resolution camera capturing aerial images of cars moving on a road; the camera moves along with the cars in one direction. The dataset has a spatial resolution of 720x480 pixels. Fifty interlaced frames were taken from this video to represent a challenging tracking problem. Performance evaluation is done using a detailed segmentation of all 39 cars across the 50 frames. Fig. 1 shows example tracking results.


Fig 1 – Frame-wise Depiction

3.1 Overall Accuracy. The performance of the tracking algorithms is evaluated by comparison against the manually segmented ground truth. Two accuracy metrics were generated, namely (a) localization error and (b) overlap accuracy (shared overlap of the bounding boxes). Table I shows the results averaged over all 39 cars across all 50 frames.

S No   Algorithm   Localization (Pixels)   Overlap (%)
  1    PLS                  1.8               70.96
  2    SOT                  4.7               63.61
  3    IVT                  4.7               63.61
  4    L1                   7.8               66.61
  5    STC                 10.1               61.55
  6    MIL                 70.7               34.54

Table I. Overall Accuracy Performance Metrics
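The two metrics can be computed as follows (a minimal sketch assuming axis-aligned (x, y, w, h) bounding boxes and (x, y) centers):

```python
import math

def localization_error(pred_center, gt_center):
    """Euclidean distance, in pixels, between predicted and ground-truth centers."""
    return math.dist(pred_center, gt_center)

def overlap_accuracy(pred, gt):
    """Shared overlap of two (x, y, w, h) bounding boxes, as a percentage."""
    ix = max(0, min(pred[0] + pred[2], gt[0] + gt[2]) - max(pred[0], gt[0]))
    iy = max(0, min(pred[1] + pred[3], gt[1] + gt[3]) - max(pred[1], gt[1]))
    inter = ix * iy
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter
    return 100.0 * inter / union if union else 0.0

# Toy usage: two half-overlapping 10x10 boxes.
err = localization_error((5.0, 5.0), (10.0, 5.0))
acc = overlap_accuracy((0, 0, 10, 10), (5, 0, 10, 10))
```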

3.2 Occlusion Handling. Car number 36 presents the problem of partial occlusion from frame 18 to frame 27. Table II charts the capability of each algorithm to handle the partial occlusion of car number 36, in terms of overlap accuracy and the number of frames over which the car was correctly tracked before losing it. Among the trackers that eventually failed, the L1 tracker emerged as a relative winner by persistently tracking car number 36 for eleven frames, compared with the STC and MIL trackers, which failed after the sixth frame.

S No   Algorithm   Localization (Pixels)   Overlap (%)   Frames
  1    PLS                  1.7               74.46         50
  2    SOT                  2.4               68.35         50
  3    IVT                  2.5               69.45         50
  4    L1                  32.3               16.29         11
  5    STC                 32.5                8.25          6
  6    MIL                 34.7                8.23          6

Table II. Occlusion Handling Performance Metrics

3.3 Negligible Motion. Negligible motion is represented by car number 8, which is waiting to make a turn and appears to be stationary. This creates another challenge for trackers inherently designed to track moving objects. Table III compares the performance of the trackers in accurately tracking car number 8.

S No   Algorithm   Localization (Pixels)   Overlap (%)   Frames
  1    L1                   0.6               79.78         50
  2    SOT                  1.5               75.25         50
  3    IVT                  1.5               69.47         50
  4    PLS                  1.3               65.16         50
  5    STC                  1.8               74.27         50
  6    MIL                 39.5                4.31          6

Table III. Negligible Motion Performance Metrics

3.4 Background Clutter. Background clutter is simulated by car number 10, which is surrounded by multiple cars all moving together at varying speeds and which also passes next to the stationary car number 8. The performance of the tracking algorithms in this high-clutter environment is tabulated in Table IV.


S No   Algorithm   Localization (Pixels)   Overlap (%)   Frames
  1    PLS                  1.6               73.64         50
  2    STC                  1.7               71.36         48
  3    SOT                  1.7               71.52         49
  4    IVT                  2.2               70.16         48
  5    L1                  20.7               36.69         23
  6    MIL                 46.6                3.27          2

Table IV. Background Clutter Performance Metrics

3.5 Low Contrast. Low contrast is represented by car number 29, whose color matches that of the road, posing a low-contrast problem for the trackers. The performance of the trackers is tabulated in Table V.

S No   Algorithm   Localization (Pixels)   Overlap (%)   Frames
  1    L1                   1.0               80.52         50
  2    PLS                  1.3               80.49         50
  3    SOT                  1.5               74.66         50
  4    IVT                  1.7               69.26         50
  5    STC                  2.1               60.28         50
  6    MIL                  7.0               21.48         16

Table V. Low Contrast Performance Metrics

4. Conclusions

From the standpoint of overall performance, the PLS tracker outpaces the rest, as shown in Table I. The L1 tracking algorithm is known to perform quite well in occlusion-based scenarios; in our dataset, however, the difference between the occluding and tracked entities is very small because of the low resolution and interlacing, which results in loss of tracking, as shown in Table II. From the background clutter standpoint, the MIL tracking algorithm performs adequately well, but for low contrast and negligible motion it performs poorly. The best attribute of the MIL tracking algorithm is the occlusion-related scenario, in which it is exceptional because of its feature-learning ability and its accurate re-acquisition of the target after a temporary loss. The IVT and SOT algorithms perform much alike under most of the scenarios; their only weakness is occlusion handling, which is unsatisfactory for both. Since both algorithms use PCA for object modeling, greater similarity is found in their performance. We also observed the STC algorithm to exhibit outstanding performance under occlusion, as outlined by the results in Table II. Its performance under the background clutter and negligible motion scenarios is satisfactory; for low contrast, however, its performance is weak.

To summarize, we found the PLS algorithm to do well under all conditions; it is appropriate for wide-area, low-resolution tracking of a target. The remaining algorithms (IVT, MIL, STC, L1, and SOT) show degraded performance in at least one of the scenarios.

5. References

[1] D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, “Incremental learning for robust visual tracking,” Int. J. Computer
Vision, vol. 77, no. 1-3, pp. 125–141, May 2008.

[2] B. Babenko, M.-H. Yang, and S. Belongie, “Robust object tracking with multiple instance learning,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 33, no. 8, pp. 1619–1632, Aug. 2011.

[3] X. Mei and H. Ling, “Robust visual tracking and vehicle classification via sparse representation,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 33, no. 11, pp. 2259–2272, Nov. 2011.

[4] C. Bao, Y. Wu, H. Ling, and H. Ji, “Real time robust L1 tracker using accelerated proximal gradient approach,” in
Proc. IEEE Computer Vision and Pattern Recognition, pp. 1830–1837, 2012.

[5] Q. Wang, F. Chen, W. Xu, and M.-H. Yang, “Object tracking via partial least squares analysis,” IEEE Trans. Image
Process., vol. 21, no. 10, pp. 4454–4465, Oct. 2012.


[6] D. Wang, H. Lu, and M.-H. Yang, “Online object tracking with sparse prototypes,” IEEE Trans. Image Process.,
vol. 22, no. 1, pp. 314–325, Jan. 2013.

[7] K. Zhang, L. Zhang, and M.-H. Yang, “Real-time object tracking via online discriminative feature selection,” IEEE
Trans. Image Process., vol. 22, no. 12, pp. 4664–4677, Dec. 2013.


Performance Evaluation of Database Technologies for an Internet of Things Device

Zahid Ali Khan
Department of Computer Science and IT,
University of Engineering and Technology,
Peshawar, Pakistan
zfz.zahid@yahoo.com

Suhail Yousaf
Department of Computer Science and IT,
University of Engineering and Technology,
Peshawar, Pakistan
syousaf@uetpeshawar.edu.pk

Abstract— Smartphones have great potential to play a key role in the field of the Internet of Things (IoT). Firstly, due to its onboard sensors and its provision of a spectrum of communication technologies, a smartphone can be used as an IoT device in a multitude of applications. Secondly, due to its powerful processing and storage capabilities, it can be used as a mobile base station for IoT devices. This paper considers the latter case. The primary purpose of a base station is to receive bulk data from various devices for preliminary processing. This stage has two possible outcomes: the result may be aggregate data that has to be forwarded to a cloud-based infrastructure, or a command may be triggered and propagated back to the devices. In either case, the smartphone is required to store and retrieve data efficiently. Smartphones provide lighter versions of relational database management systems (RDBMS); moreover, No-SQL databases have also been introduced. Architecturally, No-SQL databases have their own strengths and weaknesses with respect to RDBMS in terms of performance, scalability, flexibility, and complexity. This paper compares the performance of RDBMS and No-SQL databases on smartphones. More specifically, a rigorous performance evaluation and comparison of SQLite and the recently introduced No-SQL database Realm is conducted on the Android platform. Our systematic evaluation of both databases showed that Realm outperformed SQLite in write operations (insert, delete, update), simple fetch, and range queries. However, SQLite had a clear win in complex read operations such as aggregation and join.

Keywords— Internet of Things, Realm, SQLite, No-SQL, Big Data

I. INTRODUCTION
The concept of the Internet of Things (IoT) is seen as a paradigm shift in the world of computing. IoT aims at exploiting various forms of tiny computing devices which are capable of communicating with each other over the Internet. These devices are embedded in real-life objects, thus enabling communication among those objects; this interconnectivity is called the Internet of Things. The concept of IoT offers tremendous opportunities to develop intelligent systems which will be more productive and cost-effective than their traditional counterparts. Moreover, such huge connectivity gives birth to a broad range of novel applications in healthcare, education, personal and social life, transportation and logistics, manufacturing, and many other fields [1].
As it turns out, such large-scale connectivity of devices will lead to the generation of a huge amount of data by IoT-based systems, which introduces a conjunction between IoT and Big Data [2]. A primary concern about the large-scale data generated by IoT is how to make sense of it. To this end, specialized applications such as massively parallel-processing (MPP) databases and huge computing resources such as cloud- and high-performance-computing-based infrastructures are needed [3].
Although the large amount of data may be stored and processed on cloud computing platforms, there is another rate-limiting factor, namely the huge bandwidth consumption of IoT-based systems. This bandwidth is consumed primarily by sending the raw data produced by IoT-based systems to cloud computing platforms for processing. To reduce bandwidth consumption, localized data processing techniques can be applied; such techniques discard or aggregate data locally before sending it out to the cloud computing platform for further processing.
Modern smartphones have huge processing capabilities which can be used independently of cloud services. These phones have great potential to play a key role in the field of the Internet of Things. Firstly, due to its onboard sensors and its provision of a spectrum of communication technologies, a smartphone can be used as an IoT device in a multitude of applications. Secondly, due to its powerful processing and storage capabilities, it can be used as a mobile base station for other IoT devices. In the latter role, a smartphone can process data collected from other IoT devices before sending it out to the cloud platform for detailed analysis.
This paper considers the latter case. The primary purpose of a base station is to receive bulk data from various
devices for preliminary processing. There are two possible outcomes of this stage. First, it may be an aggregate


data that has to be forwarded to a cloud-based infrastructure. Second, a command is triggered and propagated
back to the devices. In either case, the smartphone is required to store and retrieve data efficiently.
Smartphones provide lighter versions of relational database management systems (RDBMS). Moreover, No-SQL databases have also been introduced. Architecturally, No-SQL databases have their own strengths and weaknesses with respect to RDBMS in terms of performance, scalability, flexibility, and complexity. This paper compares the performance of RDBMS and No-SQL databases on smartphones. More specifically, a rigorous performance evaluation and comparison of SQLite and the recently introduced No-SQL database called Realm is conducted on the Android platform.
There are several standard benchmarks available. Large data-center-scale systems use OLTP- and OLAP-style benchmarks [4]–[6], and SQL-based enterprise applications use the TPC-C and TPC-E benchmarks [7]. Similarly, Yahoo's YCSB benchmark evaluates NoSQL-based cloud serving systems [8]. However, to the best of our knowledge, there are no standard benchmarks for mobile-phone databases. Although some recent work evaluates SQL-based embedded databases on mobile phones [9], [10], no prior work compares a locally installed NoSQL database such as Realm with a SQL database such as SQLite.
The rest of the paper is organized as follows. Section II describes the methodology used to analyze the
performance evaluation and comparison of SQLite and Realm. Section III presents and discusses the results of a
rigorous performance evaluation based on our experiments. Finally, section IV concludes the paper.

II. EXPERIMENTAL SETUP

A. Implementation of Data Model


In order to standardize the tests, we chose a schema whose dataset is synthetically generated by a data generator application. This application is based on the open source library FAKER [11], which is popular for generating synthetic data for testing applications. For a fair comparison, the datasets for the SQLite and Realm databases are identical.

We considered three parameters when generating the datasets, namely the number of records per table, the number of attributes per record, and the size of an attribute. The aim is to compare the performance of SQLite and Realm on two extreme datasets. To this end, we use the entities Users and Messages, which reflect the data storage of a typical chat application where ‘N’ messages may be stored against a single user. The entity Users has fewer attributes and lightweight data per record; the entity Messages, on the other hand, has relatively many attributes and heavyweight data per record. We are thus able to evaluate the performance of both databases over a comprehensive data-complexity spectrum.

Fig. 1 The schema with basic relationship between the entities Users and Messages.

As shown in Fig. 1, we have designed a simple schema with basic relationship between the two entities
Users and Messages. Attributes of each table are given below:
• Users (id, name, phone, image_url, status, is_active, is_reported, is_blocked, created_at, updated_at)
• Messages (id, conversation_id, sender_id, message_type, message_status, media_url, media_mime_type,
is_starred, media_size, media_name, latitude, longitude, received_at, created_at, deleted_at).

B. Implementation of Benchmark Queries


This subsection defines our benchmark queries. These queries reflect the way mobile phone application users
and IoT devices interact with the databases. The exact implementation details of these queries may vary for each
database; however, their purpose remains the same.


The types of benchmark queries and associated tests are presented below:

CREATE Benchmark: This type of query is used to benchmark the time taken to initialize a database with
defined table/structure. We test the following two queries:

CREATE TABLE Users (id, name, phone, image, status, is_active, is_reported, is_blocked, created_at,
updated_at)
CREATE TABLE Messages (id, conversation_id, sender_id, message_type, message, status, media_url,
media_mime_type, is_starred, media_size, media_name, latitude, longitude, received_timestamp, created_at,
deleted_at)
INSERT Benchmark: This benchmark represents all queries that load data into the database by inserting a
specific number of records into the schema. We test the following two queries:
INSERT INTO Users (id, name, phone, image, status, is_active, is_reported, is_blocked, created_at,
updated_at) VALUES (--)
INSERT INTO Messages (id, conversation_id, sender_id, message_type, message, status, media_url,
media_mime_type, is_starred, media_size, media_name, latitude, longitude, received_at, created_at, deleted_at)
VALUES (--)
UPDATE Benchmark: This benchmark represents queries that update data in the schema. We test the
following two queries:
UPDATE Users SET (image, status, is_active, is_reported, is_blocked, updated_at) VALUES (--) WHERE
id=?
UPDATE Messages SET (message_type, message, status, media_url, media_mime_type, is_starred,
media_size, media_name, latitude, longitude, received_timestamp, deleted_at) VALUES (--) WHERE id=?
DELETE Benchmark: This benchmark represents queries that delete data in the schema. We test the
following two queries:
DELETE FROM Users
DELETE FROM Messages
FETCH Benchmark: This benchmark retrieves records based on specific value of indexed or non-indexed
attribute. We test the following two queries:
SELECT * FROM Users WHERE id=?
SELECT * FROM Messages WHERE conversation_id=?
RANGE Benchmark: This benchmark retrieves records within a specified range. For example, return all
messages between the two given dates. We test the following two queries:
SELECT * FROM Users WHERE ? < created_at AND created_at < ?
SELECT * FROM Messages WHERE ? < created_at AND created_at < ?
AGGREGATION Benchmark: This benchmark returns tuples, each consisting of two values: an aggregate value computed over a subset of the data grouped by some attribute, and the corresponding value of the attribute on which the grouping is made. We test the following two queries:

SELECT is_active, count(*) FROM Users GROUP BY is_active


SELECT message_type, count(*) FROM Messages GROUP BY message_type

JOIN Benchmark: This benchmark fetches data by joining two tables on a specific attribute. We test the
following query:
SELECT m.id, MAX(m.created_at), m.conversation_id, m.sender_id, m.message, u.name FROM Messages m
LEFT JOIN Users u ON u.id = m.sender_id GROUP BY sender_id ORDER BY m.created_at DESC
It is important to note that, compared to SQL databases, NoSQL and object-based databases take different approaches to each type of operation. For example, Realm does not support GROUP BY, JOIN, or foreign-key relationships. The absence of these features hinders the comparison of NoSQL-based databases (like Realm) with SQL-based databases (like SQLite). To overcome this issue, we added a layer of software implementation to our benchmarking system; this layer enables support for the missing features in Realm.
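The idea of such an emulation layer can be illustrated in Python over plain dictionaries (a sketch of the approach only; the actual layer is implemented inside the Android benchmarking application):

```python
from collections import Counter, defaultdict

def group_count(rows, key):
    """Emulate SELECT key, COUNT(*) ... GROUP BY key over a list of records."""
    return dict(Counter(r[key] for r in rows))

def left_join(left, right, left_key, right_key):
    """Emulate a LEFT JOIN of two record lists on the given keys."""
    index = defaultdict(list)
    for r in right:
        index[r[right_key]].append(r)
    joined = []
    for l in left:
        matches = index.get(l[left_key]) or [None]
        for m in matches:
            joined.append({**l, **(m or {})})
    return joined

# Toy usage mirroring the Messages/Users schema.
users = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
messages = [{"sender_id": 1, "message_type": "text"},
            {"sender_id": 1, "message_type": "image"},
            {"sender_id": 2, "message_type": "text"}]
counts = group_count(messages, "message_type")
rows = left_join(messages, users, "sender_id", "id")
```

Building a hash index over the right-hand table keeps the emulated join linear in the number of records rather than quadratic.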


C. Performance Metric
We benchmark SQLite and Realm by implementing both databases in an Android application, using a single representative performance metric, namely the average query execution time.
Each test query is executed 50 times, and the average query execution time is calculated. Each test is repeated for 5 different sizes (numbers of records) of the two tables (i.e. 1000, 2000, 3000, 4000, 5000).
For the FETCH, RANGE, AGGREGATION, and SEARCH tests, accurately recording the query execution time is difficult due to the coarse timer resolution. To work around this problem, the time of 1000 consecutive executions of the query is recorded and used for benchmarking.
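The measurement procedure can be sketched with Python's sqlite3 module (illustrative only; the actual benchmark runs SQLite and Realm inside an Android application):

```python
import sqlite3
import time

def avg_exec_time(cursor, query, params=(), runs=50, batch=1000):
    """Average execution time of `query`: each of `runs` measurements times
    `batch` consecutive executions to work around coarse timer resolution."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        for _ in range(batch):
            cursor.execute(query, params).fetchall()
        times.append((time.perf_counter() - start) / batch)
    return sum(times) / runs

# Toy usage on an in-memory Users table.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Users (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO Users VALUES (?, ?)",
                [(i, "user%d" % i) for i in range(1000)])
t = avg_exec_time(cur, "SELECT * FROM Users WHERE id=?", (42,),
                  runs=5, batch=100)
```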

D. Experimental Platform Configuration


The tests were run on a Google Pixel device with Android OS version 6.0, powered by a 1.6 GHz processor with 4 GB of RAM. The internal storage of the device is 32 GB; no external storage was installed.

E. Data Generator
To supply the different databases with equivalent datasets, we used the library FAKER [11]. In the first stage, the raw data is generated in memory using an object-oriented data model matching the SQLite tables and the Realm model. The actual data values are produced by exchangeable value generators, and all relations are picked uniformly at random. In the second stage, the data is loaded into the target database by a special controller application. Additionally, every output module creates equal sets of query parameter values that are used throughout the benchmark run.
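A generator of this kind can be sketched as follows (a stand-in using Python's random module rather than the FAKER library itself; all field values and formats are illustrative, with names following the Users schema above):

```python
import random
import string
import time

def fake_user(uid):
    """Generate one synthetic Users record (a stand-in for the FAKER library)."""
    name = "".join(random.choices(string.ascii_lowercase, k=8)).title()
    return {
        "id": uid,
        "name": name,
        "phone": "03%09d" % random.randrange(10 ** 9),  # illustrative format
        "image_url": "https://example.com/img/%d.png" % uid,
        "status": random.choice(["online", "offline", "busy"]),
        "is_active": random.random() < 0.8,
        "is_reported": False,
        "is_blocked": False,
        "created_at": int(time.time()),
        "updated_at": int(time.time()),
    }

# The same generated rows would feed both SQLite and Realm for a fair comparison.
random.seed(1)
dataset = [fake_user(i) for i in range(1000)]
```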

III. RESULTS
This section presents the results of the benchmark tests described in the previous section. A full trial of each benchmark test consists of creating the database files, setting up any necessary schema, and then running each query 50 times. To test the scalability of both databases, we ran all benchmarks 50 times, increasing the number of records inserted on each run.

A. Table Creation
The average creation time of the tables in the SQLite database was 0.538 microseconds, while Realm took only 0.186 microseconds. This indicates that the initialization of Realm objects is faster than that of SQLite tables.

B. Record Insertion
Fig. 2 compares the average execution time of SQLite and Realm when the insert operation is performed on the Users and Messages entities. It is evident that Realm is faster than SQLite for both entities. However, as shown in Fig. 3, the performance improvement of Realm is higher for the Users entity than for the Messages entity; the reason is that each Users record has fewer attributes and lightweight data per attribute.

Fig. 2 Comparison of SQLite and Realm with respect to ‘insert’ operation on Users and Messages entities.

Fig. 3 Improved performance of Realm in ‘insert’ operation.

Fig. 4 Comparison of SQLite and Realm with respect to ‘update’ operation on Users and Messages entities.
Fig. 3 also shows that for lightweight data the performance gain of Realm decreases as the number of inserted records increases, whereas for heavyweight data it increases with the number of records. Moreover, the overall performance gain for the insert operation is between 10 and
60 percent.
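Assuming the percentage gain reported throughout this section is the relative reduction in average execution time (the paper does not state the formula explicitly), it can be computed as:

```python
def percent_gain(t_baseline, t_new):
    """Relative speed-up of t_new over t_baseline, in percent (assumed metric)."""
    return 100.0 * (t_baseline - t_new) / t_baseline

# Example: a baseline of 2.0 ms against a new time of 1.2 ms is a 40 percent gain.
g = percent_gain(2.0, 1.2)
```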

C. Record Updating
Fig. 4 shows the average execution time of SQLite and Realm when records in the Users and Messages entities are updated. We see that Realm is relatively faster than SQLite in both cases.
Similarly, Fig. 5 shows that the performance gain by Realm is between 15 and 30 percent.

Fig. 5 Improved performance of Realm in ‘update’ operation.

Fig. 6 Comparison of SQLite and Realm with respect to 'delete’ operation on Users and Messages entities.

Fig. 7 Improved performance of Realm in ‘delete’ operation.

D. Record Deletion
Fig. 6 shows the performance comparison with respect to the delete operation on the Users and Messages entities. In both cases, Realm is faster than SQLite.


Moreover, Realm shows better performance on the heavyweight data of the Messages entity. It is also significant to note from Fig. 7 that the overall performance improvement of Realm for the delete operation is between 20 and 70 percent.

E. Record Fetching with Simple Condition


Fig. 8 compares SQLite and Realm with respect to the fetch query, executed on the Users and Messages entities. Note that the query was indexed for both SQLite and Realm, and during the test there were 1000 records in each table/object.
We see that Realm is faster than SQLite at fetching the data. Moreover, as shown in Fig. 9, the overall performance improvement of Realm for the simple fetch operation is between 10 and 55 percent.

Fig. 8 Comparison of SQLite and Realm with respect to 'Fetch’ operation on Users and Messages entities.

Fig. 9 Improved performance of Realm in simple ‘Fetch’ operation.

Fig. 10 Comparison of SQLite and Realm with respect to 'Fetch with Range’ operation on Users and Messages entities.

F. Record Fetching with Range as Condition


Fig. 10 compares the performance of SQLite and Realm with respect to the fetch query on the Users and Messages entities where the condition part of the query is a range, defined by a logical AND of two relational expressions. During the test, there were 1000 records in the Users and Messages entities.
From the graph, it is evident that Realm is faster than SQLite. Similarly, Fig. 11 shows that the overall performance improvement of Realm for the fetch-with-range operation is between 30 and 60 percent.

G. Record Fetching with Searching Keyword


Fig. 12 compares the performance of SQLite and Realm with respect to the fetch query with a search keyword, operated on the Users and Messages entities. During the test, there were 1000 records in both the Users and Messages entities. From the graph, it is evident that the performance of Realm is significantly higher than that of SQLite. Further, as shown in Fig. 13, the overall performance gain by Realm is between 18 and 40 percent.


Fig. 11 Improved performance of Realm in ‘Fetch with Range’ operation.

Fig. 12 Comparison of SQLite and Realm with respect to 'Fetch with Searching Keyword’ operation on Users and Messages entities.

Fig. 13 Improved performance of Realm in ‘Fetch with Search Keyword’ operation.

H. Aggregation
Fig. 14 compares the performance of SQLite and Realm with respect to the aggregation query on the Users and
Messages entities. More specifically, a count query was applied to both entities. During the test, there were 1000
records in both entities. From the graph, it is evident that SQLite is significantly faster than Realm. The reason
behind the poor performance of Realm in this case is that it does not support built-in aggregate queries. To
overcome this deficiency, we implemented a custom solution recommended by Realm and also used by the
community. Further, as shown in Fig. 15, the overall performance gain of SQLite is significantly high, namely
between 88 and 93 percent.
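Since the database layer offered no built-in aggregates in this setting, the workaround amounts to counting in application code. Below is a minimal language-neutral sketch of that idea in Python; the records and field names are invented for illustration and are not the actual benchmark schema.

```python
# Hypothetical in-memory records standing in for rows fetched from the store.
messages = [{"id": i, "sender": i % 10} for i in range(1000)]

def app_level_count(records, predicate):
    """Count matching records in application code rather than in the engine."""
    return sum(1 for r in records if predicate(r))

total = app_level_count(messages, lambda r: True)                 # COUNT(*)
from_user_3 = app_level_count(messages, lambda r: r["sender"] == 3)
print(total, from_user_3)
```

Counting this way forces every candidate record through application code, which is consistent with the large gap observed in favor of SQLite's native COUNT.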

Fig. 14 Comparison of SQLite and Realm with respect to ‘Aggregation’ operation on Users and Messages entities.


Fig. 15 Improved performance of SQLite in ‘Aggregation’ operation.

I. Join
Fig. 16 compares the performance of SQLite and Realm with respect to the join query. In this query, we fetched,
from both entities, the name of each user together with the messages sent or received by that user. During the test, there
were 1000 records in both the Users and Messages entities. It is evident that SQLite is significantly faster than
Realm. Further, as shown in Fig. 17, the overall performance gain of SQLite is significantly high, namely
between 91 and 92 percent.
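An application-level join of the kind a store without join support must perform manually can be sketched as a hash join in client code. The entity and field names below are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical in-memory rows standing in for the Users and Messages entities.
users = [{"id": i, "name": "user%d" % i} for i in range(1000)]
messages = [{"id": i, "user_id": i % 1000, "text": "msg%d" % i} for i in range(1000)]

# Build phase: hash map on the join key, O(|Users|).
by_id = {u["id"]: u for u in users}

# Probe phase: one lookup per message, O(|Messages|).
joined = [(by_id[m["user_id"]]["name"], m["text"])
          for m in messages if m["user_id"] in by_id]
print(len(joined))
```

Even with a hash join, the work happens in application code row by row, which is in line with SQLite's native JOIN being dramatically faster in Fig. 16.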

Fig. 16 Comparison of SQLite and Realm with respect to ‘Join’ operation on Users and Messages entities.

Fig. 17 Improved performance of SQLite in ‘Join’ operation.

IV. CONCLUSION AND FUTURE WORK

A. Conclusion
This study began with the premise that the modern smartphone has the potential to be used as an IoT device. Due to
its on-board sensors and its variety of wireless communication technologies, a smartphone can play two roles:
as a sensing device and as a base station for other sensing devices. We focused on the latter role and
evaluated the smartphone's capability to store and retrieve data locally for initial processing. Specifically, we
conducted experiments to perform a comparative analysis of the speed of SQLite and Realm.
We observe that Realm consistently outperforms SQLite in the create, insert, update, delete, fetch,
range, and search benchmarks. The minimum performance gain for these benchmarks is between 10 and 30 percent,
whereas the maximum performance gain is between 30 and 70 percent.
Considering the insert operation as a representative write operation, we observe higher performance for
Realm in the case of lightweight data. However, this advantage decreases as the number of inserted records
grows, which means that for lightweight data SQLite catches up with Realm as the number of insertions increases.
Nevertheless, a positive sign is that the performance of Realm improves as the number of inserted records
increases, which indicates that Realm is the better option for heavyweight data.


The fetch operation represents a typical read operation on memory. We observe that in all fetch operations
(simple fetch, range, and search), the average execution time increases linearly with the number of query
executions.
We notice that for the aggregate and join benchmarks the results depart completely from the previous
trend in Realm's performance: in these cases, SQLite performs substantially better than Realm. The poor
performance of Realm here can be attributed to the fact that it does not support built-in aggregate and join
queries. To overcome this deficiency, we implemented a custom solution recommended by Realm and also used by
the community.
Based on the preceding discussion, we are led to the conclusion that, overall, Realm performs significantly better than
the popular SQLite. Thus, Realm can be viewed as a candidate for local data management on smartphones
and other embedded devices in the Internet of Things ecosystem.

B. Future Work
Studies of this sort, no matter how suggestive, must be hedged with caveats. Further experimental study is
needed, with additional focus on the nature of the input data to our benchmark queries. This may deepen our
understanding of the relationships already identified among the variables.
Another line of exploration is to extend our study to more candidate database systems for smartphones and
embedded devices. This would help draw a more rigorous and unified conclusion about the choice of a local database
management system for Internet of Things devices.

REFERENCES

[1] L. Atzori, A. Iera and G. Morabito, "The Internet of Things: A survey", Computer Networks, vol. 54, no. 15, pp. 2787-2805, 2010.
[2] M. Chen, S. Mao and Y. Liu, "Big Data: A Survey", Mobile Networks and Applications, vol. 19, no. 2, pp. 171-209, 2014.
[3] Y. Wang, R. Goldstone, W. Yu and T. Wang, "Characterization and Optimization of Memory-Resident MapReduce on HPC
Systems", 2014 IEEE 28th International Parallel and Distributed Processing Symposium, 2014.
[4] R. Cattell, "Scalable SQL and NoSQL data stores", ACM SIGMOD Record, vol. 39, no. 4, p. 12, 2011.
[5] Y. Shi, X. Meng, J. Zhao, X. Hu, B. Liu and H. Wang, "Benchmarking cloud-based data management systems", Proceedings of the
second international workshop on Cloud data management - CloudDB '10, 2010.
[6] C. Turbyfill, C. Orji and D. Bitton, "AS3AP - An ANSI SQL Standard Scalable and Portable Benchmark for Relational Database
Systems", The Benchmark Handbook, second ed, 1993.
[7] "TPC - Transaction Processing Performance Council", 2018. [Online]. Available: http://www.tpc.org/.
[8] B. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan and R. Sears, "Benchmarking cloud serving systems with YCSB", Proceedings of
the 1st ACM symposium on Cloud computing - SoCC '10, 2010.
[9] N. Fröhlich, T. Möller, S. Rose and H. Schuldt, "A benchmark for context data management in mobile
context-aware applications", Proceedings of the 4th International Workshop on Personalized Access, Profile Management, and Context
Awareness in Databases (PersDB 2010), vol. 6, 2010.
[10] "SQLite Android Benchmark: SQLite vs. McObject's Perst Embedded Database", Mcobject.com, 2018. [Online]. Available:
http://www.mcobject.com/march9/2009. [Accessed: 14- Apr- 2018].
[11] "Java Faker", Java Faker, 2018. [Online]. Available: http://dius.github.io/java-faker/. [Accessed: 14- Apr- 2018].


HIERARCHICAL COMPARISON AND CLASSIFICATION OF P2P SYSTEMS BASED ON ARCHITECTURAL DESIGN

Aysha Nayab1, Naina Said, Syed Saddam Hussain Shah, Waleed Khan, Zaryab Ali Shinwari, Nasru
Minallah

Department of Computer Systems Engineering (DCSE)


University of Engineering and Technology, Peshawar (UET Peshawar),
ash.nayab@gmail.com1
ABSTRACT

The recent emergence of high speed networks has resulted in exponential growth in internet traffic. With advancements in the
capture and production of digital media, the volume of media-related traffic has increased in particular. According to measurements,
video traffic constituted more than 57% of internet traffic in 2014, making it the largest generator of internet traffic. This
growth poses a major challenge to content providers as well as internet service providers in giving users the
best possible video streaming experience. The problem with traditional client-server systems in serving such a large volume
of users is that content delivery relies on dedicated infrastructure, i.e. the server. This increases
the server load, which in turn decreases the speed and ultimately the quality of the service to the end users. This work discusses
the P2P concepts in detail, including the methodologies, systems and tools used in their establishment. The manuscript has been
written in such a way that the reader may easily grasp the concepts of multimedia sharing over P2P networks and become familiar
with the P2P systems available in the industry today. This review was performed at the
University of Engineering and Technology, Peshawar, for the project STAMP (Scalable Transmission of Adaptive Multimedia based on
P2P), funded by IGNITE National Technological Fund (formerly known as ICT R&D).

Index Terms— BitTorrent; SETI@home; Gnutella; Freenet; CAN; Pastry; PAST; Chord; P2P (Peer-to-Peer); Video on Demand (VoD);
WebRTC; Content Based Network (CBN); ALTO (Application Layer Traffic Optimization)

1. INTRODUCTION

With the increasing number of Internet users around the globe, proposing and developing effective and reliable means of
communication among users should be a top priority for researchers, engineers and scientists. Many different
models are currently available to solve this problem, but as time progresses, the needs and requirements advance as well. Apart from the
client-server relationship, the concept of Peer to Peer (P2P) has captured the attention of researchers and scientists. Numerous
systems are currently available for content distribution. A Content Based Network (CBN) is a system composed of distributed
servers. A CBN delivers webpages and other content based on the user's geographic location, the origin of the webpage and a content
delivery server, which helps in the speedy delivery of the required data [1]. The CBN copies the contents of the server to the
point geographically nearest to the users. Video on-demand (VoD) systems allow users to select and watch or listen to
video or audio content such as movies, songs and other media whenever they wish, rather than waiting for a
specific broadcast time. The VoD service is very popular over the mobile Internet as well [2]. In VoD, the video is stored on a server.
Upon a user's request, the server streams the video directly to the user. In the case of multiple users, the video is divided
into multiple chunks and distributed among the users. For this purpose, IPTV (Internet Protocol Television) technology is often used
to bring the VoD service to television and the internet. IPTV is a system by which content is delivered to users using IP
(Internet Protocol) rather than traditional terrestrial or satellite broadcasting technologies.
This is also called streaming media. The importance of live streaming cannot be denied, since quality live
streaming applications are needed in many fields, e.g. medical streaming, education, and machine troubleshooting in industry.
In the case of medical streaming, high-quality video and audio are required, since retrieval of maximum information is
critical. The streaming quality depends on multiple factors: the bandwidth of the streamer and the end user, the frame rate
(frames per second, fps), and the resolution of the video.


Some of the core concepts of P2P streaming are video chunkisation, overlay management, peer discovery and sampling,
search algorithms, topology management and the chunk trading protocol. These are the main building blocks of peer-to-peer
systems and are discussed in detail in the upcoming sections.

2. VIDEO CHUNKISATION

In a P2P system, a peer acts as both a “server” and a “client” at the same time: it acts as a “client” when it requests a
chunk from its partner peers, and as a “server” when a chunk is requested by its partner peers [4]. In a P2P system, a
video stream is divided into small pieces called chunks, which are then sent to partner peers. The process of
splitting encoded media into small chunks is referred to as “chunkisation”. Most P2P systems use one of two strategies
to divide a stream into chunks. The first divides the encoded media (stream) into fixed-size chunks; since
the splitting process is unaware of the details of the encoded media it is splitting, this is known as the “media unaware”
strategy. The second is the “media aware” distribution strategy, in which the splitting process
uses information about the encoded media in order to optimize and improve the streaming performance. A video stream is
composed of frames, each containing header information and its respective compressed data. If a header is lost, it
becomes impossible to decode the compressed data belonging to that frame. Moreover, some encoding algorithms use
different kinds of frames, including ‘I’ (intra-coded), ‘P’ (predicted) and ‘B’ (bi-predicted) frames. ‘I’
frames are fully independent, ‘P’ frames rely on the previous frame, and ‘B’ frames depend
on both the previous and the following frames. If the frames on which other frames depend are lost or decoded incorrectly at
any stage, the dependent frames are affected too. It is therefore better to package dependent frames in the same chunk
as the frames on which they depend [5].
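The “media unaware” strategy described above can be sketched as a fixed-size split of the encoded byte stream; the chunk size and the placeholder stream below are arbitrary illustrative choices.

```python
# Minimal sketch of "media unaware" chunkisation: split an encoded byte
# stream into fixed-size chunks without inspecting frame boundaries.
def chunkise(stream: bytes, chunk_size: int):
    return [stream[i:i + chunk_size] for i in range(0, len(stream), chunk_size)]

stream = bytes(range(256)) * 40          # 10240 placeholder "encoded" bytes
chunks = chunkise(stream, 1024)
print(len(chunks), len(chunks[0]))
```

A media-aware splitter would instead align chunk boundaries with frame headers and keep dependent frames together, as the paragraph above explains.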

3. OVERLAY MANAGEMENT

A. Peer Discovery and Sampling


One of the difficult tasks in P2P systems is finding the peer that holds the required information or data, especially when the peers are
distributed in an unstructured, decentralized manner. Several search mechanisms have been proposed in the past, but due to the
unstructured, decentralized nature of these systems, most of them perform inefficiently. Among those with some
performance gains are the breadth-first search mechanism, an extension of the Gnutella protocol
(framework), and depth-first search, an extension of the Freenet protocol (framework) [6]. Sampling is defined
in [7] as the selection of a random node among all the nodes connected to the peer-to-peer (P2P) network.
The design of resource or service discovery systems relies on the overlay infrastructure of the framework. Each service or resource
discovery system has either a structured or an unstructured architecture. The original Gnutella is based on an unstructured
architecture, while there are structured frameworks such as Napster and SLP.

B. Breadth-first search
The second major P2P system introduced after Napster is the Gnutella framework. It was designed as a decentralized, unstructured
P2P framework. The original Gnutella uses a breadth-first search approach to discover the required data on the distributed
peer-to-peer network. The required data is searched for by flooding a request message to every other node of the network
using breadth-first search (BFS) of the overlay network up to a limited depth D, where D is the depth limit of the search,
defining the number of overlay hops to be searched. A maximum time-to-live (TTL) is assigned to each query message.
Each neighboring node receiving the query message processes the request and returns the required data if found. If a node does not
have the requested data, it propagates the same query message to its neighboring nodes, except the
requester from which it received the query. This propagation continues until
the depth limit D is reached or the required data is found. The flooding of query messages is conducted to find the maximum
number of results within a radius of D hops, keeping the originating node at the center of the circle.
The process ends when either the content is found or the TTL limit is reached. In the Gnutella framework, the request
is sent to every neighbor simultaneously, without waiting for a response from any single node [8].
This approach has numerous drawbacks, including frequent peer disconnections, bandwidth cost, and enormous delays if the network
grows large enough. Several improvement measures were taken to overcome these issues, resulting in Gnutella2.
Gnutella2 is based on a hybrid infrastructure built around the concept of super peers (cluster heads), which ultimately
improves the scalability and performance of the framework [14].
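The depth-limited flooding described above can be sketched as a breadth-first search with a TTL over a toy overlay graph; the topology and data placement below are invented purely for illustration.

```python
from collections import deque

# Gnutella-style flooding: the query spreads breadth-first to all neighbors,
# decrementing the TTL at each overlay hop, and stops at nodes holding the item.
def bfs_search(overlay, start, wanted, ttl):
    hits, seen = [], {start}
    queue = deque([(start, ttl)])
    while queue:
        node, remaining = queue.popleft()
        if wanted in overlay[node]["data"]:
            hits.append(node)            # a hit is returned, not re-flooded
            continue
        if remaining == 0:
            continue                     # depth limit D / TTL reached
        for neigh in overlay[node]["neighbors"]:
            if neigh not in seen:
                seen.add(neigh)
                queue.append((neigh, remaining - 1))
    return hits

overlay = {
    "A": {"neighbors": ["B", "C"], "data": set()},
    "B": {"neighbors": ["A", "D"], "data": set()},
    "C": {"neighbors": ["A", "D"], "data": {"file.mp3"}},
    "D": {"neighbors": ["B", "C"], "data": {"file.mp3"}},
}
hits = bfs_search(overlay, "A", "file.mp3", ttl=2)
print(hits)
```

Raising the TTL widens the search radius at the cost of more flooded messages, which is exactly the bandwidth drawback noted above.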

C. Globally Unique Identifier (GUID)


Freenet is a P2P system based on an unstructured architecture. It was built with the main aim of providing privacy to clients
and protecting data from being restricted. In providing privacy, the major issue it faces is data redundancy and
replication. Such an architecture leads to sluggish performance compared to other frameworks
of its kind; third-generation P2P systems perform better. Every file inserted into the Freenet system is
assigned its own Globally Unique Identifier (GUID) by the system. This GUID is then used for storing and searching the file
within the system. GUIDs are built on the SHA-1 (secure hash algorithm) hashing technique. The overall system architecture
relies on two types of keys. The key generated by hashing the content of a file is known as the content-hash
key, which is used to generate the GUID. Signed-subspace keys are used to set up a personal workspace. The
workspace under a subspace key can be accessed by anyone, but with the restriction that only the owner of a file is allowed to alter
it. Public-private key pairs are used to achieve this, so the user who uploaded the data remains
anonymous. A peer sends a query message, but in this case the query is passed to a single neighboring peer at a time, and the peer waits
for the response before forwarding the query to another neighbor. The search depth in this method is limited. Flooding queries
through the system consumes greater bandwidth, although results can then be found in less time [14].
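The content-hash key idea can be sketched with Python's hashlib; Freenet's actual key derivation involves more structure (subspace keys, signing), so this shows only the core hashing step, with made-up file contents.

```python
import hashlib

# A content-hash key: the identifier is derived from a SHA-1 hash of the
# file's contents, so identical contents always map to the same key,
# regardless of the file's name or owner.
def content_hash_key(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

key1 = content_hash_key(b"the same encoded media")
key2 = content_hash_key(b"the same encoded media")
key3 = content_hash_key(b"different media")
print(key1 == key2, key1 == key3, len(key1))
```

Because the key depends only on content, duplicate uploads converge on one identifier, which supports the replication behavior described above.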

D. Topology manager
Instead of building a simple, randomly sampled topology, a topology manager implements more complex and sophisticated
techniques. Most protocols use mixed network architectures in order to improve the search process for highly demanded
and less demanded data content; techniques using mixed topologies are called hybrid systems [9].
P2P systems are network aware, i.e. they are capable of adapting themselves to network conditions in order to maintain the
quality of service. Network awareness can be based on local measures, but if some information cannot be obtained locally, the
topology manager interacts with an ALTO (Application Layer Traffic Optimization) server to obtain that specific information
(e.g. link costs). The ALTO protocol keeps clients updated with the most recent network information.
To that end, an ALTO server provides network and cost maps, with the help of which an ALTO client can determine the costs
between endpoints. ALTO thus bridges the gap between the application and the network by distributing network-related
information, enabling the application to make network-informed decisions [10].
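Selecting a peer with the help of an ALTO-style cost map can be sketched as follows; the PID names and cost values are invented, and a real deployment would fetch the map from an ALTO server rather than hard-code it.

```python
# Hypothetical ALTO-style cost map: costs from the local PID to candidate
# peers' PIDs, as would be derived from the server's network and cost maps.
cost_map = {("pid-local", "pid-a"): 10,
            ("pid-local", "pid-b"): 3,
            ("pid-local", "pid-c"): 7}

def cheapest_peer(local_pid, candidates, costs):
    """Prefer the candidate peer with the lowest ALTO cost from local_pid."""
    return min(candidates, key=lambda p: costs[(local_pid, p)])

best = cheapest_peer("pid-local", ["pid-a", "pid-b", "pid-c"], cost_map)
print(best)
```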

E. Chunk trading protocol


The chunk trading protocol is responsible for the signaling and scheduling algorithms; it also deals with timing and coordination with
other peers. The chunk trading protocol defines how peers request and send the required chunks in a P2P system. Chunk trading
takes place after the peer set has been defined and sampling has been done. Once an application has a list of the available peers
participating in the P2P system, signal exchanging occurs, in which each peer announces the chunks it can provide to other peers
of the network and the chunks it needs from them. Usually a buffermap message is transmitted to the P2P
system, which contains chunk IDs and the Chunk Offer, Chunk Accept, Chunk Request and Chunk Deliver messages [11].
Most P2P streamers implement an offer/trade protocol:
 A peer sends an offer message to publish its set of chunks.
 Peers then send a select message if they are interested in these chunks.
 When the offering peer receives the select message, the chunk is transmitted over UDP.
 After the chunk is received, an ACK message is sent to the sending peer.
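The four offer/trade steps above can be simulated in a few lines; the message formats and the UDP transport are abstracted away, so this shows only the control flow under those simplifying assumptions.

```python
# Toy simulation of the offer/select/deliver/ACK exchange between two peers.
def trade(offerer_chunks, receiver_chunks):
    offer = set(offerer_chunks)                 # 1. offer message: published set
    select = offer - set(receiver_chunks)       # 2. select: chunks the receiver lacks
    delivered = set()
    for chunk_id in select:                     # 3. each selected chunk is sent
        delivered.add(chunk_id)                 #    (over UDP in a real streamer)
    acked = delivered                           # 4. one ACK per received chunk
    return set(receiver_chunks) | acked

result = trade({1, 2, 3, 4}, {1, 2})
print(sorted(result))
```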

Figure 1 P2P Systems Classification


4. P2P STREAMING SYSTEMS

In a peer-to-peer streaming system, each peer on the network acts as both a client and a server. Here, content delivery does
not rely on a single server; rather, the upload capacity of each peer on the network is taken advantage of and used for video
transmission, which drastically decreases the load on the server. A peer acts as a “client” when it requests a video chunk
from its partner peers, and as a “server” when it offers a video chunk to its partner peers [10]. Some famous streaming systems
based on Peer to Peer are Overcast [15], ESM [16], NICE [17] and PeerCast [18].
These systems are classified into three major categories: centralized, decentralized and hybrid. Decentralized systems are
further classified on the basis of network structure and topology.

A. Classification of P2P Network


In any distributed system like P2P, a tradeoff is usually required among features such as scalability, performance and robustness.
Based on the requirements for these systems, P2P systems have been classified into centralized and decentralized systems;
Figure 1 shows the classification. One way to classify P2P systems is by the architecture of how peers organize
themselves and how they collaborate. In decentralized systems, all peers have equal rights and
responsibilities. These systems are further divided on the basis of the number of layers involved, into flat and hierarchical
systems, or on the basis of the logical network, i.e. how queries are forwarded among nodes, into structured, unstructured and
loosely structured systems. Unstructured systems are further divided by the method of determining neighbors, i.e. static, or
dynamic based on user interest. Structured systems are defined on the basis of data placement in the DHT (the mapping between data and peers).

5. DEGREE OF DECENTRALIZATION

The degree to which a system is decentralized is an important feature that depicts how much the network relies on a server.
Although pure P2P systems should be fully decentralized, in practice one or more servers are used to provide some basic
functions; hence the classification on the basis of decentralization.

6. CENTRALIZED P2P SYSTEMS

Centralized P2P systems mix the features of client-server systems and P2P systems to take advantage of both. In
centralized P2P systems, there are one or more servers, as in client-server systems. However, the job of these servers is to store
the metadata and not the payload. This metadata may include information such as file availability, bandwidth, IP addresses and
latency. The servers also help the peers on the network locate their desired resources, or act as a scheduler to help peers
coordinate with each other. Each time a peer requires a resource, it sends a message to the server, which then responds with
the address of the peer holding the required resource. The payload transmission is still carried out directly from peer to peer.



Figure 2 Partially Centralized P2P System
Figure 3 Decentralized P2P

9. DECENTRALIZED P2P SYSTEMS

In decentralized P2P systems, all the peers on the network have equal rights and responsibilities; servent (server + client) is
also a name given to such systems. There is no centralized server to locate the resources needed by the peers or to store other useful
information, so the system is highly robust. Each peer has only a partial view of the whole network and offers
data or services only to relevant peers. In such a system, due to the absence of a centralized server, quickly locating peers
that offer the required resources is a major challenge. The obvious advantages of decentralized P2P systems are their inherent
robustness, high performance and scalability.

10. P2P NETWORK STRUCTURE

In decentralized systems, peers form an organized overlay network on top of the actual physical network and may be further divided
on the basis of the network structure that provides indexing, routing, etc. Based on how queries are forwarded to the other
nodes in the system, decentralized systems are further divided into structured, unstructured and
loosely structured systems.

A. Unstructured Decentralized P2P Systems


These systems are considered the first generation of P2P systems. In unstructured systems, each node is responsible for its
own data and only keeps track of selected neighbors to which it may forward its queries for resources. Such an unstructured
organization makes searching for the required data very challenging, since there is no way to predict which peers on
the network hold the queried data. In addition, there is no guarantee on the response time, except in the worst case, in
which the whole network is searched. Another key issue in unstructured systems is the determination of neighbors.

B. Structured Decentralized P2P Systems


In structured decentralized systems, also known as the second generation of P2P systems, there is a mapping between data and
peers using strategies such as a distributed hash table (DHT). These systems feature dynamic and intelligent switching among the super nodes. For security
reasons, only the metadata of an item belonging to a certain owner is inserted into the network, not the actual payload. These
systems provide a guarantee on search cost, and queries are routed to peers more quickly and accurately.


Figure 4 Hybrid Decentralized P2P


C. Loosely structured systems
Such systems are neither completely structured nor unstructured and may be considered loosely or semi-structured networks.
The routing hints for locating files are not fully specified; since there is no complete routing map, query responses
are not guaranteed. Media may be replicated in different locations and retrieved from the nearest one.

D. Hybrid P2P Systems


Hybrid systems have elegant auxiliary mechanisms that allow them to combine the advantages of both centralized and
decentralized systems. In one type of hybrid system, a few peers act similarly to the server in a
centralized system; these are known as super nodes and sit at the upper level. The selection of these super nodes is still a
challenging process. The common nodes use the services of the super nodes for resource allocation. But unlike the servers in
centralized systems, super nodes are neither as powerful nor solely in charge of their subset of peers; they also contribute resources and
perform operations.

11. EXISTING P2P SYSTEMS

This section describes different P2P-based applications that offer different services. Some provide data sharing
services to users, some offer multimedia (audio and video) sharing and streaming services, and some are dedicated to
IPTV and Video on Demand.

A. Napster: Sharing of Digital Content


Napster is a music file sharing system; it may be viewed as a system of MP3 files distributed over the Napster users. It has a
centralized server that stores the locations of nodes. The server plays no role in the exchange of actual data; it only connects the peers.
Napster provides three functions:
1. Search engine: the server
2. File sharing: the trading of MP3 files
3. Internet relay chat: a way to find and chat with peers
The procedure has three phases:
1. Joining Napster: connecting to the server and registering
2. Resource discovery: a request is sent to the server, which responds with a list of nodes holding the required resource
3. Downloading files: peers establish connections and the files are downloaded

B. BitTorrent
In the BitTorrent protocol [19] [20] [21], peers are selected randomly or based on the data they have uploaded, which is why low-latency
selection of peers is not supported. It is mainly used for the efficient sharing of very large files by large numbers of users.
It is a fair network, in which a peer with better bandwidth may download more quickly; overall, the system is scalable and efficient.
The only case where delivery is poor is when the seeds holding the complete file leave the network; then no peer can obtain the whole file, and the
pieces acquired until then may be useless.
The network comprises three entities: the tracker, the peer, and the seed. The tracker acts as a server connecting clients that download
files; peers are nodes that download and upload pieces of the desired file; seeds are nodes that have the complete file and now only upload.


C. SETI@home
Search for Extraterrestrial Intelligence (SETI) at home uses P2P technology to harness the idle computing resources of many
computers as an alternative to a supercomputer. It splits tasks into computable chunks called work
units and distributes them among its peers. The four components of SETI@home are as follows:
 Data collector: an antenna responsible for gathering radio signals from outer space and
recording them on digital linear tapes (DLTs).
 Data distribution server: creates the work units.
 Screensaver: the SETI@home client.
 User database: a database of user information, results, etc.

D. Gnutella: The First “Pure” P2P System


Gnutella is a software-based network structure that is efficient in bandwidth consumption and overcomes the single-point-of-failure
issue. Its basic operations are:
 Joining or leaving the network: done through a “PING” message that a new node sends; it receives confirmation as
“PONG” messages, which give the new node information about the existing nodes, so that the new node may establish
its own neighborhood with them. Each node PINGs its neighbors periodically.
 Searching and downloading files: a “lookup” message is broadcast to find a desired file; “hit” messages are sent by those
who have the file [14].
E. Freenet
Freenet is a file storage service rather than a file sharing service. It is a loosely structured, self-organizing, secure system that
maintains anonymity. Every peer and every item of shared content in the network is given an ID, and each peer has information about a fixed
number of other peers. The desired content is loaded from a neighbor with the help of its content ID. A “chain mode” discovery
protocol is used instead of broadcast, which means the request is passed from one node to another. There is no direct connection
between the requesting and sending peers, and data propagates back along the same path through which the sender was discovered. Thus,
anonymity is maintained, and the requested content is replicated on the intermediate peers.
F. CAN
Content Addressable Network (CAN) is a distributed lookup service based on hash tables. The peers in the network store parts of the
network called zones, along with a list of the small neighbor zones next to them. Its main operations are insertion, lookup and
deletion. The basic concept of CAN revolves around a virtual d-dimensional space, partitioned among the peers, in which the keys of the hash tables are kept.
When a new peer joins, a zone is divided into two parts, one of which is managed by the new peer; the new peer then contacts all the peers
in its neighborhood to update their routing tables. Each node maintains its set of neighbors, and a leaving node transfers its hash
tables to a neighboring node. Discovery of a node is guaranteed in Θ(d·N^{1/d}) steps, where N is the number of nodes and d the number of dimensions.
G. Pastry
Pastry also uses a distributed hash table in which the key values of the network are stored. A routing table is kept and managed by every
peer individually and is updated when a new peer joins. Θ(log n) routing steps are expected, where n is the number of active peers. A message is
routed via nodeIDs toward the numerically nearest node.
H. PAST: A Structured P2P File Sharing System
PAST is a P2P archival storage utility built on Pastry, a DHT (distributed hash table), and adopts a prefix-based routing scheme.
Each node is assigned a 128-bit identifier derived from the node's public key via a hash function such as SHA-1, and each file is assigned
a 160-bit identifier by hashing the owner's public key together with the file name and some randomly chosen salt.
A new file added to the PAST system is placed on the k live nodes whose identifiers are numerically nearest to the 128 most significant bits of the file's
identifier. When looking up a file, a node sends the request to a node having a longer prefix match than that
of its own; otherwise the request is sent to a node with an identifier of the same prefix length that is numerically nearest to the file's identifier
[14].
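The placement rule can be illustrated with a small sketch. The helper functions below are hypothetical, not PAST's actual API; SHA-1 merely stands in for the hash function named in the text:

```python
import hashlib

def node_id(public_key):
    # 128-bit node identifier derived by hashing the node's public key
    return int.from_bytes(hashlib.sha1(public_key).digest()[:16], "big")

def file_id(owner_key, name, salt):
    # 160-bit file identifier from the owner's public key, file name and salt
    return int.from_bytes(
        hashlib.sha1(owner_key + name.encode() + salt).digest(), "big")

def replica_nodes(fid, live_node_ids, k):
    """The k live nodes numerically closest to the top 128 bits of the file ID."""
    target = fid >> 32  # 128 most significant bits of the 160-bit identifier
    return sorted(live_node_ids, key=lambda n: abs(n - target))[:k]
```

With real 128-bit node IDs the distance would be taken in the full identifier space; the toy integers here just show the "numerically nearest k" selection.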
I. Chord
Chord stores and maps the IDs of items and active peers in the system, organized in a ring. On top of Chord, the
locations of content are stored by associating a key with each item and storing the mappings. A node only needs to know its successor's
key, so a search query propagates around the circle. A new peer is given a key and information about its successor, and if a peer leaves,
all keys are updated; this ensures a successful search mechanism.
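A minimal sketch of this successor-based search, using a toy 6-bit identifier ring and ignoring Chord's finger-table optimization (the data representation and function names are assumptions for illustration):

```python
RING = 2 ** 6  # toy identifier space 0..63

def successor_on_ring(node_ids, key):
    """First live node clockwise from `key`, wrapping around the ring."""
    ids = sorted(node_ids)
    for n in ids:
        if n >= key % RING:
            return n
    return ids[0]  # wrapped past the largest ID

def lookup(node_ids, start, key):
    """Walk successor pointers from `start` until the key's owner is reached."""
    ids = sorted(node_ids)
    succ = {a: b for a, b in zip(ids, ids[1:] + ids[:1])}  # successor pointers
    owner = successor_on_ring(node_ids, key)
    path, node = [start], start
    while node != owner:
        node = succ[node]
        path.append(node)
    return owner, path
```

Because each hop only follows a successor pointer, this naive walk is O(N); Chord's finger tables shorten it to the O(log N) hops cited in the literature.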
J. Canon: Turning Flat DHT into Hierarchical DHT
Most decentralized systems at present are non-hierarchical. Canon is a method to convert a single-tier DHT into a
hierarchical DHT by recursive routing. Consider a system SoC with subdomains DB and IS, each having its own Chord ring;
SoC forms a new high-level Chord ring by merging them in such a way that the total number of links per node stays the same
but global routing becomes possible. The merged Chord rings retain their initial neighborhoods and create a new link between two nodes if these two
conditions are satisfied:


1. 0 ≤ k < m, where m is the size of the namespace, and the other node is at a distance of at least 2^k (the standard Chord rule).
2. The other node is the nearest one satisfying condition 1.
K. Skip Graph: A Probabilistic-Based Structured Overlay
Although DHT methods are search-efficient and guarantee data availability, they are unable to support complex queries because
data locality is lost. Therefore the skip graph method was introduced.
A skip list is a search structure organized as a set of sorted linked lists, each corresponding to a level of the structure. The lowest level, 0, contains all
the nodes in ascending key order. For l > 0, each node of the list at level l − 1 belongs to the list at level l independently, with a fixed
probability. Thus the density of nodes decreases as the level rises. This allows a jump over large groups of nodes during
query processing. The search progresses from the higher levels to the lower ones until the desired node is reached, as illustrated by
figure 6. Skip graphs are classified as a probabilistic-based structure because a fixed probability p determines which level a
node belongs to. In a skip graph a node is connected to many skip lists, and its participation is decided by its membership
vector m(x). At the highest level of the skip graph, every node has its own list.
As shown in figure 7, a search begins at the highest level; after searching a list without finding the desired node, it
moves down to the next lower level until it reaches level 0. The skip graph method supports a wide range of queries [14].
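The top-down search described above can be sketched over a skip list given level by level. The list-of-levels representation below is an assumption for illustration (a real implementation would follow per-node forward pointers):

```python
def skip_search(levels, target):
    """levels[0] holds every key in ascending order; levels[-1] is the top.
    Returns (found, path of keys visited while advancing)."""
    current = None  # conceptual head, positioned before the first key
    path = []
    for level in reversed(levels):
        for key in level:
            # advance along this level as long as we do not overshoot
            if (current is None or key > current) and key <= target:
                current = key
                path.append(key)
        if current == target:
            return True, path
    return current == target, path
```

Using the five-node example of Figure 5 (keys 13, 26, 30, 41, 50 with 26 and 30 promoted to higher levels), a search for 41 visits 26, then 30, then 41, matching the figure.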
L. Best Peer: A Self-Configurable P2P System
BestPeer is a generic platform used to develop P2P applications; support for novel applications and extensibility is one of its most prominent
features. Distinct features of BestPeer are:
 Use of mobile agent technology: executable instructions allow peers to run operations locally, enabling
the processing of raw data at its owner node.
 Peers may share computational resources.
 A dynamic reconfiguration method ensures a high chance of an answer from near neighbors, reducing query time.
 Location Independent Global Names Lookup (LIGLO) servers are unique peers that act as mentors for common peers.
The steps for a node to join the BestPeer network are:
Joining. A peer joining the system for the first time registers with a LIGLO server. The server receives the registration,
generates a BPID (BestPeer ID), which is globally unique, and sends it to the node along with a list of registered peers that are online.
The new node then creates direct links to them. A single peer can register with multiple LIGLOs.
Accessing resources. A peer broadcasts its request, similar to the broadcast system in Gnutella, and the peer holding the desired data
returns the result directly.
Rejoining. If a peer rejoins, it sends its IP and BPID to the LIGLO server where it was registered. If it has a new IP, the server
updates it in the list [14].
Figure 5: Skip list with 5 nodes. The lines show the search for node 41, proceeding from the head through node 26 and node 30 to node 41.


Table 1 Comparison of Centralized, Decentralized and Hybrid Systems

Centralized systems

Napster
 Fault resilience: single point of failure.
 Privacy, anonymity and security: low level of both.
 Decentralization and self-organization: the central server provides the locations of nodes.
 Scalability: limited to the maximum capacity of the server.
 Availability: high level of data availability among peers.
 Cost of ownership and effectiveness: much cheaper than traditional client-server.
 Efficiency: accelerated resource location at high efficiency.

SETI@home
 Fault resilience: single point of failure.
 Privacy, anonymity and security: can assure privacy and anonymity.
 Decentralization and self-organization: low level of decentralization.
 Scalability: limited to the capability of the server.
 Availability: always obtains results.
 Cost of ownership and effectiveness: very cheap for such powerful computing.
 Efficiency: satisfactory.

Decentralized systems

Gnutella
 Fault resilience: pure P2P system; every node is equal.
 Privacy, anonymity and security: moderate level of anonymity.
 Decentralization and self-organization: high-speed nodes will gradually be placed in the center.
 Scalability: problematic due to network congestion.
 Availability: availability is not guaranteed.
 Cost of ownership and effectiveness: software-based network infrastructure.
 Efficiency: moderate.

PAST
 Fault resilience: high resilience and low chance of all nodes failing at the same time.
 Privacy, anonymity and security: high level of anonymity; security ensured by smart card.
 Decentralization and self-organization: structured.
 Scalability: high level.
 Availability: high availability and persistence.
 Cost of ownership and effectiveness: each PAST node needs to maintain a table with [(2^b − 1)·log_{2^b} n + 2l] entries.
 Efficiency: high performance; all lookups resolved in a number of hops at most logarithmic in the total number of nodes.

Canon
 Fault resilience: can isolate faults like a DNS system.
 Privacy, anonymity and security: data locality compromised; cannot process complex queries.
 Decentralization and self-organization: proximity to physical networks; likelihood of nodes being close is very high.
 Scalability: it is necessary to know the size of the namespace.
 Availability: hierarchical storage and retrieval system.
 Cost of ownership and effectiveness: O(log N) cost of ownership.
 Efficiency: similar to a flat DHT.

Skip Graph
 Fault resilience: even if O(1/log N) nodes are removed, the remaining nodes stay connected.
 Privacy, anonymity and security: data locality is preserved and complex queries are handled.
 Decentralization and self-organization: can inflate or deflate at will, according to the number of nodes in the network.
 Scalability: more alternatives for content storage.
 Cost of ownership and effectiveness: O(log N) cost of ownership.
 Efficiency: O(log N) steps, messages and time.

Hybrid systems

BestPeer
 Fault resilience: high fault resilience; immune to a single point of failure.
 Privacy, anonymity and security: the mobile agent is pre-defined, encrypted and signed at the first step.
 Decentralization and self-organization: the most promising peers are kept close by the MaxCount and MinHop approaches.
 Scalability: better than that of the centralized approaches.
 Availability: MaxCount and MinHop ensure fast availability of data.
 Cost of ownership and effectiveness: each new query-processing strategy increases the complexity of the system.
 Efficiency: bandwidth efficiency and response time better than Gnutella.

12. CONCLUSION

Every system mentioned and reviewed in this manuscript has its own advantages and shortcomings, and each system is
designed for its own functionality and domain. It has been found that scalability is lacking in most of the
systems discussed. Moreover, with advancements in computer components, i.e. memory and processors, new P2P systems with
advanced encoding and decoding algorithms are being introduced over time.


REFERENCES

[1] Quanfeng and L. Zhenghe, "File Sharing Strategy Based on WebRTC," in Web Information Systems and Applications
Conference, 2016 13th, 2016.
[2] X. Tian, C. Zhao, H. Liu and J. Xu, "Video On-Demand Service via Wireless Broadcasting," IEEE Transactions on
Mobile Computing, 2016.
[3] "IPTV," 4 2017. [Online].
[4] S. Joseph, Z. Despotovic, M. Gianluca and S. Bergamaschi, Agents and Peer-to-Peer Computing: 5th International
Workshop, AP2PC 2006, Hakodate, Japan, May 9, 2006, Revised and Invited Papers, vol. 4461, Springer, 2008.
[5] Kiraly, L. Abeni and R. L. Cigno, "Effects of P2P streaming on video quality," in Communications (ICC), 2010 IEEE
International Conference on, 2010.
[6] V. Kalogeraki, D. Gunopulos and D. Zeinalipour-Yazti, "A local search mechanism for peer-to-peer networks," in
Proceedings of the eleventh international conference on Information and knowledge management, 2002.
[7] Datta and H. Kargupta, "Uniform Data Sampling from a Peer-to-Peer Network," in 27th International Conference on
Distributed Computing Systems (ICDCS '07), 2007.
[8] Meshkova, J. Riihijärvi, M. Petrova and P. Mähönen, "A survey on resource discovery mechanisms, peer-to-peer and
service discovery frameworks," Computer networks, vol. 52, pp. 2097-2128, 2008.
[9] X. Li and J. Wu, "Searching techniques in peer-to-peer networks," Handbook of Theoretical and Algorithmic Aspects
of Ad Hoc, Sensor, and Peer-to-Peer Networks, pp. 613-642, 2006.
[10] S. a. R. W. a. S. N. Randriamasy, "Application-Layer Traffic Optimization (alto) Internet Drafts," 3 2017. [Online].
Available: http://www.potaroo.net/ietf/html/ids-wg-alto.html.
[11] Abeni, C. Kiraly, A. Russo, M. Biazzini and R. Lo Cigno, "GRAPES: a Generic Environment for P2P Streaming,"
2010.
[12] V. N. Index, "Forecast and methodology, 2014-2019 white paper," Technical Report, Cisco, Tech. Rep., 2015.
[13] Z. Shen, J. Luo, R. Zimmermann and A. V. Vasilakos, "Peer-to-peer media streaming: Insights and new
developments," Proceedings of the IEEE, vol. 99, pp. 2089-2109, 2011.
[14] Q. H. Vu, M. Lupu and B. C. Ooi, Peer-to-peer computing: Principles and applications, Springer Science & Business
Media, 2009.
[15] Deshpande, M. Bawa and H. Garcia-Molina, "Streaming live media over a peer-to-peer network," 2001.
[16] J. Zhang, L. Liu, L. Ramaswamy and C. Pu, "PeerCast: Churn-resilient end system multicast on heterogeneous overlay
networks," Journal of Network and Computer Applications, vol. 31, pp. 821-850, 2008.
[17] S. Banerjee, B. Bhattacharjee and C. Kommareddy, Scalable application layer multicast, vol. 32, ACM, 2002.
[18] Y.-h. Chu, S. G. Rao, S. Seshan and H. Zhang, "A case for end system multicast," IEEE Journal on selected areas in
communications, vol. 20, pp. 1456-1471, 2002.
[19] B. Cohen, "Incentives build robustness in BitTorrent," in Workshop on Economics of Peer-to-Peer systems, 2003.
[20] B. Cohen, The BitTorrent protocol specification. BitTorrent. org, Available from: http://www.bittorrent.org/beps/bep
0003.html., 2008.
[21] Legout, G. Urvoy-Keller and P. Michiardi, "Rarest first and choke algorithms are enough," in Proceedings of the 6th
ACM SIGCOMM conference on Internet measurement, 2006.


ON NON-ASSOCIATIVE FLEXIBLE LOOPS

Amir Khan1, Gul Zaman2

1 Department of Mathematics and Statistics, University of Swat, KPK, Pakistan. Email: amir.maths@gmail.com
2 Department of Mathematics, University of Malakand, KPK, Pakistan.

Abstract. Flexible loops are loops satisfying x(yx) = (xy)x for all
x, y. An infinite family of non-associative flexible loops, whose smallest
member is of order 6, is constructed in this paper without the
use of a computer program.

Key words: Loops, flexible loops, construction of loops.

1. Introduction
A groupoid (Q, ·) is a quasigroup if for each a, b ∈ Q the equations
ax = b and ya = b have unique solutions x, y ∈ Q [1]. A loop is a
quasigroup with an identity element e such that x · e = x = e · x. A
loop identity is of Bol-Moufang type if two of its three variables occur
once on each side, the third variable occurs twice on each side, and the
order in which the variables appear on both sides is the same. There are
exactly 14 such varieties:

Sr. No.  Variety  Identity
1 Extra loops x(y(zx)) = ((xy)z)x
2 Moufang loops (xy)(zx) = (x(yz))x
3 Left Bol loops x(y(xz)) = (x(yx))z
4 Right Bol loops x((yz)y) = ((xy)z)y
5 C-loops x(y(yz)) = ((xy)y)z
6 LC-loops (xx)(yz) = (x(xy))z
7 RC-loops x((yz)z) = (xy)(zz)
8 Left alternative loops x(xy) = (xx)y
9 Right alternative loops x(yy) = (xy)y
10 Flexible loops x(yx) = (xy)x
11 3-power associative loops x(xx) = (xx)x
12 Left nuclear square loops (xx)(yz) = ((xx)y)z
13 Middle nuclear square loops x((yy)z) = (x(yy))z
14 Right nuclear square loops x(y(zz)) = (xy)(zz)



The left nucleus of a loop Q is Nλ = {l ∈ Q : l(xy) = (lx)y for every x, y ∈ Q}.
The right nucleus of a loop Q is the set Nρ = {r ∈ Q : (xy)r = x(yr) for every x, y ∈ Q},
and the middle nucleus of Q is Nµ = {m ∈ Q : (ym)x = y(mx) for every x, y ∈ Q}.
The nucleus of Q is the set N = Nρ ∩ Nλ ∩ Nµ.

A loop (L, ∗) is termed a flexible loop if the following identity is satisfied
for all x, y ∈ L:
(x ∗ y) ∗ x = x ∗ (y ∗ x).

Every Moufang loop is a flexible loop. In this paper we construct a
commutative flexible loop of order 6 which belongs to an infinite family of
commutative non-associative flexible loops constructed here for the first
time.

2. Construction of non-associative flexible loop

Let G be a multiplicative group with neutral element 1, and let A be an
additive abelian group with neutral element 0. Any map
µ : G × G → A
satisfying
µ(1, g) = µ(g, 1) = 0 for every g ∈ G
is called a factor set. When µ : G × G → A is a factor set, we can define a
multiplication on G × A by
(g, a)(h, b) = (gh, a + b + µ(g, h)).
The resulting groupoid is clearly a loop with neutral element (1, 0) and
will be denoted by (G, A, µ). Additional properties of (G, A, µ) can be
enforced by additional requirements on µ.

We construct a commutative non-associative flexible loop with the help
of two groups, one a multiplicative group and the other an additive abelian group.

Lemma. Let µ : G × G → A be a factor set. Then (G, A, µ) is a
commutative non-associative flexible loop if and only if
µ(g, h) + µ(gh, g) = µ(h, g) + µ(g, hg), ∀ g, h ∈ G. (1)

Proof. By definition the loop (G, A, µ) is a flexible loop if and only if

[(g, a)(h, b)](g, a) = (g, a)[(h, b)(g, a)]

⇒ (gh, a + b + µ(g, h))(g, a) = (g, a)(hg, a + b + µ(h, g))

⇒ ((gh)g, 2a + b + µ(g, h) + µ(gh, g)) = (g(hg), 2a + b + µ(h, g) + µ(g, hg)).

Comparing both sides we get

µ(g, h) + µ(gh, g) = µ(h, g) + µ(g, hg).

We call a factor set µ satisfying equation (1) a flexible factor set. □


Proposition. Let n ≥ 2 be an integer. Let A be an abelian group of
order n, and α ∈ A an element of order bigger than 1. Let G = {1, x, x²}
be the multiplicative group with neutral element 1. Define
µ : G × G → A
by
µ(a, b) = α if (a, b) = (x, x²) or (x², x), and µ(a, b) = 0 otherwise.
Then L = (G, A, µ) is a non-associative flexible loop with
N(L) = {(1, a) : a ∈ A}.
Proof. The map µ is clearly a factor set. It can be depicted as follows:

µ    1    x    x²
1    0    0    0
x    0    0    α
x²   0    α    0
To show that L = (G, A, µ) is a non-associative flexible loop, we verify
equation (1) as follows.

Case 1: Since µ is a factor set, there is nothing to prove when g = 1 or h = 1.

Case 2: When g = x, equation (1) becomes
µ(x, h) + µ(xh, x) = µ(h, x) + µ(x, hx).
If h = x ⇒ µ(x, x) + µ(x², x) = µ(x, x) + µ(x, x²) ⇒ α = α.
If h = x² ⇒ µ(x, x²) + µ(1, x) = µ(x², x) + µ(x, 1) ⇒ α = α.

Case 3: When g = x², equation (1) becomes
µ(x², h) + µ(x²h, x²) = µ(h, x²) + µ(x², hx²).
If h = x ⇒ µ(x², x) + µ(1, x²) = µ(x, x²) + µ(x², 1) ⇒ α = α.



If h = x² ⇒ µ(x², x²) + µ(x, x²) = µ(x², x²) + µ(x², x) ⇒ α = α.

Associativity:
(x, α)((x, 0)(x², α)) = (x, α)(1, 2α) = (x, 3α),
((x, α)(x, 0))(x², α) = (x², α)(x², α) = (x, 2α),
and 3α ≠ 2α because α has order bigger than 1, so
(x, α)((x, 0)(x², α)) ≠ ((x, α)(x, 0))(x², α),
which implies that L = (G, A, µ) is non-associative.

Now it remains to show that N(L) = {(1, a) : a ∈ A}. For this consider
((g, b)(1, a))(h, c) = (g, b)((1, a)(h, c))
⇒ (g, b + a + µ(g, 1))(h, c) = (g, b)(h, a + c + µ(1, h))
⇒ (g, b + a + 0)(h, c) = (g, b)(h, a + c + 0)
⇒ (gh, b + a + c + µ(g, h)) = (gh, b + a + c + µ(g, h)),
which is true, so (1, a) ∈ Nµ(L). Similarly we can show that
(1, a) ∈ Nλ(L) and (1, a) ∈ Nρ(L), hence
(1, a) ∈ N(L)
⇒ N(L) = {(1, a) : a ∈ A},
which is the required result. □
Example. The smallest group A satisfying the assumptions of the proposition
is the 2-element cyclic group {0, 1}. The construction of the proposition
with α = 1 then gives rise to the smallest non-associative flexible loop of
order 6.

·        (1, 0)   (1, 1)   (x, 0)   (x, 1)   (x², 0)  (x², 1)
(1, 0)   (1, 0)   (1, 1)   (x, 0)   (x, 1)   (x², 0)  (x², 1)
(1, 1)   (1, 1)   (1, 0)   (x, 1)   (x, 0)   (x², 1)  (x², 0)
(x, 0)   (x, 0)   (x, 1)   (x², 0)  (x², 1)  (1, 1)   (1, 0)
(x, 1)   (x, 1)   (x, 0)   (x², 1)  (x², 0)  (1, 0)   (1, 1)
(x², 0)  (x², 0)  (x², 1)  (1, 1)   (1, 0)   (x, 0)   (x, 1)
(x², 1)  (x², 1)  (x², 0)  (1, 0)   (1, 1)   (x, 1)   (x, 0)



· 0 1 2 3 4 5
0 0 1 2 3 4 5
1 1 0 3 2 5 4
2 2 3 4 5 1 0
3 3 2 5 4 0 1
4 4 5 1 0 2 3
5 5 4 0 1 3 2
We have verified the above example with the help of the GAP (Group Algorithm Program) package [4].
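Independently of GAP, the numeric Cayley table above (elements 0..5 in the same order) can be checked with a short script that verifies the loop axioms, the flexible law x(yx) = (xy)x, and exhibits a non-associative triple:

```python
# Cayley table of the order-6 loop from the example above
T = [
    [0, 1, 2, 3, 4, 5],
    [1, 0, 3, 2, 5, 4],
    [2, 3, 4, 5, 1, 0],
    [3, 2, 5, 4, 0, 1],
    [4, 5, 1, 0, 2, 3],
    [5, 4, 0, 1, 3, 2],
]

def mul(a, b):
    return T[a][b]

# Quasigroup with identity 0: every row and column is a permutation of 0..5
assert all(sorted(row) == list(range(6)) for row in T)
assert all(sorted(T[r][c] for r in range(6)) == list(range(6)) for c in range(6))
assert all(mul(0, a) == a == mul(a, 0) for a in range(6))

# Flexible identity x(yx) = (xy)x holds for all pairs
assert all(mul(x, mul(y, x)) == mul(mul(x, y), x)
           for x in range(6) for y in range(6))

# Non-associativity: collect triples with x(yz) != (xy)z
bad = [(x, y, z) for x in range(6) for y in range(6) for z in range(6)
       if mul(x, mul(y, z)) != mul(mul(x, y), z)]
assert bad  # e.g. bad[0] == (2, 2, 4): 2(2·4) = 3 but (2·2)4 = 2
```

Since the table is symmetric, the loop is commutative, and commutativity alone already forces the flexible identity; the brute-force check confirms it anyway.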
References
[1] R. H. Bruck, A Survey of Binary Systems, Ergebnisse der Mathematik und Ihrer
Grenzgebiete (New Series), 20, Springer, 1958.
[2] M. K. Kinyon, Kyle Pula and P. Vojtechovsky, Admissible Orders Of Jordan
Loops, Journal of Combinatorial Designs, 17(2), (2009), 103–118.
[3] K. Mc Crimmon, A Taste of Jordan Algebras, Universitext, Springer, 2004.
[4] G. P. Nagy and P. Vojtechovsky, LOOPS: Computing with Quasigroups
and Loops in GAP, version 1.0.0, Computational Package for GAP;
http://www.math.du.edu/loops.
[5] J. D. Philips and P. Vojtechovsky, C-loops: An Introduction, Publicationes
Mathematicae Debrecen, 68(1-2), (2006), 115–137.
[6] K. Pula, Power of Elements in Jordan loops, Commentationes Mathematicae
Universitatis Carolinae (to appear).
[7] J. Slaney and A. Ali, Generating Loops with the Inverse Property, Sutcliffe G.,
Colton S., Schulz S. (eds.), Proceedings of ESARM (2008), 55–66.
[8] J. Slaney, FINDER: Finite Domain Enumerator, System Description, Proceedings
of the Twelfth Conference on Automated Deduction (CADE-12), (1994), 798–801.
[9] W. B. Vasantha Kandasamy, Smarandache Loops, American Research Press,
Rehoboth, 2002.


A Haar wavelet collocation method for recovering time-space dependent heat source

Muhammad Ahsan a,b,∗, Iltaf Hussain a

a Department of Basic Sciences, UET Peshawar, Pakistan.
b Department of Mathematics, University of Swabi, Pakistan.
Abstract
In this paper, we develop a new hybrid Haar wavelet collocation method (HWCM) for the numerical solution
of the inverse heat equation with an unknown time-space dependent heat source. Due to the ill-posedness of
inverse heat equations, the output heat source solution is very difficult to capture accurately. In this
hybrid HWCM, a first order finite difference is used for the time derivative and Haar series are used for the spatial
derivative approximation. Different types of test problems (heat sources separable or non-separable in
space and time) are solved with the present method, which shows effective and stable results, even for
highly ill-posed problems under quite large noise levels.

1 Introduction
In this paper, we want to find the unknown heat source that depends on both time and space variables in
the inverse heat equation. These types of inverse equations arise in diffusion processes, heat conduction
and the transport of natural materials. Mathematically we can write the inverse heat equation as:

Ψt (x, t) = Ψxx (x, t) + f(x, t), 0 < x < 1, t > 0, (1)

Ψ(x, 0) = Φ(x)
Ψ(0, t) = go (t) (2)
Ψ(1, t) = g1 (t)
and
Ψ(x, tf ) = h(x). (3)
where Ψ is the state function and f(x, t) is a physical source term, which is considered a heat source in this
paper. We use the overspecified condition Eq. (3), measured at the final time tf, which is perturbed by a noise level
ε and defined as:
ĥ(x) = h(x) + εR(x),
where R(x) ∈ [−1, 1].
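Generating the noisy overspecified data ĥ(x) = h(x) + εR(x) can be sketched as follows; the function name and the uniform sampling of R(x) on [−1, 1] are illustrative assumptions:

```python
import random

def noisy_final_data(h, xs, eps, seed=0):
    """Perturb exact final-time data h(x) on the grid xs with noise level eps."""
    rng = random.Random(seed)  # seeded so experiments are reproducible
    return [h(x) + eps * rng.uniform(-1.0, 1.0) for x in xs]
```

Every sample stays within eps of the exact value, matching the bound |ĥ(x) − h(x)| ≤ ε implied by R(x) ∈ [−1, 1].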
For the inverse heat equation there are many articles with different source terms; for example, Cannon and DuChateau
found H(u) in [1]. Siraj-ul-Islam and Muhammad Ahsan identified a time dependent heat source
H(t) [2] accurately by HWCM. The same authors also identified a time dependent heat
source H(t) accurately by a meshless collocation method [3]. Farcas and Lesnic found the space dependent
heat source F(x) in [4]. Chein-Shan Liu used the homogenized function method to find a heat source of the form
F(x, t) = G(x) + H(t) [5]. Hasanov calculated F(x) and H(t) separately in the heat source
F(x)H(t) [6].
∗ Email: ahsan_kog@yahoo.com


1.1 Haar wavelets

A Haar wavelet function can be defined as

hi(x) =   1    for x ∈ [ζ1, ζ2),
         −1    for x ∈ [ζ2, ζ3),        (4)
          0    elsewhere,

where
ζ1 = k/m,   ζ2 = (k + 0.5)/m,   and   ζ3 = (k + 1)/m.
The above parameters have been given in [2]. We define the following notations for the integrals of the Haar
wavelets:

pi,1(x) = ∫₀ˣ hi(x′) dx′,

pi,2(x) = ∫₀ˣ pi,1(x′) dx′,

and

Ci = ∫₀¹ pi,1(x′) dx′.

Using Eq. (4), we get

pi,1(x) =  x − ζ1    for x ∈ [ζ1, ζ2),
           ζ3 − x    for x ∈ [ζ2, ζ3),
           0         elsewhere,

pi,2(x) =  (x − ζ1)²/2              for x ∈ [ζ1, ζ2),
           1/(4m²) − (ζ3 − x)²/2    for x ∈ [ζ2, ζ3),
           1/(4m²)                  for x ∈ [ζ3, 1),
           0                        elsewhere,

and

Ci = 1/(4m²).

2 Numerical Approximation
We define the following approximations for Eq. (1):

Ψxx(x, t) = Σ_{i=1}^{2M} λi hi(x)        (5)

and

f(x, t) = Σ_{i=1}^{2M} αi hi(x).         (6)

Integrating Eq. (5) from 0 to x with respect to x, we get

Ψx(x, t) = Ψx(0, t) + Σ_{i=1}^{2M} λi pi,1(x).        (7)

Integrating Eq. (7) from 0 to 1 with respect to x, we get

Ψx(0, t) = Ψ(1, t) − Ψ(0, t) − Σ_{i=1}^{2M} λi Ci.        (8)


Table 1: The L∞(f(x, t)) at different ε and t with N = 32 and ∆t = 0.01 for Test Problem 1.

t      ε = 0.1%          ε = 1%
0.1    6.894 × 10^−2     3.489 × 10^−1
0.5    1.028 × 10^−1     3.536 × 10^−1
0.7    1.256 × 10^−1     3.568 × 10^−1
1      1.695 × 10^−1     3.630 × 10^−1

Putting Eq. (8) in Eq. (7) and further integrating from 0 to x with respect to x, we get

Ψ(x, t) = Ψ(0, t) + x(Ψ(1, t) − Ψ(0, t)) + Σ_{i=1}^{2M} λi (pi,2(x) − x Ci).        (9)

The implicit scheme for Eq. (1) is

[Ψ(x, t)]^{j+1} − δt [Ψxx(x, t)]^{j+1} − δt [f(x, t)]^{j+1} = [Ψ(x, t)]^j.        (10)

By putting Eq. (5), Eq. (6) and Eq. (9) in Eq. (10), we get 2M equations with 4M unknown coefficients.
Another 2M equations can be obtained by putting t = tf in Eq. (9). Now we have a complete system of
4M equations with 4M unknown coefficients. By finding these unknown coefficients and plugging them back into
Eq. (9) and Eq. (6), we can find Ψ(x, t) and f(x, t), respectively.

3 Numerical Results
In this section we include two different types of source terms (separable or non-separable in space and
time). For verification of the numerical results we use the maximum absolute error (L∞).
Test Problem 1. Consider the heat source as a product of space and time functions, i.e. f(x, t) =
F(x)H(t), where F(x) = sin(x) and H(t) = 2e^t. The exact solution of Eq. (1) is Ψ(x, t) = e^t sin(x). The
boundary, initial and overspecified conditions can be obtained from the exact solution.
The numerical results for various noise levels and different final times are given in Table 1. These
results show that the Haar wavelet collocation method handles this type of inverse heat equation with
considerable accuracy.
Test Problem 2. Consider the heat source as a sum of space and time functions, i.e. f(x, t) = F(x) +
H(t), where F(x) = 2π² sin(πx) and H(t) = π cos(πt). The exact solution of Eq. (1) is Ψ(x, t) =
(2 − e^{−π²t}) sin(πx) + sin(πt). In comparison, the polynomial method (nc = 1102, n = 10) [5] has a
maximum relative error for f(x, t) of 3.55 × 10^−1, while HWCM gives 6.777 × 10^−1 at M = 16 for noise
level ε = 20%. In Fig. 1 the comparison of HWCM with the exact solution is given at different noise levels,
which shows the agreement of the numerical and exact values. Even at the large noise level ε = 10%, HWCM gives
considerably accurate and stable results.

4 Conclusion
In this paper we have implemented HWCM for the numerical solution of a time-space dependent source term in an
inverse problem. Due to the small condition number of the coefficient matrix, the calculated results are
accurate and acceptable.


[Six panels: Ψ(x, t) and f(x, t) plotted against x at noise levels ε = 0.1%, ε = 1% and ε = 10%.]

Figure 1: Comparison of HWCM (blue circles) with the exact solution (red lines) for Test Problem 2, at M = 16,
∆t = 0.01 and t = 1, for different noise levels ε.


References
[1] J. Cannon, P. DuChateau, Structural identification of an unknown source term in a heat equation,
Inverse problems 14 (3) (1998) 535.
[2] Siraj-ul-Islam, M. Ahsan, I. Hussian, A multi-resolution collocation procedure for time-dependent
inverse heat problems, International Journal of Thermal Sciences 128 (2018) 160–174.
[3] Siraj-ul-Islam, S. Ismail, Meshless collocation procedures for time-dependent inverse heat problems,
International Journal of Heat and Mass Transfer 113 (2017) 1152–1167.
[4] A. Farcas, D. Lesnic, The boundary-element method for the determination of a heat source dependent
on one variable, Journal of Engineering Mathematics 54 (4) (2006) 375–388.
[5] C.-S. Liu, To recover heat source G (x)+ H (t) by using the homogenized function and solving rectan-
gular differencing equations, Numerical Heat Transfer, Part B: Fundamentals 69 (4) (2016) 351–363.
[6] A. Hasanov, Identification of spacewise and time dependent source terms in 1d heat conduction equa-
tion from temperature measurement at a final time, International Journal of Heat and Mass Transfer
55 (7-8) (2012) 2069–2080.


A comparative analysis of meshless and Sinc-collocation methods for some PDEs

Farooq Khan a,∗, Masood Ahmad b, Siraj-ul-Islam c

a,b,c Department of Basic Sciences, UET Peshawar, Pakistan.

Abstract
The Korteweg-de Vries (KDV) equation is an outstanding model of weakly nonlinear shallow water
waves. Recently, investigations of these kinds of wave models have been reported by many researchers.
In the present paper, numerical procedures for the solution of the KDV equation using the
Sinc function and RBFs (radial basis functions) are applied. In the meshless procedure, the ordinary
RBF and its integrated form are used as basis functions. The time derivative is discretized
by the θ-weighted finite difference technique. For testing the method, we select some PDEs, such as the
Kuramoto-Sivashinsky equation, and compare the results with other methods.

Keywords: Sinc function, radial basis function, Sinc collocation method, KDV equation.

1 Introduction
The Korteweg-de Vries (KDV) equation was proposed as a model of weakly nonlinear shallow
water waves for the first time in 1895. Recently, investigations of these kinds of equations have
been reported in the literature. Beyond fluid dynamics, it has become an active field of research
in physics, applied mathematics and related disciplines [1]. It has found applications related
to nonlinear heat transfer in crystal lattices, ion acoustic waves in plasmas, blood flow in
veins, and even cosmology [1].
The purpose of this study is to evaluate the performance of the meshless and Sinc collocation methods
for the solution of non-linear, third-order boundary-value problems. The meshless and Sinc collocation
methods, based on RBF and Sinc functions, have been studied and used extensively
for the numerical solution of PDEs [1–5, 7]. In these procedures, the coefficient matrices are dense,
unlike in finite-element methods, spline collocation methods and finite-difference methods. A radial
basis function expansion depends on the collocation points (scattered or uniform) through a plain series
expression. It is based on a distance norm ri = ||z − zi||, which makes it easily extendable to solving
PDEs in n space dimensions in a simple way. It is shown in the literature that the RBF collocation
technique converges spectrally for smooth problems. In addition, the accuracy of RBF based
meshless methods depends on the shape parameter (e.g., in the multiquadric, (r² + c)^{1/2}) [3].

The author to whom all correspondence should be addressed. Email: siraj-ul-islam@nwfpuet.edu.pk


2 Construction of the method


We consider the following initial-boundary value problem [1]

    u_t + α u_z + β u u_z + γ u_{zzz} = 0,
    u(z, 0) = 0,
    u(a, t) = f(t),                                  (1)
    u(b, t) = 0,
    u_z(b, t) = 0,

where f(t) is a given known function, the parameters α, β and γ are constants, z is the spatial
variable, u is the unknown function, t is the temporal variable, and u_t and u_z denote the first
order partial derivatives with respect to t and z respectively.
The Sinc function is defined as [7]

    Sinc(z) = 1                  if z = 0,
              sin(πz)/(πz)       if z ≠ 0.           (2)

For an equally spaced step δ and a sequence of nodes, the translated Sinc function can be written as

    S(j, δ)(z) = Sinc((z − jδ)/δ).                   (3)
The Whittaker cardinal function C(f, δ) of a function f is defined as [6, 8]

    C(f, δ)(z) = Σ_{j=−∞}^{∞} f(jδ) S(j, δ)(z),      (4)

provided the sum converges. For practical purposes, we truncate the series to a finite sum.
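As an illustration we add here (the step size, truncation range and test function are our own
choices, not the paper's), the truncated cardinal series (4) can be sketched in a few lines using
the standard sin(πz)/(πz) normalization:

```python
import math

def sinc(z):
    # Sinc(z) = sin(pi*z) / (pi*z), with the removable singularity at z = 0
    return 1.0 if z == 0.0 else math.sin(math.pi * z) / (math.pi * z)

# Truncated Whittaker cardinal series on uniform nodes j*delta, |j| <= N
delta, N = 0.1, 60
nodes = [j * delta for j in range(-N, N + 1)]
fvals = [math.exp(-z * z) for z in nodes]          # sample f(z) = exp(-z^2)

def cardinal(z):
    # C(f, delta)(z) truncated to the stored nodes
    return sum(f * sinc((z - zj) / delta) for zj, f in zip(nodes, fvals))

err = abs(cardinal(0.237) - math.exp(-0.237 ** 2))
```

For a rapidly decaying analytic function such as the Gaussian above, the truncated series
reproduces f at off-node points essentially to machine precision, which is the spectral accuracy
mentioned in the introduction.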

A radial basis function is a function ψ : R^s → R of r = ||z||, where ||·|| is some norm on R^s.
The most popular radial basis functions used in applications are

• Multiquadric (MQ)
    √(1 + (εr)^2).                                   (5)

• Inverse Quadric (IQ)
    (1 + (εr)^2)^{−1}.                               (6)

• Gaussian (GA)
    e^{−(εr)^2}.                                     (7)
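The three kernels (5)–(7) translate directly into code; this is a sketch we add for concreteness
(the function names and sample values are ours):

```python
import math

def mq(r, eps):
    # Multiquadric (5)
    return math.sqrt(1.0 + (eps * r) ** 2)

def iq(r, eps):
    # Inverse quadric (6)
    return 1.0 / (1.0 + (eps * r) ** 2)

def ga(r, eps):
    # Gaussian (7)
    return math.exp(-((eps * r) ** 2))

# All three kernels equal 1 at zero distance, and eps controls their flatness.
vals = [mq(0.0, 2.0), iq(0.0, 2.0), ga(0.0, 2.0)]
```

In a collocation matrix the argument would be the norm distance r = ||z − z_i|| between an
evaluation point and a center, with eps playing the role of the shape parameter discussed below.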

We refer to [1–3] for a detailed explanation of the Sinc and radial basis functions and of how to
compute their derivatives. To approximate the solution of (1), these derivatives must be
calculated. We use a θ-weighted scheme to discretize (1) as

    (u^{k+1} − u^k)/δ_t + θ[(α + β u^{k+1}) u_z^{k+1} + γ u_{zzz}^{k+1}]
        + (1 − θ)[(α + β u^k) u_z^k + γ u_{zzz}^k] = 0,        (8)

where u^k = u(z, t = t_k) denotes the value of the solution at the k-th time step. We generate the
collocation points in the spatial dimension as z_i = a + (i − 1)δ_z, where δ_z = |b − a|/n_z and
n_z is the number of grid points. The solution can then be interpolated by Sinc functions as

58
1st National Conference on Mathematical Sciences in Engineering Applications (NCMSEA - 18), April 18 - 19, 2018

    u(z_i, t_k) ≡ u^k(z_i) ≈ Σ_{j=1}^{n_z} u_j^k S_j(z_i).    (9)

The non-linear term u^{k+1} u_z^{k+1} in (8) is linearized by multiplying the Taylor series
expansions of u^{k+1} and u_z^{k+1} and neglecting O(δ_t^2) and higher-order terms:

    u^{k+1} = u(t_k) + δ_t u_t(t_k) + O(δ_t^2),                 (10)

and

    u_z^{k+1} = u_z(t_k) + δ_t u_{zt}(t_k) + O(δ_t^2),          (11)

so

    u^{k+1} u_z^{k+1} = [u(t_k) + δ_t u_t(t_k) + O(δ_t^2)] [u_z(t_k) + δ_t u_{zt}(t_k) + O(δ_t^2)]    (12)
        = u^k u_z^k + δ_t u_z^k (u^{k+1} − u^k)/δ_t + δ_t u^k (u_z^{k+1} − u_z^k)/δ_t + O(δ_t^2)      (13)
        ≈ u^{k+1} u_z^k + u^k u_z^{k+1} − u^k u_z^k.            (14)

Substituting (9) and (14) into (8) yields the following system of equations:

    Σ_{j=0}^{n_z} u_j^{k+1} S_j^{(0)}(z_i) + δ_t θ α Σ_{j=0}^{n_z} u_j^{k+1} S_j^{(1)}(z_i)
        + δ_t θ γ Σ_{j=0}^{n_z} u_j^{k+1} S_j^{(3)}(z_i)
    + θ δ_t β [ Σ_{j=0}^{n_z} u_j^k S_j^{(0)}(z_i) Σ_{j=0}^{n_z} u_j^{k+1} S_j^{(1)}(z_i)
        + Σ_{j=0}^{n_z} u_j^{k+1} S_j^{(0)}(z_i) Σ_{j=0}^{n_z} u_j^k S_j^{(1)}(z_i) ]
    = Σ_{j=0}^{n_z} u_j^k S_j^{(0)}(z_i) − δ_t (1 − θ) α Σ_{j=0}^{n_z} u_j^k S_j^{(1)}(z_i)
        − δ_t (1 − θ) γ Σ_{j=0}^{n_z} u_j^k S_j^{(3)}(z_i)
    + δ_t (2θ − 1) β Σ_{j=0}^{n_z} u_j^k S_j^{(0)}(z_i) Σ_{l=0}^{n_z} u_l^k S_l^{(1)}(z_i),    (15)

for i = 1, . . . , nz − 1, and

    Σ_{j=0}^{n_z} u_j^k S_j^{(0)}(a) = f(t_k),
    Σ_{j=0}^{n_z} u_j^k S_j^{(0)}(b) = 0,               (16)
    Σ_{j=0}^{n_z} u_j^k S_j^{(1)}(b) = 0.

59
1st National Conference on Mathematical Sciences in Engineering Applications (NCMSEA - 18), April 18 - 19, 2018

To write Eqs. (15) and (16) in matrix notation, we use the following symbols:

    U^k = [u_1^k, u_2^k, . . . , u_{n_z}^k]^T,
    S0 = S_j(z_i),        i, j = 1, . . . , n_z,
    S1 = S_j'(z_i),       i, j = 1, . . . , n_z,
    S3 = S_j'''(z_i),     i, j = 1, . . . , n_z,        (17)
    N1^k = U^k ∗ S1,
    N2^k = U_z^k ∗ S0,

where ∗ denotes component-by-component multiplication. Equations (15) and (16) now become

    [S0 + δ_t θ α S1 + δ_t θ β (N1^k + N2^k) + δ_t θ γ S3] u^{k+1}
        = [S0 − (1 − θ) δ_t α S1 + (2θ − 1) δ_t β N1^k − (1 − θ) δ_t γ S3] u^k,    (18)

or

    A u^{k+1} = B u^k,                                  (19)

where

    A = S0 + δ_t θ α S1 + δ_t θ β (N1^k + N2^k) + δ_t θ γ S3,
    B = S0 − (1 − θ) δ_t α S1 + (2θ − 1) δ_t β N1^k − (1 − θ) δ_t γ S3.

Using the above equations, we get

    U^{k+1} = A^{−1} B U^k.                             (20)
By solving the system (20), we obtain the approximate solution U^{k+1} at time level k + 1. For the
meshless procedure, the same steps are followed with the Sinc functions replaced by the chosen
radial basis function.
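The update (20) amounts to repeatedly applying A^{-1}B at each time step. Its behavior is easiest
to see on a scalar analogue (our toy problem, not the paper's system), where for the test equation
u' = λu the θ-weighted scheme gives A = 1 − δ_t θ λ and B = 1 + δ_t (1 − θ) λ:

```python
import math

# Scalar analogue of (19)-(20): for u' = lam*u the theta-weighted scheme reads
# A*u_new = B*u_old with A = 1 - dt*theta*lam, B = 1 + dt*(1 - theta)*lam.
# theta = 0.5 (Crank-Nicolson) is second-order accurate in dt.
lam, theta, dt, T = -2.0, 0.5, 0.01, 1.0
A = 1.0 - dt * theta * lam
B = 1.0 + dt * (1.0 - theta) * lam
u = 1.0                               # initial condition u(0) = 1
for _ in range(int(round(T / dt))):
    u = B * u / A                     # scalar version of U^{k+1} = A^{-1} B U^k
err = abs(u - math.exp(lam * T))      # compare with the exact solution e^{lam*T}
```

With θ = 0.5 the error above is O(δ_t^2); θ = 1 (fully implicit) would give first-order accuracy
but stronger damping.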

3 Numerical results
In this section, we consider the following KDV equation to investigate the accuracy and condi-
tioning of the collocation matrices of the proposed methods.

    u_t + u u_z + u_{zzz} = 0,
    u(z = 0) = u(z = L) = u_z(z = 0) = 0,               (21)
    u(t = 0) = A sech^2(κz − z_0),

where A = 2κ^2. The analytical solution of the above equation is given below:

    u(t, z) = A sech^2(κz − ωt − z_0).                  (22)

We take z_0 = 0, κ = 0.3, and the spatial interval [−30, 30]. The final time is taken as T = 10.
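With these parameter values, the collocation grid of Section 2 and the initial sech^2 profile can
be set up directly (the grid resolution n_z below is our choice; the paper varies it):

```python
import math

# Collocation grid z_i = a + i*dz (0-indexed here) and the initial profile
# u(z, 0) = A * sech^2(kappa*z - z0) with the quoted parameter values.
a, b, nz = -30.0, 30.0, 100
dz = (b - a) / nz
kappa, z0 = 0.3, 0.0
A = 2.0 * kappa ** 2                  # A = 2*kappa^2 = 0.18

grid = [a + i * dz for i in range(nz + 1)]
u0 = [A / math.cosh(kappa * z - z0) ** 2 for z in grid]
```

The profile peaks at value A at the soliton center κz = z_0 and decays exponentially toward the
boundaries, which is why the homogeneous boundary conditions in (21) are a good approximation on
[−30, 30].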


Figure 1: Relation between L∞ and shape parameter (Sh) for IMQ and MQ.

Table 1: Comparison of maximum absolute error and condition number of Sinc, IMQ and MQ at
different collocation points.

N     Sinc                       IMQ                        MQ
      L∞          κ              L∞          κ              L∞          κ
30    8.5188e−03  1.005e+00      1.7755e−02  1.8829e+10     3.1216e−02  6.5862e+03
50    2.2319e−04  1.0008e+00     1.5091e−03  1.1754e+12     1.8193e−03  1.1956e+05
70    1.6419e−06  1.0011e+00     1.0871e−03  1.7678e+13     5.5225e−05  1.6355e+06
90    1.6258e−07  1.0014e+00     2.6386e−04  1.3392e+14     2.1056e−05  1.9544e+07
110   2.2107e−07  1.0016e+00     3.2642e−04  6.7636e+14     2.3861e−05  2.1564e+08

The accuracy and the dependence of the proposed methods on the shape parameter of the MQ and IMQ
are shown in Figure 1, while the accuracy and conditioning of the collocation matrices are shown in
Table 1. It is clear from Figure 1 that the accuracy of MQ worsens as the value of the shape
parameter increases; on the other hand, the accuracy of IMQ shows only small fluctuations with
respect to the shape parameter. From Table 1 it is clear that the accuracy and conditioning of the
Sinc collocation method are better than those of the proposed meshless methods. However, the Sinc
collocation method is based on grid points, while no grid is needed in the meshless methods. Grid
formation becomes difficult in higher dimensions.

4 Conclusion
The comparison of the Sinc collocation method and the radial basis function method is discussed in
the present paper. We considered a non-linear KDV equation of third order. On the basis of the
present work, we conclude the following:
• The accuracy and conditioning of the Sinc collocation method are better than those of the
proposed meshless methods at different numbers of nodes.
• The Sinc collocation method is based on a regular grid. The formation of a grid is itself a
time-consuming process in higher dimensions, and such a method is not suited for irregular domains
and scattered data. The benefit of the meshless methods is that no grid is needed; the meshless
solution can be obtained on both uniform and scattered nodes.


References
[1] Kamel Al-Khaled, Nicholas Haynes, William Schiesser, and Muhammad Usman, Eventual
periodicity of the forced oscillations for a Korteweg–de Vries type equation on a bounded domain
using a sinc collocation method, Journal of Computational and Applied Mathematics 330
(2018), 417–428.

[2] Wen Chen, Zhuo-Jia Fu, and Ching-Shyang Chen, Recent advances in radial basis function
collocation methods, Springer, 2014.

[3] Carsten Franke and Robert Schaback, Solving partial differential equations by collocation
using radial basis functions, Applied Mathematics and Computation 93 (1998), no. 1, 73–82.

[4] Sirajul Haq, Nagina Bibi, Syed Ikram A Tirmizi, and M Usman, Meshless method of lines for
the numerical solution of generalized Kuramoto–Sivashinsky equation, Applied Mathematics
and Computation 217 (2010), no. 6, 2404–2413.

[5] Reza Mokhtari and Maryam Mohammadi, Numerical solution of GRLW equation using sinc-
collocation method, Comp. Phys. Sim. 181 (2010), 1266–1274.

[6] Frank Stenger, Numerical methods based on Whittaker cardinal, or sinc functions, SIAM
Review 23 (1981), no. 2, 165–224.

[7] Frank Stenger, Handbook of sinc numerical methods, CRC Press, 2016.

[8] Frank Stenger, Numerical methods based on sinc and analytic functions, Springer, 1993.


On numerical evaluation of the oscillatory integrals of Bessel type

Iqrar Hussain^a∗, Sakhi Zaman^a, Siraj-ul-Islam^a

a Department of Basic Sciences, University of Engineering and Technology Peshawar, Pakistan.

Abstract
Efficient and accurate numerical approximation of integrals with highly oscillatory special
functions like Bessel, Airy and Hankel functions is one of the key problems in science and
engineering. In this paper, we suggest a new modified form of Levin's method, based on multiquadric
radial basis functions, to evaluate integrals involving the Bessel function of order η. The method
converts the oscillatory integral into a system of coupled ordinary differential equations, whose
numerical solution is subsequently found by a meshless procedure. Some multi-resolution quadratures
are also used to compute these integrals for comparison. Numerical test problems are included to
illustrate the accuracy of the proposed methods.

Key words: Integrals with oscillatory Bessel function, Levin collocation method, Multi-resolution
quadratures.

1 Introduction
Numerical approximation of highly oscillatory integrals is a key problem in different fields of
science such as electromagnetics, optics, diffraction theory, seismology, image processing,
electrodynamics, acoustics and quantum mechanics [2, 5, 9, 12]. Let us consider a class of
oscillatory integrals given by

    I = ∫_{a1}^{b1} Σ_{j1=1}^{n1} g_{j1}(y) ζ_{j1}(µ, y) dy = ∫_{a1}^{b1} G(y) · χ(µ, y) dy,    (1)

where G(y) = (g_1(y), g_2(y), . . . , g_{n1}(y))^T is an n1 × 1 vector of non-oscillatory functions
and χ(µ, y) = (ζ_1(µ, y), ζ_2(µ, y), . . . , ζ_{n1}(µ, y))^T is an n1 × 1 vector of linearly
independent highly oscillatory functions containing Bessel, Airy and Hankel functions. The
frequency parameter µ is a large positive integer, and a1 and b1 are finite real numbers. In the
present research study, the numerical evaluation of oscillatory integrals of Bessel square type is
considered:

    I[g, µ] = ∫_{a1}^{b1} g(y) J_η^2(µy) dy,            (2)

where J_η^2(µy) is the square of the oscillatory Bessel function.
Table 1: Symbols box:

Symbol           Description
MQ RBF           Multiquadric RBF
S̃(y)            Approximate value of S
η                Order of the Bessel function of the first kind
C                Shape parameter of the multiquadric radial basis functions
χ(µ, y)          A vector of oscillatory Bessel functions
ζ_{i1}           Element of χ(µ, y)
O(µ)             Asymptotic order of convergence in the frequency parameter µ
|Er|             Absolute error
|Erel|           Relative error
Q_b^{ML}[g]      Levin-based meshless procedure
Q_h^{m1}[g]      Hybrid-functions-based quadrature of order m1

For the numerical evaluation of one-dimensional highly oscillatory integrals of the form (2), many
accurate methods have been developed, such as the Levin collocation method [6, 7, 10, 13],
generalized quadrature rules [3, 4, 11] and the homotopy perturbation method [2], among others.
In [6] the author presented a new approach for the numerical approximation of the integral (1),
and in the same paper proposed an approach for the approximation of oscillatory integrals of
Bessel-trigonometric and Bessel square type. In [7] the author derived error bounds for the method
given in [6].
In [10] the author proposed a new error analysis for the evaluation of the oscillatory integrals
involving Bessel functions given in [6], and calculated the asymptotic order of convergence, which
is O(µ^{−5/2}). In [11] the authors concluded that generalized quadrature rules work well for
Bessel-trigonometric transformations and that their accuracy increases as the oscillations become
faster.
In [13] the authors proposed new rules for the approximation of highly oscillatory integrals
containing Bessel and Bessel-trigonometric functions on uniform and scattered nodes. Moreover,
Hybrid functions and Haar wavelets were used to handle singularities.
In the current paper, we use the Levin collocation method with MQ RBFs and multi-resolution
quadratures like Hybrid functions [1, 8] for the approximation of Bessel square highly oscillatory
integrals.

2 Working Procedure
In this section, we discuss the procedures used to evaluate Bessel-type oscillatory integrals.

2.1 Meshless Method based on Levin Approach


The vector χ(µ, y) in (1) satisfies the following equation

χ0 (µ, y) = H(µ, y)χ(µ, y),

in which H(µ, y) is an n1 × n1 square matrix whose entries are non-oscillatory functions. Let
S̃(y), with components S̃_{j1}(y) = Σ_{i1=1}^{m1} β_{i1}^{[j1]} ψ_{i1}(y), j1 = 1, 2, . . . , n1,
be an approximate solution of the ordinary differential equation

    S̃′(y) + H^T(µ, y) S̃(y) = G(y),                    (3)

where ψ_{i1}(y) is the MQ RBF. Integral (1) can then be written as

    Q_b^{ML}[g] = ∫_{a1}^{b1} (S̃′(y) + H^T(µ, y) S̃(y)) · χ(µ, y) dy
                = ∫_{a1}^{b1} d[S̃(y) · χ(µ, y)] = S̃(b1) · χ(µ, b1) − S̃(a1) · χ(µ, a1).    (4)
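The essence of the boundary-term identity (4) can be illustrated on a scalar oscillator (a sketch
we add here, not the authors' Bessel system): for w(y) = e^{iµy} one has w′ = iµw, so if S satisfies
S′ + iµS = g, then (Sw)′ = gw and the oscillatory integral collapses to boundary terms. For constant
g the ODE even has the exact solution S = g/(iµ):

```python
import cmath

# Levin identity for the scalar oscillator w(y) = exp(i*mu*y):
# if S'(y) + i*mu*S(y) = g(y) then integral of g*w equals S(b)*w(b) - S(a)*w(a).
# For constant g, S = g / (i*mu) solves the ODE exactly.
mu, a, b, g = 500.0, 1.0, 2.0, 1.0
S = g / (1j * mu)
levin = S * cmath.exp(1j * mu * b) - S * cmath.exp(1j * mu * a)
# direct antiderivative of exp(i*mu*y) for comparison
exact = (cmath.exp(1j * mu * b) - cmath.exp(1j * mu * a)) / (1j * mu)
```

For non-constant g no closed-form S exists, which is exactly where the RBF collocation of the next
paragraphs comes in: S̃ is expanded in MQ basis functions and the ODE (3) is enforced at the
collocation points.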

In particular, for the integral (2), the matrix H(µ, y) and the vectors χ(µ, y) and G(y) are given
by

    H(µ, y) = [ 2(η−1)/y   −2µ     0
                µ          −1/y    −µ
                0          2µ      −2η/y ],

    χ(µ, y) = [ J_{η−1}^2(µy),  J_{η−1}(µy) J_η(µy),  J_η^2(µy) ]^T,   G(y) = [ 0, 0, g(y) ]^T.    (5)
P >
m1 [1] Pm1 [2] Pm1 [3]
In this case, we assume an approximate solution S̃(y) = β
i1 =1 i1 ψi 1 (y), β
i1 =1 i1 ψi 1 (y), β
i1 =1 i1 ψ i 1 (y) ,
[1] [2] [3]
having unknown coefficients βi1 ,βi1 and βi1 , i1 = 1, 2, ..., m1 . The unknown coefficients can be
determined by imposing the interpolation conditions on the following ordinary differential equa-
tion:
S̃0 (yk ) + H> (µ, yk )S̃(yk ) = G(yk ), k1 = 1, 2, ..., m1 . (6)
For this purpose, the MQ RBF ψ(||y − y^c||_2, C) is used, given by

    ψ(||y − y^c||_2, C) = √((y − y^c)^2 + C^2),         (7)
    ψ′(||y − y^c||_2, C) = (y − y^c)/√((y − y^c)^2 + C^2),    (8)
where C is the shape parameter and y_{i1}^c, i1 = 1, 2, . . . , m1, are the m1 centers of the
radial basis function interpolant. On substituting the values of S̃(y_{k1}), S̃′(y_{k1}),
H^T(µ, y_{k1}) and G(y_{k1}) into (3), we get a coupled system of the form

    β^{[1]} Ψ1 + β^{[2]} Ψ2 + β^{[3]} Ψ3 = 0,
    β^{[1]} Ψ4 + β^{[2]} Ψ5 + β^{[3]} Ψ6 = 0,
    β^{[1]} Ψ7 + β^{[2]} Ψ8 + β^{[3]} Ψ9 = g(y_{k1}),   (9)

which is tri-diagonal in the three unknowns β^{[1]}, β^{[2]} and β^{[3]}.
Each Ψ_p, p = 1, . . . , 9, in system (9) is an m1 × m1 square matrix. The coupled system (9) can
be written in matrix form as

    [ Ψ1  Ψ2  0  ] [ β^{[1]} ]   [ 0         ]
    [ Ψ4  Ψ5  Ψ6 ] [ β^{[2]} ] = [ 0         ],         (10)
    [ 0   Ψ8  Ψ9 ] [ β^{[3]} ]   [ g(y_{k1}) ]
or

    Dβ = G,

where D is a tri-diagonal block matrix of order 3m1 × 3m1 and β and G are 3m1 × 1 vectors. The main
aim of the proposed work is to obtain an accurate approximation of the solution of the ordinary
differential equation (3). In this method, the coefficients β_{i1}^{[1]}, β_{i1}^{[2]} and
β_{i1}^{[3]}, i1 = 1, 2, . . . , m1, which are the only unknowns in the approximation of (3), are
calculated from the system of equations (9).


2.2 Multi-resolution quadrature


In this section, we briefly describe a multi-resolution quadrature, the Hybrid-functions rule
Q_h^{m1}[g], which is described in detail in [1, 8]. In the current work, we use the
Hybrid-functions quadrature of order m1 = 8, Q_h^{8}[g], to compute the integral
I[g] = ∫_{a1}^{b1} g(y) dy. The expression for Q_h^{8}[g] is

    Q_h^{8}[g] = (8h/1935360) Σ_{k1=1}^{n1} [ 295627 g(a1 + (16k1 − 15)h/2)
        + 71329 g(a1 + (16k1 − 13)h/2) + 471771 g(a1 + (16k1 − 11)h/2)
        + 128953 g(a1 + (16k1 − 9)h/2) + 128953 g(a1 + (16k1 − 7)h/2)
        + 471771 g(a1 + (16k1 − 5)h/2) + 71329 g(a1 + (16k1 − 3)h/2)
        + 295627 g(a1 + (16k1 − 1)h/2) ],               (11)

in which h = (b1 − a1)/(8n1). For the approximation of Bessel oscillatory integrals, we take
g(y) = ĥ(y) J_η(µy).
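The rule (11) can be implemented directly; the sketch below (the function name is ours) loops over
the n1 blocks and their eight symmetric nodes:

```python
def hybrid8(g, a1, b1, n1):
    # Order-8 hybrid-functions quadrature (11): n1 blocks of 8 interior nodes
    # y = a1 + (16*k - m)*h/2, h = (b1 - a1)/(8*n1).  The eight weights within
    # a block are symmetric and sum to 8*h, so constants are integrated exactly.
    h = (b1 - a1) / (8.0 * n1)
    coeffs = [295627, 71329, 471771, 128953, 128953, 471771, 71329, 295627]
    offsets = [15, 13, 11, 9, 7, 5, 3, 1]
    total = 0.0
    for k in range(1, n1 + 1):
        for c, m in zip(coeffs, offsets):
            total += c * g(a1 + (16 * k - m) * h / 2.0)
    return 8.0 * h * total / 1935360.0
```

Since the coefficients sum to 1935360, each block contributes weight exactly 8h, and the symmetry
of the weights about the block center makes the rule exact for low-degree polynomials, a quick
sanity check on any implementation.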

3 Numerical Examples and Discussion


In this section, we take a numerical test problem to justify the accuracy and efficiency of the
proposed method. Exact solutions are obtained using the MAPLE 18 software. Results are reported in
terms of absolute errors |Er|. When exact solutions are not available, relative errors |Erel| are
calculated as the absolute difference between approximations at higher and lower numbers of nodes,
as described in Table 3. The experimental rate of convergence of the proposed methods is computed
using the formula

    Rate = log_2( |Er|_{µl} / |Er|_{µu} ),

where µl and µu are adjacent lower and upper frequencies.
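The rate formula is a one-liner; we add it here for concreteness (argument names are ours):

```python
import math

# Experimental rate of convergence from absolute errors at two adjacent
# frequencies mu_l < mu_u: a direct transcription of the formula above.
def rate(err_lower_freq, err_upper_freq):
    return math.log2(err_lower_freq / err_upper_freq)
```

For example, an error that drops by a factor of 8 when the frequency moves to the next value gives
a rate of 3.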

Test Problem 1. The following integral [6]

    I[g_b, µ] = ∫_1^2 J_0^2(µy) dy,

is a highly oscillatory integral of Bessel square type of order η = 0. The oscillatory behavior of
the integrand, shown in Fig. 2 (left), confirms that the integrand is highly oscillatory. The
integral is approximated by the new method Q_b^{ML}[g].
In Table 2, we compare the absolute errors of the proposed RBF-based method and the
polynomial-based method [6] for n = 3 and n = 5. The accuracy of the polynomial-based method is
essentially constant as the frequency increases but improves with increasing nodal points, whereas
the proposed method improves in accuracy as both the frequency and the number of nodal points
increase. Comparing the 3rd and 4th columns with the 5th and 6th columns, we observe that the
proposed method on scattered nodes also improves in accuracy with increasing frequency, even for
small numbers of nodal points.


Table 2: |Er| of Q_b^{ML}[g] and its comparison with the last two columns (same value of µ) for
I[g_b, µ], with the rate of convergence given in the last row.

µ      Q_3^{ML}[g]  Q_5^{ML}[g]  Q_3^{ML}[g]  Q_5^{ML}[g]  n = 3      n = 5
       Uniform      Uniform      Scattered    Scattered    Uniform    Uniform
1      2.2e−02      2.3e−01      8.5e−03      1.6e−01      3.7e−03    2.4e−05
10     3.6e−03      3.0e−03      2.5e−03      1.1e−03      9.0e−03    2.4e−04
100    6.9e−05      3.4e−05      3.6e−05      2.4e−05      2.3e−03    2.3e−05
1000   3.4e−06      1.6e−07      8.7e−07      1.4e−08      1.8e−03    1.6e−04
Rate   12.7136      20.6059      13.2480      23.4029      1.0395     −2.7370

Table 3: |Erel| of Q_b^{ML}[g] and rate of convergence for I[g_b, µ] with higher frequencies.

µ       m = 5         m = 9         m = 10
10^7    4.7790e−12    4.5865e−14    3.6156e−14
10^8    3.5434e−13    2.5497e−15    1.7183e−15
10^9    6.6824e−14    7.8041e−17    6.0603e−18
Rate    6.1602        9.1989        12.5426

Moreover, the proposed method has a higher rate of convergence (for both uniform and scattered
nodes) than the polynomial-based method, as shown in the last row of Table 2.
Fig. 1 (left and right) indicates that the proposed method Q_b^{ML}[g] improves in accuracy with
the increase in frequency as well as in nodal points. The relative errors of the integral at higher
frequencies, shown in Table 3, likewise suggest that the proposed method improves in accuracy with
increasing frequency and has a high rate of convergence. Moreover, the proposed method has
asymptotic order of convergence O(µ^{−5/2}), as shown in Fig. 2 (right).


Figure 1: (left) |Er| of I[g_b, µ] by Q_b^{ML}[g] for m = 25; (right) |Er| of Q_b^{ML}[g]
(node-related).

Figure 2: Oscillatory behavior of the integrand for µ = 500 (left); |Er| scaled by µ^{5/2}, m = 30
(right), for I[g_b, µ].

4 Conclusion
In this paper, we have used an MQ-RBF-based Levin approach and multi-resolution quadratures like
Hybrid functions for the numerical evaluation of one-dimensional highly oscillatory integrals of
Bessel square type. The method has the merits of being numerically stable and having a high
computational speed. The numerical experiment also shows both the accuracy and the efficiency of
the new method.

References
[1] I. Aziz, Siraj-ul-Islam, and W. Khan. Quadrature rules for numerical integration based on
Haar wavelets and Hybrid functions. Comput. Math. with Applic., 61:2770–2781, 2011.

[2] R. Chen. Numerical approximations to integrals with a highly oscillatory Bessel kernel.
Appl. Numer. Math., 62(5):636–648, 2012.


[3] K.C. Chung, G.A. Evans, and J.R. Webster. A method to generate generalized quadrature
rules for oscillatory integrals. Appl. Num. Math., 34(1):85–93, 2000.

[4] G. A. Evans and K.C. Chung. Some theoretical aspects of generalised quadrature methods.
J. Complexity, 19(3):272–285, 2003.

[5] David J. Griffiths. Introduction to Quantum Mechanics. 1995.

[6] D. Levin. Fast integration of rapidly oscillatory functions. J. Comput. Appl. Math., 67(1):95–
101, 1996.

[7] D. Levin. Analysis of a collocation method for integrating rapidly oscillatory functions. J.
Comput. Appl. Math., 78(1):131–138, 1997.

[8] Siraj-ul-Islam, I. Aziz, and F. Haq. A comparative study of numerical integration based on
Haar wavelets and Hybrid functions. Comput. Math. with Applic., 59(6):2026–2036, 2010.

[9] Siraj-ul-Islam and S. Zaman. New quadrature rules for highly oscillatory integrals with
stationary points. J. Comput. Appl. Math., 278:75–79, 2015.

[10] S. Xiang. Numerical analysis of a fast integration method for highly oscillatory functions.
BIT Numer. Math., 47(2):469–482, 2007.

[11] S. Xiang and W. Gui. On generalized quadrature rules for fast oscillatory integrals. Appl.
Math. Comput., 197(1):60–75, 2008.

[12] Z. Xu and S. Xiang. On the evaluation of highly oscillatory finite Hankel transform using
special functions. Num. Alg., 72(1):37–56, 2016.

[13] S. Zaman and Siraj-ul-Islam. Efficient numerical methods for Bessel type of oscillatory
integrals. J. Comput. Appl. Math., 315:161–174, 2017.


ON MAGNETOHYDRODYNAMIC STAGNATION POINT FLOW OF THIRD ORDER FLUID OVER A LUBRICATED SURFACE WITH
HEAT TRANSFER
Manzoor Ahmad

Department of Mathematics, University of Azad Jammu & Kashmir, Muzaffarabad 13100, Pakistan

ABSTRACT

This paper addresses the magnetohydrodynamic (MHD) boundary layer flow of a third order fluid in the region of a
stagnation point, with heat transfer effects, over a surface lubricated with a power-law fluid. The lubricant layer is thin and
assumed to have a variable thickness, from which a slip condition is deduced at the fluid-fluid interface. The resulting
system of nonlinear equations with nonlinear boundary conditions is solved through shooting and homotopic methods. The
numerical values of the local Nusselt number are computed and analyzed. The influence of the embedded parameters on the
velocity and temperature profiles is sketched and discussed. It is observed that the velocity profile varies from the no-slip to
the full-slip case under the influence of the involved parameters, and that the velocity profile decreases while the
temperature profile increases for large values of the slip parameter.

Index Terms— Third order fluid, magnetohydrodynamics, lubricated surface, slip condition.

1. INTRODUCTION

There is no doubt that magnetohydrodynamic (MHD) flow by heated surface has vital importance in the engineering and
geophysical applications. Important application of such flows to metallurgy lies in the purification of molten metals from
non-metallic inclusion by the application of magnetic field. The MHD micro pumps at present have also received special
status from the recent scientists and engineers due to their simple fabrication process, moving parts absence, low voltage
operation etc. The Lorentz force acts as a driving source of the flow in MHD pumps. MHD flows are of special interest in
view of engineering processes such as plasma studies, MHD generators, nuclear reactors, boundary layer control in
aerodynamics, blood flow, and lubrication with heavy oils and greases. On the other hand, the materials in several industrial
and geophysical applications are non-Newtonian. Sarpkaya [1] pointed out some examples of non-Newtonian fluids which
might be conductors of electricity, e.g. flows of nuclear slurries and mercury amalgams and lubrication with heavy oils and
greases. Obviously the heat transfer in such flows cannot be underestimated when processes for polymer solutions, molten
plastics, foods, drug delivery etc. are considered. Several models of non-Newtonian fluids have now been proposed for the
description of the rheological characteristics of different materials. The third order fluid is one which can easily predict
shear thinning and shear thickening effects even in a steady one-dimensional flow situation [2-5].
The research on boundary layer stagnation point flow was initiated by Hiemenz [6]. Later Homann [7] extended this to
axisymmetric three-dimensional flow. Afterwards stagnation point flow was studied extensively (see [8-10] and many refs.
therein). However, in all the above mentioned studies the stagnation point flow is considered over a rigid plate.
Yeckle et al. [11] first examined the stagnation point flow over a thin lubricated surface. The slip condition on the boundary
was introduced by Wang [12]. Andersson and Rousselt [13] considered flow over a rotating disk via a thin lubrication layer.
In a similar fashion, axisymmetric flow in the region of a stagnation point was studied by Santra et al. [14]. This problem
was further extended by Sajid et al. [15] by incorporating a generalized slip boundary condition. Sajid et al. [16] also
examined the stagnation point flow of a Walters' B fluid over a lubricated surface. However, heat transfer analysis even in
these few attempts has not been explored.
The aim of the present communication is to model the MHD boundary layer flow of a third order fluid with heat transfer
effects over a surface lubricated with a power-law non-Newtonian fluid. To our knowledge no such investigation is
available yet. The solutions of the transformed non-linear ordinary differential equations with non-linear boundary
conditions are computed with a combination of the homotopy analysis method [17-19] and the shooting method [20]. This
method has been successfully applied to many problems [20-23]. Results for the physical parameters, namely the Prandtl
number, the second grade and third grade fluid parameters, and the slip and MHD variables, on both the velocity and
temperature profiles are plotted and analyzed. Numerical values of the Nusselt number for different involved parameters are
also examined and the obtained results are validated through residual errors.


2. MATHEMATICAL FORMULATION

Let us consider two-dimensional flow and heat transfer of third order fluid in the region of stagnation point over a surface
with thin layer of power-law lubricant. The lubricant spreads over the sheet and makes a thin layer of variable thickness. A
constant magnetic field of strength is applied in a direction transverse to the flow. The induced magnetic field effect is
ignored for small magnetic Reynolds number. The flow rate of lubricant is
( )
∫ ( ) (1)
where ( ) is the variable lubricated thickness and is the velocity component in radial direction. The boundary layer
equations governing the flow and heat transfer through continuity, momentum and energy equations are
(2)
̂ ( )
0 1 [. / ]
(3)
̂
(4)
with
̂ ( ). / (5)
. / (6)
where and are the velocity components of second grade fluid in tangential and vertical directions respectively, ̂ is the
modified pressure, is the kinematic viscosity, is the density, is the second grade fluid parameter, is the specific heat
at constant temperature and is the permeability of the absorbent medium.
As for the boundary there are two regions, a thin layer of power-law lubricant i.e. ( ) and a third order fluid that is
( ) . Therefore the relevant boundary conditions for the present flow and heat transfer cases are given by
( ) ( ) , at (7)
( ) (8)
. / 0 1 ( ). / . / ( ) (9)
in which is the dynamic viscosity, is the consistency index and is the power law index.
Under the assumption that vertical component of lubricant does not change inside the thin layer we have
( ) for , ( )- (10)
(11)
Following [14], it is assumed that radial component of velocity varies linearly inside the power law lubricant
̂( )
( ) (12)
( )
in which ̂( ) is the interfacial component of velocity for both the fluids at the interface ( ). Hence by using Eq. (1)
one obtains the thickness of the lubrication layer
( ) ̂ (13)
( )
and hence the boundary condition (8) becomes
. / 0 1 ( ). / (14)
( )
For the flow near a stagnation point the free stream velocity takes the form
( ) (15)
where is constant having dimensions of Which gives
̂
(16)
For the numerical and analytic solution of the governing equation, we assume that the boundary conditions at fluid-fluid
interface are equally applicable at the fluid solid interface This assumption holds when the assumed lubricant layer is
thin. A similar assumption has already been found in the literature, [14-16,20,21,22,23].
Considering the dimensionless variables
√ ( ) √ ( ) ( ) (17)
We have the following systems


( ) (18)
( ) (19)
( ) ( ) ( ) ( ) ( ) * ( )+ ( ) (20)
( ) ( ) (21)
where
√ ( )
( )
and (22)
In the next section we present the solutions on the basis of the shooting and homotopy analysis methods.

3. SOLUTION DEVELOPMENT

Here we first convert the boundary value problems (18)-(19) into a set of first order initial value problems using the
shooting method [20] by considering
( ) (23)
( ) (24)
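The shooting reduction can be illustrated on a generic toy problem (our example, not the authors'
system (18)-(21)): the BVP u'' = 6x, u(0) = 0, u(1) = 1, whose exact solution is u(x) = x^3, is
recast as an IVP in the unknown initial slope s = u'(0), found here by bisection on the far-end
mismatch:

```python
def end_value(s, n=200):
    # classical RK4 march of the first-order system (u, v)' = (v, 6x)
    x, u, v, h = 0.0, 0.0, s, 1.0 / n
    for _ in range(n):
        k1u, k1v = v, 6.0 * x
        k2u, k2v = v + 0.5 * h * k1v, 6.0 * (x + 0.5 * h)
        k3u, k3v = v + 0.5 * h * k2v, 6.0 * (x + 0.5 * h)
        k4u, k4v = v + h * k3v, 6.0 * (x + h)
        u += h * (k1u + 2.0 * k2u + 2.0 * k3u + k4u) / 6.0
        v += h * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        x += h
    return u                      # u(1) produced by the initial slope s

s_lo, s_hi = -2.0, 2.0            # bracket for the unknown slope
for _ in range(60):               # bisection: u(1; s) is increasing in s
    mid = 0.5 * (s_lo + s_hi)
    if end_value(mid) > 1.0:
        s_hi = mid
    else:
        s_lo = mid
slope = 0.5 * (s_lo + s_hi)       # converges to the exact slope u'(0) = 0
```

The paper replaces the toy ODE with the coupled system (18)-(19), RK4 with a homotopy-based
subinterval march, and bisection with a zero-finding algorithm on the far-field conditions, but the
structure of the iteration is the same.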
Eq. (20) contains a nonlinear boundary condition, cubic in ( ), which has the following real root
⁄ ( )
( ) ⁄ (25)
where

( √ ( ) )
Differentiating Eqs. (18), (20), (23) and (25) with respect to and (19), (21) and (24) with respect to we get
( ) ( )
(26)
(27)
⁄ ( ⁄ (
) )
( ) ( ) ( ) . ⁄ / ( ) ( ) (28)
For numerical computations we replace by a value and divide the domain into subintervals having length
such that
∑ (29)
Equations (18), (19), (25) and (26) can be transformed into first order initial value problems in each subinterval in case of
fixed length interval we have ,( ) - The initial value problems in each subintervals take the form
(30)

(31)

( ) . ( ) / . ( ) / (32)

(33)

( ) (34)

(35)

(36)

( ( )) . ( )

/ (37)

(38)

( ) (39)
⁄ ( )
( ) ( ) ( ) ⁄ ( ) ( ) , (40)


⁄ ( ) ⁄ ( )
( ) ( ) ( ) . ⁄ / ( ) ( ) (41)
The numerical values at the end point of one subinterval are the initial conditions for the next subinterval. We now
apply the homotopy analysis method to solve the initial value problems given by Eqs. (30)-(41). The details are given in the
next subsections.
3.1 Zeroth order deformation problem
Defining the zeroth order deformation problems we have
̅( )
( ) [ ̅( ) ( )] 0 ̅( )1 (42)
̅( )
( ) [̅ ( ) ( )] 0 ̅ ( )1 (43)
̅ ( )
̅( ) ̅( ) .̅ ( )/ ̅( )
̅ ( ) ̅ ( )
( ) [ ̅( ) ( )] { ̅( ) ̅( ) . ̅( )/ }
̅ ( )
[ ( . ̅( )/ )
]
(44)
̅ ( )
( ) [ ̅( ) ( )] 0 ̅( )1 (45)
̅ ( )
( ) [̅ ( ) ( )] 0 { ̅( )̅ ( )}1 (46)
̅( )
( ) [ ̅( ) ( )] 0 ̅ ( )1 (47)
̅ ( )
( ) [̅ ( ) ( )] 0 ̅( )1 (48)
̅( )
̅( ) ̅ ( ) ̅( ) ̅( ) ̅( )̅ ( ) ̅( )
̅( ) ̅ ( )
̅( ) ̅( )
( ) [ ̅( ) ( )] { ̅( ̅ (
} (49)
) )
̅( ) . ̅ ( ) ̅( )/ ̅( )
̅ ( )
( . ̅( )/ ̅( ) ̅( ))
[ ]
̅ ( )
( ) [̅ ( ) ( )] 0 ̅ ( )1 (50)
̅ ( )
( ) [̅ ( ) ( )] 0 { ̅( )̅ ( )}1 (51)
where the embedding parameter and the first-order linear operator are as defined above, and the initial guesses are taken to be
the numerical values at the starting point of each subinterval. Here the auxiliary parameter and the auxiliary function have
been chosen as indicated. The convergence of the proposed hybrid homotopy analysis method is controlled by the length of the
subinterval and by the order of approximation.
3.2 th order deformation problem
The th order deformation problem in each subinterval is given by
[ ] (52)

[ ] (53)

[ ] ( ) ∑ { } (54)
( )

∑ ( ) ]
[
[ ] (55)


[ ] ∑ [ { }] (56)

[ ] (57)

[ ] (58)

[ ] ∑ { } (59)
( )

∑ ( )]
[
[ ] (60)

[ ] ∑ [ { }] (61)
⁄ ( )
( ) ( ) ( ) ⁄ ( ) ( ) (62)
⁄ ( ⁄ (
) )
( ) ( ) ( ) . ⁄ / ( ) ( )
(63)
2 (64)
The final solutions in each subinterval are thus given by
, - ∑ , - (65)
The solutions of the governing problems are obtained in the following steps. First, approximate values of the missing initial
conditions are chosen and the system of initial value problems is solved on the first subinterval. The initial conditions for the
second subinterval are then evaluated from this solution, and a solution is found in the second subinterval. This procedure
continues until an analytic solution has been evaluated in each subinterval. A zero-finding algorithm is used to evaluate the
correct values of the missing conditions so that the far-field boundary conditions are satisfied. We choose the order of
approximation and the size of the subintervals such that the obtained residual error lies within the required accuracy.
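The marching-with-shooting structure described above can be sketched in Python. The model boundary value problem below (y'' = -y with fixed end values), the interval, the step counts and the function names are illustrative stand-ins, not the paper's actual flow equations:

```python
import math

def rk4_march(f, x0, y0, yp0, h, n):
    # integrate y'' = f(x, y, y') as a first-order system with classical RK4
    x, y, yp = x0, y0, yp0
    for _ in range(n):
        k1y, k1p = yp, f(x, y, yp)
        k2y, k2p = yp + 0.5*h*k1p, f(x + 0.5*h, y + 0.5*h*k1y, yp + 0.5*h*k1p)
        k3y, k3p = yp + 0.5*h*k2p, f(x + 0.5*h, y + 0.5*h*k2y, yp + 0.5*h*k2p)
        k4y, k4p = yp + h*k3p, f(x + h, y + h*k3y, yp + h*k3p)
        y  += h*(k1y + 2*k2y + 2*k3y + k4y)/6.0
        yp += h*(k1p + 2*k2p + 2*k3p + k4p)/6.0
        x  += h
    return y, yp

def shoot(f, a, b, ya, yb, s_lo, s_hi, n=400, tol=1e-10):
    # bisection on the unknown initial slope s so that y(b; s) = yb
    h = (b - a)/n
    miss = lambda s: rk4_march(f, a, ya, s, h, n)[0] - yb
    lo, hi = s_lo, s_hi
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if miss(lo)*miss(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5*(lo + hi)

# model problem y'' = -y, y(0) = 0, y(pi/2) = 1; the exact slope is y'(0) = 1
s = shoot(lambda x, y, yp: -y, 0.0, math.pi/2, 0.0, 1.0, 0.0, 2.0)
```

Replacing the model right-hand side with the coupled system (30)-(41), and the bisection with any other root finder, recovers the structure of the hybrid scheme: march subinterval by subinterval, then correct the guessed initial data until the far-field conditions hold.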

4. NUMERICAL RESULTS AND DISCUSSION

The above procedure is implemented in MATHEMATICA to find the solution of stagnation point flow and heat
transfer in a third order fluid over a lubricated surface. Analytic homotopy analysis solutions of the required
accuracy are evaluated in each subinterval, together with their respective numerical values at each node point of the
subinterval. In Table 1 the numerical values of the local Nusselt number are given for different values of the governing
parameters. These numerical values provide the missing conditions for obtaining the temperature profiles. To ensure the
accuracy and correctness of the obtained hybrid homotopy analysis method solutions we have plotted residual error curves in
each case; one of these curves, for a representative set of parameters, is shown in Fig. 1, from which it is evident that the
obtained solutions are highly accurate. Our main focus is to discuss the influence of the different involved parameters on
the velocity and temperature profiles, for which Figs. 2-9 are plotted. Fig. 2 presents the effect of the slip parameter
on the velocity field: the velocity decreases with increasing slip parameter, with the full slip and no-slip cases obtained
in the respective limits. For the full slip case, the slip on the
surface dominates the stagnation point effect and there is no change in the velocity throughout the surface. The influence of
the Hartman number M on the velocity profile is shown in Fig. 3. A reduction in velocity is noted as the Hartman number
increases. This is because the Hartman number is associated with the Lorentz force, which resists the flow; increasing
the Hartman number strengthens the Lorentz force, which in turn decreases the velocity. Fig. 4 shows
the impact of the Prandtl number on the temperature profile: the temperature and the thermal boundary layer are reduced
for larger values of the Prandtl number. The Prandtl number depends on the thermal diffusivity and plays a vital role in
setting the temperature level; a greater Prandtl number corresponds to reduced thermal diffusivity and consequently the temperature
decreases. The effect of the slip parameter on the temperature field is given in Fig. 5, which illustrates that the temperature
and the momentum boundary layer increase for larger slip parameter. Fig. 6 gives the influence of the Hartman number on the
temperature profile: the temperature and the boundary layer thickness are enhanced when the values of the
Hartman number are increased. It is thus noticed that the Hartman number acts in quite the opposite way on the velocity and


temperature profiles. The impact of the second grade fluid parameter on the temperature field is seen in Fig. 7: the
temperature oscillates inside the boundary layer as the second grade parameter increases. Fig. 8
portrays the effect of the third grade parameter on the temperature field: the temperature and the thermal boundary layer
thickness increase with increasing third grade parameter. Conversely, the temperature and the thermal boundary layer decrease with
increasing power law index n (see Fig. 9). The results show that for a shear thickening lubricant the boundary layer is
thicker than for a shear thinning lubricant.
Table 1: Numerical values of the Nusselt number for different values of the governing parameters.
0.1 0.2 0.1 0.1 1.0 0.786269
0.5 0.751187
1.0 0.716859
3.0 0.636217
50.0 0.563307
1.0 0.0 0.1 0.1 1.0 0.713327
0.1 0.714004
0.3 0.719351
0.4 0.723923
1.0 0.2 0.2 0.1 1.0 0.705322
0.3 0.709315
0.4 0.716035
1.0 0.2 0.1 0.3 1.0 0.714218
0.5 0.709544
0.7 0.696283
1.0 0.2 0.1 0.1 2.0 0.983703
3.0 1.189432
4.0 1.361684

Fig. 1 Residual error in ( )


Fig. 2 Influence of slip parameter on the velocity profile ( )

Fig. 3 Influence of Hartman number on the velocity profile ( )

Fig. 4 Influence of Prandtl number on the temperature profile ( )


Fig. 5 Influence of slip parameter on the temperature profile ( )

Fig. 6 Influence of Hartman number on the temperature profile ( )

Fig. 7 Influence of second grade parameter on the temperature profile ( )


Fig. 8 Influence of third order parameter on temperature profile ( )

Fig. 9 Influence of power law index on the temperature profile ( )


For validation of the developed algorithm, a comparison of the present solution with existing results for the no-slip,
viscous fluid case is given in Table 2. The table shows that our solution has excellent agreement
with the previous limiting results.
Table 2: Comparison of the numerical values for the no-slip case with Santra et al. [14] and White [25] for a
viscous fluid.

Present    Santra et al. [14]    White [25]
0.0 0.0 0.0 0.0
0.6 0.60870994 0.60870994 0.60871
1.2 0.89597727 0.89597727 0.89598
1.8 0.98315816 0.98315816 0.98316
2.4 0.99845935 0.99845935 0.99847
3.0 0.99992397 0.99992397 0.99993

5. CONCLUSIONS

The MHD two-dimensional stagnation point boundary layer flow and heat transfer of a third order fluid over a surface
lubricated with a thin layer of power law non-Newtonian fluid has been considered. The flow problem is governed by nonlinear partial
differential equations and a nonlinear condition at the interface. The governing equations are transformed to locally similar
ordinary differential equations subject to nonlinear boundary conditions. A hybrid homotopy solution of the governing


problem is evaluated and the results are discussed under the influence of the fluid parameters appearing in the problem. It is found
that for large values of the Hartman number both the local Nusselt number and the temperature profile increase significantly.

6. REFERENCES

[1] T. Sarpkaya, “Flow of non-Newtonian fluid in a magnetic field,” AIChEJ., vol. 7, 2010, pp. 324-328.
[2] M. Sajid, and T. Hayat, “Non similar solution for the axisymmetric flow of a third grade fluid over a radially
stretching sheet,” Acta Mechanica, vol. 189, 2007, pp. 193-205.
[3] S. Abbasbandy, and T. Hayat, “On series solution for unsteady boundary layer equations in a special third grade
fluid,” Commu. Nonlin. Sci. Numer. Simul., vol. 16(8), 2011, pp. 3140-3146.
[4] B. Sahoo, and S. Poncet, “Flow and heat transfer of a third grade fluid past an exponentially stretching sheet with
partial slip boundary condition,” Int. J. Heat Mass Transfer, vol. 54, 2011, pp. 5010-5019.
[5] T. Hayat, A. Shafique, A. Alsaedi, and M. Awais, “MHD axisymmetric flow of a third grade fluid between
stretching sheets with heat transfer,” Computer and fluids, vol. 86 2013, pp. 102-108.
[6] K. Hiemenz, “Die Grenzschicht an einem in den gleich formigen ussig keitsstrom eingetacuhten geraden
krebzylinder,” Dingl Polytech J., vol. 32, 1911, pp. 321-324.
[7] F. Homann, “Der Einfluss grosser Za¨ higkeit bei der Stro¨mung um den Zylinder und um die Kugel,” Z. Angew.
Math. Mech. (ZAMM), vol. 16, 1936, pp. 153-164.
[8] A. Malvandi, F. Hedayati, and D. D. Ganji, “Slip effects on unsteady stagnation point flow of a nanofluid over a
stretching sheet,” Power Technology, vol. 253, 2014, pp. 377-384.
[9] G. K. Ramesh, B. J. Gireesha, and C. S. Bagewadi, “MHD flow of a dusty fluid near the stagnation point over a
permeable stretching sheet with non-uniform source/sink,” Int. J. Heat Mass Transfer, vol. 55(18), 2012, pp. 4900-
4907.
[10] M. Turkyilmazoglu, and I. Pop, “Exact analytical solutions for the flow and heat transfer near the stagnation point
on a stretching/shrinking sheet in a Jeffery fluid,” Int. J. Heat Mass Transfer, vol. 57(1), 2013, pp. 82-88.
[11] A. Yeckel, L. Strong, and S. Middleman, “Viscous film flow in the stagnation region of the jet impinging on a planar
surface,” AIChE J., vol. 40, 1994, pp. 1611-1617.
[12] C. Y. Wang, “Stagnation flows with slip: exact solutions of the Navier–Stokes equations,” Z. Angew. Math. Phys.
(ZAMP), vol. 54, 2003, pp. 184-189.
[13] H. I. Andersson, and M. Rousselet, “Slip flow over a lubricated rotating disk,” Int. J. Heat Fluid Flow, vol. 27, 2006,
pp. 329-335.
[14] B. Santra, B. S. Dandapat, and H. I. Andersson, “Axisymmetric stagnation point flow over a lubricated surface,”
Acta Mech., vol. 194, 2007, pp. 1-7.
[15] M. Sajid, K. Mahmood, and Z. Abbas, “Axisymmetric stagnation-point flow with a general slip boundary condition
over a lubricated surface,” Chin. Phys. Lett., vol. 29, 2012, pp. 1-4.
[16] M. Sajid, T. Javed, Z. Abbas, and N. Ali, “Stagnation point flow of a viscoelastic fluid over a lubricated surface,”
Int. J. Non-Linear Sci. Numer. Simul., vol. 14, 2013, pp. 285-290.
[17] S. J. Liao, Homotopy analysis method in nonlinear differential equations, Springer & Higher Education Press,
Heidelberg, 2012.
[18] M. M. Rashidi, B. Rostani, N. Freidoonimehr, and S. Abbasbandy, “Free convective heat and mass transfer for
MHD fluid flow over a permeable vertical stretching sheet in the presence of the radiation and buoyancy effects,”
Ain shams Eng. J., vol. 5, 2014, pp. 901-912.
[19] T. Hayat, T. Muhammad, A. Alsaedi, and M. S. Alhuthali, “Magnetohydrodynamic three-dimensional flow of
viscoelastic nano-fluid in the presence of nonlinear thermal radiation,” J. Magn. Magn. Mater., vol. 385, 2015, pp.
222-229.
[20] T. Y. Na, Computational methods in engineering boundary value problems, Academic Press, New York, 1979.
[21] M. Sajid, A. Arshad, T. Javed and Z. Abbas, “Stagnation point flow of Walters’ B fluid using hybrid homotopy
analysis method” Arab. J. Sci. Eng. Vol. 40(11), 2015, pp. 3311-3319.
[22] M. Ahmad, M. Sajid, T. Hayat, and I. Ahmad, “On numerical and approximate solutions for stagnation point flow
involving third order fluid,” AIP Advances, vol. 5(6), 2015, pp. 067138.
[23] M. Sajid, M. Ahmad, I. Ahmad, M. Taj, and A. Abbasi, “Axisymmetric stagnation point flow of a third grade fluid
over a lubricated surface.” Adv. Mechn. Engin. Vol. 7(8), 2015, pp. 1-8.
[24] M. Sajid, M. Ahmad, and I. Ahmad, “Axisymmetric stagnation point flow of second grade fluid over a lubricated
surface.” Eur. Int. J. Sci. Tech., in press, 2015.
[25] F. M. White, “Viscous Fluid Flow”, 2nd Edition, Mc-Graw Hill, 1991, pp. 156.


Numerical evaluation of two dimensional highly oscillatory integrals

Mehwish Saleem a,∗, Siraj-ul-Islam a, Sakhi Zaman a

a Department of Basic Sciences, University of Engineering and Technology, Peshawar, Pakistan.

Abstract
In this paper, new procedures are proposed for the numerical evaluation of two-dimensional highly
oscillatory integrals with or without critical point(s). A part of the new procedures is to use the
Levin collocation method with a Gaussian radial basis function. Multi-resolution quadrature rules
based on hybrid functions and Haar wavelets are also used for comparison, and as part
of an interval-splitting algorithm to handle the critical point of the oscillator. Some test cases are
included for numerical verification of the proposed methods.
Keywords: Multivariate highly oscillatory integrals, Gaussian radial basis function, Multi-resolution quadratures based on hybrid and Haar functions.

Table 1: Nomenclature Box

Symbols Description
ω Frequency parameter
RBF Radial basis function
 Shape parameter of RBF interpolation
P̃ Approximate value of P
ψ(r, ) Radial basis function
QLG [f ] Meshless quadrature with Gaussian radial basis function
QLM [f ] Meshless quadrature with multiquadric radial basis function
Qh [f ] Hybrid functions
QH [f ] Haar wavelets
Labs Absolute error
Lpre Percentage relative error
ζ Splitting parameter
$j Unknown parameter of RBF interpolation

1 Introduction
Highly oscillatory integrals (HOIs) have numerous applications in engineering and sciences such
as optics [27], acoustics [28], electromagnetics [20], signal processing [5], scattering theory [6, 22],

∗ The author to whom all correspondence should be addressed. Email: mehwishs199@gmail.com


electrodynamics and quantum mechanics [21, 29]. Generally, the two-dimensional HOIs can be
written as

Iω[f] = ∫_Ω f(u) e^{iωg(u)} dA,  u ∈ R²,  (1)

where Ω is a regular domain, f and g are both smooth functions and ω ≫ 1. The parameter ω
is a positive real number which represents the frequency of oscillations. The real and imaginary
parts of the integral can be written as

Re(Iω[f]) = ∫_Ω f(u) cos(ωg(u)) du  and  Im(Iω[f]) = ∫_Ω f(u) sin(ωg(u)) du.

For large values of the frequency parameter, integral (1) becomes highly oscillatory and is difficult to
compute by classical quadrature rules such as the trapezoidal rule, Simpson's rule and
Gauss-Legendre quadrature [16].
In recent decades, many state-of-the-art methods have been devised for the numerical approximation
of multi-dimensional HOIs, such as the asymptotic method [15], the steepest descent method [3, 7, 12],
Filon-type methods [9, 11, 13-15, 23, 26, 35] and Levin methods [17-21, 23, 26, 30, 32-34, 37].
The asymptotic approach is a well-known procedure for approximating HOIs. In general
the asymptotic expansion of Iω[f] diverges, but it can yield very accurate approximations for
high frequency ω. Due to the divergence of this approach, the error becomes uncontrollable, which
is problematic in the context of numerical computation. In the literature, many numerical methods
have been produced that exhibit convergence.
High asymptotic order is achieved in Filon-type methods [13-15] by interpolating f and its
derivatives at the critical points, with controllable error for fixed frequency ω. The main
drawback of Filon-type methods is that they are only applicable if the oscillatory integral has a
linear phase function.
The numerical steepest descent method [12] is another well-known approach for approximating
HOIs. In this method the original integral is transformed into a non-oscillatory integral by
deforming the integration path onto the steepest descent path in the complex plane; the resulting
integral can then be evaluated by Laguerre-type quadrature. The main drawbacks of this method
are that it requires analytic integrands and that finding the steepest descent path is hard.
The Levin method can handle oscillatory integrals with complicated phase functions. It
transforms the original integral into an ODE (for one-dimensional integrals) or a PDE (for
multivariate integrals). The discretized form of the PDE or ODE gives a system of linear
equations, so in this procedure we solve an ordinary or partial differential equation instead of the
oscillatory integral. If the number of nodes N is large, the system of linear equations tends
to be ill-conditioned, which leads to unreliable numerical results. In this case it is better to use the
TSVD method to get accurate and stable results.
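A minimal sketch of such a truncated-SVD (TSVD) solve, written here with NumPy as an implementation choice (the function name and threshold are ours, not from the papers cited), discards singular values below a relative cutoff before back-substitution:

```python
import numpy as np

def tsvd_solve(B, F, rtol=1e-10):
    # truncated SVD solve of B @ x = F: singular values below
    # rtol * s_max are treated as zero (their directions are discarded)
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    keep = s > rtol * s[0]
    inv_s = np.where(keep, 1.0/np.where(keep, s, 1.0), 0.0)
    return Vt.T @ (inv_s * (U.T @ F))

# example: a nearly rank-deficient collocation-style system
A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-13]])
b = np.array([2.0, 2.0])
x = tsvd_solve(A, b)
```

The threshold rtol trades stability against accuracy; in the Levin setting, B is the collocation matrix and F the sampled right-hand side.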
In [17, 21, 32], the authors have used the Levin collocation method and its modified forms for the nu-
merical solution of univariate and multivariate HOIs having no critical point. In [1, 32], the
authors have used a meshless collocation method with a multiquadric RBF for the numerical solution of
the transformed problems.
In [32], the authors proposed new quadrature rules for the numerical evaluation of multivariate
mildly oscillatory, highly oscillatory and non-oscillatory integrals, using the Levin collocation method
based on a multiquadric RBF together with multi-resolution quadratures based on Haar wavelets and
hybrid functions. In the same paper, the authors applied the proposed methods to oscillatory and
non-oscillatory integrands defined on circular and rectangular domains. In [1],
the same approach has been used for the numerical evaluation of one-dimensional highly oscillatory

integrals. An algorithm was developed for selecting an optimal value of c, the shape parameter
of the multiquadric RBF interpolants.

2 Highly oscillatory integrals with critical point(s)


The two-dimensional HOIs (1) with critical point(s) have many applications in science and engi-
neering. The integral (1) has a critical point at u0 if ∇g(u0) = 0, where g is the oscillator and
u0 ∈ Ω. In this paper, we only consider the case where the critical point is at u = 0. The critical
point may lie inside the domain or on its boundary.
In the literature, many accurate methods have been formulated for the approximation of HOIs with
critical point(s), but they are applicable only to special types of integrals. Accurate numerical meth-
ods like the Levin and Filon-type methods also fail due to the existence of critical points in these
integrals. Many researchers have contributed to the resolution of these issues [13, 24].
Recently, various methods have been proposed for the numerical approximation of multivariate HOIs
with critical point(s) [1, 2, 8, 20, 36, 38].
In [20], the authors have proposed a delaminating quadrature for the numerical evaluation of multi-
dimensional HOIs with critical points. The advantage of this method is that it is more numerically stable
than Levin's method, and it can handle both stationary and resonance points
of the oscillator.
Recently, the authors of [1] have presented a new splitting algorithm to handle the critical point
of HOIs. In this algorithm, the domain is split into sub-domains in order to isolate the critical
point. A meshless collocation method is used to compute the integrals free of critical points, while multi-
resolution quadratures are employed to compute the integral over a small interval containing the critical
point.
In [38], the authors have extended the approach of [1] from one-dimensional to multi-
variate integrals. Theoretical error bounds of the individual methods as well as of the combined
algorithm were found. In the same paper, an algorithm is developed to evaluate double integrals
having a resonance point; in this algorithm, the double integral with a resonance point is reduced to
a one-dimensional integral having a critical point.
In the present work, the same approach [38] is followed, but a Gaussian radial basis function is used
in the meshless procedure instead of a multiquadric radial basis function. Numerical examples are
included to verify the accuracy of the method.

3 Meshless procedure with Levin approach


In 1982, Levin [17] proposed a new procedure for evaluating two-dimensional highly oscillatory
integrals (1) over a regular domain.
Let P̃(u, v) be an approximate solution satisfying the following second-order partial differential
equation

∂²P/∂u∂v + iω( P ∂²g/∂u∂v + (∂P/∂v)(∂g/∂u) + (∂P/∂u)(∂g/∂v) ) − ω² (∂g/∂u)(∂g/∂v) P = f(u, v).  (2)
The integral given in (1) can then be written as

QLG[f] = ∫_a^b ∫_e^d ∂²/∂u∂v [ P̃(u, v) e^{iωg(u,v)} ] dv du
       = P̃(b, d) e^{iωg(b,d)} − P̃(a, d) e^{iωg(a,d)} − P̃(b, e) e^{iωg(b,e)} + P̃(a, e) e^{iωg(a,e)}.  (3)


Now we discuss how to find the approximate solution P̃(u, v). Levin [17] used
monomial basis functions for the numerical solution of the inverse problem given in (2). In the proposed work,
we use a multivariate Gaussian RBF to approximate the solution P̃(u) in the following form

P̃(u) = ∑_{j=1}^{N²} $j ψ(r, ε),  u ∈ R^d, d = 2,  (4)

where ψ(r, ε) is the Gaussian RBF, ε is the shape parameter of the RBF interpolation and r ≥ 0.
The Gaussian RBF is defined as

ψ(r, ε) = e^{−r²/ε²},

where r is the radial distance function, defined as

r(u, v) = √( (u − u^c_j)² + (v − v^c_k)² ),  j, k = 1, ..., N².

The first-order partial derivatives of ψ(r, ε) are

∂ψ(r, ε)/∂u = (−2(u − u^c)/ε²) ψ(r, ε)  (5)

and

∂ψ(r, ε)/∂v = (−2(v − v^c)/ε²) ψ(r, ε).  (6)
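The derivative identity (5) is easy to check numerically. In the plain-Python sketch below (function names are ours, and the shape parameter is written eps), the analytic derivative is compared against a central finite difference at an arbitrary point:

```python
import math

def gauss_rbf(u, v, uc, vc, eps):
    # psi(r, eps) = exp(-r^2 / eps^2), with r^2 = (u-uc)^2 + (v-vc)^2
    r2 = (u - uc)**2 + (v - vc)**2
    return math.exp(-r2 / eps**2)

def gauss_rbf_du(u, v, uc, vc, eps):
    # analytic partial derivative from (5): -2(u-uc)/eps^2 * psi
    return -2.0*(u - uc)/eps**2 * gauss_rbf(u, v, uc, vc, eps)

# compare (5) against a central finite difference at an arbitrary point
u, v, uc, vc, eps, h = 0.3, -0.2, 0.1, 0.4, 0.8, 1e-6
fd = (gauss_rbf(u + h, v, uc, vc, eps) - gauss_rbf(u - h, v, uc, vc, eps))/(2*h)
```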
In (4), the N² unknown coefficients $j, j = 1, 2, ..., N², can be found from the following
condition

L[ P̃(u) e^{iωg(u)} ]|_{u=u_k} = f(u_k) e^{iωg(u_k)},  k = 1, 2, ..., N²,  u ∈ R^d, d = 2,  (7)

where

L[ P̃(u) e^{iωg(u)} ]|_{u=u_k} = ∂²( P̃(u) e^{iωg(u)} )/∂u∂v |_{u=u_k},  u_k = (u_k, v_k),  k = 1, 2, ..., N².  (8)
Equation (7) represents a system of N² equations in N² unknowns. The matrix form of the
system of linear equations is given by

B$ = F,

where B is a square matrix of order N² × N² and $, F are vectors of order N². To avoid
inaccuracies caused by an ill-conditioned system matrix when the number of nodes is large, it is
better to use the TSVD method instead of LU factorization. Ultimately, we find the meshless
approximation P̃(u), u ∈ R²; then, using (3), one finds the desired numerical solution of the
oscillatory integral (1).

3.1 Multi-resolution quadratures


In [32], an accurate procedure based on hybrid functions is discussed for evaluating multivariate non-
oscillatory and oscillatory integrals at small frequencies. This procedure only computes integrals
of the form ∫_Ω f(u) du, u ∈ R². A detailed analysis of the hybrid function based quadrature in


the two-dimensional case has been given in [4]. In the case of two variables, the formula
for the multi-resolution quadrature based on hybrid functions of order 9 is given as

N   
(b1 − a1 ) X (b1 − a1 )(18i − 17)
Qh [f ] ≈ 832221 G a1 +
5734400N 18N
i=1
   
(b1 − a1 )(18i − 15) (b1 − a1 )(18i − 13)
− 260808 G a1 + + 2903148 G a1 +
18N 18N
   
(b1 − a1 )(18i − 11) (b1 − a1 )(18i − 9)
− 3227256 G a1 + + 5239790 G a1 +
18N 18N
   
(b1 − a1 )(18i − 7) (b1 − a1 )(18i − 5)
− 3227256 G a1 + + 2903148 G a1 +
18N 18N
   
(b1 − a1 )(18i − 3) (b1 − a1 )(18i − 1)
− 260808 G a1 + + 832221 G a1 + , (9)
18N 18N

where
N   
(d − e) X (d − e)(18i − 17)
G(v) = 832221f e + ,v
5734400N 18N
i=1
   
(d − e)(18i − 15) (d − e)(18i − 13)
− 260808f e + , v + 2903148f e + ,v
18N 18N
   
(d − e)(18i − 11) (d − e)(18i − 9)
− 3227256f e + , v + 5239790f e + ,v
18N 18N
   
(d − e)(18i − 7) (d − e)(18i − 5)
− 3227256f e + , v + 2903148f e + ,v
18N 18N
   
(d − e)(18i − 3) (d − e)(18i − 1)
− 260808f e + , v + 832221f e + , v . (10)
18N 18N
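Reading off (9)-(10), the one-dimensional factor of the rule samples nine equally spaced points per subinterval with the fixed integer weights shown above (they sum to 5734400, so constants are integrated exactly). A direct Python transcription, with function names of our choosing and the double integral obtained as a tensor product:

```python
def hybrid9(f, a, b, N):
    # 1-D factor of the order-9 hybrid-function rule (9)-(10):
    # nine equally spaced nodes per subinterval with fixed integer weights
    w = [832221, -260808, 2903148, -3227256, 5239790,
         -3227256, 2903148, -260808, 832221]
    total = 0.0
    for i in range(N):                 # subintervals
        for m in range(9):             # nodes at (18i+1)/18N, ..., (18i+17)/18N
            x = a + (b - a)*(18*i + 2*m + 1)/(18.0*N)
            total += w[m]*f(x)
    return (b - a)*total/(5734400.0*N)

def hybrid9_2d(f, a, b, e, d, N):
    # tensor-product form for the double integral, as in (9) with G of (10)
    return hybrid9(lambda u: hybrid9(lambda v: f(u, v), e, d, N), a, b, N)
```

By the symmetry of the nodes and weights the rule reproduces low-degree polynomials exactly, which is a convenient sanity check before applying it to oscillatory integrands.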

Similarly, the two-dimensional Haar wavelet based quadrature [31] is given as

QH[f] = ∫_r^s ∫_e^d F(u, v) du dv ≈ ((s − r)/(2N)) ∑_{j=1}^{2N} H( r + (s − r)(j − 0.5)/(2N) ),

where

H(v) = ((d − e)/(2M)) ∑_{k=1}^{2M} F( e + (d − e)(k − 0.5)/(2M), v ).
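The Haar rule above is a composite midpoint rule over 2N × 2M cells; a direct Python transcription (the function name is ours):

```python
def haar_quad_2d(F, r, s, e, d, N, M):
    # two-dimensional Haar-wavelet quadrature: sample F at the centers of
    # 2N x 2M uniform cells and weight each sample by the cell area
    total = 0.0
    for j in range(1, 2*N + 1):
        u = r + (s - r)*(j - 0.5)/(2.0*N)
        for k in range(1, 2*M + 1):
            v = e + (d - e)*(k - 0.5)/(2.0*M)
            total += F(u, v)
    return ((s - r)/(2.0*N))*((d - e)/(2.0*M))*total
```

Like any midpoint rule it is exact for integrands that are linear in u and v, and second-order accurate otherwise.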

4 Oscillatory integrals with critical point


In [38], the authors have presented a new algorithm for the numerical evaluation of multivariate highly os-
cillatory integrals having critical points, in which the meshless collocation method and multi-resolution
quadratures are coupled in order to tackle the critical point. The proposed work
is an extension of the algorithm reported in [38]: a Gaussian RBF is used as the basis function in the
meshless collocation procedure instead of a multiquadric RBF.


4.1 Splitting algorithm


In [38], the authors have proposed a splitting technique to separate the region containing the
critical point from the region(s) having no critical point. Multivariate Haar wavelet and hybrid
function based quadratures are combined with the meshless method to approximate HOIs having
critical point(s). The multi-resolution quadratures QH[f] or Qh[f] are used to approximate the HOIs
over the intervals containing the critical point; the integrals that contain no critical point are
computed by the meshless collocation method. For this, we define ζ as

ζ = (N0/ω)^(1/k),  (11)

where N0 is the number of nodal points of the multi-resolution quadratures and k is the order of the critical point.
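Equation (11) is a one-liner; for example, with N0 = 16 quadrature nodes (an illustrative value), ω = 1000 and a first-order critical point it gives ζ = 0.016:

```python
def splitting_parameter(N0, omega, k):
    # zeta = (N0 / omega)**(1/k), eq. (11): the side of the small region
    # around the critical point handled by the multi-resolution quadrature
    return (N0/float(omega))**(1.0/k)

zeta = splitting_parameter(16, 1000, 1)   # -> 0.016
```

As ω grows, ζ shrinks, so the expensive treatment of the critical point is confined to an ever smaller region.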
The critical point can be separated in two ways. First, if the critical point lies at one corner of the
domain region, the integral (1) is written in split form as

Iω[f] = ∫_{D1} f(u) e^{iωg(u)} dA + ∫_{D2} f(u) e^{iωg(u)} dA + ∫_{D3} f(u) e^{iωg(u)} dA,  u ∈ R²
      = I1[f] + I2[f] + I3[f],  (12)
where D1, D2 and D3 are the domain regions shown in Fig. 1 (left). Second, if the critical
point lies inside the domain, the integral (1) can be split as

Iω[f] = ∫_{D1} f(u) e^{iωg(u)} dA + ∫_{D2} f(u) e^{iωg(u)} dA + ... + ∫_{D9} f(u) e^{iωg(u)} dA,  u ∈ R²
      = I1[f] + I2[f] + I3[f] + I4[f] + I5[f] + I6[f] + I7[f] + I8[f] + I9[f],  (13)
where D1, ..., D9 are the split domain regions shown in Fig. 1 (right). The integral I1[f] of
(12) and (13) contains the critical point and is computed by Qh[f]; the integrals I2, I3, ..., I9,
which contain no critical point, are approximated by QLG[f]. The resulting value of the integral
(1) is given by

QhG[f] = Qh[f] + QLG[f].  (14)

5 Numerical illustration
In this section, we test the proposed methods on the numerical solution of two-dimensional HOIs with
and without critical points. The exact solutions of all test problems are obtained with Maple 15.
The first problem has no stationary point, while the second and third problems have a critical
point at the corner and in the middle of the domain region, respectively.
We compute the percentage relative error norm Lpre and the absolute error norm Labs to check the
performance of the proposed methods. The approximation is done in MATLAB.
Test Problem 1. Consider the following integral

Iω[f] = ∫_0^1 ∫_0^1 (u + v + 1)^4 e^{iω(u+v+2)³/128} du dv.  (15)
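For moderate frequencies, (15) can be cross-checked against a dense midpoint rule. The brute-force sketch below is a reference computation only, with resolutions of our choosing; its cost grows with ω, which is exactly what the Levin-type methods avoid:

```python
import math

def re_I15(omega, M):
    # real part of (15) by an M x M composite midpoint rule
    # (practical only for moderate omega)
    h = 1.0/M
    total = 0.0
    for i in range(M):
        u = (i + 0.5)*h
        for j in range(M):
            v = (j + 0.5)*h
            total += (u + v + 1.0)**4 * math.cos(omega*(u + v + 2.0)**3/128.0)
    return total*h*h

# two resolutions should agree once the oscillations are resolved
a, b = re_I15(10.0, 200), re_I15(10.0, 400)
```

Agreement between successive resolutions indicates the oscillations are resolved; at ω in the hundreds or thousands this approach quickly becomes impractical.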



Figure 1: (left) the critical point (0, 0) lies in the leftmost corner of the domain; (right) the critical point
(0, 0) lies in the middle of the domain.

The highly oscillatory integral given in (15) has been considered in [10]. The integrand becomes
more oscillatory for large values of the frequency parameter ω and is then difficult to compute
by the existing quadratures. We have tested the new method QLG[f] for different values of the fre-
quency parameter ω and compared the results with the existing methods. Irregular oscillations
of the integrand of (15) are shown in Fig. 3. The oscillatory integral is computed by QLG[f],
Qh[f] and QH[f] and the results are analyzed in Table 2; a comparison of the results produced
by the proposed methods with [10] is shown in the same table. The accuracy of QLG[f] is better than
that of the method of [10]: the Lpre of QLG[f] is decreased to O(10−7), while the same error norm of
the method of [10] is reduced to O(10−4). The proposed method QLG[f] is also tested for high frequencies
and the results are analyzed in Table 3; a good rate of convergence of the proposed method is
shown in the last column of Table 3.
Furthermore, the absolute errors of QLG[f], Qh[f] and QH[f] are computed for ω = 1000 and
the results are analyzed in Fig. 2, from which it is clear that the proposed methods also improve
their accuracy on increasing N.

Test Problem 2. Consider the integral

Iω[f] = ∫_0^1 ∫_0^1 cos(uv + v + 1) e^{iω(u²+v²)} du dv.  (16)

The above integral has been taken from [25] and has a critical point of order 1.
The multivariate meshless procedure QLG[f] fails to compute the integral at the critical point. To
avoid this difficulty, we use the splitting technique reported in [38], according to which the
domain region [0, 1] × [0, 1] is split as shown in equation (12). For this, we

Table 2: Lpre of QLG[f], Qh[f] and QH[f] with N = 10, and comparison with the results of [10] for test
problem 1.

ω        QLG[f]        [10]        Qh[f]        QH[f]
100 3.61e − 005 3.86e − 004 3.19e − 005 1.16e + 002
200 7.51e − 006 4.68e − 004 4.60e − 001 1.17e + 002
300 2.41e − 004 2.36e − 004 1.68e + 002 8.10e + 002
400 2.02e − 005 3.91e − 004 7.01e + 002 7.07e + 002
500 1.05e − 004 1.42e − 004 6.62e + 004 4.20e + 002
600 2.70e − 005 2.74e − 004 2.05e + 005 2.43e + 003
700 6.75e − 006 5.22e − 004 1.67e + 005 4.79e + 002
800 6.15e − 006 1.57e − 004 2.71e + 006 4.18e + 002
900 1.81e − 006 3.28e − 004 2.64e + 006 5.41e + 003
1000 9.78e − 007 1.42e − 003 1.21e + 007 1.13e + 003

Table 3: Labs of QLG[f] for test problem 1.

ω \ N        5        10        15
105 1.9698e − 015 9.4718e − 016 1.6256e − 015
106 5.2105e − 018 1.1364e − 017 3.7283e − 017
107 1.7191e − 019 2.5043e − 019 2.1287e − 019
Rate 4.9217 5.5039 7.4524



Figure 2: Labs of QLG[f], Qh[f] and QH[f], ω = 1000, for test problem 1.


Figure 3: Oscillatory behavior of the real part of the integrand of (15).

choose the point (ζ, ζ) in the neighborhood of the critical point. Then integral (16) can be written as

I = ∫_0^1 ∫_0^1 cos(uv + v + 1) e^{iω(u²+v²)} du dv
  = ∫_0^ζ ∫_0^ζ cos(uv + v + 1) e^{iω(u²+v²)} du dv + ∫_0^ζ ∫_ζ^1 cos(uv + v + 1) e^{iω(u²+v²)} du dv
  + ∫_ζ^1 ∫_0^ζ cos(uv + v + 1) e^{iω(u²+v²)} du dv.



Figure 4: Labs of QhG[f], QH[f] and Qh[f], ω = 1000, for problem 2 (left); Labs of QhG[f] scaled by ω² (right).

The first integral contains the critical point and is approximated by the multi-resolution quadrature $Q_h[f]$; the remaining two integrals contain no critical point and are approximated by $Q_G^L[f]$. The results of $Q_G^h[f]$, $Q_h[f]$ and $Q_H[f]$ are analyzed in Fig. 4 (left). The figure shows that the accuracy of $Q_G^h[f]$ is much better than that of $Q_h[f]$ and $Q_H[f]$. The Labs of $Q_G^h[f]$ decreases to $O(10^{-6})$, and the error of $Q_G^h[f]$ scaled by $\omega^2$ is shown in Fig. 4 (right). Also, in Fig. 5, the absolute error of $Q_G^h[f]$, $Q_h[f]$ and $Q_H[f]$ is computed for varying $N$ at fixed frequency. It is clear from the figure that the accuracy of $Q_G^h[f]$ improves on increasing $N$. Furthermore, the proposed method $Q_G^h[f]$ is tested for higher frequencies and the results are analyzed in Table 4.


Table 4: Labs of $Q_G^h[f]$ for problem (2).

ω\N      N = 10      N = 20      N = 30
10^4     5.32e−05    9.17e−06    4.24e−06
10^5     1.52e−05    5.66e−06    6.51e−06
10^6     7.88e−06    6.41e−06    6.85e−06

Figure 5: Labs of $Q_G^h[f]$, $Q_H[f]$, and $Q_h[f]$, N = 30 for problem (2).

Test Problem 3. The following integral has been considered in [8]:
\[
I_\omega[f] = \int_{-1}^{1}\!\int_{-1}^{1} \frac{1}{3+u+v}\, e^{i\omega(u^3+v^3)}\,du\,dv. \qquad (17)
\]

The integrand has a degenerate critical point (0, 0) of order k = 2, which lies in the middle of the domain. The integral is evaluated by $Q_G^h[f]$ and $Q_h[f]$. In order to isolate the stationary point, we need to split the integral into nine sub-integrals as given in Fig. 1 (right). According to the splitting technique, the integral (3) can be written as


\[
\begin{aligned}
I_\omega[f] &= \int_{-1}^{1}\!\int_{-1}^{1} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv\\
&= \int_{-\zeta}^{\zeta}\!\int_{-\zeta}^{\zeta} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv
 + \int_{0}^{1}\!\int_{-1}^{-\zeta} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv
 + \int_{-1}^{0}\!\int_{-1}^{-\zeta} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv\\
&\quad + \int_{-1}^{-\zeta}\!\int_{-\zeta}^{0} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv
 + \int_{-1}^{-\zeta}\!\int_{0}^{\zeta} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv
 + \int_{-1}^{0}\!\int_{\zeta}^{1} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv\\
&\quad + \int_{0}^{1}\!\int_{\zeta}^{1} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv
 + \int_{\zeta}^{1}\!\int_{0}^{\zeta} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv
 + \int_{\zeta}^{1}\!\int_{-\zeta}^{0} \frac{e^{i\omega(u^3+v^3)}}{3+u+v}\,du\,dv\\
&= I_{1\omega}[f] + I_{2\omega}[f] + I_{3\omega}[f] + I_{4\omega}[f] + I_{5\omega}[f] + I_{6\omega}[f] + I_{7\omega}[f] + I_{8\omega}[f] + I_{9\omega}[f].
\end{aligned}
\]

The integrals are evaluated by the proposed splitting procedure. The results of the new methods are reported in Fig. 6. The figure illustrates that the accuracy of $Q_G^h[f]$ improves on increasing ω. Table 5 confirms that the new algorithm improves in accuracy on increasing N or ω.
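For moderate frequencies, the nine-way decomposition above can be verified directly with a general-purpose adaptive quadrature. A self-contained sketch, not the authors' code; the choices ω = 10 and ζ = 0.4 are arbitrary, and scipy's `dblquad` stands in for the special-purpose rules:

```python
import numpy as np
from scipy.integrate import dblquad

omega, zeta = 10.0, 0.4

def part(trig, v0, v1, u0, u1):
    # dblquad integrates func(y, x) with y the inner variable: here y = u, x = v
    val, _ = dblquad(lambda u, v: trig(omega*(u**3 + v**3))/(3 + u + v),
                     v0, v1, lambda v: u0, lambda v: u1)
    return val

def box(v0, v1, u0, u1):
    return part(np.cos, v0, v1, u0, u1) + 1j*part(np.sin, v0, v1, u0, u1)

# the nine sub-rectangles (v-range, u-range) of the splitting above
regions = [(-zeta, zeta, -zeta, zeta), (0, 1, -1, -zeta), (-1, 0, -1, -zeta),
           (-1, -zeta, -zeta, 0), (-1, -zeta, 0, zeta), (-1, 0, zeta, 1),
           (0, 1, zeta, 1), (zeta, 1, 0, zeta), (zeta, 1, -zeta, 0)]

full = box(-1, 1, -1, 1)
split = sum(box(*r) for r in regions)
print(abs(full - split))  # agreement to quadrature tolerance
```

The nine rectangles tile [−1, 1]² exactly, so the two values agree up to the tolerance of the adaptive rule.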
Figure 6: Labs of the methods $Q_G^h[f]$ and $Q_h[f]$, N = 30 for problem (3).


Table 5: Absolute errors Labs of $Q_G^h[f]$ for problem (3).

ω\N    10          20          30          d = 0 [8]   d = 1 [8]   d = 2 [8]
10     1.14e−06    1.14e−07    2.62e−07    −           −           −
50     4.26e−05    2.49e−07    1.90e−07    5.31e−03    2.51e−04    2.91e−05
100    1.05e−04    1.87e−07    5.51e−08    2.71e−03    9.91e−05    9.01e−06
200    3.56e−04    4.63e−07    6.03e−07    1.30e−03    3.91e−05    2.80e−06
400    6.97e−04    1.92e−06    4.47e−07    6.70e−04    1.60e−05    8.90e−07
800    7.75e−04    2.29e−05    1.32e−06    3.41e−04    6.12e−06    2.91e−07

6 Conclusion
In this paper, alternative algorithms are implemented for the numerical computation of two-dimensional HOIs with and without a critical point. A new quadrature rule $Q_G^L[f]$, based on Levin's approach with Gaussian radial basis functions in a meshless procedure, is used to compute HOIs without a critical point. The multi-resolution quadratures $Q_h[f]$ and $Q_H[f]$ are used for comparison with the proposed method and also for the case when the integral has a critical point.
A splitting technique is implemented for the numerical evaluation of two-dimensional HOIs with a critical point. The new method $Q_G^h[f]$ is obtained by merging the methods $Q_h[f]$ and $Q_G^L[f]$. It has been shown that the proposed method is numerically stable and accurate at high frequencies when the oscillator has a critical point. The accuracy of the proposed methods is verified numerically.

References
[1] A. Al-Fhaid, S. Zaman, and Siraj-ul-Islam. Meshless and wavelets based complex quadrature of highly oscillatory integrals and the integrals with stationary points. Eng. Anal. Bound. Elem., 37:1136–1144, 2013.

[2] A. Asheim. Applying the numerical method of steepest descent on multivariate oscillatory
integrals in scattering theory. arXiv preprint arXiv:1302.1019, 2013.

[3] A. Asheim and D. Huybrechs. Asymptotic analysis of numerical steepest descent with path
approximations. Found. Comp. Math., 10:647–671, 2010.

[4] I. Aziz, Siraj-ul-Islam, and W. Khan. Quadrature rules for numerical integration based on
Haar wavelet and hybrid functions. Comput. Math. Applic., 61:2770–2781, 2011.

[5] A. Bruce, D. Donoho, and H.Y. Gao. Wavelet analysis [for signal processing]. IEEE spectrum,
33:26–35, 1996.

[6] S. N. Chandler-Wilde, I. G. Graham, S. Langdon, and E. A. Spence. Numerical-asymptotic boundary integral methods in high-frequency acoustic scattering. Acta Numer., 21:89–305, 2012.

[7] W.C. Chew. Waves and fields in inhomogeneous media. V. Nost. Rein., New York, 1990.


[8] D. Huybrechs and S. Vandewalle. The construction of cubature rules for multivariate highly oscillatory integrals. Math. Comput., 76:1955–1980, 2007.

[9] L. N. G. Filon. On a quadrature formula for trigonometric integrals. Proc. Royal Soc., 49:38–47, 1928.
[10] P. J. Harris and K. Chen. An efficient method for evaluating the integral of a class of highly
oscillatory functions. J. Comp. Appl. Math., 230:433–442, 2009.
[11] D. Huybrechs and S. Olver. Super interpolation in highly oscillatory quadrature. Found.
Comput. Math., 12:203–228, 2012.
[12] D. Huybrechs and S. Vandewalle. On the evaluation of highly oscillatory integrals by analytic
continuation. J. Numer. Anal., 44:1026–1048, 2006.
[13] A. Iserles and S. Nørsett. Quadrature methods for multivariate highly oscillatory integrals
using derivatives. Math. Comp., 75:1233–1258, 2006.
[14] A. Iserles and S. P. Norsett. On quadrature methods for highly oscillatory integrals and
their implementation. BIT, 44:755–772, 2004.
[15] A. Iserles and S. P. Norsett. Efficient quadrature of highly oscillatory integrals using deriva-
tives. Proc. Roy. Soci., 461:1383–1399, 2005.
[16] D. Kincaid and W. Cheney. Numerical Analysis. Brooks/Cole, 2002.
[17] D. Levin. Procedures for computing one and two-dimensional integrals of functions with
rapid irregular oscillations. Math. Comp., 158:531–538, 1982.
[18] D. Levin. Fast integration of rapidly oscillatory functions. J. Comput. Appl. Math., 67:95–
101, 1996.
[19] D. Levin. Analysis of a collocation method for integrating rapidly oscillatory functions. J.
Comput. Appl. Math., 78:131–138, 1997.
[20] J. Li, X. Wang, T. Wang, and C. Shen. Delaminating quadrature method for multi-
dimensional highly oscillatory integrals. Appl. Math. Comp., 209:327–338, 2009.

[21] J. Li, X. Wang, T. Wang, and S. Xiao. An improved levin quadrature method for highly
oscillatory integrals. Appl. Numer. Math., 60:833–842, 2010.
[22] J. Nedelec. Acoustic and electromagnetic equations: Integral representations for harmonic
problems. 2001.

[23] S. Olver. On the quadrature of multivariate highly oscillatory integrals over non-polytope
domains. Numer. Math., 103:643–665, 2006.
[24] S. Olver. Moment-free numerical approximation of highly oscillatory integrals with stationary
points. European J. Appl. Math., 18:435–447, 2007.

[25] S. Olver. Numerical approximation of highly oscillatory integrals. PhD thesis, University of
Cambridge, 2008.

[26] S. Olver. Fast and numerically stable computation of oscillatory integrals with stationary
points. BIT, 50:149–171, 2010.


[27] H. M. Ozaktas, A. Koç, I. Sari, and M. A. Kutay. Efficient computation of quadratic-phase integrals in optics. Opt. Lett., 31:35–37, 2006.

[28] A. D. Pierce and P. Smith. Acoustics: An introduction to its physical principles and appli-
cations. Today Phys., 34:56–57, 1981.

[29] K. Shariff and A. Wray. Analysis of the radar reflectivity of aircraft vortex wakes. J. F.
Mech., 463:121–161, 2002.

[30] Siraj-ul-Islam, A. S. Al-Fhaid, and S. Zaman. Meshless and wavelet based complex quadra-
ture of highly oscillatory integrals and the integrals with stationary points. Eng. Anal.
Bound. Elemt., 37:1136–1144, 2013.

[31] Siraj-ul-Islam, I. Aziz, and F. Haq. A comparative study of numerical integration based on
Haar wavelet and hybrid functions. Comput. Math. Appl., 59:2026–2036, 2010.

[32] Siraj-ul-Islam, I. Aziz, and W. Khan. Numerical integration of multi-dimensional highly oscillatory, gentle oscillatory and non-oscillatory integrands based on wavelets and radial basis functions. Eng. Anal. Bound. Elem., 36:1284–1295, 2012.

[33] Siraj-ul-Islam and S. Zaman. New quadrature rules for highly oscillatory integrals with
stationary points. J. Comput. Appl. Math., 278:75–89, 2015.

[34] S. Xiang. Efficient quadrature for highly oscillatory integrals involving critical points. J.
Comp. Appl. Math., 206:688–698, 2007.

[35] S. Xiang and H. Wang. Fast integration of highly oscillatory integrals with exotic oscillators.
Math. Comput., 79:829–844, 2010.

[36] Z. Xu and S. Xiang. On the evaluation of highly oscillatory finite Hankel transform using
special functions. Numer. Algor., 72:37–56, 2016.

[37] S. Zaman and Siraj-ul-Islam. Efficient numerical methods for Bessel type of oscillatory
integrals. J. Comput. Appl. Math., 315:161–174, 2017.

[38] S. Zaman and Siraj-ul-Islam. Numerical methods for multivariate highly oscillatory integrals.
Inter. J. Compu. Mathe., 95:1–23, 2017.


Identification of unknown heat source in inverse problem by Haar wavelet collocation method

Khawaja Shams-ul-Haq^a, Muhammad Ahsan^{a,b}, Siraj-ul-Islam^a

^a Department of Basic Sciences, UET Peshawar, Pakistan.
^b Department of Mathematics, University of Swabi, Pakistan.
Abstract
In this paper, we develop a new Haar wavelet based numerical algorithm for the inverse heat equation with an unknown space-dependent heat source. Inverse heat PDEs of this kind are ill-posed, and a small error in the input data produces a large error in the recovered heat source. A first-order finite-difference formula is used for the time derivative, and a finite Haar series is used for the space-derivative approximation. Numerical implementation shows effective and stable results, even for highly ill-posed problems, when the available data contain a considerable amount of noise. Due to the small condition number of the discretized matrix, the present approach can easily be extended to non-linear inverse problems as well.

1 Introduction
In this paper, we seek the unknown heat source, depending on both the time and space variables, in the inverse heat equation. Inverse equations of this type arise in diffusion processes, heat conduction and the transport of natural materials. Mathematically, the inverse heat equation can be written as
\[ \Psi_t(z,t) - \Psi_{zz}(z,t) = F(z,t),\qquad 0 < z < 1,\ t > 0. \qquad (1) \]
The initial, boundary and over-specified conditions are given as
\[ \Psi(z,0) = I(z),\qquad \Psi(0,t) = g_1(t),\qquad \Psi(1,t) = g_2(t), \qquad (2) \]
and
\[ \Psi(z,T) = h(z). \qquad (3) \]
In Eq. (1), Ψ(z, t) represents a physical state and F(z, t) a source term; both are unknown in this case. Inverse problems of this type are fundamentally ill-posed: the solution does not depend continuously on the initial and boundary conditions. Due to this ill-posedness, the existence, stability and uniqueness of solutions are not guaranteed, and a small change in the input data can produce an unexpectedly large change in the output [1, 2]. A variety of numerical methods have been used in the literature for the numerical approximation of inverse heat problems, such as the method of fundamental solutions [3], the boundary element method (BEM) [4], the Tikhonov regularization technique (TRT) [5] and the Fourier regularization method [6].

In the last few years, wavelet methods have been used widely in different applications. These include the wavelet collocation method [7], the Galerkin wavelet method [8–11] and the Haar wavelet method for inverse heat problems (IHPs) [12]. In this work, we propose a Haar wavelet collocation method (HWCM) for the numerical solution of the IHP. The organization of this paper is as follows: Haar wavelets are introduced in Section 1.1, the numerical method is presented in Section 2, results and discussions are given in Section 3, and conclusions in Section 4.


1.1 Haar wavelets

A Haar wavelet function can be defined as
\[ h_i(z) = \begin{cases} 1 & \text{for } z \in [\zeta_1, \zeta_2),\\ -1 & \text{for } z \in [\zeta_2, \zeta_3),\\ 0 & \text{elsewhere}, \end{cases} \qquad (4) \]
where
\[ \zeta_1 = \frac{k}{m},\qquad \zeta_2 = \frac{k + 0.5}{m},\qquad \zeta_3 = \frac{k + 1}{m}. \]
The above parameters are given in [12]. We define the following notations for the integrals of the Haar wavelets:
\[ p_{i,1}(z) = \int_0^z h_i(s)\,ds,\qquad p_{i,2}(z) = \int_0^z p_{i,1}(s)\,ds,\qquad C_i = \int_0^1 p_{i,1}(z)\,dz. \]
Using Eq. (4), we get
\[ p_{i,1}(z) = \begin{cases} z - \zeta_1 & \text{for } z \in [\zeta_1, \zeta_2),\\ \zeta_3 - z & \text{for } z \in [\zeta_2, \zeta_3),\\ 0 & \text{elsewhere}, \end{cases} \qquad
p_{i,2}(z) = \begin{cases} \frac{1}{2}(z - \zeta_1)^2 & \text{for } z \in [\zeta_1, \zeta_2),\\ \frac{1}{4m^2} - \frac{1}{2}(\zeta_3 - z)^2 & \text{for } z \in [\zeta_2, \zeta_3),\\ \frac{1}{4m^2} & \text{for } z \in [\zeta_3, 1),\\ 0 & \text{elsewhere}, \end{cases} \]
and
\[ C_i = \frac{1}{4m^2}. \]
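The closed forms above can be sanity-checked against direct numerical integration. A small self-contained sketch (the index pair m = 4, k = 1 is an arbitrary choice):

```python
import numpy as np

m, k = 4, 1                                   # arbitrary wavelet parameters
z1, z2, z3 = k/m, (k + 0.5)/m, (k + 1)/m

def h(z):                                     # Eq. (4)
    return np.where((z >= z1) & (z < z2), 1.0,
           np.where((z >= z2) & (z < z3), -1.0, 0.0))

def p1(z):                                    # closed form of the first integral
    return np.where((z >= z1) & (z < z2), z - z1,
           np.where((z >= z2) & (z < z3), z3 - z, 0.0))

z = np.linspace(0.0, 1.0, 200001)
# cumulative trapezoidal integral of h should reproduce p1
cum = np.concatenate([[0.0], np.cumsum((h(z)[1:] + h(z)[:-1])/2*np.diff(z))])
err_p1 = np.max(np.abs(cum - p1(z)))
# C_i, the integral of p1 over [0, 1], should equal 1/(4 m^2)
C = np.sum((p1(z)[1:] + p1(z)[:-1])/2*np.diff(z))
print(err_p1, abs(C - 1/(4*m**2)))  # both near grid resolution
```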

2 Numerical Approximation
The source term F(z, t) in Eq. (1) can be written in the form
\[ F(z,t) = \phi(t)\, f(z), \]
where f(z) is the unknown function to be determined and φ(t) is a known function. For the space derivative we use the Haar wavelet expansion
\[ \Psi_{zz}(z,t) = \sum_{i=1}^{2M} \lambda_i\, h_i(z). \qquad (5) \]
Integrating Eq. (5) from 0 to z with respect to z, we get
\[ \Psi_z(z,t) = \Psi_z(0,t) + \sum_{i=1}^{2M} \lambda_i\, p_{i,1}(z). \qquad (6) \]
Integrating Eq. (6) from 0 to 1 with respect to z, we get
\[ \Psi_z(0,t) = \Psi(1,t) - \Psi(0,t) - \sum_{i=1}^{2M} \lambda_i\, C_i. \qquad (7) \]


Putting Eq. (7) in Eq. (6) and integrating again from 0 to z with respect to z, we get
\[ \Psi(z,t) = \Psi(0,t) + z\left(\Psi(1,t) - \Psi(0,t)\right) + \sum_{i=1}^{2M} \lambda_i\left(p_{i,2}(z) - zC_i\right). \qquad (8) \]
Now putting t = T in Eq. (1), we get
\[ \frac{\Psi_t(z,T) - \Psi_{zz}(z,T)}{\phi(T)} = f(z). \qquad (9) \]
Eliminating f(z) from Eq. (1) and Eq. (9), we get
\[ \Psi_t(z,t) - \Psi_{zz}(z,t) = \phi(t)\left[\frac{\Psi_t(z,T) - \Psi_{zz}(z,T)}{\phi(T)}\right]. \qquad (10) \]
From the over-specified condition we have $\Psi_{zz}(z,T) = h''(z)$ and $\Psi_t(z,T) = 0$, so
\[ \Psi_t(z,t) - \Psi_{zz}(z,t) = \phi(t)\left[\frac{-h''(z)}{\phi(T)}\right]. \qquad (11) \]
Now using the implicit scheme,
\[ [\Psi(z,t)]^{j+1} - \Delta t\,[\Psi_{zz}(z,t)]^{j+1} = [\Psi(z,t)]^{j} - \Delta t\left[\frac{\phi(t)\,h''(z)}{\phi(T)}\right]^{j}. \qquad (12) \]
By putting Eq. (5) and Eq. (8) in Eq. (12), we get 2M equations in 2M unknowns. By finding these unknowns and putting them back in Eq. (8) we obtain the numerical solution of Ψ(z, t).

From Eq. (1) we can find f(z) in the following manner:
\[ f(z) = \frac{\Psi_t(z,t) - \Psi_{zz}(z,t)}{\phi(t)}, \qquad (13) \]
\[ f(z) = \frac{\dfrac{[\Psi(z,t)]^{j+1} - [\Psi(z,t)]^{j}}{\Delta t} - h''(z)}{\phi(t)}, \qquad (14) \]
\[ f(z) = \frac{[\Psi(z,t)]^{j+1} - [\Psi(z,t)]^{j} - \Delta t\, h''(z)}{\Delta t\, \phi(t)}. \qquad (15) \]
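The space-discretization step can be illustrated in isolation: expand a known second derivative in the Haar series of Eq. (5), then recover the function through Eq. (8). A self-contained sketch (the test function sin(πz), the standard dyadic Haar family, and M = 16 are illustrative assumptions, not the authors' code):

```python
import numpy as np

def haar(i, z):
    # h_i, p_{i,1}, p_{i,2}, C_i for the standard dyadic Haar family on [0, 1)
    z = np.asarray(z, dtype=float)
    if i == 1:                                 # scaling function
        return np.ones_like(z), z, z**2/2, 0.5
    m = 2**int(np.floor(np.log2(i - 1)))       # wavelet scale
    k = i - 1 - m                              # translation
    z1, z2, z3 = k/m, (k + 0.5)/m, (k + 1)/m
    h = np.where((z >= z1) & (z < z2), 1.0, np.where((z >= z2) & (z < z3), -1.0, 0.0))
    p1 = np.where((z >= z1) & (z < z2), z - z1, np.where((z >= z2) & (z < z3), z3 - z, 0.0))
    p2 = np.where((z >= z1) & (z < z2), 0.5*(z - z1)**2,
         np.where((z >= z2) & (z < z3), 1/(4*m**2) - 0.5*(z3 - z)**2,
         np.where(z >= z3, 1/(4*m**2), 0.0)))
    return h, p1, p2, 1/(4*m**2)

M = 16
n = 2*M
zc = (np.arange(1, n + 1) - 0.5)/n             # collocation (midpoint) grid
Psi = lambda z: np.sin(np.pi*z)
Psi_zz = lambda z: -np.pi**2*np.sin(np.pi*z)

# solve the Haar expansion (5), collocated at zc, for the coefficients lambda_i
H = np.column_stack([haar(i, zc)[0] for i in range(1, n + 1)])
lam = np.linalg.solve(H, Psi_zz(zc))

# Eq. (8): rebuild Psi from its boundary values and the integrated wavelets
rec = Psi(0.0) + zc*(Psi(1.0) - Psi(0.0))
for i in range(1, n + 1):
    _, _, p2i, Ci = haar(i, zc)
    rec += lam[i - 1]*(p2i - zc*Ci)
print(np.max(np.abs(rec - Psi(zc))))
```

The reconstruction error shrinks roughly like $1/n^2$ as the resolution grows, which is the behavior the HWCM relies on at each time level.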

3 Results and Discussions

In this section we include an example to verify the accuracy and efficiency of the present HWCM. We use the root-mean-square (RMS) error to check the accuracy of the numerical results, defined as
\[ RMS(\Psi) = \sqrt{\frac{1}{N}\sum_{i=1}^{2M}\left(\Psi_{Exact} - \Psi_{Numerical}\right)^2},\qquad RMS(f) = \sqrt{\frac{1}{N}\sum_{i=1}^{2M}\left(f_{Exact} - f_{Numerical}\right)^2}. \]
We introduce a noise level in the over-specified condition, Ψ(z, T) = h(z) + σR, where R ∈ (−1, 1) is a random number.
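A minimal sketch of this error measure and of the noisy data model (the grid, the final time T = 1 and the RNG seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def rms(exact, numerical, N):
    return np.sqrt(np.sum((exact - numerical)**2)/N)

N = 32
z = (np.arange(1, N + 1) - 0.5)/N
h_clean = np.sin(z)*np.exp(1.0)               # h(z) = Psi(z, T) with T = 1 assumed
sigma = 0.1
h_noisy = h_clean + sigma*rng.uniform(-1.0, 1.0, N)   # h(z) + sigma*R, R in (-1, 1)
val = rms(h_clean, h_noisy, N)
print(val)  # about sigma/sqrt(3) on average
```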


Table 1: Comparison of the RMS errors and condition numbers of the RBF method [8] (N = 48) with the present HWCM (N = 32) at Δt = 0.01 and σ = 0.1, for various times t, for Test Problem 1.

t     RMS(Ψ)      RMS(Ψ)       RMS(f)      RMS(f)       cond. number    cond. number
      [8], N=48   HWCM, N=32   [8], N=48   HWCM, N=32   [8], N=48       HWCM, N=32
0.2   0.0121      0.0289       0.0846      0.0383       9.3099×10^10    4.15507×10^1
0.3   0.0306      0.0300       0.0389      0.0401       2.1324×10^10    4.15507×10^1
0.4   0.0328      0.0320       0.1016      0.0407       7.9735×10^9     4.15507×10^1
0.5   0.0105      0.0352       0.1602      0.0409       1.3145×10^9     4.15507×10^1
0.6   0.0105      0.0395       0.1969      0.0410       2.3021×10^10    4.15507×10^1

Figure 1: Comparison of exact and numerical values of Ψ(z, t) and f (z) at σ = 0.1, N = 32, ∆t = 0.01 and
t = 0.6.


Figure 2: Numerical solution of Ψ(z, t) and f (z) at σ = 0.1, N = 32 and ∆t = 0.01.

Test Problem 1. The exact solution and the exact source term of Eq. (1) are Ψ(z, t) = sin(z)e^t and f(z) = 2 sin(z), respectively. Here the boundary, initial and over-specified conditions are given as
\[ \Psi(z,0) = \sin(z),\qquad \Psi(0,t) = 0,\qquad \Psi(1,t) = \sin(1)\,e^{t}. \qquad (16) \]
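This exact pair can be checked against Eq. (1) by finite differences; the factorization F(z, t) = φ(t) f(z) with φ(t) = e^t is inferred from f(z) = 2 sin(z) (a sketch, not part of the paper's algorithm):

```python
import numpy as np

Psi = lambda z, t: np.sin(z)*np.exp(t)
f = lambda z: 2.0*np.sin(z)
phi = lambda t: np.exp(t)          # inferred so that Psi_t - Psi_zz = phi(t)*f(z)

z0, t0, d = 0.3, 0.7, 1e-4
Psi_t = (Psi(z0, t0 + d) - Psi(z0, t0 - d))/(2*d)                # central difference in t
Psi_zz = (Psi(z0 + d, t0) - 2*Psi(z0, t0) + Psi(z0 - d, t0))/d**2
residual = Psi_t - Psi_zz - phi(t0)*f(z0)
print(abs(residual))  # only finite-difference truncation error remains
```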

The numerical results are compared with those of the radial basis function (RBF) method [8] for σ = 0.1 at different values of the final time in Table 1. From the table it is clear that the HWCM accuracy is comparatively better than that of [8]. These results show that the Haar wavelet collocation method can handle the inverse heat equation with considerable accuracy. In Fig. 1 the numerical results are compared with the exact solution for Ψ(z, t) and f(z). In Fig. 2 the surface plot of the approximate solution is shown. The good performance of the proposed method is evident from these figures as well.

4 Conclusion
In this paper, we have implemented the HWCM for the numerical solution of a time- and space-dependent source term in an inverse problem. Due to the small condition number of the coefficient matrix, the calculated results are accurate and acceptable.


References
[1] C.-S. Liu, A double optimal descent algorithm for iteratively solving ill-posed linear inverse problems,
Inverse Problems in Science and Engineering 23 (1) (2015) 38–66.
[2] C.-S. Liu, A highly accurate LGSM for severely ill-posed BHCP under a large noise on the final time
data, International Journal of Heat and Mass Transfer 53 (19-20) (2010) 4132–4140.
[3] N. Mera, The method of fundamental solutions for the backward heat conduction problem, Inverse
Problems in Science and Engineering 13 (1) (2005) 65–78.
[4] H. Han, D. Ingham, Y. Yuan, The boundary element method for the solution of the backward heat
conduction equation, Journal of Computational Physics 116 (2) (1995) 292–299.
[5] W. Muniz, F. Ramos, H. de Campos Velho, Entropy- and Tikhonov-based regularization techniques applied to the backwards heat equation, Computers & Mathematics with Applications 40 (8-9) (2000) 1071–1084.
[6] C.-L. Fu, X.-T. Xiong, Z. Qian, Fourier regularization for a backward heat equation, Journal of
Mathematical Analysis and Applications 331 (1) (2007) 472–480.
[7] Siraj-ul-Islam, B. Šarler, I. Aziz, Haar wavelet collocation method for the numerical solution of
boundary layer fluid flow problems, International Journal of Thermal Sciences 50 (5) (2011) 686–697.
[8] A. Shidfar, Z. Darooghehgimofrad, A numerical algorithm based on RBFs for solving an inverse source
problem, Bulletin of the Malaysian Mathematical Sciences Society 40 (3) (2017) 1149–1158.
[9] Siraj-ul-Islam, I. Aziz, M. Ahmad, Numerical solution of two-dimensional elliptic PDEs with nonlocal
boundary conditions, Computers & Mathematics with Applications 69 (3) (2015) 180–205.
[10] Siraj-ul-Islam, I. Aziz, A. Al-Fhaid, An improved method based on Haar wavelets for numerical solution of nonlinear integral and integro-differential equations of first and higher orders, Journal of Computational and Applied Mathematics 260 (2014) 449–469.
[11] Ü. Lepik, Solving PDEs with the aid of two-dimensional Haar wavelets, Computers & Mathematics with Applications 61 (7) (2011) 1873–1879.
[12] Siraj-ul-Islam, M. Ahsan, I. Hussian, A multi-resolution collocation procedure for time-dependent
inverse heat problems, International Journal of Thermal Sciences 128 (2018) 160–174.


On numerical computation of three dimensional highly oscillatory integrals

Shomaila Mazhar^{a,∗}, Siraj-ul-Islam^a, Sakhi Zaman^a

^a Department of Basic Sciences, University of Engineering and Technology Peshawar, Pakistan.

ABSTRACT
In this paper, new procedures are proposed for the numerical evaluation of three-dimensional highly oscillatory integrals with and without a critical point. The new method is based on the Levin approach with the Gaussian radial basis function, while multi-resolution quadrature rules based on hybrid functions and Haar wavelets are used to compute multivariate HOIs with and without a critical point. The proposed method gives the desired accuracy for large values of the frequency parameter of the oscillations. Test problems are included for numerical verification of the proposed method.
Keywords: Multivariate highly oscillatory integrals, Gaussian radial basis function, Hybrid and Haar functions.

1 INTRODUCTION
HOIs have a wide range of uses in different practical problems of science and engineering. Sometimes it is impossible to evaluate a definite integral analytically, and numerical integration is then the only practical way to evaluate it. Classical quadratures like Newton-Cotes and Gaussian quadrature fail in the case of HOIs. The inherent reason for this failure is the rapid oscillation of the integrands: the quadrature points cannot match the oscillations at high frequency, so efforts should be made to establish new quadrature algorithms to resolve the problem. In many areas of applied mathematics, one needs to compute HOIs of the form
\[ I_\omega[f] = \int_{\Omega_1} f(\bar u)\, e^{i\omega g(\bar u)}\, d\bar u,\qquad \bar u \in \Omega_1 \subset \mathbb{R}^m,\ m = 3, \qquad (1) \]
where f(ū) and g(ū) are smooth functions and the parameter ω > 0 is the frequency of oscillations [2, 9, 10, 16].
In the 20th century, many accurate numerical methods for one-dimensional and multi-dimensional oscillatory integrals were developed. These methods can be broadly divided into two groups: asymptotic approaches and quadrature methods. The asymptotic method is a well-known procedure to approximate HOIs of high frequency. Researchers such as Sheehan Olver [11, 13, 14], Iserles [8] and Filon [4] are the main contributors in this area. There are some drawbacks in this approach; for example, the Filon method is only implemented when g is a linear function, so the generality of the asymptotic method is limited in some cases [7].

∗ The author to whom all correspondence should be addressed. Email: shomaila-leo91@yahoo.com


The Filon method has a relatively long history, but it is generally thought to be effective only for integrals with linear phase functions [3]. The mainstream methods concerned with the numerical treatment of multivariate HOIs are asymptotic methods [7]; in [7] the author expands HOIs into asymptotic series in inverse powers of the frequency. Quadrature-based methods present a more practical approach for the computation of integrals having highly oscillatory kernels. These methods are accurate and efficient for computing single or multi-dimensional integrals with complicated phase functions. Among other types of quadrature rules, the numerical steepest descent method [5] and Levin's method [10] are used to approximate HOIs. The steepest descent method is applicable only to integrals with analytic integrands, and finding the steepest descent path is complicated. Historically, this was considered very difficult and expensive before Levin presented an alternative procedure, which made it possible for quadrature methods to be used for the numerical solution of HOIs [10].
Levin's method [10] has received much attention due to its good performance in the case of HOIs. As Levin's approach can compute oscillatory integrals with complicated phase functions, many researchers have used it for the numerical evaluation of HOIs [1, 2, 9, 18].
Efficient numerical evaluation of multi-dimensional HOIs having a CP is particularly challenging. Few methods are available in the literature, and some of them are limited in scope. There is growing interest in the numerical solution of HOIs, both in its own right and because of the wide range of applications of such HOIs in science and engineering.
Some numerical methods have been proposed in the literature for evaluating HOIs with a CP, with emphasis on the asymptotic decay as ω → ∞. High asymptotic order accuracy is achieved in Filon-type methods [7] and moment-free Filon-type methods [12, 17] by interpolating the derivatives of f at the CP and the endpoints of the interval. The numerical steepest descent technique [5] simultaneously achieves high asymptotic order and numerical stability. In [18], the authors extended the technique of [17] to handle the CP of HOIs. In this algorithm, the meshless method is coupled with multi-resolution analysis: the linear polynomial is replaced by the Haar basis quadrature, and the Hermite polynomials are replaced by the HF based quadrature. In the same paper, the analytical error bounds of the individual methods are determined and justified by numerical examples.
In [19], the author proposed a new method for the numerical evaluation of multivariate HOIs over piecewise analytic rectangular and non-rectangular regions. The domain is split into sub-regions with and without stationary points. A multivariate meshless quadrature based on Levin's approach is used in the regions without stationary points, and a multivariate HF of order m = 9 is used in the regions having stationary points.
In the proposed work, the Levin collocation method based on the Gaussian RBF is used for the approximation of HOIs with no CP. Apart from that, multi-resolution quadratures are also implemented for the approximation of HOIs with and without a CP.

2 QUADRATURE RULES
In this section, we discuss two types of quadrature rules, namely Levin's method based on the Gaussian RBF and multi-resolution quadratures. A detailed description of these quadrature rules is given as follows.


2.1 Levin’s method


The three-dimensional HOIs over the rectangular domain is given as
Z b1 Z d1 Z t1
Iω [f ] = f (u, v, w)ei ω g(u,v,w)
dudvdw. (2)
a1 e1 s1

An approximate function P̂ (u, v, w) is supposed to satisfy the following third order PDE:

Puvw + iω(Puv gw + Pvw gu + Puw gv + Pv guw + Pu gvw + Pw guv + P guvw )


− ω 2 (Pw gu gv + Pv gu gw + Pu gv gw + P guv gw + P gv guw + P gu gvw ) (3)
3
− iω (P gu gv gw ) = f (u, v, w).

Then the integral (2) reduced to


b1 d1 t1
∂3
Z Z Z
Iω [f ] = [P (u, v, w)ei ω g(u,v,w)
]. (4)
a1 e1 s1 ∂u∂v∂w

The aim of this work is to find the particular solution P̂ (u, v, w) of the PDEs (3). For this we
select an appropriate basis function with better approximating properties.

2.2 Multivariate meshless procedure

According to this procedure, an approximate solution $\hat P(\bar u) = \sum_{j=1}^{M^3} \delta_j\, \psi(\|\bar u - \bar u_j\|_2)$ is supposed to satisfy the PDE (3). Here the $\delta_j$, $j = 1, \dots, M^3$, are the $M^3$ unknown coefficients, which can be computed from the collocation condition
\[ L\!\left[\hat P(\bar u)\, e^{i\omega g(\bar u)}\right]_{\bar u = \bar u_k} = f(\bar u_k)\, e^{i\omega g(\bar u_k)},\qquad k = 1, 2, \dots, N^3,\ \bar u \in \mathbb{R}^3, \qquad (5) \]
where L is the three-dimensional partial derivative operator.

In this work, the Gaussian RBF ψ is chosen as the basis function due to its proven characteristics; it is defined as
\[ \psi(r, c) = e^{-r^2/c^2}, \qquad (6) \]
where r is the radial distance,
\[ r^2 = (u - u_j)^2 + (v - v_k)^2 + (w - w_l)^2,\qquad j, k, l = 1, \dots, M. \]
Equation (5) can be written in matrix form as
\[ A\delta = F, \qquad (7) \]
where A is an $M^3 \times M^3$ square matrix and δ, F are column vectors of order $M^3 \times 1$. The values of the unknowns $\delta_j$, $j = 1, 2, \dots, M^3$, can be found by solving the system of equations (7) by Cholesky's method or by the TSVD. Consequently, one obtains the approximate RBF solution $\hat P(u,v,w)$.
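The same collocation idea is easiest to see in one dimension, where the operator reduces to $p'(u) + i\omega g'(u)\,p(u) = f(u)$ and the integral becomes $p(b)e^{i\omega g(b)} - p(a)e^{i\omega g(a)}$. A sketch under assumptions (f(u) = 1/(1+u), g(u) = u, ω = 50, N = 20 Gaussian centres with shape c = 2·spacing; this is not the authors' 3D code):

```python
import numpy as np

omega = 50.0
f = lambda u: 1.0/(1.0 + u)
gprime = lambda u: np.ones_like(u)            # phase g(u) = u

N = 20
x = np.linspace(0.0, 1.0, N)
c = 2*(x[1] - x[0])
psi = lambda u, uj: np.exp(-((u - uj)/c)**2)  # Gaussian RBF, Eq. (6)
dpsi = lambda u, uj: -2*(u - uj)/c**2*psi(u, uj)

# Levin ODE p'(u) + i*omega*g'(u)*p(u) = f(u), collocated at the centres
A = dpsi(x[:, None], x[None, :]) + 1j*omega*gprime(x)[:, None]*psi(x[:, None], x[None, :])
delta = np.linalg.pinv(A, rcond=1e-13) @ f(x)          # TSVD-style solve, as in the text
p = lambda u: psi(u, x) @ delta
I_levin = p(1.0)*np.exp(1j*omega*1.0) - p(0.0)*np.exp(1j*omega*0.0)

# brute-force reference, feasible only because omega is moderate
u = np.linspace(0.0, 1.0, 400001)
F = f(u)*np.exp(1j*omega*u)
I_ref = np.sum((F[1:] + F[:-1])/2*np.diff(u))
print(abs(I_levin - I_ref))
```

The collocation matrix is ill-conditioned, which is why the truncated solve (pinv with a small rcond) mirrors the TSVD option mentioned above.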


2.3 Multi-resolution quadratures

Multi-resolution quadratures based on HF and HW are discussed in detail in [6, 15] for the evaluation of multivariate non-oscillatory integrals; in [6] they are also used to evaluate mildly and highly oscillatory integrals.
The 9th-order HF based quadrature, approximating an integral of the form $\int_{a_1}^{b_1}\int_{a_2(w)}^{b_2(w)}\int_{a_3(v,w)}^{b_3(v,w)} f(u,v,w)\,du\,dv\,dw$, uses the weights
\[ (c_1,\dots,c_9) = (832221,\ -260808,\ 2903148,\ -3227256,\ 5239790,\ -3227256,\ 2903148,\ -260808,\ 832221) \]
and is given by
\[ Q_9^h[f] \approx \frac{9\hbar}{5734400}\sum_{k=1}^{m}\sum_{j=1}^{9} c_j\, H\!\left(a_1 + \frac{\hbar}{2}\,(18k - 19 + 2j)\right),\qquad \hbar = \frac{b_1 - a_1}{9m}, \]
where
\[ H(w) \approx \frac{9\kappa}{5734400}\sum_{i=1}^{m}\sum_{j=1}^{9} c_j\, G\!\left(a_2(w) + \frac{\kappa}{2}\,(18i - 19 + 2j),\, w\right),\qquad \kappa = \frac{b_2(w) - a_2(w)}{9m}, \qquad (8) \]
and
\[ G(v,w) \approx \frac{9r}{5734400}\sum_{i=1}^{m}\sum_{j=1}^{9} c_j\, F\!\left(a_3(v,w) + \frac{r}{2}\,(18i - 19 + 2j),\, v,\, w\right),\qquad r = \frac{b_3(v,w) - a_3(v,w)}{9m}. \qquad (9) \]
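A one-dimensional restriction of this 9-point rule makes the weight pattern easy to check: the weights sum to 5734400 (so constants are integrated exactly) and the nodes are the nine panel midpoints. A sketch (the integrand e^u and m = 4 panels are arbitrary choices):

```python
import numpy as np

W9 = np.array([832221, -260808, 2903148, -3227256, 5239790,
               -3227256, 2903148, -260808, 832221], dtype=float)
assert W9.sum() == 5734400          # constants are integrated exactly

def q9_1d(f, a, b, m):
    # m panels of width 9*hbar, nine midpoint nodes per panel
    hbar = (b - a)/(9*m)
    total = 0.0
    for k in range(1, m + 1):
        nodes = a + hbar/2*(18*k - 17 + 2*np.arange(9))
        total += W9 @ f(nodes)
    return 9*hbar/5734400*total

err = abs(q9_1d(np.exp, 0.0, 1.0, 4) - (np.e - 1.0))
print(err)  # high-order accurate on smooth integrands
```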
Similarly, the HW based quadrature for approximating a three-dimensional integral is given by
\[ Q_{hw}[f] \approx \frac{b_1 - a_1}{2P}\sum_{i=1}^{2P} H\!\left(a_1 + \frac{(b_1 - a_1)(i - 0.5)}{2P}\right), \]
where
\[ H(w) = \frac{b_2(w) - a_2(w)}{2N}\sum_{i=1}^{2N} G\!\left(a_2(w) + \frac{(b_2(w) - a_2(w))(i - 0.5)}{2N},\, w\right), \]
and
\[ G(v,w) = \frac{b_3(v,w) - a_3(v,w)}{2M}\sum_{i=1}^{2M} F\!\left(a_3(v,w) + \frac{(b_3(v,w) - a_3(v,w))(i - 0.5)}{2M},\, v,\, w\right). \qquad (10) \]
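Over a box with constant limits, the HW quadrature above reduces to a nested midpoint rule. A sketch (the test integrand e^{u+v+w} and the resolution P = N = M = 64 are arbitrary choices):

```python
import numpy as np

def q_hw_box(f, a1, b1, a2, b2, a3, b3, P, N, M):
    # midpoint nodes in each direction, as in the HW formulas above
    u = a1 + (b1 - a1)*(np.arange(1, 2*P + 1) - 0.5)/(2*P)
    v = a2 + (b2 - a2)*(np.arange(1, 2*N + 1) - 0.5)/(2*N)
    w = a3 + (b3 - a3)*(np.arange(1, 2*M + 1) - 0.5)/(2*M)
    U, V, Wg = np.meshgrid(u, v, w, indexing="ij")
    cell = (b1 - a1)*(b2 - a2)*(b3 - a3)/(8*P*N*M)
    return cell*np.sum(f(U, V, Wg))

exact = (np.e - 1.0)**3             # integral of e^{u+v+w} over the unit cube
approx = q_hw_box(lambda u, v, w: np.exp(u + v + w), 0, 1, 0, 1, 0, 1, 64, 64, 64)
print(abs(approx - exact))  # midpoint rule: O(h^2) accurate
```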

3 SPLITTING ALGORITHM
The author of [19] proposed a new splitting algorithm for the evaluation of HOIs having a CP. According to this procedure, a small region containing the CP is separated from the region having no CP. For this purpose, the regular domain is split into two main parts: one in which the CP lies, and the other containing no CP. In the former domain the method $Q_9^h[f]$ is used to approximate the integral, and in the latter $Q_G^L[f]$ is used. For this, a number ζ, such that a < ζ < b, is defined in the following form:
\[ \zeta = \left(\frac{N}{10\omega}\right)^{1/\kappa}, \qquad (11) \]
where ζ → 0 when ω → ∞ for fixed N, and κ is the order of the CP.
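Equation (11) ties the size of the isolated CP region to the frequency; a two-line sketch (N = 10 and κ = 2 are illustrative choices):

```python
import numpy as np

def zeta(omega, N=10.0, kappa=2.0):
    # Eq. (11): the CP region shrinks as the frequency grows
    return (N/(10.0*omega))**(1.0/kappa)

omegas = np.array([1e2, 1e3, 1e4, 1e5])
zs = zeta(omegas)
print(zs)  # monotonically decreasing toward 0
```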


In the three-dimensional case the regular volumetric domain is sub-divided into four compartments: ρ1 is the small domain that contains the CP at one corner, while the other compartments ρ2, ρ3 and ρ4 contain no CP, as shown in Fig. 1. According to the proposed splitting algorithm, integral (1) can be written as
\[
\begin{aligned}
I_\omega[f] &= \iiint_{\Omega} f(u,v,w)\, e^{i\omega g(u,v,w)}\, dV\\
&= \iiint_{\rho_1} f(u,v,w)\, e^{i\omega g(u,v,w)}\, dV + \iiint_{\rho_2} f(u,v,w)\, e^{i\omega g(u,v,w)}\, dV\\
&\quad + \iiint_{\rho_3} f(u,v,w)\, e^{i\omega g(u,v,w)}\, dV + \iiint_{\rho_4} f(u,v,w)\, e^{i\omega g(u,v,w)}\, dV\\
&= I_1[f] + I_2[f] + I_3[f] + I_4[f].
\end{aligned} \qquad (12)
\]

The integral I_1[f], containing the CP, is computed by Q^h_9[f]; the remaining integrals, having no CP, are computed by Q^L_G[f].

3.1 Working procedure


Suppose that the integral (1) has a CP at x = a having order k − 1. The following working
procedure is used to evaluate the integral


Figure 1: Splitting of the regular volumetric domain. The corner box ρ_1 = [0, ζ]³ at the origin O contains the CP; the splitting point is P(ζ, ζ, ζ), and ρ_2, ρ_3, ρ_4 cover the remainder of the unit cube.

i. For small ω, Q^h_9[f] is used to compute integral (1).

ii. For higher values of ω, integral (1) is split according to equation (12).

iii. The integral I_1[f], containing the CP, is computed by the method Q^h_9[f] and the remaining integrals are computed by the method Q^L_G[f]; the resulting value of (1) becomes

Q^H_M[f] = Q^h_9[f] + Q^L_G[f].   (13)

4 NUMERICAL ANALYSIS
The proposed methods are illustrated by solving some numerical test problems. The reference solution is obtained using MAPLE. Relative errors are reported in cases where software such as MAPLE and MATLAB fails to produce an analytical solution. The results were computed on the MATLAB platform on a Core i5 laptop.

4.1 Test problems


Test Problem 1. Let us consider the HOI

I_ω[f] = ∫_0^1 ∫_0^1 ∫_0^1 cos(10(u + v + w)) e^{iω(u+v+w)} du dv dw.   (14)
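Integral (14) can be cross-checked in closed form: writing s = u + v + w, cos(10s) e^{iωs} = ½(e^{i(ω+10)s} + e^{i(ω−10)s}), and the triple integral of e^{iks} over the unit cube factorizes into ((e^{ik} − 1)/(ik))³. The Python sketch below (ours, not the authors' code) compares this closed form against a brute-force tensor midpoint rule at a moderate frequency:

```python
import cmath
import math

def exact_I(omega):
    """Closed form of Eq. (14) via the cosine-splitting identity above."""
    cube = lambda k: ((cmath.exp(1j * k) - 1) / (1j * k)) ** 3
    return 0.5 * (cube(omega + 10) + cube(omega - 10))

def midpoint_I(omega, n):
    """Brute-force tensor midpoint rule with n points per axis (slow)."""
    pts = [(i - 0.5) / n for i in range(1, n + 1)]
    total = 0j
    for u in pts:
        for v in pts:
            for w in pts:
                s = u + v + w
                total += math.cos(10 * s) * cmath.exp(1j * omega * s)
    return total / n ** 3
```

At moderate ω the brute-force rule reproduces the closed form; as ω grows, the node count needed for a fixed accuracy grows with ω, which is exactly the regime where the quadratures of this paper are designed to help.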

The oscillatory integral (14) has been considered in [2, 9]. The regular volumetric domain is shown in Fig. 3 (right). The integrand is highly oscillatory, due to which the existing quadratures fail to compute the integral. The integral is approximated by the new algorithms Q^L_G[f], Q^h_9[f] and Q^h_w[f]. The results, computed in terms of E_abs and E_rel, are shown in Figs. 2–3 and Tables 1–2.

Table 1: E_rel of the quadratures Q^L_G[f], Q^h_9[f] and Q^h_w[f], N = 5, 10, for numerical problem 1.

    ω       Q^L_G[f]        Q^h_9[f]        Q^h_w[f]
    10^2    1.00007e−06     8.38959e−03     3.21302e−03
    10^3    7.71632e−11     9.43774e−02     1.18337e−03
    10^4    5.47748e−15     3.59024e−01     4.32166e−03

Table 2: E_rel of the quadratures Q^L_G[f], Q^h_9[f] and Q^h_w[f], ω = 1000, for numerical problem 1.

    N       Q^L_G[f]        Q^h_9[f]        Q^h_w[f]
    2       5.13279e−11     3.82385e−01     2.26968e−02
    4       9.82742e−11     3.82955e−01     1.15833e−03
    6       6.15685e−12     2.26936e−03     3.51822e−03
    8       1.03461e−12     1.73570e−05     2.33272e−03
    10      2.18110e−13     7.58054e−02     2.57165e−04
    12      4.74509e−14     7.58071e−02     1.34360e−04

The figures show that the accuracy of the new algorithm Q^L_G[f] improves even as the frequency parameter ω increases. Fig. 2 (left) shows that the quadrature Q^h_9[f] is accurate and that its accuracy improves as N increases; the method Q^L_G[f] also performs better for small values of N. Fig. 2 (right) shows that all the methods have low computational cost at larger frequencies and node counts, but Q^h_9[f] fails as the frequency increases at fixed nodes, as shown in Fig. 3 (left). From the figures and tables it is clear that for very high frequencies the method Q^h_9[f] does not perform well. The method Q^h_w[f] is included for comparison, but since it uses a linear basis, it fails to compute HOIs accurately.

The method Q^L_G[f] gives accuracy O(10^−13) for N = 16 and ω = 10^4, while the method Q^h_9[f] gives accuracy O(10^−8) at the same nodal point, as shown in Fig. 2 (right). Consequently, the proposed method Q^L_G[f] is accurate and efficient for three-dimensional problems.

Test Problem 2. Consider the following integral

I_ω[f] = ∫_0^1 ∫_0^1 ∫_0^1 (1/(u + v + w + 1)) e^{iω(u+v+w)} du dv dw.   (15)

The integrand is highly oscillatory. Due to the non-availability of an exact solution, we use relative errors for this problem. The integral is challenging, as existing software such as MAPLE and built-in MATLAB commands such as quadl and quadgk fail to compute it. We have


Figure 2: (left) E_abs of the quadratures Q^L_G[f], Q^h_9[f], Q^h_w[f], ω = 10^4; (right) CPU time, for numerical problem 1.

Figure 3: (left) E_abs of the quadratures Q^L_G[f], Q^h_9[f], Q^h_w[f] for N = 10; (right) unit cubic domain for numerical problem 1.


Table 3: E_rel of the quadratures Q^L_G[f], Q^h_9[f], Q^h_w[f] for N = 5, 10 for numerical problem 2.

    ω       Q^L_G[f]        Q^h_9[f]        Q^h_w[f]
    10^2    3.36873e−09     1.64795e−04     8.15901e−04
    10^3    3.90338e−13     4.65723e−02     2.61815e−04
    10^4    1.52120e−16     3.82065e−03     1.21105e−03

Figure 4: (left) E_rel of the quadratures Q^L_G[f], Q^h_9[f], Q^h_w[f] for ω = 1000; (right) E_rel of the quadratures Q^L_G[f], Q^h_9[f], Q^h_w[f], N = 10, for numerical problem 2.

applied the proposed quadratures Q^L_G[f], Q^h_9[f] and Q^h_w[f] to compute the integral at different values of N and ω. The results are analyzed in terms of E_rel and shown in Fig. 4 and Table 3.

The figures and tables show that the accuracy of the new algorithm improves even as the frequency parameter ω increases. Fig. 4 (left) shows that Q^h_9[f] is an accurate method whose accuracy improves as the nodal points increase; the method Q^L_G[f] also performs better at smaller node counts. Fig. 4 (right) shows that the method Q^L_G[f] has low computational cost for large frequencies and node counts, but the method Q^h_9[f] fails if we fix the nodal points and increase the frequency parameter, as shown in Fig. 4 (left). From the figures and tables it is clear that the method Q^h_9[f] can retain accuracy at higher frequencies only if the number of nodal points grows in proportion to the frequency, in which case the method becomes very costly. The method Q^h_w[f] has the same drawback as Q^h_9[f].

The method Q^L_G[f] can evaluate HOIs at larger values of N: it gives O(10^−16) at N = 14 and ω = 10^3, while Q^h_9[f] gives O(10^−7) for the same nodal point, as shown in Fig. 4 (right).


Table 4: E_rel of Q^H_M[f] for N = 5 and N = 10 for test problem 3.

    ω      Re[Q^H_M[f]]   Im[Q^H_M[f]]   Re[Q^H_M[f]]   Im[Q^H_M[f]]
           N = 5          N = 5          N = 10         N = 10
    100    1.8452e−03     1.3344e−03     3.7849e−05     1.3395e−04
    200    1.3001e−03     8.5766e−04     6.9579e−05     8.0435e−05
    400    8.8183e−04     6.0047e−04     5.6075e−05     5.0942e−05
    600    6.7483e−04     4.9084e−04     4.8912e−05     4.3993e−05
    800    5.5867e−04     4.0297e−04     1.4856e−05     1.4856e−05
    1000   4.9171e−04     3.3677e−04     2.2318e−05     2.2318e−05

Test Problem 3. Consider the following integral

I_ω[f] = ∫_0^1 ∫_0^1 ∫_0^1 e^{iω(u² + v² + w²)} du dv dw.   (16)

This integral has been considered in [19]. Integral (16) contains a CP at O(0, 0, 0). The integral is evaluated by the proposed algorithm Q^H_M[f], and also by Q^h_9[f] and Q^h_w[f]; E_abs and E_rel are analyzed in Fig. 5 and Table 4. Let P(ζ, ζ, ζ) be the splitting neighborhood of the CP O(0, 0, 0); then, according to the proposed procedure, integral (16) can be separated as

I_ω[f] = ∫_0^1 ∫_0^1 ∫_0^1 e^{iω(u² + v² + w²)} du dv dw
       = ∫_0^ζ ∫_0^ζ ∫_0^ζ e^{iω(u² + v² + w²)} du dv dw + ∫_0^1 ∫_ζ^1 ∫_0^1 e^{iω(u² + v² + w²)} du dv dw
       + ∫_ζ^1 ∫_0^ζ ∫_0^1 e^{iω(u² + v² + w²)} du dv dw + ∫_0^ζ ∫_0^ζ ∫_ζ^1 e^{iω(u² + v² + w²)} du dv dw   (17)
       = I_1[f] + I_2[f] + I_3[f] + I_4[f].
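The four sub-domains in Eq. (17) tile the unit cube exactly, since ζ³ + (1 − ζ) + ζ(1 − ζ) + ζ²(1 − ζ) = 1. A quick check of this partition (an illustrative sketch of ours, not the authors' code):

```python
def split_boxes(zeta):
    """The four sub-boxes of Eq. (17), each given as
    ((u_lo, u_hi), (v_lo, v_hi), (w_lo, w_hi))."""
    return [
        ((0, zeta), (0, zeta), (0, zeta)),   # rho_1: holds the CP at the origin
        ((0, 1), (zeta, 1), (0, 1)),         # rho_2
        ((zeta, 1), (0, zeta), (0, 1)),      # rho_3
        ((0, zeta), (0, zeta), (zeta, 1)),   # rho_4
    ]

def total_volume(boxes):
    """Sum of the box volumes; should equal 1 for any 0 < zeta < 1."""
    return sum((u[1] - u[0]) * (v[1] - v[0]) * (w[1] - w[0]) for u, v, w in boxes)
```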

The integral I_1[f] contains the CP and is approximated by the method Q^h_9[f]; the remaining oscillatory integrals are computed by Q^L_G[f]. Accuracy of the proposed splitting algorithm Q^H_M[f] improves for larger ω. Fig. 5 (left) shows that the method Q^h_9[f] is accurate and improves in accuracy as the nodal points increase, but it fails if we fix the nodal points and increase the frequency parameter, as shown in Fig. 5 (right). The new method Q^H_M[f] improves in accuracy with increasing ω, as shown in Fig. 5 (right). For large frequency parameters, the method Q^h_9[f] attains the required accuracy only for a very large number of nodal points, which is impractical. The splitting method Q^H_M[f] gives accuracy O(10^−5) for N = 10, ω = 70, while the method Q^h_9[f] attains O(10^−3) for the same N, as shown in Fig. 5 (right). It is evident from the whole discussion that the proposed method is more accurate for three-dimensional oscillatory integrals having CPs.


Figure 5: Test problem 3: (left) E_abs of Q^H_M[f], Q^h_9[f] and Q^h_w[f] for ω = 50; (right) E_abs of Q^H_M[f] and Q^h_9[f] for N = 10.

5 CONCLUSION
In this paper, new procedures are proposed for the approximation of three-dimensional HOIs whose oscillator may or may not have CPs. The new methods Q^L_G[f], Q^h_9[f] and Q^h_w[f] are implemented to approximate multivariate HOIs without CPs. It has been shown that the proposed methods are accurate and efficient at low node counts and high frequencies.

The splitting method Q^H_M[f] is obtained by merging the methods Q^h_9[f] and Q^L_G[f]. Accuracy of the proposed method Q^H_M[f] is confirmed numerically. From the theoretical and numerical investigations, especially at higher frequencies, it is verified that the new numerical methods are numerically stable and accurate when the oscillator has a CP or a high frequency.

References
[1] Siraj-ul-Islam, A. S. Al-Fhaid, and S. Zaman. Meshless and wavelets based complex quadrature of highly oscillatory integrals and the integrals with stationary points. Eng. Anal. Bound. Elemt., 37(9):1136–1144, 2013.

[2] I. Aziz, Siraj-ul-Islam, and W. Khan. Numerical integration of multi-dimensional highly oscillatory, gentle oscillatory and non-oscillatory integrands based on wavelets and radial basis functions. Eng. Anal. Bound. Elemt., 36:1284–1295, 2012.

[3] G. A. Evans and J. R. Webster. A comparison of some methods for the evaluation of highly
oscillatory integrals. J. Comp. Appl. Math., 112(1):55–69, 1999.

[4] L. N. G. Filon. On a quadrature formula for trigonometric integrals. Proceedings of the Royal Society of Edinburgh, pages 38–47, 1928.


[5] D. Huybrechs and S. Vandewalle. On the evaluation of highly oscillatory integrals by analytic
continuation. SIAM J. Num. Anal., 44(3):1026–1048, 2006.

[6] I. Aziz, Siraj-ul-Islam, and W. Khan. Quadrature rules for numerical integration based on Haar wavelets and hybrid functions. Comp. Math. Appl., 61(9):2770–2781, 2011.

[7] A. Iserles and S. P. Norsett. Efficient quadrature of highly oscillatory integrals using deriva-
tives. In Proceedings of the Royal Society of London A: Mathematical, Physical and Engi-
neering Sciences, volume 461, pages 1383–1399. The Royal Society, 2005.

[8] A. Iserles, S. P. Nørsett, and S. Olver. Highly oscillatory quadrature: The story so far. In Nume. Math. Appl., pages 97–118. Springer, 2006.

[9] J. Li, X. Wang, T. Wang, and C. Shen. Delaminating quadrature method for multi-dimensional highly oscillatory integrals. Appl. Math. Comput., 209:327–338, 2009.

[10] D. Levin. Procedures for computing one and two-dimensional integrals of functions with rapid irregular oscillations. Math. Comp., 38:531–538, 1982.

[11] S. Olver. On the quadrature of multivariate highly oscillatory integrals over non-polytope
domains. Num. Math., 103(4):643–665, 2006.

[12] S. Olver. Moment-free numerical approximation of highly oscillatory integrals with stationary
points. Europ. J. Appl. Math., 18(4):435–447, 2007.

[13] S. Olver. Numerical approximation of vector-valued highly oscillatory integrals. BIT Num.
Math., 47(3):637–655, 2007.

[14] S. Olver. Numerical approximation of highly oscillatory integrals. PhD thesis, University of
Cambridge, 2008.

[15] Siraj-ul-Islam, I. Aziz, and F. Haq. A comparative study of numerical integration based on Haar wavelets and hybrid functions. Comput. Math. Appl., 59(6):2026–2036, 2010.

[16] X. Wang, T. Wang, and C. Shen. An improved Levin quadrature method for highly oscillatory integrals. Appl. Num. Math., 60:833–842, 2010.
[17] S. Xiang. Efficient Filon-type methods for ∫_a^b f(x) e^{iωg(x)} dx. Numer. Math., 105:633–658, 2007.

[18] S. Zaman. New quadrature rules for highly oscillatory integrals with stationary points. J.
Comp. Appl. Math., 278:75–89, 2015.

[19] S. Zaman. Numerical methods for multivariate highly oscillatory integrals. Inter. J. Comp.
Math., pages 1–23, 2017.


A global weak form meshless method for the


numerical solution of elasto-static problems
Wajid Khan a∗, Siraj-ul-Islam, Baseer Ullah

a Department of Basic Sciences, University of Engineering and Technology, Peshawar, Pakistan.

Abstract
For the numerical solution of boundary value problems, the global weak-form meshless method known as the element-free Galerkin method (EFGM) is an effective tool. In this paper, the EFGM with numerical integration based on Haar wavelets is proposed for the numerical solution of linear one- and two-dimensional elasto-static problems. Moving least squares (MLS) shape functions are used as approximants for the unknown solutions. Essential boundary conditions are imposed via the Lagrange multiplier method. A comparison of the EFGM with Gaussian quadrature (EFGM-GQ) and the EFGM with Haar-wavelet-based quadrature rules (EFGM-HW) is carried out. Numerical results show that the proposed method converges for displacements and stresses under node refinement.

Keywords: Elasto-static problems, Element-free Galerkin method, Moving least squares ap-
proximations.

1 Introduction
In the last few decades a large number of meshless methods (MMs) have been introduced, driven by the need to solve practical science and engineering problems. To discretize a problem, MMs require only a set of scattered field nodes inside and on the boundary of the problem domain. MMs avoid the need to create complex meshes prior to the shape function approximation, as is the case in mesh-based methods such as the finite element method (FEM), the boundary element method (BEM) and the finite volume method (FVM).

Many MMs have been developed in the last few decades, such as the smoothed particle hydrodynamics (SPH) method [1], the hp-cloud method [2], the reproducing kernel particle method (RKPM), the diffuse element method (DEM) [3], the EFGM [4] and the partition of unity method (PUM) [5]. All these MMs are based on a global weak form and are meshless in terms of the shape function approximation. Most MMs require background cells for the integrals involved in the Galerkin weak form. Generally, the global weak-form MMs are not as computationally efficient as the classical FEM, due to the complexity of the shape function approximation and the corresponding numerical integration [6].
To evaluate the integrals in the Galerkin weak formulation, numerical integration is performed using Gaussian quadrature. In the FEM, Gaussian quadrature is used in each element to perform the numerical integration, and the local support domains of the shape functions correspond to the cells
The author to whom all the correspondence should be addressed. Email: wjdkhan206@gmail.com


used for the integration [6]. In MMs the integration cells are independent of the shape function construction, and the numerical integration is likewise performed with Gaussian quadrature. The numerical integration error has been little studied in MMs; it becomes dominant when a high-order Gaussian quadrature formula is used [7]. In this paper the EFGM with numerical integration based on Haar wavelets is used to find numerical solutions of elasto-static problems. The elasto-static problem with its boundary conditions is first converted into weak form, the unknown function is approximated with MLS shape functions, and the integrals are evaluated through Haar-wavelet-based and Gaussian quadrature rules in the context of the EFGM.

The paper is organized as follows. Section 2 describes the EFGM, the MLS approximation and the quadrature rules. Section 3 presents the numerical examples, and conclusions are drawn in Section 4.

2 EFG method
Consider the following 2D equation of elasticity with domain Ω and boundary Γ:

σ_{ij,j}(x) + b_i(x) = 0,   (1)

with boundary conditions

t_i = t̄ on x ∈ Γ_t,   (2)

and

u_i = ū on x ∈ Γ_u.   (3)

Here σ_{ij} = C_{ijkl} ε_{kl} and ε_{ij} = ½(u_{i,j} + u_{j,i}), where u denotes the displacement vector, ε the strain tensor, σ the Cauchy stress tensor and b the body force; ū and t̄ represent the prescribed displacement and the boundary traction on the essential and Neumann boundaries, respectively. The variational form of (1) is

∫_Ω δ(Lu)^T C(Lu) dΩ − ∫_Ω δu^T b dΩ − ∫_{Γ_t} δu^T t̄ dΓ − ∫_{Γ_u} δη^T (u − ū) dΓ − ∫_{Γ_u} δu^T η dΓ = 0,   (4)

where η is the vector of Lagrange multipliers.

The system of equations thus obtained is

    [ K    G ] [ u ]   [ F ]
    [ G^T  0 ] [ η ] = [ Q ].   (5)

Here F and K are the global force vector and the stiffness matrix, defined as

K_{ij} = ∫_Ω B_i^T C B_j dΩ,   (6)

F_i = ∫_Ω φ_i b dΩ + ∫_{Γ_t} φ_i t̄ dΓ,   i, j = 1, 2, . . . , N,   (7)

where N is the total number of nodal points. The matrices C and B are given by

    C = E/(1 − υ²) [ 1   υ   0
                     υ   1   0
                     0   0   (1 − υ)/2 ],   (8)

and

    B_i = [ φ_{i,x}   0
            0         φ_{i,y}
            φ_{i,y}   φ_{i,x} ],   (9)

where υ and E represent Poisson's ratio and Young's modulus, respectively. The matrices G and Q are

G_{ij} = ∫_{Γ_u} φ_i L_j dΓ,   (10)

and

Q_j = ∫_{Γ_u} L_j ū dΓ,   (11)

where the L_j are Lagrange interpolation polynomials.

2.1 The moving least squares approximations


To construct the meshless shape functions, MLS approximations are most often used in the EFG method. The approximation u^h(x) of the unknown function u(x) is

u^h(x) = Σ_{j=1}^{m} p_j(x) λ_j(x) = P^T(x) λ(x).   (12)

The vector P is
P T (x) = [p1 (x), p2 (x), . . . , pm (x)]. (13)
To find the unknown coefficients λ_j(x), the following functional is defined:

J(x) = Σ_{j=1}^{n} w(||x − x_j||) (u^h(x_j) − u_j)²,   (14)

where w is the weight function. The weight function used in this study is the cubic spline

w(x − x_j) = w(d) = { 2/3 − 4d² + 4d³,            d ≤ 1/2,
                      4/3 − 4d + 4d² − (4/3)d³,   1/2 < d ≤ 1,
                      0,                          d > 1,        (15)

where d = |x − x_j|.
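The cubic spline weight (15) transcribes directly; the sketch below (ours, with d taken as the distance already normalized by the support radius) also checks that the two branches meet smoothly at d = 1/2 and that the weight vanishes at the edge of the support:

```python
def cubic_spline_weight(d):
    """Cubic spline weight of Eq. (15); d is the normalized distance
    |x - x_j| (normalization by the support size is our assumption)."""
    if d <= 0.5:
        return 2.0 / 3.0 - 4.0 * d ** 2 + 4.0 * d ** 3
    if d <= 1.0:
        return 4.0 / 3.0 - 4.0 * d + 4.0 * d ** 2 - (4.0 / 3.0) * d ** 3
    return 0.0
```

Note that w(0) = 2/3, w(1/2) = 1/6 from either branch, and w(1) = 0, so the weight is continuous (in fact C¹) across its support.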
Minimizing (14) with respect to λ gives [8]

A(x) λ(x) = B(x) u.   (16)

Substituting the value of λ into (12) gives

u^h(x) = P^T(x) A^{−1}(x) B(x) u.   (17)

The shape function derivative is given by

u^h_{,x} = [ P^T_{,x} A^{−1} B + P^T ( (A^{−1})_{,x} B + A^{−1} B_{,x} ) ] u.   (18)


2.2 Quadrature Rules

The Haar-wavelets-based quadrature rule for a one-dimensional integral is [9]:

∫_{x_a}^{x_b} φ(x) dx ≈ ((x_b − x_a)/Q) Σ_{i=1}^{Q} φ(x_a + (x_b − x_a)(i − 0.5)/Q),   (19)

where Q = 2M is the number of quadrature points.

The Haar-wavelets-based quadrature rule for a two-dimensional integral is [10]:

∫_{y_c}^{y_d} ∫_{x_a}^{x_b} φ(x, y) dx dy ≈ ((x_b − x_a)(y_d − y_c)/Q²) Σ_{l=1}^{Q} Σ_{k=1}^{Q} φ(x_a + (x_b − x_a)(k − 0.5)/Q, y_c + (y_d − y_c)(l − 0.5)/Q).   (20)
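Both rules are composite midpoint rules and can be transcribed directly; an illustrative Python sketch (ours, not the authors' implementation):

```python
def hw_quad_1d(phi, xa, xb, Q):
    """Haar-wavelet quadrature of Eq. (19): composite midpoint rule
    with Q = 2M equally spaced points."""
    h = (xb - xa) / Q
    return h * sum(phi(xa + h * (i - 0.5)) for i in range(1, Q + 1))

def hw_quad_2d(phi, xa, xb, yc, yd, Q):
    """Tensor-product form of Eq. (20) with Q points per direction."""
    hx, hy = (xb - xa) / Q, (yd - yc) / Q
    return hx * hy * sum(
        phi(xa + hx * (k - 0.5), yc + hy * (l - 0.5))
        for k in range(1, Q + 1) for l in range(1, Q + 1))
```

The midpoint rule is exact for linear integrands and second-order accurate in general, which is why these rules slot naturally into the background-cell integration of the EFGM.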

3 Numerical examples and discussions


Here we consider 1D and 2D problems to check the validity and efficiency of the proposed method. The relative L2 norm errors are defined as:

E_2(u) = ‖u − u^h‖_2 / ‖u‖_2,   (21)

E_2(u_{x1}) = ‖u_{x1} − u^h_{x1}‖_2 / ‖u_{x1}‖_2,   (22)

where u^h, u^h_{x1} are the numerical displacement and strain, while u, u_{x1} are their exact counterparts.

3.1 One-dimensional numerical example

Example 1. Consider the following one-dimensional problem [11]

E u_{,x1 x1} + x_1 = 0, for x_1 ∈ [0, 1],   (23)

with boundary conditions

u_{,x1}(1) = 0,   u(0) = 0.

The analytical solutions for the displacement and strain are given by:

u(x_1) = (1/E)(x_1/2 − x_1³/6),   (24)

ε(x_1) = (1 − x_1²)/(2E).   (25)
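A quick sanity check (ours, with E = 1 for simplicity) that Eqs. (24)-(25) satisfy problem (23) and its boundary conditions, using finite differences:

```python
def u_exact(x1, E=1.0):
    """Displacement of Eq. (24)."""
    return (x1 / 2.0 - x1 ** 3 / 6.0) / E

def strain_exact(x1, E=1.0):
    """Strain of Eq. (25); equals du/dx1."""
    return (1.0 - x1 ** 2) / (2.0 * E)

def ode_residual(x1, E=1.0, h=1e-4):
    """Central-difference check of E*u'' + x1 = 0 (the truncation error
    vanishes here, since u is a cubic polynomial)."""
    upp = (u_exact(x1 + h, E) - 2.0 * u_exact(x1, E) + u_exact(x1 - h, E)) / h ** 2
    return E * upp + x1
```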
Table 1 reports the numerical results for the displacement and strain obtained by the methods EFGM-GQ and EFGM-HW. From the table it is clear that EFGM-HW produces the same accuracy as EFGM-GQ for the displacement and strain. Fig. 1 shows a comparison of the exact displacement and strain with the numerical solutions obtained by EFGM-GQ and EFGM-HW using 21 nodal points, 20 background cells and 4 quadrature points in each cell. From the figures it is clear that the numerical solutions are in excellent agreement with the exact solutions.


Table 1: The errors E2 (u) and E2 (ux1 ) comparison for the methods EFGM-GQ and EFGM-HW
of test problem 1.

EFGM-GQ EFGM-HW
N Q E2 (u) E2 (ux1 ) E2 (u) E2 (ux1 )
11 2 2.62E − 04 2.28E − 02 3.61E − 04 2.22E − 02
21 2 6.59E − 05 8.32E − 03 8.32E − 05 8.08E − 03
31 2 3.23E − 05 4.63E − 03 3.67E − 05 4.46E − 03
41 2 2.01E − 05 3.08E − 03 2.09E − 05 2.93E − 03
51 2 1.41E − 05 2.26E − 03 1.37E − 05 2.12E − 03

Example 2. Consider the following one-dimensional problem with a large localized gradient [12]

u_{,x1 x1} + b(x_1) = 0,   (26)

with

b(x_1) = { (2α⁴ − 4[α⁴(x_1 − 0.5)]²) e^{−[α²(x_1 − 0.5)]²},   x_1 ∈ [0.42, 0.58],
           0,                                                otherwise.

The analytical solution is

u(x_1) = x_1 + e^{−[α²(x_1 − 0.5)]²},   x_1 ∈ [0, 1].   (27)
This example has been solved in [12] using the standard EFG method with linear complete shape functions, 30 uniform nodal points, and 29 background cells with four Gauss quadrature points in each cell; the numerical solution shows spurious oscillation around the point x_1 = 0.5. In [12] the author used the global enrichment method instead of the standard EFG with some improvement; however, global enrichment has a local character that makes the existing method complicated. The EFGM-HW with 30 nodal points and 29 background cells with 4 points in each cell gives acceptable results without any additional parameter, unlike the global enrichment method. Fig. 2 compares the numerical and exact solutions computed by the EFGM-HW and the EFG method [12]. Table 2 compares the relative L2 norm errors of u obtained by the EFGM-HW and EFGM-GQ.

3.2 Timoshenko Beam Problem

Example 3. Consider the two-dimensional Timoshenko beam problem subject to a parabolic shear load at the free end, with dimensions L = 48 m, D = 12 m, ν = 0.3, E = 3 × 10⁷ Pa, and load p = 1000 N. The analytical solution is [13]:

u_{x1} = −(p x_2)/(6EI) [ (6L − 3x_1)x_1 + (2 + ν)(x_2² − D²/4) ],   (28)

u_{x2} = p/(6EI) [ 3νx_2²(L − x_1) + (4 + 5ν)(D²x_1/4) + (3L − x_1)x_1² ],   (29)


Table 2: The E2 (u) errors comparison for the methods EFGM-HW and EFGM-GQ of test
problem 2.

EFGM-HW EFGM-GQ
N Q E2 (u) E2 (u)
11 4 1.10E − 01 1.70E − 01
21 4 5.52E − 03 5.28E − 03
31 4 2.92E − 03 3.00E − 03
41 4 1.19E − 03 1.21E − 03
51 4 4.35E − 04 4.44E − 04

Figure 1: Comparison of u, u^h and u_{x1}, u^h_{x1} of test problem 1: (a), (c) EFGM-GQ; (b), (d) EFGM-HW.


Figure 2: Comparison of u and u^h of test problem 2: (a) EFG [12]; (b) EFGM-HW.

where I = D³/12. The exact stresses are

σ_{xx} = −(p/I)(L − x_1) x_2,   (30)

σ_{yy} = 0,   (31)

σ_{xy} = −(p/(2I))(D²/4 − x_2²).   (32)

Fig. 3 shows a comparison of the exact and numerical stresses at the center of the beam obtained through EFGM-HW and EFGM-GQ using 11 × 11 = 121 nodal points and 10 × 10 = 100 background cells with 4 × 4 quadrature points in each cell. From the figure it is clear that the exact and numerical solutions are in excellent agreement.


Figure 3: Comparison of exact and numerical stresses of test problem 3: (a), (c) EFGM-GQ; (b), (d) EFGM-HW.


4 Conclusion
In this research work, the EFGM with a Haar-wavelet-based integration technique is proposed for one- and two-dimensional elasto-static problems. From the numerical results, it is evident that the EFGM with Haar-wavelet-based quadrature rules gives results comparable to the existing quadrature rules used in MMs.

Acknowledgement
The authors acknowledge financial assistance from the Higher Education Commission Islamabad
Pakistan through NRPU Project No. 6331.

References
[1] R. A. Gingold, J. J. Monaghan, Smoothed particle hydrodynamics: theory and application
to non-spherical stars, Monthly notices of the royal astronomical society 181 (3) (1977)
375–389.

[2] C. A. Duarte, J. T. Oden, Hp-cloud: a meshless method to solve boundary value problems, Computer methods in applied mechanics and engineering 139 (1996) 237–262.

[3] B. Nayroles, G. Touzot, P. Villon, Generalizing the finite element method: diffuse approxi-
mation and diffuse elements, Computational mechanics 10 (5) (1992) 307–318.

[4] T. Belytschko, Y. Y. Lu, L. Gu, Element-free Galerkin methods, International journal for
numerical methods in engineering 37 (2) (1994) 229–256.

[5] I. Babuška, J. M. Melenk, The partition of unity method, International journal for numerical methods in engineering 40 (1997) 727–758.

[6] J. Dolbow, T. Belytschko, Numerical integration of the Galerkin weak form in meshfree
methods, Computational mechanics 23 (3) (1999) 219–230.

[7] T. Belytschko, Y. Krongauz, D. Organ, M. Fleming, P. Krysl, Meshless methods: an overview


and recent developments, Computer methods in applied mechanics and engineering 139 (1-4)
(1996) 3–47.

[8] W. Khan, Siraj-ul-Islam, B. Ullah, Analysis of meshless weak and strong formulations for
boundary value problems, Engineering Analysis with Boundary Elements 80 (2017) 1–17.

[9] I. Aziz, Siraj-ul-Islam, W. Khan, Quadrature rules for numerical integration based on Haar
wavelets and hybrid functions, Computers and Mathematics with Applications 61 (2011)
2770–2781.

[10] Siraj-ul-Islam, I. Aziz, Fazal-e-Haq, A comparative study of numerical integration based


on Haar wavelets and hybrid functions, Computer and Mathematics with Applications 59
(2010) 2026–2036.

[11] J. Dolbow, T. Belytschko, An introduction to programming the meshless element free


Galerkin method, Archives of computational methods in engineering 5 (3) (1998) 207–241.


[12] V. P. Nguyen, T. Rabczuk, S. Bordas, M. Duflot, Meshless method: A review and computer
implementation aspects, Mathematics and computers in simulations 79 (2008) 763–813.

[13] S. Timoshenko, J. Goodier, Theory of Elasticity (Third edition), New York, McGrawHill,
1970.


A Comparative study of the approximations of


singular and hyper singular integrals
Yahya a, Siraj-ul-Islam a∗, Sakhi Zaman a

a Department of Basic Sciences, University of Engineering and Technology Peshawar, Pakistan.

Abstract
A quadrature rule based on hybrid functions and uniform Haar wavelets is proposed to find approximate values of definite integrals. In this paper we compare the numerical results obtained from Newton-Cotes-type quadrature with those of quadrature rules based on uniform Haar wavelets and hybrid functions of different orders for approximating singular and hypersingular integrals (HSI), which are interpreted as Hadamard finite-part integrals. The main advantage of the method is its efficiency and simple applicability. Error estimates of the proposed method alongside numerical examples are given to test its convergence and accuracy.

Keywords: Singularity, hyper singular integral, Hilbert transform, Cauchy principal value,
Hadamard finite part integral, Multi-resolution analysis.

1 Introduction
The finite Hilbert transform [3]

I(f, c) = P ∫_a^b f(x)/(x − c) dx,   a < c < b,   (1)

where f(x) is sufficiently differentiable in (a, b), exhibits uncontrolled instability when quadrature rules of Newton-Cotes or Gauss type are applied for its approximate evaluation, due to the presence of the singularity at x = c in the domain of integration. The Cauchy principal value (CPV) of the integral (1) is defined as

I(f, c) = lim_{ε→0⁺} [ ∫_a^{c−ε} f(x)/(x − c) dx + ∫_{c+ε}^b f(x)/(x − c) dx ],   (2)

provided the limit exists. When the limit exists, the limiting value is known as the CPV, and the integral is denoted by

I(f, c) = P ∫_a^b f(x)/(x − c) dx.

The author to whom all the correspondence should be addressed. Email:khanyahya921@gmail.com


Singular integrals of the type (1) occur frequently in many branches of physics, in the theory of aerodynamics, in scattering theory, etc., and have attracted many researchers in the past. Some of the well-known researchers who have formulated quadrature rules for their numerical integration are Price [11], Chawla and Jayarajan [2], Diethelm [4], Hunter [7], Elliott and Paget [6], Ioakimidis and Theocaris [8], Paget and Elliott [5], Theocaris and Kazantzakis [12], and Monegato [10]; many are still engaged in the study of integrals of this type in order to frame quadrature rules for general and specific integrands. However, quadrature rules behave very unstably when applied to CPV integrals of the form

J(f, c) = ∫_a^b F(x)/(x − c)^α dx,   α > 1,   a < c < b,   (3)

due to the presence of the higher-order singularity at x = c. Further, the divergent integral

∫_a^b f(x)/(x − c)² dx = lim_{ε→0⁺} [ ∫_a^{c−ε} f(x)/(x − c)² dx + ∫_{c+ε}^b f(x)/(x − c)² dx ]   (4)

can be expressed as

∫_a^b f(x)/(x − c)² dx = H ∫_a^b f(x)/(x − c)² dx + lim_{ε→0⁺} 2f(c)/ε.   (5)

The integral with H on the right of equation (5) is called the Hadamard finite part of the integral (4). The basic purpose of this paper is to verify the preference for the quadrature rules based on hybrid functions and uniform Haar wavelets for the approximate evaluation of singular and hypersingular integrals; these give better approximations than Newton-Cotes rules and converge uniformly to the Cauchy principal value of integrals of the type (1). The same rules are employed for the approximate evaluation of the finite-part integral (4) by reducing the order of singularity. These rules can also be applied to the numerical integration of real definite integrals without any kind of singularity. For this we consider integrals of the type I(f) = P ∫_{−a}^{a} f(x)/x dx and J(f) = H ∫_{−a}^{a} f(x)/x² dx; by the transformation x = (b − a)t/2 + (b + a)/2, we can transform any interval into this interval.

2 Haar wavelets based quadrature

The Haar wavelets based quadrature rule [1] for the computation of the integral $\int_a^b f(x)\, dx$ is

$$HWQ: \quad \int_a^b f(x)\, dx \approx \frac{(b-a)}{2M} \sum_{p=1}^{2M} f\!\left( a + \frac{(b-a)(p - 0.5)}{2M} \right). \tag{6}$$

3 Hybrid Functions
The hybrid function rule [1] computes integrals of the type

$$\int_a^b f(x)\, dx, \tag{7}$$

where $a, b \in \mathbb{R}$. For the case $m = 1$, the formula for the integral (7) is

$$\int_a^b f(x)\, dx \approx \frac{(b-a)}{n} \sum_{i=1}^{n} f\!\left( a + \frac{(b-a)(2i-1)}{2n} \right). \tag{8}$$

A similar derivation can be made for $m = 2, 3, \ldots, 8$. The formula for $m = 8$, which gives more accuracy
and is used for approximating integrals having singular points, is obtained by putting $m = 8$:

$$\begin{aligned}
\int_a^b f(x)\, dx \approx \frac{b-a}{1935360\, n} \sum_{i=1}^{n} \Bigg[ & 295627\, f\!\left(a + \tfrac{(b-a)(16i-15)}{16n}\right) + 71329\, f\!\left(a + \tfrac{(b-a)(16i-13)}{16n}\right) \\
& + 471771\, f\!\left(a + \tfrac{(b-a)(16i-11)}{16n}\right) + 128953\, f\!\left(a + \tfrac{(b-a)(16i-9)}{16n}\right) \\
& + 128953\, f\!\left(a + \tfrac{(b-a)(16i-7)}{16n}\right) + 471771\, f\!\left(a + \tfrac{(b-a)(16i-5)}{16n}\right) \\
& + 71329\, f\!\left(a + \tfrac{(b-a)(16i-3)}{16n}\right) + 295627\, f\!\left(a + \tfrac{(b-a)(16i-1)}{16n}\right) \Bigg]. 
\end{aligned} \tag{9}$$

4 Newton–Cotes quadrature rule

The $(4n-1)$-point rule is generated by decomposing the interval of integration $[-a, a]$ into $(4n-2)$
equal parts by the points

$$0,\ \pm\frac{a}{2n},\ \pm\frac{2a}{2n},\ \pm\frac{3a}{2n},\ \ldots,\ \pm\frac{(2n-1)a}{2n}. \tag{10}$$

The proposed rule based on these nodes is denoted by $R_n(f)$ and is defined as

$$R_n(f) = w_{n0} f(0) + \sum_{k=1}^{2n-1} w_{nk} \left[ f\!\left(\frac{ka}{2n}\right) - f\!\left(-\frac{ka}{2n}\right) \right]. \tag{11}$$

For example, the 3-point rule is obtained for $n = 1$ as

$$R_1(f) = w_{10} f(0) + w_{11} \left[ f\!\left(\frac{a}{2}\right) - f\!\left(-\frac{a}{2}\right) \right], \tag{12}$$

where $w_{10}$ and $w_{11}$ are the weights associated with the rule $R_1(f)$. Since the nodes are prefixed,
it only remains to determine the coefficients $w_{n0}$ and $w_{nk}$, for $k = 1(1)(2n-1)$, associated
with $f(0)$ and with the block

$$f\!\left(\frac{ka}{2n}\right) - f\!\left(-\frac{ka}{2n}\right), \tag{13}$$

respectively. It is important to note that in such rules the coefficient of $f(0)$, i.e. $w_{n0}$, is zero
for all $n$. The method of undetermined coefficients has been adopted to find the coefficients $w_{nk}$
of the above rule $R_n(f)$. For the sake of convenience, we have presented here the procedure for
determining the coefficients of the 3-point rule $R_1(f)$ based on the following definition. The
moment equation in [9] is
$$AW = B, \tag{14}$$

where

$$A = \begin{pmatrix} 1 & 2 & \cdots & (2n-1) \\ 1^3 & 2^3 & \cdots & (2n-1)^3 \\ \vdots & \vdots & \cdots & \vdots \\ 1^{4n-3} & 2^{4n-3} & \cdots & (2n-1)^{4n-3} \end{pmatrix}, \qquad W = \begin{pmatrix} w_{n1} \\ w_{n2} \\ \vdots \\ w_{n(2n-1)} \end{pmatrix}, \tag{15}$$

and

$$B = \begin{pmatrix} 2n/1 \\ (2n)^3/3 \\ \vdots \\ (2n)^{4n-3}/(4n-3) \end{pmatrix}. \tag{16}$$
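To make the construction concrete, the moment system can be solved exactly over the rationals. The sketch below is our own illustration; for $n = 2$ it reproduces the coefficients $134/45$, $-206/225$ and $214/225$ of the rule $R_2$:

```python
from fractions import Fraction

def cpv_rule_weights(n):
    """Solve the moment equations AW = B of (14)-(16) exactly for the
    weights w_{n1}, ..., w_{n,2n-1} of the rule R_n(f)."""
    size = 2 * n - 1
    # Row r imposes exactness of R_n on f(x) = x^(2r+1), for which the
    # CPV integral of f(x)/x over [-a, a] is 2 a^(2r+1) / (2r+1).
    A = [[Fraction(k) ** (2 * r + 1) for k in range(1, size + 1)]
         for r in range(size)]
    B = [Fraction(2 * n) ** (2 * r + 1) / (2 * r + 1) for r in range(size)]
    # Gauss-Jordan elimination over Fraction keeps the result exact.
    for col in range(size):
        piv = next(r for r in range(col, size) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        B[col], B[piv] = B[piv], B[col]
        for r in range(size):
            if r != col and A[r][col] != 0:
                factor = A[r][col] / A[col][col]
                A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
                B[r] -= factor * B[col]
    return [B[r] / A[r][r] for r in range(size)]

w2 = cpv_rule_weights(2)
```

For $n = 1$ this returns $[2]$, matching $R_1$; for $n = 2$ it returns $[134/45,\, -206/225,\, 214/225]$.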
The rules corresponding to $n = 1, 2, 3$ and $4$ are noted below:

$$R_1 = 2\left[ f\!\left(\tfrac{a}{2}\right) - f\!\left(-\tfrac{a}{2}\right) \right], \tag{17}$$

$$R_2 = \tfrac{134}{45}\left[ f\!\left(\tfrac{a}{4}\right) - f\!\left(-\tfrac{a}{4}\right) \right] - \tfrac{206}{225}\left[ f\!\left(\tfrac{a}{2}\right) - f\!\left(-\tfrac{a}{2}\right) \right] + \tfrac{214}{225}\left[ f\!\left(\tfrac{3a}{4}\right) - f\!\left(-\tfrac{3a}{4}\right) \right], \tag{18}$$

$$\begin{aligned}
R_3 = {} & \tfrac{4433}{525}\left[ f\!\left(\tfrac{a}{6}\right) - f\!\left(-\tfrac{a}{6}\right) \right] - \tfrac{2717}{337}\left[ f\!\left(\tfrac{a}{3}\right) - f\!\left(-\tfrac{a}{3}\right) \right] + \tfrac{14779}{2450}\left[ f\!\left(\tfrac{a}{2}\right) - f\!\left(-\tfrac{a}{2}\right) \right] \\
& - \tfrac{422}{201}\left[ f\!\left(\tfrac{2a}{3}\right) - f\!\left(-\tfrac{2a}{3}\right) \right] + \tfrac{489}{614}\left[ f\!\left(\tfrac{5a}{6}\right) - f\!\left(-\tfrac{5a}{6}\right) \right],
\end{aligned} \tag{19}$$

and

$$\begin{aligned}
R_4 = {} & \tfrac{14687}{289}\left[ f\!\left(\tfrac{a}{8}\right) - f\!\left(-\tfrac{a}{8}\right) \right] - \tfrac{4316}{63}\left[ f\!\left(\tfrac{a}{4}\right) - f\!\left(-\tfrac{a}{4}\right) \right] + \tfrac{4044}{71}\left[ f\!\left(\tfrac{3a}{8}\right) - f\!\left(-\tfrac{3a}{8}\right) \right] \\
& - \tfrac{4376}{139}\left[ f\!\left(\tfrac{a}{2}\right) - f\!\left(-\tfrac{a}{2}\right) \right] + \tfrac{2927}{232}\left[ f\!\left(\tfrac{5a}{8}\right) - f\!\left(-\tfrac{5a}{8}\right) \right] - \tfrac{1124}{357}\left[ f\!\left(\tfrac{3a}{4}\right) - f\!\left(-\tfrac{3a}{4}\right) \right] \\
& + \tfrac{485}{671}\left[ f\!\left(\tfrac{7a}{8}\right) - f\!\left(-\tfrac{7a}{8}\right) \right].
\end{aligned} \tag{20}$$

In the light of the definition of the degree of accuracy, the rules given in the above formulas
are of accuracy 2, 6, 10 and 14, respectively. In general, the degree of precision of the rule $R_n(f)$
is $(4n-2)$. It is relevant to note here that for all $n$ the rule $R_n(f)$ is an open type rule, since both
end points $-a$ and $a$ of the interval of integration $[-a, a]$ are excluded from the set of $(4n-1)$
nodes. As a result, the rules $R_n(f)$, meant for the numerical approximation of real CPV
integrals, can also be applied to real definite integrals having a singularity at an end point
of the interval of integration.

5 Numerical Results
The following examples are dedicated to the numerical evaluation of singular integrals with a singularity
at the origin by the quadrature rules framed in this paper. We have numerically verified our proposed

Table 1: Absolute errors of the hybrid function (HyF), Haar wavelet (HaarF) and Newton–Cotes (NC)
quadrature rules for the approximate evaluation of real CPV integrals with singularity at the origin

Integrals Nodes HyF 4 HyF 8 HaarF NC


I1 3 2.9188e − 05 4.5550e − 12 3.3996e − 03 3.2222e − 02
7 2.9188e − 05 5.4046e − 13 6.2542e − 04 8.8000e − 06
11 3.1105e − 05 5.5822e − 13 2.5332e − 04 2.5000e − 10
15 3.1298e − 05 5.3779e − 13 1.3624e − 04 0.00000000
I2 3 3.9122e − 05 1.7630e − 12 2.7939e − 03 3.2222e − 02
7 3.9110e − 05 1.3625e − 12 5.1237e − 04 8.8000e − 06
11 3.9108e − 05 1.3662e − 12 2.0745e − 04 2.5000e − 10
15 3.9108e − 05 1.3671e − 12 1.1155e − 04 0.00000000
I3 3 3.7876e − 05 2.5999e − 09 5.9132e − 04 1.0000e − 02
7 3.7865e − 05 2.6109e − 09 1.0831e − 04 3.7000e − 05
11 3.7863e − 05 2.6110e − 09 4.3843e − 05 2.8000e − 07
15 3.7863e − 05 2.6110e − 09 2.3574e − 05 0.00000000

schemes for the numerical approximate evaluation of CPV integrals in the tables given below. The
integrals considered here are:

$$I_1 = \int_{-1}^{1} \frac{e^x}{x}\, dx, \tag{21}$$

$$I_2 = \int_{-1/2}^{1/2} \frac{\sin x}{x}\, dx, \tag{22}$$

$$I_3 = \int_{-1/2}^{1/2} \frac{\tan^{-1} x}{x}\, dx. \tag{23}$$

Table 2: CPU time comparison (in seconds) of the HyF4, HyF8 and Haar wavelet quadrature rules for
the approximate evaluation of real CPV integrals with singularity at the origin

Integrals Nodes HyF 4 HyF 8 HaarF


I1 3 3.4300e − 04 5.7000e − 04 1.4300e − 04
7 4.1000e − 04 6.8200e − 04 2.0400e − 04
11 4.5100e − 04 8.0000e − 04 2.2600e − 04
15 2.5800e − 04 8.7900e − 04 2.4400e − 04
I2 3 6.5160e − 03 2.6980e − 03 2.7939e − 03
7 4.1000e − 04 6.4700e − 04 5.1237e − 04
11 4.6000e − 04 7.5600e − 04 2.0745e − 04
15 5.0600e − 04 8.5540e − 04 1.1155e − 04
I3 3 3.4600e − 04 2.9470e − 03 1.4400e − 04
7 4.0200e − 04 7.0600e − 04 1.9500e − 04
11 8.5500e − 04 7.8100e − 04 3.1500e − 04
15 5.6300e − 04 8.7300e − 04 4.2700e − 04

6 Conclusion
In this paper, quadrature rules based on hybrid functions and uniform Haar wavelets are preferred
over Newton–Cotes rules for finding approximate values of definite integrals. These rules have been
tested numerically on some standard test integrals evaluated by the proposed methods NC, HW,
HY4 and HY8. Absolute error and CPU time (in seconds) comparisons are shown in the tables. The
tables show that the proposed method HY8 gives high-order accuracy even at small numbers of nodes,
where the accuracy of the other methods decreases. The accuracy of the NC method oscillates, due
to the instability of the method at larger numbers of nodes. The HY4, HY8 and HW rules increase
in accuracy on dense nodes, which means that the Haar wavelet and hybrid function based quadratures
are stable for larger numbers of nodes.

Acknowledgements
We would like to thank the reviewers for their valuable suggestions towards the improvement of
the paper.

References
[1] I. Aziz, Siraj-ul-Islam, and W. Khan. Quadrature rules for numerical integration based on
Haar wavelets and hybrid functions. Comput. Math. with Applic., 61:2770–2781, 2011.

[2] M. M. Chawla and N. Jayarajan. Quadrature formulas for cauchy principal value integrals.
Computing, 15:347–355, 1975.

[3] P. J. Davis and P. Rabinowitz. Methods of numerical integration. Courier Corporation, 2007.

[4] K. Diethelm. A method of practical evaluation of the hilbert transform on the real line. J.
Comput. Appl. Math, 112:45–53, 1993.


[5] D. Elliott. Three algorithms for hadamard finite part integrals and fractional derivatives. J.
Comput. Appl. Math, 62:267–283, 1995.

[6] D. Elliott and D. Paget. Gauss type quadrature rules for cauchy principal value integrals.
Mathematics of Computation, 33(145):301–309, 1979.

[7] D. Hunter. Some gauss-type formulae for the evaluation of cauchy principal values of inte-
grals. Numerische Mathematik, 19(5):419–424, 1972.

[8] N. Ioakimidis and P. Theocaris. On the numerical evaluation of cauchy principal value
integrals. Revue Roumaine des Sciences Techniques, Serie de Mecanique Appliquee, 22:803–
818, 1977.

[9] P. K. Mohanty and M. K. Hota. Quadrature rules for evaluation of hyper singular integrals.
Applied Mathematical Sciences, 8(117):5839–5845, 2014.

[10] G. Monegato. The numerical evaluation of one-dimensional cauchy principal value integrals.
Computing, 29:337–354, 1982.

[11] J. Price. Discussion of quadrature formulas for use on digital computers. Boeing. Sci. Res.
Labs, 1960.

[12] P. Theocaris, N. Ioakimidis, and J. Kazantzakis. On the numerical evaluation of two-


dimensional principal value integrals. International Journal for Numerical Methods in En-
gineering, 15(4):629–634, 1980.


Meshfree method for a 1D Fredholm integral equation having an oscillatory discontinuous kernel

Mati-ur-Rahmana , Zaheer-ud-Dina,b∗, Siraj-ul-Islama

a Department of Basic Sciences, UET Peshawar, Pakistan.


b Department of Basic Sciences, CECOS University Peshawar, Pakistan.

Abstract
In this paper, a numerical meshless solution algorithm for the model at hand is put forward. The proposed algorithm is
based on Levin's quadrature incorporating the global multiquadric radial basis function and is specially designed to handle
the case when the kernel function is discontinuous.

1 Introduction
A one-dimensional Fredholm integral equation of the second kind having an oscillatory kernel can
be expressed as

$$w(n) = f(n) + \int_a^b X(n, t)\, e^{i\omega\Theta(n,t)}\, w(t)\, dt, \qquad n \in [a, b], \tag{1}$$

where $f, \Theta$ are smooth functions and $X$ is a discontinuous or piecewise function defined as

$$X(n, t) = \begin{cases} X_1(n, t), & a \le n < c, \\ X_2(n, t), & c \le n \le b. \end{cases} \tag{2}$$

$\Theta$ is an oscillator function and $\omega$ is the frequency of the oscillator. As the value of $\omega$ increases, the
above model becomes a highly oscillatory Fredholm integral equation. Because of the highly oscillatory
kernel function, existing quadrature rules fail to provide an accurate and stable solution of (1).
Recent work on the same type of models can be found in [3–6]. The question is then how to
solve this sort of integral equation accurately and efficiently. To do so, we have used a
uniform global differentiation matrix procedure (GDMP) to solve oscillatory integral equations having
discontinuous kernels accurately and efficiently.

2 Numerical method
A meshless technique based on global MQ-RBF interpolation has been considered. The global MQ-RBF
differentiation matrix incorporated in this work has been taken from our earlier findings [4].

2.1 Quadrature rule

Discretization of (1) on a square mesh leads to

$$w(n_r) = f(n_r) + \int_a^b X(n_r, t)\, w(t)\, e^{\iota\omega\Theta(n_r,t)}\, dt, \qquad r = 0, 1, \ldots, N, \tag{3}$$
Corresponding author. Email address: zaheeruddin@cecos.edu.pk (Zaheer-ud-Din)

where

$$I_r = \int_a^b X(n_r, t)\, w(t)\, e^{\iota\omega\Theta(n_r,t)}\, dt, \qquad r = 0, 1, \ldots, N.$$

Since $X$ is discontinuous at $c$, we have

$$I_r = \int_a^c X_1(n_r, t)\, w(t)\, e^{\iota\omega\Theta(n_r,t)}\, dt + \int_c^b X_2(n_r, t)\, w(t)\, e^{\iota\omega\Theta(n_r,t)}\, dt = I_r^1 + I_r^2, \qquad r = 0, 1, \ldots, N, \tag{4}$$

where

$$I_r^1 = \int_a^c X_1(n_r, t)\, w(t)\, e^{\iota\omega\Theta(n_r,t)}\, dt, \qquad r = 0, 1, \ldots, N, \tag{5}$$

and

$$I_r^2 = \int_c^b X_2(n_r, t)\, w(t)\, e^{\iota\omega\Theta(n_r,t)}\, dt, \qquad r = 0, 1, \ldots, N. \tag{6}$$

The integrals in (4) can be evaluated by finding meshless approximations $x(t)$ and $y(t)$ of the following
ODEs:

$$x'(t) + \iota\omega\,\Theta'(n_r, t)\, x(t) = X_1(n_r, t)\, w(t), \tag{7}$$

$$y'(t) + \iota\omega\,\Theta'(n_r, t)\, y(t) = X_2(n_r, t)\, w(t). \tag{8}$$

To proceed further, we first solve (7) numerically to get the unknown $x(t)$. Substituting (7)
into (5) gives

$$I_r^1 = \int_a^c \frac{d}{dt}\left( x(t)\, e^{\iota\omega\Theta(n_r,t)} \right) dt = x(c)\, e^{\iota\omega\Theta(n_r,c)} - x(a)\, e^{\iota\omega\Theta(n_r,a)}. \tag{9}$$
The matrix–vector form of (7) can be expressed as

$$(D_g + \iota\omega\,\Lambda_r^1)\, X_r = \mathrm{diag}(H_r^1)\, \Psi^1, \qquad r = 0, 1, \ldots, N. \tag{10}$$

Subsequently,

$$X_r = (D_g + \iota\omega\,\Lambda_r^1)^{-1}\, \mathrm{diag}(H_r^1)\, \Psi^1. \tag{11}$$

Then (5) can be written as

$$I_r^1 = Q_r^1 X_r, \tag{12}$$

where

$$Q_r^1 = \left[ e^{\iota\omega\Theta(n_r,c)},\ 0,\ \cdots,\ 0,\ -e^{\iota\omega\Theta(n_r,a)} \right]. \tag{13}$$

Equation (9) can then be written as

$$I_r^1 = Q_r^1 (D_g + \iota\omega\,\Lambda_r^1)^{-1}\, \mathrm{diag}(H_r^1)\, \Psi^1 = Q_r^1 (D_g + \iota\omega\,\Lambda_r^1)^{-1}\, \mathrm{diag}(H_r^1)\, M_{l1}\, \Psi. \tag{14}$$

Hence (4) takes the form

$$I_r = \left\{ Q_r^1 (D_g + \iota\omega\,\Lambda_r^1)^{-1}\, \mathrm{diag}(H_r^1)\, M_{l1} + Q_r^2 (D_g + \iota\omega\,\Lambda_r^2)^{-1}\, \mathrm{diag}(H_r^2)\, M_{l2} \right\} \Psi. \tag{15}$$

Equation (15) is then simplified as

$$I_r = Z_r \Psi. \tag{16}$$

Finally, incorporating (16) leads to

$$\Psi = (I - Z)^{-1} F, \tag{17}$$

giving the proposed approximate solution of the current problem.
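The essence of steps (7)–(17) is Levin's idea: replace the oscillatory integral by a non-oscillatory ODE, collocate it, and evaluate the antiderivative at the endpoints. A minimal single-integral sketch follows (our own illustration; it uses a plain monomial basis on Chebyshev points instead of the paper's MQ-RBF differentiation matrix):

```python
import numpy as np

def levin_integral(g, theta, dtheta, a, b, omega, m=12):
    """Approximate I = int_a^b g(t) e^{i omega theta(t)} dt by collocating
    x'(t) + i*omega*theta'(t)*x(t) = g(t), then I ~ [x e^{i omega theta}]_a^b."""
    k = np.arange(m)
    t = a + (b - a) * 0.5 * (1.0 - np.cos(np.pi * k / (m - 1)))  # Chebyshev pts
    powers = np.arange(m)
    V = t[:, None] ** powers                                     # basis t^j
    dV = powers * t[:, None] ** np.clip(powers - 1, 0, None)     # d/dt of t^j
    L = dV + 1j * omega * dtheta(t)[:, None] * V                 # Levin operator
    c = np.linalg.solve(L, g(t).astype(complex))
    x = lambda s: np.polyval(c[::-1], s)
    return (x(b) * np.exp(1j * omega * theta(b))
            - x(a) * np.exp(1j * omega * theta(a)))

# Example: int_0^1 cos(t) e^{i*200*t} dt.  The closed-form value follows from
# the smooth particular solution x_p(t) = (i*omega*cos(t) + sin(t))/(1 - omega^2).
omega = 200.0
I_num = levin_integral(np.cos, lambda t: t, np.ones_like, 0.0, 1.0, omega)
x_p = lambda t: (1j * omega * np.cos(t) + np.sin(t)) / (1.0 - omega ** 2)
I_exact = x_p(1.0) * np.exp(1j * omega) - x_p(0.0)
```

The collocation picks out the slowly varying solution of the ODE, so the accuracy is unaffected by how large the frequency $\omega$ is, which is exactly why Levin-type methods suit highly oscillatory kernels.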

Figure 1: Test 1, $\varepsilon_r$ versus $N$ for fixed $\omega = 10^6$; and $\varepsilon_r$ versus $\omega$ for fixed $N = 40$ (legend: GDMP).

3 Numerical Experiments
In the following, we discuss numerical examples to validate the proposed approach GDMP.

Test Problem 1. Consider the following integral equation:

$$w(n) = f(n) + \int_{-1}^{1} X(n, t)\, e^{i\omega\Theta(n,t)}\, w(t)\, dt,$$

where

$$X(n, t) = \begin{cases} X_1(n, t), & -1 \le n < 0, \\ X_2(n, t), & 0 \le n < 1, \end{cases}$$

$$X_1 = \frac{-28t^3 + i\omega\left(2t + \frac{12}{5}\right)}{\cos(5t)}\, e^{-n^2 - 7t^4}, \qquad X_2 = \frac{-28t^3 + i\omega\left(2t + \frac{12}{5}\right)}{\cos(5t)}\, e^{-7t^4},$$

$$\Theta(n, t) = \frac{n^2}{20} + \left(t + \frac{6}{5}\right)^2,$$

and

$$f(n) = \cos(5n) - \left( -e^{\frac{1}{20}i\omega n^2 + \frac{1}{25}i\omega - 7} + e^{\frac{1}{20}i\omega n^2 + \frac{36}{25}i\omega} \right) e^{-n^2} + e^{\frac{1}{20}i\omega n^2 + \frac{36}{25}i\omega} - e^{\frac{1}{20}i\omega n^2 + \frac{121}{25}i\omega - 7}.$$

The exact solution of the problem is $w(n) = \cos(5n)$. Fig. 1 shows that the proposed method is
accurate.

References
[1] A. Iserles, S. P. Norsett, Efficient quadrature of highly oscillatory integrals using derivatives, Proc.
R. Soc. 461 (2005) 1388–1399.


[2] D. Levin, Procedures for computing one and two-dimensional integrals of functions with rapid
irregular oscillations, Math. Comput. 38 (1982) 531–538.

[3] Siraj-ul-Islam, I. Aziz, Zaheer-ud-Din, Meshless methods for multivariate highly oscillatory Fred-
holm integral equations, Eng. Anal. Bound. Elem. 53 (2015) 100–112.

[4] Zaheer-ud-Din, Siraj-ul-Islam, Meshless methods for one-dimensional oscillatory Fredholm inte-
gral equations, App. Math. Comput. 324 (2018) 156–173.

[5] Siraj-ul-Islam, Zaheer-ud-Din, Meshless methods for two-dimensional oscillatory Fredholm integral equations, J. Comput. Appl. Math. 335 (2018) 33–50.

[6] J. Li, X. Wang, S. Xiao, T. Wang, A rapid solution of a kind of 1D Fredholm oscillatory integral
equation, J. Comput. Appl. Math. 236 (2012) 2696–2705.

[7] Zaheer-ud-Din and Siraj-ul-Islam, Numerical Solution of Highly Oscillatory Fredholm Integral
Equation- A Mesh free Approach, 2nd SPI Conference Proceedings, UET Peshawar, (2014) 6-11.

[8] Zaheer-ud-Din and Siraj-ul-Islam, A numerical solution technique of 1D Fredholm integral equa-
tion having oscillatory kernel with stationary points, 3rd SPI Conference Proceedings, UET Pe-
shawar, (2016) 198-200.


Fuzzy Selective Image Segmentation Model Hybrid with Local Image Data and Target Region Energy

Ali Ahmad∗, Noor Badshah†

Abstract
In this paper, we propose a fuzzy selective image segmentation model based on a novel
approximate image (local image data) and a selective energy term. Local data analysis
helps in decreasing the inhomogeneity of the image, and the selective energy keeps the evolution
of the pseudo level set (fuzzy membership) towards the target region. In this way, a novel hybrid
fuzzy selective energy functional is formulated, which is the hybridization of three concepts: (1)
the coefficient of variation, (2) the approximate image, and (3) a novel selective region term.
Selective segmentation of medical and inhomogeneous images demonstrates the effectiveness of the
proposed model.
Keywords. Selective Image Segmentation, Pseudo Level Set, Approximate Image, Target
Image Region.

1 Introduction

Image segmentation is the division of an image into meaningful and important parts and is
considered one of the most fundamental and crucial tasks in image processing
and computer vision. In a nutshell, image segmentation makes the image
representation simpler and more understandable for further image analysis. Intensity inhomogeneity,
usually caused by imperfect image acquisition devices, is often found in real-world
images. A well reputed segmentation model is the Mumford and Shah model (MS model) [9],
which is based on the assumption of piecewise smooth variation of the intensities in
the image and is suitable for segmentation of images having intensity inhomogeneity. However,
due to its complex structure, a number of simplified versions have been developed [2, 3, 6].
The most popular of these is the piecewise constant (PC) model known as the Chan and
Vese model (the CV model) [3]; a level set technique [10] has been used for the solution of
its energy functional. This model does not depend on the gradient of the image; rather,
it uses image statistics. It is successful in segmenting images having objects
with weak or blurred edges and objects with noise, but may be unable to segment images

∗Department of Basic Sciences, UET Peshawar, Pakistan. Email: aliahmadmath@gmail.com

†Department of Basic Sciences, UET Peshawar, Pakistan. Email: noor2knoor@gmail.com


with intensity inhomogeneity, due to its assumption of homogeneity of regions. Similarly,
a fuzzy sets [11] based active contour model, known as the fuzzy energy based active contour
(FEBAC) model, was proposed by Krinidis et al. [5] in a pseudo level set framework similar
to the level set method. Due to its convex formulation and use of a fast algorithm [12], it achieves
a global minimum in very few iterations, but it is unable to segment heterogeneous images.
Whether in surgery, object tracking or medical diagnosis, people are often
interested in segmenting a particular object from an image. In situations like this,
selective segmentation models play an important role. Up to now, relatively little research has
been done in this very important area, especially in the fuzzy set context. The idea
of selective segmentation was first given by Gout et al. [4]. It was improved by Badshah et
al. [1] by introducing the data term of the CV model. Zhang et al. [13] used local information
to capture more complex features. To further improve selective segmentation of
inhomogeneous and textural objects, Mabood et al. [8] used the idea of the extended structure
tensor (EST). Recently, Liu et al. [7] proposed a two-stage selective segmentation model,
which works well in extracting objects of interest from medical and inhomogeneous images.
In this work, we have developed a hybrid fuzzy selective image segmentation model based on a
fuzzy set approach. We have developed a novel approximate image, which easily decreases
the inhomogeneity of the image and is utilized in a coefficient of variation based data term.
Next, we have developed a novel selective term which utilizes edge and distance function
information to extract the object of interest. The proposed model is helpful in extracting
objects from medical images as well as from images having intensity inhomogeneity.
This paper is organized as follows: Section 2 describes the proposed method in detail. Experimental
work is given in Section 3. Finally, concluding remarks are given in Section 4.

2 Proposed Method

2.1 Proposed Energy Functional

Let $\zeta \subset \mathbb{R}^2$ be an image domain, $C$ a curve in $\zeta$ and $f : \zeta \to \mathbb{R}$ a given image. The
proposed model solves the following energy functional:

$$E(C, p_1, p_2, w) = \nu_1 \int_\zeta [w(z)]^n \frac{(f_0(z) - p_1)^2}{(p_1)^2}\, dz + \nu_2 \int_\zeta [1 - w(z)]^n \frac{(f_0(z) - p_2)^2}{(p_2)^2}\, dz. \tag{1}$$

Here $\nu_1, \nu_2$ are positive constants, $w$ is a fuzzy membership function taking values in the closed
interval $[0, 1]$, and $p_1$ and $p_2$ are the constant averages inside and outside $C$. The evolving curve $C$
is implicitly represented by the 0.5 (pseudo) level set of the fuzzy membership function $w$:

$$\begin{cases} C = \{ z \in \zeta : w(z) = 0.5 \}, \\ \mathrm{in}(C) = \{ z \in \zeta : w(z) > 0.5 \}, \\ \mathrm{out}(C) = \{ z \in \zeta : w(z) < 0.5 \}. \end{cases} \tag{2}$$


$f_0$ is a local image and is defined in the following way:

$$f_0 = \eta\, \frac{(f\, Av_c - Av_c^2)}{Av_c},$$

where $\eta$ is a normalizing constant that preserves the average intensity of the local image $f_0$. In order
to extract more local image information, a low-pass circular average filter $Av_c$ is defined on a
circular window centered at each pixel $z = (z_1, z_2)$:

$$Av_c(z) = \frac{1}{m_0} \sum_{y \in S_c} f(y), \qquad S_c = \left\{ y : \sqrt{(y_1 - z_1)^2 + (y_2 - z_2)^2} \le \gamma_0 \right\}.$$

$S_c$ is a circular region of radius $\gamma_0$. The radius is scalable and can be chosen according to
the data; $m_0$ counts the number of pixels in the circular region.

2.2 Minimization
The energy functional in Eq. (1) is minimized w.r.t. the constant averages $p_1$ and $p_2$, keeping
the membership function $w$ fixed:

$$p_1 = \frac{\int_\zeta (w(z))^n (f_0(z))^2\, dz}{\int_\zeta (w(z))^n f_0(z)\, dz}, \tag{3}$$

$$p_2 = \frac{\int_\zeta (1 - w(z))^n (f_0(z))^2\, dz}{\int_\zeta (1 - w(z))^n f_0(z)\, dz}. \tag{4}$$

Now, keeping the constant averages $p_1$ and $p_2$ fixed and minimizing the energy functional in
Eq. (1) w.r.t. $w$, we obtain

$$w(z) = \frac{1}{1 + \left( \dfrac{(p_2)^2\, \nu_1\, (f_0(z) - p_1)^2}{(p_1)^2\, \nu_2\, (f_0(z) - p_2)^2} \right)^{\frac{1}{n-1}}}. \tag{5}$$

Next, to get the evolution equation, we minimize the energy functional in Eq. (1)
with respect to $w$ and introduce an artificial time variable, which gives the steepest
descent formulation

$$\frac{\partial w}{\partial t} = n\nu_1\, (w(z))^{n-1}\, \frac{(f_0(z) - p_1)^2}{(p_1)^2} - n\nu_2\, (1 - w(z))^{n-1}\, \frac{(f_0(z) - p_2)^2}{(p_2)^2}. \tag{6}$$
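The alternating updates (3)–(5) are straightforward to implement. Below is a toy sketch on a synthetic two-phase image (our own illustration, not the authors' code, with $n = 2$, $\nu_1 = \nu_2 = 1$ and a small $\varepsilon$ guarding the divisions, which the paper does not specify):

```python
import numpy as np

nu1 = nu2 = 1.0
n_exp = 2                       # the fuzzy exponent n
eps = 1e-8                      # our own regularization of the divisions

f0 = np.full((32, 32), 100.0)   # background intensity
f0[8:24, 8:24] = 200.0          # bright square playing the role of the object

w = (f0 - f0.min()) / (f0.max() - f0.min())      # initial membership in [0, 1]
for _ in range(20):
    p1 = (w ** n_exp * f0 ** 2).sum() / ((w ** n_exp * f0).sum() + eps)            # Eq. (3)
    p2 = (((1 - w) ** n_exp) * f0 ** 2).sum() / ((((1 - w) ** n_exp) * f0).sum() + eps)  # Eq. (4)
    ratio = (p2 ** 2 * nu1 * (f0 - p1) ** 2) / (p1 ** 2 * nu2 * (f0 - p2) ** 2 + eps)
    w = 1.0 / (1.0 + ratio ** (1.0 / (n_exp - 1)))                                 # Eq. (5)

object_w = w[8:24, 8:24].min()          # membership inside the object
background_w = w[0:8, :].max()          # membership in part of the background
```

On this two-valued image the iteration settles with $p_1 \approx 200$, $p_2 \approx 100$, and the membership $w$ close to 1 on the object and close to 0 on the background.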

The main aim of the proposed work is to extract an object of interest from the image, which is
very helpful in security and medical diagnosis. Let $Z = (z_1, z_2, z_3, \ldots, z_T)$
be a set of available points near the boundary of the object of interest or inside it. The
aim is to use these $T$ points to obtain a contour that best reaches the points while
detecting the target object in the image. The distance function $d(z)$ is defined as in [4]:

$$d(z) = \prod_{j=1}^{T} \left( 1 - \exp\!\left( -\frac{|z - z_j|^2}{2\sigma_1^2} \right) \right), \qquad \forall z \in \zeta, \tag{7}$$

where $\sigma_1$ is a constant that plays an important role in the selection of the target object and
should be chosen according to the images at hand. $d(z)$ is approximately zero in the neighborhood
of the set of points $Z$ and one far away. Another important function, which obtains edge
information of the image and is widely used in segmentation models, is the edge detector
function [4]:

$$g(\nabla f_0) = \frac{1}{1 + \mu |\nabla f_0|^2}, \tag{8}$$

where $\mu$ is a positive constant. The edge detector function is one in flat regions, whereas it is zero
on edges; it tries to stop the evolving curve $C$ on the edges of the target object in the image.
Now, we are in a position to define our novel target region term:

$$TRT = (1 - d(z)\, g(z))^2\, f_0^2\, (1 - d(z))\, w(z). \tag{9}$$

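The ingredients of TRT are easy to inspect numerically. A small sketch follows (our own illustration; marker coordinates and parameter values are arbitrary) evaluating the distance function (7), the edge detector (8) and the resulting TRT weight on a synthetic image, with $w(z)$ taken as 1 for simplicity:

```python
import numpy as np

sigma1, mu = 3.0, 1.0                       # illustrative parameter choices

f0 = np.zeros((32, 32))
f0[10:22, 10:22] = 1.0                      # a single square object

yy, xx = np.mgrid[0:32, 0:32]
markers = [(10, 16), (22, 16), (16, 10), (16, 22)]   # points on its boundary

d = np.ones_like(f0)                         # Eq. (7): product over markers
for zi, zj in markers:
    d *= 1.0 - np.exp(-((yy - zi) ** 2 + (xx - zj) ** 2) / (2.0 * sigma1 ** 2))

gy, gx = np.gradient(f0)
g = 1.0 / (1.0 + mu * (gx ** 2 + gy ** 2))   # Eq. (8): ~1 on flat parts, <1 on edges

trt = (1.0 - d * g) ** 2 * f0 ** 2 * (1.0 - d)   # Eq. (9) with w(z) = 1
```

$d$ vanishes at the marker points and tends to one far from them, so the factor $(1 - d)$ concentrates the selective energy near the chosen object.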
Hence the overall energy functional of the proposed model is:

$$\frac{\partial w}{\partial t} = n\nu_1\, (w(z))^{n-1}\, \frac{(f_0(z) - p_1)^2}{(p_1)^2} - n\nu_2\, (1 - w(z))^{n-1}\, \frac{(f_0(z) - p_2)^2}{(p_2)^2} + \alpha\, TRT, \tag{10}$$

where $\alpha$ is a constant. The finite difference method is used for the solution of the above equation;
the time derivative is approximated using a forward difference. The approximate solution is
given by

$$w_{i,j}^{k+1} = w_{i,j}^{k} + \Delta t\, B(w_{i,j}^{k}), \tag{11}$$

where $B(w_{i,j}^k)$ is an approximation of the right-hand side of Eq. (10) and $\Delta t$ is the time step. The
symbols $(i, j)$ and $k$ stand for the spatial and temporal indices, respectively. By solving the above
approximation to Eq. (10), a 0.5 (pseudo) level set is obtained, which is further used in updating the
fuzzy membership function and the constant averages.

3 Experiments and Discussion

In this part of the paper, we check the segmentation capability of the proposed method
on real medical images and images having intensity inhomogeneity. For TRT to be used, it is
necessary that the intensity of the target object be greater than that of its surroundings. It is
also important to note that the window size of the circular average filter and $\sigma_1$ are scalable
and are selected according to the images. Other parameters like $\nu_1$, $\nu_2$ and $\alpha$ are fixed
unless otherwise stated.


In Fig. 1, a medical image with a tumor is given. Medical images usually have severe
intensity inhomogeneity; the proposed method successfully segments the tumor from an eye
image. In Fig. 2, an inhomogeneous image with three objects of the same intensity variation
and a complicated background is shown. The proposed model shows good selective segmentation
performance.

Fig. 1: Segmentation result of the proposed model on a medical eye image: (a) initialization of
the pseudo level set, (b) target object, (c) final contour with marker points in blue,
(d) segmented part of the image, of size 256 × 256.

4 Conclusion

In this paper, a novel fuzzy selective image segmentation model is developed. Extracting
a particular object from inhomogeneous images, images with complicated backgrounds,
and medical images is a complicated task. First, we perform local image analysis in a
circular region to extract more image information. Then a target region energy is
developed to extract the object of interest. Experimental results validate the effectiveness
of the proposed method in selective segmentation of both medical and heterogeneous images.

References

[1] Noor Badshah and Ke Chen, Image selective segmentation under geometrical con-
straints using an active contour approach, Communications in Computational Physics
7 (2010), no. 4, 759.

[2] Tony F Chan, Selim Esedoglu, and Mila Nikolova, Algorithms for finding global min-
imizers of image segmentation and denoising models, SIAM journal on applied math-
ematics 66 (2006), no. 5, 1632–1648.

[3] Tony F Chan and Luminita A Vese, Active contours without edges, IEEE Transactions
on image processing 10 (2001), no. 2, 266–277.


Fig. 2: Segmentation results of the proposed method on an inhomogeneous image with multiple
objects: first column, initialization of pseudo level sets; second column, target objects;
third column, final contours with marker points in blue; fourth column, segmented parts
of the image, of size 256 × 256.


[4] Christian Gout, Carole Le Guyader, and Luminita Vese, Segmentation under geo-
metrical conditions using geodesic active contours and interpolation using level set
methods, Numerical algorithms 39 (2005), no. 1-3, 155–173.

[5] Stelios Krinidis and Vassilios Chatzis, Fuzzy energy-based active contours, IEEE
Transactions on Image Processing 18 (2009), no. 12, 2747–2755.

[6] Yibao Li and Junseok Kim, An unconditionally stable numerical method for bimodal
image segmentation, Applied Mathematics and Computation 219 (2012), no. 6, 3083–
3090.

[7] Chunxiao Liu, Michael Kwok-Po Ng, and Tieyong Zeng, Weighted variational model
for selective image segmentation with application to medical images, Pattern Recog-
nition 76 (2018), 367–379.

[8] Lutful Mabood, Haider Ali, Noor Badshah, Ke Chen, and Gulzar Ali Khan, Ac-
tive contours textural and inhomogeneous object extraction, Pattern Recognition 55
(2016), 87–99.

[9] David Mumford and Jayant Shah, Optimal approximations by piecewise smooth func-
tions and associated variational problems, Communications on pure and applied
mathematics 42 (1989), no. 5, 577–685.

[10] Stanley Osher and James A Sethian, Fronts propagating with curvature-dependent
speed: algorithms based on hamilton-jacobi formulations, Journal of computational
physics 79 (1988), no. 1, 12–49.

[11] L. A. Zadeh, Fuzzy sets, Information and Control 8 (1965), no. 3, 338–353.

[12] Bing Song and Tony Chan, A fast algorithm for level set based optimization, UCLA
Cam Report 2 (2002), no. 68.

[13] Jianping Zhang, Ke Chen, Bo Yu, and Derek A Gould, A local information based vari-
ational model for selective image segmentation, Inverse Problems Imaging 8 (2014),
no. 1, 293–320.


Orthotropic–Winkler Like Model for Buckling of Microtubules Due to Bending and Torsion

M. Taj¹
¹ Department of Mathematics, University of Azad Jammu and Kashmir, Muzaffarabad 13100

Abstract
The cytoskeleton is composed of three types of filaments: microtubules, intermediate filaments and
actin filaments. These filaments are interconnected and support each other mechanically to
give shape and strength to the cytoskeleton. As the most rigid cytoskeletal filaments,
microtubules bear compressive forces in living cells by balancing the tensile forces within the
cytoskeleton to maintain the cell shape. Microtubules buckle under bending and torsion, and this
property has been studied before for free microtubules using an orthotropic elastic shell model. But
since microtubules are embedded in other elastic filaments, and it has been shown experimentally that
these elastic filaments affect the critical buckling moment and critical buckling torque of
microtubules, we developed an orthotropic Winkler-like model and demonstrated that
the critical buckling moment and critical buckling torque of embedded microtubules are orders of
magnitude higher than those found for free microtubules. Our results show that the critical buckling
moment is about 6.04 nN nm, for which the corresponding curvature is about 𝜃 = 1.33 rad/𝜇m
for embedded MTs, and the critical buckling torque is 0.9 nN nm for the angle of 1.33 rad/𝜇m. Our
results agree well with the experimental findings.
Key words: Microtubule, Orthotropic material, Buckling, Bending, Torsion, Winkler-like model,
Orthotropic elastic shell model.

1. Introduction

Microtubules (MTs) are one of the most important parts of the cytoskeleton of all living cells. The
main purpose of MTs is to give strength and shape to the cell (Howard, 2001). MTs provide
paths for vesicular transport (Schliwa and Woehlke, 2003) and for chromosomes to approach the
poles in mitosis (Shaw et al., 2000). Various experiments have shown that deformation of MTs
results from chemical reactions. For example, a rise in tension on cells of the nervous system leads to the
1
Corresponding author. Tel.: +00923460725331

E-mail address: muhammad_taj75@yahoo.com


assembly of MTs (Zheng et al., 1993) and increase in local curvature would result in
disintegration of MTs (Odde et al., 1999). Mechanical characteristics are crucial for complete
understanding of biological processes of MTs.
Bending of MTs may occur during several physiological processes in living cells, such as polymerization, acto-myosin contractility, motor activity and thermal motion (Waterman-Storer and Salmon, 1997). Experiments on fibroblast cells revealed a mean curvature of 0.4 rad/µm in MTs (Odde et al., 1999), raising the question of bending buckling of MTs during their biological functions.
By applying some form of bending to MTs, other mechanical characteristics, such as their flexural rigidity, can be calculated (Venier et al., 1994). Bending deformation caused by hydrodynamic flow has been applied to calculate the flexural rigidity of MTs (Felgner et al., 1996); flexural rigidity has also been obtained by studying the relaxation of MTs in a laser trap (Felgner et al., 1997) and deduced from the thermal fluctuations of MT shapes (Mickey and Howard, 1995). Similar considerations arose in experiments measuring the bending rigidity of carbon nanotubes (Poncharal et al., 1999). Predicting the buckling of MTs is thus very important to ensure that buckling does not occur in bent MTs during experimental tests.
Not much work has been done on the bending buckling of MTs. Several simulations of bending buckling were developed using an elastic sheet model (Janosi et al., 1998) and a three-dimensional finite element model (Hunyadi et al., 2007), which were related to the effect of MT helicity. These studies also demonstrate that wall orthotropy has a great effect on the buckling of cylindrical shells, but the effect of wall orthotropy on the critical buckling parameters remains to be explored.
An orthotropic elastic shell model was previously developed to study the buckling behaviour of MTs under bending and torsion. That model helped to estimate both critical parameters under bending and torsion, showed that almost every MT in cells buckles, and was used to calculate the effects of the shear modulus and wall orthotropy on the critical buckling force.
MTs are long hollow tubes with an inner diameter of about 20 nm and an outer diameter of about 30 nm, whereas the length of MTs varies from 1 𝜇m to 10 𝜇m in cells and from 50 𝜇m to 100 𝜇m in axons. MTs are assembled from dimers composed of α-tubulin and β-tubulin. Head-to-tail assembly of these dimers forms parallel protofilaments that run along the length of the MT, and lateral contact between neighbouring protofilaments forms the MT wall. The N_S integer notation is the most convenient description of MT formation, where N and S are the protofilament number and the helix start number, respectively. The most common type of MT is assembled in vivo, in which case the protofilaments run parallel to the longitudinal axis (Ishida et al., 2007); many other MT types besides the in vivo configuration occur in different species and cell types.

2. Orthotropic Winkler-like Model


In this section, we develop an "orthotropic Winkler-like model" for the buckling of MTs within an elastic medium due to bending and torsion. The model has five independent material constants: the longitudinal modulus, the circumferential modulus, the shear modulus, the Poisson ratio along the longitudinal direction and the elastic modulus of the surroundings (Ventsel and Krauthammer, 2004), denoted by 𝐸𝑥, 𝐸𝜃, 𝐺𝑥𝜃, 𝑣𝑥 and 𝐾, respectively. The ranges of these material constants for MTs, obtained from data available in the literature, are summarized in Table 3.1. The cross section of an MT is treated as an equivalent circular annulus with an equivalent thickness of about ℎ ≈ 2.7 nm (Pablo et al., 2003; Sirenko et al., 1996); the elastic moduli, in-plane stiffnesses and mass density are defined on the basis of this thickness. The bending behaviour of MTs is governed by the so-called "bridge" thickness, 1.1 nm (Pablo et al., 2003), which is much smaller than 2.7 nm. Thus, just as for single-walled carbon nanotubes (Flugge, 1960), the effective bending stiffness of MTs modelled as an elastic shell should be considered an independent material constant. For the buckling of an individual MT treated as a shell, experimental data suggest that the bending stiffness of MTs can be estimated using an effective thickness of about 1.6 nm (Pablo et al., 2003).

2.1 Governing Equations of the Microtubule

Upon incipient bending or torsional buckling of an MT, the surrounding filament network of the cytoskeleton is deformed; in turn, the surrounding fibres exert a distributed force on the MT opposing the buckling. Inspired by the successful application of Winkler-like models to the buckling of MTs under axial and radial forces and to the buckling of carbon nanotubes (Ru, 2000), we use this model to describe the effect of the surroundings on the bending and torsional buckling of MTs.
The model is
$$P = -Kw,\qquad(3.1)$$

where the negative sign shows that the pressure P opposes the incipient buckling mode and K is the elastic constant of the fibres surrounding the MT. With prestresses 𝑁𝑥, 𝑁𝜃 and 𝑁𝑥𝜃, an MT modelled as an orthotropic elastic shell is described by the following three equations (Eslami and Javaheri, 1999):

$$F_1 = A_1 u + B_1 v + C_1 w = 0,$$
$$F_2 = A_2 u + B_2 v + C_2 w = 0,\qquad(3.2)$$
$$F_3 = A_3 u + B_3 v + C_3 w = 0.$$


where

$$A_1 = (K_x + N_x)R^2\frac{\partial^2}{\partial x^2} + 2RN_{x\theta}\frac{\partial^2}{\partial x\,\partial\theta} + \Big(\frac{K_{x\theta}R^2 + D_{x\theta}}{R^2} + N_\theta\Big)\frac{\partial^2}{\partial\theta^2},$$

$$B_1 = R(v_x K_\theta + K_{x\theta})\frac{\partial^2}{\partial x\,\partial\theta},$$

$$C_1 = -R(v_\theta K_x - N_\theta)\frac{\partial}{\partial x} + RD_x\frac{\partial^3}{\partial x^3} - \frac{D_{x\theta}}{R}\frac{\partial^3}{\partial x\,\partial\theta^2},$$

$$A_2 = R(v_\theta K_x + K_{x\theta})\frac{\partial^2}{\partial x\,\partial\theta},$$

$$B_2 = (K_\theta + N_\theta)\frac{\partial^2}{\partial\theta^2} + 2RN_{x\theta}\frac{\partial^2}{\partial x\,\partial\theta} + \Big(\frac{K_{x\theta}R^2 + 3D_{x\theta}}{R^2} + N_x\Big)R^2\frac{\partial^2}{\partial x^2},$$

$$C_2 = -(K_\theta + N_\theta)\frac{\partial}{\partial\theta} - 2RN_{x\theta}\frac{\partial}{\partial x} + (v_\theta D_x + 3D_{x\theta})\frac{\partial^3}{\partial x^2\,\partial\theta},$$

$$A_3 = R(v_\theta K_x - N_\theta)\frac{\partial}{\partial x} - RD_x\frac{\partial^3}{\partial x^3} + \frac{D_{x\theta}}{R}\frac{\partial^3}{\partial x\,\partial\theta^2},$$

$$B_3 = (K_\theta + N_\theta)\frac{\partial}{\partial\theta} + 2RN_{x\theta}\frac{\partial}{\partial x} - (v_\theta D_x + 3D_{x\theta})\frac{\partial^3}{\partial x^2\,\partial\theta},$$

$$C_3 = -R^2 D_x\frac{\partial^4}{\partial x^4} + 2RN_{x\theta}\frac{\partial^2}{\partial x\,\partial\theta} - (2v_\theta D_x + 4D_{x\theta})\frac{\partial^4}{\partial x^2\,\partial\theta^2} - \frac{D_\theta}{R^2}\Big(\frac{\partial^2}{\partial\theta^2} + 1\Big)^2 + N_\theta\frac{\partial^2}{\partial\theta^2} + N_x R^2\frac{\partial^2}{\partial x^2} - K_\theta.$$
Here the longitudinal coordinate is 𝑥, the circumferential coordinate is 𝜃, and 𝑢 and 𝑣 are the axial and circumferential displacements, respectively. The inward deflection is 𝑤, the density is ρ and R is the average radius. The longitudinal and circumferential Poisson ratios are 𝑣𝑥 and 𝑣𝜃, respectively; they fulfil 𝑣𝜃⁄𝑣𝑥 = 𝐸𝜃⁄𝐸𝑥, where 𝐸𝑥 and 𝐸𝜃 are the longitudinal and circumferential Young's moduli. The shear modulus is 𝐺𝑥𝜃; the in-plane stiffnesses in the longitudinal and circumferential directions are 𝐾𝑥 = 𝐸𝑥ℎ/(1 − 𝑣𝑥𝑣𝜃) and 𝐾𝜃 = 𝐸𝜃ℎ/(1 − 𝑣𝑥𝑣𝜃), and 𝐾𝑥𝜃 = 𝐺𝑥𝜃ℎ is the in-plane shear stiffness. The effective bending stiffnesses are 𝐷𝑥, 𝐷𝜃 and 𝐷𝑥𝜃 (Flugge, 1960).
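As a quick numerical illustration of these definitions, the in-plane stiffnesses can be evaluated directly from the material constants of Table 3.1. This is a minimal sketch, not code from the paper; the specific values picked from the quoted ranges (E_θ = 2 MPa, G_xθ = 1 MPa) are assumptions.

```python
# Sketch: evaluating the in-plane stiffness parameters of Eq. (3.2) from the
# material constants of Table 3.1. Values chosen within the quoted ranges
# (E_theta = 2 MPa, G_xtheta = 1 MPa) are assumptions, not the paper's choices.

E_x = 1.0e9        # longitudinal modulus, Pa
E_theta = 2.0e6    # circumferential modulus, Pa (assumed, within 1-4 MPa)
G_xtheta = 1.0e6   # shear modulus, Pa (assumed, within 1 kPa-1 MPa)
v_x = 0.3          # longitudinal Poisson ratio
h = 2.7e-9         # equivalent thickness, m

# The Poisson ratios fulfil v_theta / v_x = E_theta / E_x
v_theta = v_x * E_theta / E_x

K_x = E_x * h / (1.0 - v_x * v_theta)          # longitudinal in-plane stiffness, N/m
K_theta = E_theta * h / (1.0 - v_x * v_theta)  # circumferential in-plane stiffness, N/m
K_xtheta = G_xtheta * h                        # in-plane shear stiffness, N/m

print(K_x, K_theta, K_xtheta)
```

Note how strongly orthotropic the wall is: K_x is roughly three orders of magnitude larger than K_θ for these values.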
Following the literature, the cross section of MTs is treated as an equivalent circular ring with an equivalent thickness ℎ ≈ 2.7 nm (Sirenko, Stroscio and Kim, 1996). Thus, throughout this paper, all elastic moduli, in-plane stiffnesses (not including bending stiffness) and the mass density 𝜌 are defined on the basis of this thickness. The ranges of these material constants for MTs, identified from the data available in the literature (Kawaguchi, Ishiwata and Yamashita, 2008), are summarized in Table 3.1.

Parameter                              Symbol    Value
Longitudinal modulus                   𝐸𝑥        0.5–2 GPa
Circumferential modulus                𝐸𝜃        1–4 MPa
Shear modulus in 𝑥-𝜃 plane             𝐺𝑥𝜃       1 kPa–1 MPa
Poisson's ratio in axial direction     𝑣𝑥        0.3
Mass density per unit volume           𝜌         1.48 g/cm³
Equivalent thickness                   ℎ         2.7 nm
Effective thickness for bending        ℎ₀        1.6 nm

Table 3.1. Values of orthotropic material constants for MTs
Because of the latticed structure of MTs, their bending stiffness is governed by the "bridge" thickness, which is much smaller than the equivalent thickness. We therefore treat the effective bending stiffness of MTs as an independent material constant when modelling them as an elastic shell, in contrast to classical elastic shell theory based on a single wall thickness.
Experiments on individual MTs suggest that an effective thickness ℎ₀ = 1.6 nm can be used to predict the bending stiffness (Pablo, Schaap, Mackintosh and Schmidt, 2003). So, if the longitudinal modulus 𝐸𝑥 = 1 GPa, the longitudinal effective bending stiffness is 𝐷𝑥 = 𝐸𝑥ℎ₀³⁄[12(1 − 𝑣𝑥𝑣𝜃)] ≈ 0.342 nN nm with ℎ₀ = 1.6 nm. Similarly, the circumferential and shear bending stiffnesses can be estimated as 𝐷𝜃 = 𝐸𝜃ℎ₀³⁄[12(1 − 𝑣𝑥𝑣𝜃)] and 𝐷𝑥𝜃 = 𝐺𝑥𝜃ℎ₀³⁄12. For given ℎ and ℎ₀, the orthotropic elastic shell model thus depends on the four constants 𝐸𝑥, 𝐸𝜃, 𝐺𝑥𝜃 and 𝑣𝑥. We substitute

$$\alpha = v_\theta/v_x = E_\theta/E_x = K_\theta/K_x = D_\theta/D_x$$

and

$$\beta = \frac{G_{x\theta}(1 - \alpha v_x^2)}{E_x} \approx \frac{G_{x\theta}}{E_x} = \frac{D_{x\theta}}{D_x} = \frac{K_{x\theta}}{K_x}\qquad(\alpha v_x^2 \to 0),$$

so the orthotropic elastic shell model can be described by the four parameters 𝐸𝑥, 𝑣𝑥, 𝛼 and 𝛽. It is easily verified that the isotropic elastic shell model is recovered for Poisson ratio 𝑣 by setting 𝛼 = 1 and 𝛽 = (1 − 𝑣)/2.
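The bending stiffnesses and the ratios α and β above can be checked numerically. A minimal sketch, assuming E_x = 1 GPa and the strongly orthotropic values α = β = 0.001 used later in the text:

```python
# Sketch: bending stiffnesses from the effective thickness h0 = 1.6 nm, and the
# dimensionless ratios alpha and beta of the orthotropic shell model.
# alpha = beta = 0.001 are the values used later in the text; treating them as
# exact inputs here is an assumption.

E_x = 1.0e9          # Pa, longitudinal modulus
v_x = 0.3
alpha = 0.001        # E_theta / E_x
beta = 0.001         # G_xtheta / E_x
h0 = 1.6e-9          # m, effective bending thickness

v_theta = alpha * v_x
D_x = E_x * h0**3 / (12.0 * (1.0 - v_x * v_theta))   # longitudinal bending stiffness, N*m
D_theta = alpha * D_x                                # circumferential bending stiffness
D_xtheta = beta * D_x                                # shear bending stiffness (alpha*v_x**2 -> 0)

D_x_nNnm = D_x * 1e18   # convert N*m to nN*nm
print(D_x_nNnm)          # about 0.34 nN*nm, consistent with the text's 0.342 nN*nm
```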
The bending buckling mode can be taken as

$$u(x,\theta) = \cos\Big(\frac{m\pi}{L}x\Big)\sum_{n=1}^{\infty} A_n\cos(n\theta),$$

$$v(x,\theta) = \sin\Big(\frac{m\pi}{L}x\Big)\sum_{n=1}^{\infty} B_n\sin(n\theta),$$

$$w(x,\theta) = \sin\Big(\frac{m\pi}{L}x\Big)\sum_{n=1}^{\infty} C_n\cos(n\theta).\qquad(3.3)$$
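The truncated series in (3.3) is straightforward to evaluate numerically. A minimal sketch with hypothetical coefficients A_n, B_n, C_n (in the analysis these are determined by the eigenvalue problem, not chosen freely):

```python
import math

# Sketch: evaluating the bending-buckling mode shapes of Eq. (3.3), truncated
# at N terms. The coefficient lists A, B, C are hypothetical placeholders.

def mode_shapes(x, theta, L, m, A, B, C):
    """Return (u, v, w) at (x, theta) for coefficient lists A, B, C (n = 1..N)."""
    ax = m * math.pi * x / L
    u = math.cos(ax) * sum(a * math.cos(n * theta) for n, a in enumerate(A, start=1))
    v = math.sin(ax) * sum(b * math.sin(n * theta) for n, b in enumerate(B, start=1))
    w = math.sin(ax) * sum(c * math.cos(n * theta) for n, c in enumerate(C, start=1))
    return u, v, w

# At x = L/(2m) the axial factor cos(m*pi*x/L) vanishes, so u = 0 there.
u, v, w = mode_shapes(x=0.25, theta=0.3, L=1.0, m=2, A=[1.0], B=[0.5], C=[0.2])
```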
Here 𝐴𝑛, 𝐵𝑛 and 𝐶𝑛 are real constants, 𝑛 is the circumferential wave number, 𝑚 (≠ 0) the half axial wave number, 𝐿 the length of the MT, and the dimensionless axial wavelength is 𝐿/(𝑅𝑚). Combining (3.1), (3.2) and (3.3), we obtain the Winkler-like model for MTs within an elastic medium:

$$2L(m^2\pi^2R^4K_x + L^2R^2K_{x\theta} + L^2D_{x\theta})A_1 - 2m\pi L^2R^3(K_{x\theta} + K_\theta v_x)B_1 - 2m\pi R\{L^2D_{x\theta} - R^2(m^2\pi^2D_x + L^2K_x v_\theta)\}C_1 - \pi m^2MLR^2A_2 = 0,$$

$$2\pi mRL(K_{x\theta} + K_x v_\theta)A_1 - 2\{L^2K_\theta + \pi^2m^2(3D_{x\theta} + R^2K_{x\theta})\}B_1 + 2\{L^2K_\theta + \pi^2m^2(3D_{x\theta} + D_x v_\theta)\}C_1 + \pi m^2MB_2 = 0,$$

$$2\pi mL\{-L^2D_{x\theta} + R^2(\pi^2m^2D_x + L^2K_x v_\theta)\}A_1 - 2L^2R\{L^2K_\theta + \pi^2m^2(3D_{x\theta} + D_x v_\theta)\}B_1 + 2R\{L^4K_\theta + \pi(4\pi m^2L^2D_{x\theta} + 2L^4RK + \pi^3m^4R^2D_x + 2\pi m^2L^2D_x v_\theta)\}C_1 - \pi m^2ML^2RC_2 = 0,$$

$$-\pi m^2MLR^2A_1 + 2L(4L^2D_{x\theta} + 4L^2R^2K_{x\theta} + \pi^2m^2R^4K_x)A_2 - 4\pi mL^2R^3(K_{x\theta} + K_\theta v_x)B_2 + 2\pi mR\{-4L^2D_{x\theta} + R^2(\pi^2m^2D_x + L^2K_x v_\theta)\}C_2 - \pi m^2MLR^2A_3 = 0,$$

$$\pi m^2MB_1 + 4\pi mLR(K_{x\theta} + K_x v_\theta)A_2 - 2\{4L^2K_\theta + \pi^2m^2(3D_{x\theta} + R^2K_{x\theta})\}B_2 + 4\{L^2K_\theta + \pi^2m^2(3D_{x\theta} + D_x v_\theta)\}C_2 + \pi m^2MB_3 = 0,$$

$$-\pi m^2ML^2R^2C_1 + 2\pi mLR\{-4L^2D_{x\theta} + R^2(\pi^2m^2D_x + L^2K_x v_\theta)\}A_2 - 4L^2R^2\{L^2K_\theta + \pi^2m^2(3D_{x\theta} + D_x v_\theta)\}B_2 + 2[9L^4D_\theta + R^2\{L^4K_\theta + \pi(16\pi m^2L^2D_{x\theta} + 2L^4RK + \pi^3m^4R^2D_x + 8\pi m^2L^2D_x v_\theta)\}]C_2 - \pi m^2ML^2R^2C_3 = 0,$$

$$-\pi m^2MLR^2A_2 + 2L(9L^2D_{x\theta} + 9L^2R^2K_{x\theta} + \pi^2m^2R^4K_x)A_3 - 6\pi mL^2R^3(K_{x\theta} + K_\theta v_x)B_3 + 2\pi mR\{-9L^2D_{x\theta} + R^2(\pi^2m^2D_x + L^2K_x v_\theta)\}C_3 = 0,$$

$$\pi m^2MB_2 + 6\pi mLR(K_{x\theta} + K_x v_\theta)A_3 - 2\{9L^2K_\theta + \pi^2m^2(3D_{x\theta} + R^2K_{x\theta})\}B_3 + 6\{L^2K_\theta + \pi^2m^2(3D_{x\theta} + D_x v_\theta)\}C_3 = 0,$$

$$-\pi m^2ML^2R^2C_2 + 2\pi mLR\{-9L^2D_{x\theta} + R^2(\pi^2m^2D_x + L^2K_x v_\theta)\}A_3 - 6L^2R^2\{L^2K_\theta + \pi^2m^2(3D_{x\theta} + D_x v_\theta)\}B_3 + 2[64L^4D_\theta + R^2\{L^4K_\theta + \pi(36\pi m^2L^2D_{x\theta} + 2L^4RK + \pi^3m^4R^2D_x + 18\pi m^2L^2D_x v_\theta)\}]C_3 = 0.\qquad(3.4)$$
Writing the system (3.4) in matrix form, we obtain

$$\mathbf{M}_{9\times 9}\Big(M_{cr},\, K,\, \frac{L}{Rm}\Big)\,[A_1\ A_2\ A_3\ B_1\ B_2\ B_3\ C_1\ C_2\ C_3]^{T} = 0.\qquad(3.5)$$

We seek a nontrivial solution, which requires det 𝐌 = 0; this condition yields the critical buckling load and hence the buckling mode.
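Numerically, the condition det 𝐌 = 0 is found by sweeping the load until the determinant changes sign and then refining the root. A generic sketch of that procedure (not code from the paper): assembling the real 9×9 matrix is omitted, and `toy_matrix` is a hypothetical 2×2 stand-in with a known root at M = 2 so the search itself can be verified.

```python
# Generic sketch of the solution procedure for Eq. (3.5): sweep the load M,
# bracket a sign change of det M(M), then refine by bisection.
# 'toy_matrix' is a hypothetical stand-in; its determinant vanishes at M = 2.

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def toy_matrix(M):
    return [[M - 2.0, 1.0],
            [0.0, 1.0]]

def critical_load(matrix, det, grid):
    """Smallest load on 'grid' at which det(matrix(M)) changes sign, bisected."""
    dets = [det(matrix(M)) for M in grid]
    for i in range(len(grid) - 1):
        M0, M1, d0 = grid[i], grid[i + 1], dets[i]
        if d0 * dets[i + 1] <= 0.0:          # root bracketed in [M0, M1]
            for _ in range(60):              # bisection refinement
                Mm = 0.5 * (M0 + M1)
                dm = det(matrix(Mm))
                if d0 * dm <= 0.0:
                    M1 = Mm
                else:
                    M0, d0 = Mm, dm
            return 0.5 * (M0 + M1)
    return None

M_cr = critical_load(toy_matrix, det2, [0.5 * k for k in range(1, 11)])
print(M_cr)   # ~2.0 for the toy matrix
```

In the actual analysis this search is repeated over the wavelength L/(Rm) (and wave numbers), and the minimum root over all modes gives the critical buckling load.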


Figure 2.1. Dependence of the critical buckling moment 𝑀𝑐𝑟 on the dimensionless axial wavelength L/Rm (normalized by the diameter 2R), obtained for the orthotropic shell model with α = β = 0.001.

Figure 2.2. Dependence of the critical buckling moment 𝑀𝑐𝑟 on the dimensionless axial wavelength L/Rm (normalized by the diameter 2R), obtained for the isotropic shell model (𝛼 = 1, 𝛽 = 0.35).


Figure 3.1. Dependence of the critical buckling moment 𝑀𝑐𝑟 on the dimensionless axial wavelength L/Rm (normalized by the diameter 2R), obtained for the orthotropic Winkler-like model.

Fig. 3.1 plots the critical buckling moment 𝑀𝑐𝑟 against the normalized length 𝐿⁄𝑅𝑚, where R is the radius of the MT and m is the half axial wave number. K captures the effect of the surroundings on the critical buckling of MTs in the natural environment where they lie. Without considering the surroundings, the critical buckling moment of MTs was about 0.85 nN nm; in our work, where embedded MTs are considered, this value increases to 6.04 nN nm, about seven times the value for free MTs. The surrounding elastic medium thus increases the effective stiffness of MTs considerably, which deserves special attention in understanding how MTs embedded in an elastic medium provide shape and rigidity to the cell.
This value of the moment corresponds to a critical buckling curvature of about 0.16 rad/𝜇m, calculated from the expression 1⁄𝜌 = 𝑀𝑐𝑟⁄(𝜋𝐸𝑥ℎ𝑅³), using the same longitudinal modulus as for free MTs. The experimental value of the critical buckling curvature is 0.4 rad/𝜇m (Odde, Ma, Briggs, Demarco and Kirchner, 1999), of the same order as our theoretical value. This supports the view that the elastic surroundings increase the rigidity of MTs, so that MTs can provide shape and rigidity to the cell and its organelles can function properly.
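The curvature expression above is easy to evaluate. A minimal sketch using the quoted moment and modulus, with R = 13 nm assumed for the mean radius (the text introduces this value only later, for the torsion analysis); the exact figure obtained depends on which radius and thickness are used:

```python
import math

# Sketch: critical buckling curvature from 1/rho = M_cr / (pi * E_x * h * R^3).
# M_cr, E_x and h are the values quoted in the text; R = 13 nm is an assumption.

M_cr = 6.04e-18      # N*m  (6.04 nN*nm)
E_x = 1.0e9          # Pa
h = 2.7e-9           # m
R = 13.0e-9          # m (assumed mean radius)

curvature = M_cr / (math.pi * E_x * h * R**3)    # rad/m
curvature_per_um = curvature * 1e-6              # rad per micrometre
print(curvature_per_um)   # order 0.1-1 rad/um, comparable to the measured 0.4 rad/um
```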
2.2 Effects of Elastic Medium on Torsional Buckling of MTs

Figure 2.3. Dependence of the critical buckling torque 𝑇𝑐𝑟 on the dimensionless axial wavelength L/Rm (normalized by the diameter 2R), obtained for the isotropic elastic shell model (𝛼 = 1, 𝛽 = 0.35).

Figure 2.4. Dependence of the critical buckling torque 𝑇𝑐𝑟 on the dimensionless axial wavelength L/Rm (normalized by the diameter 2R), obtained for the anisotropic elastic shell model (𝛼 = 𝛽 = 0.001).

Figure 3.2. Dependence of the critical buckling torque 𝑇𝑐𝑟 on the dimensionless axial wavelength L/Rm (normalized by the diameter 2R), obtained for the orthotropic Winkler-like model.

Figure 3.3. Dependence of the critical buckling torque 𝑇𝑐𝑟 on the dimensionless axial wavelength L/Rm (normalized by the diameter 2R), obtained for the orthotropic Winkler-like model.

During many physiological processes, such as the movement of motor proteins along MTs, the motion of cilia and flagella, the movement of chromosomes, and crawling with a skewed angle on the inner surface of the plasma membrane, MTs rotate within the cell. Previously, torsion of MTs was studied without considering the effect of the elastic medium (Yi, Chang and Ru, 2008), and the critical buckling load due to torsion was demonstrated. The surrounding medium, however, may affect the torsional behaviour of MTs: owing to coupling with the surroundings, the critical torsional buckling load may rise. To examine this question, we discuss here the effect of the medium on the torsional mechanics of MTs, and calculate the critical buckling torque, the corresponding critical torsional angle and the corresponding filament length.
For an MT embedded in an elastic medium, the shearing force 𝑁𝑥𝜃 is essential, while 𝑁𝑥 = 𝑁𝜃 = 0. The buckling mode of embedded MTs under torsion can then be represented as follows (Flugge, 1960):

$$u(x,\theta) = U\cos\Big(\frac{m\pi}{L}x - n\theta\Big),$$

$$v(x,\theta) = V\cos\Big(\frac{m\pi}{L}x - n\theta\Big),$$

$$w(x,\theta) = W\sin\Big(\frac{m\pi}{L}x - n\theta\Big),\qquad(3.6)$$

where 𝑈, 𝑉 and 𝑊 are real constants, 𝑛 denotes the circumferential wave number, 𝐿 is the length of the MT and the nonzero 𝑚 is the half axial wave number. Substituting (3.1) and (3.6) into the orthotropic elastic shell model (3.2) yields the following set of equations:


$$\Big[\frac{2\pi mnRN_{x\theta}}{L} - \frac{\pi^2m^2R^2(K_x + N_x)}{L^2} - \frac{n^2\{R^2(K_{x\theta} + N_\theta) + D_{x\theta}\}}{R^2}\Big]U + \Big[\frac{\pi mnR(K_{x\theta} + K_\theta v_x)}{L}\Big]V + \Big[\frac{\pi m[n^2L^2D_{x\theta} + R^2\{L^2(N_\theta - K_x v_\theta) - n^2L^2D_{x\theta}\}]}{L^3R}\Big]W = 0,$$

$$\Big[\frac{\pi mnR(K_{x\theta} + K_x v_\theta)}{L}\Big]U + \Big[\frac{2\pi mnRN_{x\theta}}{L} - \frac{n^2\{R^2(K_{x\theta} + N_\theta) + D_{x\theta}\}}{R^2} - n^2(K_\theta + N_\theta)\Big]V + \Big[\frac{\pi^2m^2n(3D_{x\theta} + D_x v_\theta)}{L^2} - \frac{2\pi mRN_{x\theta}}{L} + n(K_\theta + N_\theta)\Big]W = 0,$$

$$\Big[\frac{\pi m[n^2L^2D_{x\theta} + R^2\{L^2(N_\theta - K_x v_\theta) - \pi^2m^2D_x\}]}{L^3R}\Big]U + \Big[\frac{\pi^2m^2n(3D_{x\theta} + D_x v_\theta)}{L^2} - \frac{2\pi mRN_{x\theta}}{L} - n(K_\theta + N_\theta)\Big]V$$
$$-\ \Big[\frac{(n^2 - 1)^2D_\theta}{R^2} + \frac{R^2\{\pi^2m^2L^2(4n^2D_{x\theta} + R^2N_x + 2n^2D_x v_\theta)\} + L^4(2\pi RK + n^2N_\theta) + \pi^4m^4R^2D_x - 2\pi mnL^3RN_{x\theta} + L^4K_\theta}{L^4R^2}\Big]W = 0.\qquad(3.7)$$
We seek a nonzero solution of (3.7), which leads to det 𝐌 = 0, where

$$\mathbf{M}_{3\times 3}\Big(T_{cr},\, \frac{L}{Rm},\, n\Big)\,[U\ V\ W]^{T} = 0\qquad(3.8)$$

is the matrix form of (3.7). Setting 𝑅 = 13 nm, 𝐸𝑥 = 1 GPa, 𝑣𝑥 = 0.3, 𝛼 = 0.001 and 𝛽 = 0.001, the critical buckling torque 𝑇𝑐𝑟 is plotted against the length 𝐿⁄(𝑅𝑚) for different 𝑛 in Fig. 3.2. Compared with the orthotropic elastic shell model for free MTs, the critical buckling torque is near 0.95 nN nm, for which the critical torsional angle is about 𝜃 = 1.33 rad/𝜇m ≈ 76.24°/𝜇m, corresponding to a filament skew angle of about 𝛾 = 𝑅𝜃 ≈ 0.99°.
For an MT of significant length, m = 3 and n = 2 correspond to the minimum buckling load. In this case it can be verified from Eqs. (3.7) that the critical buckling shear force is given by (𝑁𝑥𝜃)𝑐𝑟 = 𝜋𝑅𝐸𝑥ℎ⁄[𝐿(1 − 𝑣𝑥𝑣𝜃)]. Moreover, the critical torque is related to it by 𝑇𝑐𝑟 = 2𝜋𝑅²(𝑁𝑥𝜃)𝑐𝑟, from which 𝑇𝑐𝑟 = 2𝜋²𝑅³𝐸𝑥ℎ⁄[𝐿(1 − 𝑣𝑥𝑣𝜃)] ≈ 2𝜋²𝑅³𝐸𝑥ℎ⁄𝐿. Our results indicate that embedded MTs are stiffer than the free MTs analysed earlier (Yi, Chang and Ru, 2008): the elastic medium surrounding MTs significantly increases their rigidity, which cannot be ignored. Our results show that embedded MTs can sustain about 12 times more force than free MTs.
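The closed-form torsion relations above can be evaluated directly. A minimal sketch using the parameter values quoted in the text; the MT length L = 10 μm is an assumption for illustration:

```python
import math

# Sketch of the closed-form torsion relations quoted in the text:
#   (N_xtheta)_cr = pi * R * E_x * h / (L * (1 - v_x * v_theta)),
#   T_cr = 2 * pi * R**2 * (N_xtheta)_cr,
# and the filament skew angle gamma = R * theta.
# R, E_x, h, v_x follow the text; L = 10 um is an assumed illustrative length.

R = 13.0e-9          # m
E_x = 1.0e9          # Pa
h = 2.7e-9           # m
v_x, v_theta = 0.3, 0.001 * 0.3
L = 10.0e-6          # m (assumed)

N_cr = math.pi * R * E_x * h / (L * (1.0 - v_x * v_theta))  # critical shear force per length
T_cr = 2.0 * math.pi * R**2 * N_cr                          # critical torque, N*m

theta = 1.33e6       # critical torsional angle, rad/m (1.33 rad/um from the text)
gamma_deg = math.degrees(R * theta)                         # filament skew angle
print(gamma_deg)     # about 0.99 degrees, as quoted in the text
```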
Conclusions
We combined the orthotropic elastic shell model with a Winkler-like model to develop an orthotropic Winkler-like model and investigated the effects of the elastic surroundings on the buckling behaviour of MTs under bending and torsion. For embedded MTs, a critical buckling moment of about 6.04 nN nm is obtained, for which the corresponding curvature is about 𝜃 = 1.33 rad/𝜇m ≈ 76.24°/𝜇m, and a critical buckling torque of 0.9 nN nm at an angle of 1.33 rad/𝜇m is derived. The results of the orthotropic Winkler-like model were compared with the orthotropic elastic shell model for free MTs. The orthotropic elastic shell model alone cannot reproduce the experimentally observed values of critical bending buckling (Odde, Ma, Briggs, Demarco and Kirchner, 1999), whereas our proposed model agrees well with the values obtained in the laboratory. Our calculations show that the surrounding medium has a drastic effect on the stiffness of MTs, and hence on cells. In this paper we have sought to explain experimental results on embedded MTs, demonstrating theoretically that coupling with the medium in which MTs lie and perform their function greatly affects their mechanical properties. In particular, we applied the orthotropic Winkler-like model to bending and torsional buckling and showed how, owing to this coupling, the bending moment and torsional buckling load increase and give strength to the cell.
In future work, the nonlinear and viscous effects of the surrounding medium on MTs could be considered. A similar procedure can be applied to calculate the effect of the medium on other components of the cytoskeleton, and a mathematical model could be developed that treats all components of the cytoskeleton together, since they jointly give the cell its shape and maintain its rigidity.

REFERENCES

Howard, J. 2001. Mechanics of Motor Proteins and the cytoskeleton. Sinauer Associates Inc.,
Sunderland.
Schliwa, M., and G. Woehlke. 2003. Molecular motors. Nature (London), 422(6933):759-
765.
Shaw, M. K., H. L. Compton, D. S. Roos and L. G. Tilney. 2000. Microtubules, but not actin
filaments, drive daughter cell budding and cell division in Toxoplasma gondii. J. Cell.
Sci., 113(7):1241-1254.
Zheng, J., R. E. Buxbaum and S. R. Heidemann, 1993. Investigation of microtubule assembly
and organization accompanying tension-induced neurite initiation. J. Cell Sci.,
104(4): 1239-1250.
Odde, D. J., L. Ma, A. H. Briggs, A. Demarco and M. W. Kirchner. 1999. J. Cell. Sci.,
112(19): 3283-3288.
Waterman-Storer, C. M. and E. D. Salmon. 1997. Actomyosin based retrograde flow of
microtubules in the lamella of migrating epithelial cells influences microtubule
dynamic instability and turnover and is associated with microtubule breakage and


treadmilling. J. Cell Biol., 139(2): 417-434.


Venier, P., A. C. Maggs, M. F. Carlier and D. Pantaloni. 1994. Analysis of microtubule
rigidity using hydrodynamic flow and thermal fluctuations. J. Biol. Chem., 269(18):
13353-13360.
Felgner, H., R. Frank and M. Schliwa. 1996. Flexural rigidity of microtubules measured with
the use of optical tweezers. J. Cell. Sci., 109(2): 509-516.
Felgner, H., R. Frank, J. Biernat, E. M. Mandelkow, B. Ludin, A. Matus and M. Schliwa.
1997. Domains of neuronal microtubule-associated proteins and flexural rigidity of
microtubules. J. Cell Biol., 138: 1067-1075.
Mickey, B. and J. Howard. 1995. Rigidity of microtubules is increased by stabilizing agents.
J. Cell Biol., 130(04): 909-917.
Poncharal, P., Z. L. Wang and D. Ugarte. 1999. Electrostatic deflections and
electromechanical resonances of carbon nanotubes. Science, 283(5407): 1513-1516.
Chang, T. C., and J. Hou. 2006. Molecular dynamics simulations on buckling of
        multiwalled carbon nanotubes under bending. J. Appl. Phys., 100(11): 114327-
        114332.
Janosi, I. M., D. Chretien and H. Flyberg. 1998. Modeling elastic properties of microtubule
tips and walls. Eur. Biophys. J., 27(5):501-513.
Hunyadi, V., D. Chretien, H. Flyberg and M. Janosi. 2007. Why is the microtubule lattice
        helical? Biol. Cell, 99(2): 117-128.
Ishida, T., S. Thitamadee and T. Hashimoto. 2007. Twisted growth and organization of
cortical microtubules. J. Plant. Res., 120(1): 61-70.

Ventsel, E. and T. Krauthammer. 2004. Thin Plates and Shells. Marcel Dekker, New York.

Pablo, P. J., I. A. T. Schaap, F. C. Mackintosh and C. F. Schmidt. 2003. Deformation and
        collapse of microtubules on the nanometer scale. Phys. Rev. Lett., 91(9): 098101.

Sirenko, M., M. Stroscio and K. W. Kim. 1996. Elastic vibrations of microtubules in a fluid.
        Phys. Rev. E, 53(1): 1003-1010.

Flugge, W. 1960. Stresses in Shells. Springer-Verlag, Berlin.

Ru, C. Q. 2000. Effective bending stiffness of carbon nanotubes. Phys. Rev. B, 62: 9973-
        9978.

Eslami, M. R. and R. Javaheri. 1999. Buckling of composite cylindrical shells under
        mechanical and thermal loads. J. Therm. Stresses, 22(6): 527-545.


Kawaguchi, K., S. Ishiwata and T. Yamashita. 2008. Temperature dependence of the flexural
rigidity of single microtubules. Biophys. Res. Commun., 336: 637-642.


Yi, L., T. C. Chang and C. Q. Ru. 2008. Buckling of microtubules under bending and torsion.
        J. Appl. Mech., 103(10): 103516-22.



An Efficient Hybrid Distance Variational Image Segmentation Model

Noor Badshah∗, Fazli Rehman†, Ali Ahmad‡

Abstract
Intensity inhomogeneity is a major challenge in the field of image segmentation. Images with intensity inhomogeneity and complex structure are difficult to segment, because the majority of region-based models rely on intensity distributions. In this paper, we suggest a new hybrid model by combining Euclidean and kernel-induced distance metrics. Experimental results on various images show that our suggested model is superior to the CV, Wu et al. and Salah et al. models.
Keywords. Image segmentation, kernel function, kernel trick, level set, functional minimization, Euler-Lagrange equation, finite difference method.

1 Introduction

Image segmentation is a crucial and complicated task in the area of computer vision and
image processing. Its main aim is to alter the representation of a given image into some-
thing new that is more meaningful and simpler for further analysis. To solve image
segmentation problems, researchers have suggested several models; they have also
done remarkable work to improve the efficiency of the existing segmentation models.
The active contour (or snake) model suggested by Kass et al. [6] has been found to be a
reliable segmentation model. Its basic idea is to start with an initial curve around the
object to be detected; the curve then moves in the direction of its interior normal and stops on the
true boundary of the object. This technique depends on the minimization
of an energy functional. The foremost demerit of this method is its sensitivity to the
position of the initial contour. Several methods have been suggested to improve
the active contour model, among which the most crucial and successful is the level
set method suggested by Osher and Sethian [3]. Current active contour models may
be classified into edge-based models [13, 14, 15, 16, 17, 18] and region-based models
[1, 2, 5, 7, 8, 9, 10, 11, 12]. Each of these classes has its own significance and drawbacks.
Edge-based models depend on the gradient of the image to stop the evolving contour on
the boundaries of the desired object. Generally, edge-based models contain a balloon force

∗ Department of Basic Sciences, UET Peshawar. Email: noor2knoor@gmail.com
† Department of Basic Sciences, UET Peshawar. Email: fazalpeshawaree@gmail.com
‡ Department of Basic Sciences, UET Peshawar. Email: aliahmadmath@gmail.com


term and an edge-based stopping term, which expand and shrink the contour. Although
edge-based models work very well on images with good intensity contrast between regions, they still have some intrinsic limitations. For example,
these models can only segment images whose boundaries are defined by the image gradient,
and a suitable choice of the balloon force term is difficult. The efficiency
of edge-based models is poor on images with too many edges or ill-defined edges.
Region-based models are based on the homogeneity of regions, i.e., they group pixels with
similar characteristics into homogeneous regions, using statistical information.
Region-based models have many advantages over edge-based models; for example, they do not use the image gradient and give better
segmentation results on images with weak boundaries. Although region-based
segmentation is simple and useful, it also has some intrinsic limitations, e.g., these models are expensive both computationally and in terms of memory.
Chan and Vese (CV in this work) suggested a region-based model [2] which is a
particular case of the Mumford-Shah model [4]. It is a representative and popular member of
the region-based family. It can easily segment images with high noise, as well as
objects whose boundaries cannot be defined by the gradient. Wu et al.
suggested a convex variational model [1]; it works well under low intensity inhomogeneity
but may fail on severely inhomogeneous images. The Salah et al. model [5] uses
kernel-induced non-Euclidean distance. It works well on noisy images,
but it has difficulty segmenting images with high intensity inhomogeneity
and multi-intensity regions.
In this work, we suggest a new hybrid, kernel-based Chan-Vese
(KBCV) model, combining the Euclidean and kernel-induced non-Euclidean
distances used in the CV model [2] and the Salah et al. model [5]. The suggested KBCV model
to some extent overcomes the limitations of the CV, Salah et al. and Wu et al.
models [1, 2, 5].
The remainder of this paper is arranged as follows. In Section 2, we concisely review
the CV, Wu et al. and Salah et al. models. In Section 3, our suggested hybrid KBCV
model is presented. In Section 4, the suggested KBCV model is examined on some real
and synthetic images. Finally, the conclusion is given in Section 5.

2 Previous Work

In this section we discuss some well-known recent models designed for
segmenting images with intensity inhomogeneity and multiple regions.


2.1 Active Contour without Edges (CV) Model

Chan and Vese (CV in this work) [2] suggested an active contour strategy for two-phase
image segmentation, which is in fact a specific case of the Mumford-Shah model [4]. The
fundamental concept of the CV model is to find a partition of a given image P◦ = P◦(X),
where X = (x, y), into two regions, background and foreground. Let ξ denote
the boundary separating background and foreground, and let Υ be the two-dimensional
domain of P◦. The CV model defines the energy functional

$$E^{CV}(\varrho_1, \varrho_2, \xi) = \rho\cdot\mathrm{length}(\xi) + \alpha_1\int_{\mathrm{inside}(\xi)}|P_\circ - \varrho_1|^2\,dX + \alpha_2\int_{\mathrm{outside}(\xi)}|P_\circ - \varrho_2|^2\,dX.\qquad(1)$$

Here, ϱ2 and ϱ1 denote the average intensity values of P◦ outside and inside the curve ξ,
respectively, and the parameters α1, α2 and ρ are positive. In the level set formulation, Eq. (1) can be written as

$$E^{CV}(\varrho_1, \varrho_2, \Phi) = \rho\int_\Upsilon\delta(\Phi)|\nabla\Phi|\,dX + \alpha_1\int_\Upsilon|P_\circ - \varrho_1|^2 H(\Phi)\,dX + \alpha_2\int_\Upsilon|P_\circ - \varrho_2|^2\big(1 - H(\Phi)\big)\,dX,\qquad(2)$$

where Φ, H and δ are the level set function, the Heaviside unit step function and the Dirac delta function,
respectively. H and δ are defined as

$$H(y) = \begin{cases}1 & \text{if } y \ge 0,\\ 0 & \text{if } y < 0,\end{cases}\qquad \delta(y) = H'(y).$$

Since H is not differentiable at the origin, CV utilized regularized versions
of H and δ, defined as

$$H_\epsilon(\tau) = \frac{1}{2} + \frac{1}{\pi}\arctan\Big(\frac{\tau}{\epsilon}\Big),\qquad \delta_\epsilon(\tau) = H'_\epsilon(\tau) = \frac{1}{\pi}\,\frac{\epsilon}{\epsilon^2 + \tau^2}.$$
The regularized version of Eq. (2) is then

$$E^{CV}(\varrho_1, \varrho_2, \Phi) = \rho\int_\Upsilon\delta_\epsilon(\Phi)|\nabla\Phi|\,dX + \alpha_1\int_\Upsilon|P_\circ - \varrho_1|^2 H_\epsilon(\Phi)\,dX + \alpha_2\int_\Upsilon|P_\circ - \varrho_2|^2\big(1 - H_\epsilon(\Phi)\big)\,dX.\qquad(3)$$

Keeping Φ fixed and minimizing Eq. (3) with respect to ϱ1 and ϱ2, we get:


$$\varrho_1(\Phi) = \frac{\int_\Upsilon P_\circ H_\epsilon(\Phi)\,dX}{\int_\Upsilon H_\epsilon(\Phi)\,dX},\qquad \varrho_2(\Phi) = \frac{\int_\Upsilon P_\circ\big(1 - H_\epsilon(\Phi)\big)\,dX}{\int_\Upsilon\big(1 - H_\epsilon(\Phi)\big)\,dX}.$$

Similarly, minimizing Eq. (3) with respect to Φ while keeping ϱ1 and ϱ2 constant, we get:

$$\delta_\epsilon(\Phi)\Big[\rho\,\mathrm{div}\Big(\frac{\nabla\Phi}{|\nabla\Phi|}\Big) - \alpha_1(P_\circ - \varrho_1)^2 + \alpha_2(P_\circ - \varrho_2)^2\Big] = 0 \ \text{in } \Upsilon,\qquad(4)$$
$$\frac{\delta_\epsilon(\Phi)}{|\nabla\Phi|}\frac{\partial\Phi}{\partial\vec{n}} = 0 \ \text{on } \partial\Upsilon.$$
The CV model successfully segments noisy images and objects whose boundaries cannot be defined by the gradient, and it easily segments intensity-homogeneous images. However, intensity-inhomogeneous images are not easily segmented by this model. Moreover, being non-convex, it can give weak segmentation outputs.
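The CV averages ϱ1 and ϱ2 are simple weighted means under the regularized Heaviside. A minimal sketch (not the authors' code) of the discretized versions on a tiny synthetic one-dimensional "image"; the sample data and ε are assumptions:

```python
import math

# Sketch: the CV region averages rho1, rho2 computed with the regularized
# Heaviside H_eps on a tiny synthetic 1-D "image". The discrete sums stand in
# for the integrals over the domain; eps and the sample data are assumptions.

def H_eps(phi, eps=1.0):
    """Regularized Heaviside: 1/2 + arctan(phi/eps)/pi."""
    return 0.5 + math.atan(phi / eps) / math.pi

def region_means(P, Phi, eps=1.0):
    """CV averages inside (Phi >= 0) and outside the zero level set."""
    Hs = [H_eps(phi, eps) for phi in Phi]
    inside = sum(p * hh for p, hh in zip(P, Hs)) / sum(Hs)
    outside = sum(p * (1 - hh) for p, hh in zip(P, Hs)) / sum(1 - hh for hh in Hs)
    return inside, outside

P = [10.0, 10.0, 10.0, 0.0, 0.0, 0.0]        # two-intensity sample data
Phi = [5.0, 5.0, 5.0, -5.0, -5.0, -5.0]      # level set: positive inside, negative outside
rho1, rho2 = region_means(P, Phi)
print(rho1, rho2)   # rho1 well above rho2, slightly smoothed by H_eps
```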

2.2 Effective Level Set Image Segmentation With a Kernel-Induced Data Term
Generally, image data in different portions may not be linearly separable. For such data, the majority of linearly separable models give weak segmentation results. To overcome this problem, Salah et al. [5] used the idea of a kernel function and a non-Euclidean distance and suggested a new segmentation model. The kernel function [19] and the kernel trick [27] play a crucial role in data classification problems.
Suppose the given image is P◦ : Υ → R and φ is a transformation function which maps the given image data to a higher-dimensional feature space. Salah et al. [5] defined the following energy functional:
$$E^{Sa}(\Upsilon_r, n_r) = \sum_{r=1}^{N} \int_{\Upsilon_r} \|\varphi(P_\circ) - \varphi(n_r)\|^2 \, dX + \lambda\,|\partial\Upsilon_r|, \quad (5)$$

where Υ = {Υr : r = 1, 2, . . . , N}, and the first and second terms on the right-hand side of Eq. (5) are the data term and the regularization term, respectively. Salah et al. used the RBF kernel function, i.e. $K(x, y) = \exp\big(-\frac{\|x - y\|^2}{\sigma_1^2}\big)$, with the property $K(x, y) = \varphi(x)^T \varphi(y)$, and obtained the following relation:
$$\|\varphi(P_\circ) - \varphi(n_r)\|^2 = \big(\varphi(P_\circ) - \varphi(n_r)\big)^T \big(\varphi(P_\circ) - \varphi(n_r)\big) = K(P_\circ, P_\circ) + K(n_r, n_r) - 2K(P_\circ, n_r). \quad (6)$$
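The identity in Eq. (6) lets one evaluate the feature-space distance without ever forming φ explicitly. A minimal sketch with the RBF kernel (our own helper names; σ1 is the kernel width from the text):

```python
import numpy as np

def rbf_kernel(x, y, sigma1=1.0):
    """RBF kernel K(x, y) = exp(-|x - y|^2 / sigma1^2)."""
    return np.exp(-np.abs(x - y)**2 / sigma1**2)

def kernel_distance_sq(p, n, sigma1=1.0):
    """Eq. (6): ||phi(p) - phi(n)||^2 = K(p, p) + K(n, n) - 2 K(p, n)."""
    return (rbf_kernel(p, p, sigma1) + rbf_kernel(n, n, sigma1)
            - 2.0 * rbf_kernel(p, n, sigma1))
```

Since K(p, p) = 1 for the RBF kernel, the distance reduces to 2(1 − K(p, n)), which is bounded above by 2 — unlike the unbounded Euclidean distance, which is one reason the kernel-induced data term is more robust to outliers.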
The energy functional $E^{Sa}$ measures the kernel-induced non-Euclidean distance between P◦ and nr. Salah et al. defined the segment parameter nr as follows:
$$n_r = \frac{\int_{\Upsilon_r} P_\circ\, K(P_\circ, n_r)\,dX}{\int_{\Upsilon_r} K(P_\circ, n_r)\,dX}, \qquad r = 1, 2, \ldots, N. \quad (7)$$


This model gives good segmentation results on noisy images. However, like the models discussed above, it has the same problems in segmenting intensity-inhomogeneous images and images with multi-intensity regions.

2.3 Convex Variational Level Set Method for Image Segmentation


Wu et al. put forward a convex variational level set model [1] which depends on the coefficient of variation. The Wu et al. model defines the energy functional as:


$$E^{Wu}(\Phi) = \gamma \int_{\Upsilon} \frac{(P_\circ - a_1)^2}{a_1^2}\,(\Phi + 1)^2\,dX + \int_{\Upsilon} \frac{(P_\circ - a_2)^2}{a_2^2}\,(\Phi - 1)^2\,dX, \quad \text{for } \Phi \in L^2(\Upsilon), \quad (8)$$
where γ > 0 and the values of the constants a1 and a2 are defined as:
$$a_1 = \frac{\int_{\Upsilon} P_\circ^2\, H(\Phi)\,dX}{\int_{\Upsilon} P_\circ\, H(\Phi)\,dX}, \qquad a_2 = \frac{\int_{\Upsilon} P_\circ^2\,\big(1 - H(\Phi)\big)\,dX}{\int_{\Upsilon} P_\circ\,\big(1 - H(\Phi)\big)\,dX}.$$

Minimizing Eq. (8) by gradient descent technique, we get the following equation:

$$\Phi_t = -\gamma\,\frac{(P_\circ - a_1)^2}{a_1^2}\,(\Phi + 1) - \frac{(P_\circ - a_2)^2}{a_2^2}\,(\Phi - 1) \quad (9)$$
$$= -\left[\gamma\,\frac{(P_\circ - a_1)^2}{a_1^2} + \frac{(P_\circ - a_2)^2}{a_2^2}\right]\Phi - \left[\gamma\,\frac{(P_\circ - a_1)^2}{a_1^2} - \frac{(P_\circ - a_2)^2}{a_2^2}\right].$$
The Wu et al. model is limited to images with low intensity inhomogeneity, and it does not work well in segmenting images with severe intensity inhomogeneity. Moreover, it is difficult for this model to segment images having multi-intensity regions.

Motivation for the hybrid data term

Chan and Vese utilized the Euclidean distance in their model [2], which is successful in segmenting two-phase images and noisy images up to some limit. However, this model is not completely successful in segmenting images with multi-intensity and severely inhomogeneous regions, and it is also computationally expensive. Salah et al. [5] used a kernel function and a non-Euclidean metric in their model. This model gives good segmentation results on noisy, intensity-homogeneous and mildly intensity-inhomogeneous images. However, it has the same limitations as the CV model, and the Wu et al. model shares the same drawbacks.
In this work we suggest a hybrid KBCV model, i.e. a kernel-based Chan–Vese model, obtained by combining the ideas of the Euclidean and non-Euclidean distances utilized in the CV and Salah et al. models [2, 5].


The suggested model surmounts the limitations of the models discussed in section 2. In addition, the model works well when speckle noise is added to the given image, though not on multi-intensity images. The performance and outcomes of the suggested model can be judged from Figs. 1–5 and Table 1.

3 Suggested Model

The Salah et al. model [5] uses a kernel function and a non-Euclidean distance. The kernel function is crucial in the classification of non-linearly separable data [20, 21, 22, 23]. It implicitly projects the given image data into a higher-dimensional space through some transformation, where the data can be separated by a linear hyperplane. The hyperplane, which is linear in the higher-dimensional feature space, becomes non-linear when projected back to the original space [27].
Motivated by the models discussed in section 2, in this section we suggest a hybrid model based on the Euclidean and kernel distance metrics; that is, in the fitting terms of the CV model (Eq. (3)) we utilize the kernel distance metric as follows:


$$E(b_1, b_2, \Phi) = \beta_1 \int_{\Upsilon} \big[\,|P_\circ - b_1|^2 + (1 - K1)\,\big] H_\epsilon(\Phi)\,dX + \beta_2 \int_{\Upsilon} \big[\,|P_\circ - b_2|^2 + (1 - K2)\,\big]\big(1 - H_\epsilon(\Phi)\big)\,dX, \quad (10)$$

where K1 and K2 are Gaussian (RBF) kernels, defined as follows:

$$K1 = \exp\Big(-\frac{(P_\circ - b_1)^2}{\sigma^2}\Big) \qquad \text{and} \qquad K2 = \exp\Big(-\frac{(P_\circ - b_2)^2}{\sigma^2}\Big).$$
The parameter σ (σ ≥ 0) is known as the bandwidth; it depends on the distance variance of the data points in the given image. The value of σ can be found as follows. Suppose N is the total number of data points (pixels) in the given image P◦(x, y), with intensities {uj = P◦(xi, yj) : i, j = 1, 2, 3, . . . , N}. Their mean is
$$\bar{u} = \frac{1}{N}\sum_{j=1}^{N} u_j.$$

Let $D_j = \|u_j - \bar{u}\|$ be the distance between the data point $u_j$ and the data center $\bar{u}$. Further, let $\bar{D}$ be the mean of the $D_j$, calculated as:
$$\bar{D} = \frac{1}{N}\sum_{j=1}^{N} D_j.$$


Hence, the bandwidth parameter σ can be calculated from the distance variance as:
$$\sigma = \left(\frac{1}{N - 1}\sum_{j=1}^{N} \big(D_j - \bar{D}\big)^2\right)^{\frac{1}{2}}.$$

The distance variance measures how closely the data points lie to the data cluster center.
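The bandwidth computation above can be sketched in a few lines of NumPy (our own vectorized reading of the steps; the pixel intensities are flattened into a single list of N data points):

```python
import numpy as np

def bandwidth_sigma(P):
    """Bandwidth sigma from the distance variance of the pixel intensities:
    u_bar = mean(u_j), D_j = |u_j - u_bar|, D_bar = mean(D_j),
    sigma = sqrt( sum_j (D_j - D_bar)^2 / (N - 1) )."""
    u = np.asarray(P, dtype=float).ravel()   # the N data points (pixels)
    u_bar = u.mean()                          # data center
    D = np.abs(u - u_bar)                     # distances to the center
    D_bar = D.mean()
    return np.sqrt(((D - D_bar)**2).sum() / (u.size - 1))
```

A constant image gives σ = 0, while images whose intensities spread far from the mean give a larger bandwidth, widening the kernels K1 and K2 accordingly.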

Keeping Φ fixed and minimizing Eq. (10) with respect to b1 and b2, we get:


$$b_1 = \frac{\int_{\Upsilon} P_\circ\,(1 - K1)\,H_\epsilon(\Phi)\,dX}{\int_{\Upsilon} (1 - K1)\,H_\epsilon(\Phi)\,dX}, \qquad b_2 = \frac{\int_{\Upsilon} P_\circ\,(1 - K2)\,\big(1 - H_\epsilon(\Phi)\big)\,dX}{\int_{\Upsilon} (1 - K2)\,\big(1 - H_\epsilon(\Phi)\big)\,dX}.$$
Now, keeping b1 and b2 fixed and minimizing Eq. (10) with respect to Φ, we get the following gradient descent equation:

$$\frac{\partial\Phi}{\partial t} = \delta_\epsilon(\Phi)\Big[\beta_1\big[(P_\circ - b_1)^2 + (1 - K1)\big] - \beta_2\big[(P_\circ - b_2)^2 + (1 - K2)\big]\Big]. \quad (11)$$
Usually, the CV and other models utilize a length term for smoothing and re-initializing the contour, but in our model this is not effective. In the suggested model we use a Gaussian smoothing filter instead of the length term:
$$G(x, y) = \exp\Big(-\frac{x^2 + y^2}{2\sigma_\circ^2}\Big). \quad (12)$$
The Gaussian smoothing filter reduces noise and undesired features present in the given image. For effective segmentation outcomes we take the value of σ◦ in the interval (0, 1).
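Putting the pieces together, one iteration of the suggested model can be sketched as follows. This is an illustrative sketch, not the authors' MATLAB code: the 5×5 discrete Gaussian window, the two fixed-point passes for b1 and b2, and the small regularizers in the denominators are all our own simplifications.

```python
import numpy as np

def gaussian_kernel(size=5, sigma0=0.5):
    """Discrete window for Eq. (12): G(x, y) = exp(-(x^2 + y^2) / (2 sigma0^2))."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    G = np.exp(-(xx**2 + yy**2) / (2.0 * sigma0**2))
    return G / G.sum()

def kbcv_step(P, phi, beta1=5.0, beta2=20.0, sigma=0.35, eps=1.0, dt=0.1):
    """One KBCV iteration: update b1, b2, evolve phi by Eq. (11), then smooth
    phi with the Gaussian filter in place of the usual length term."""
    H = 0.5 + np.arctan(phi / eps) / np.pi
    delta = (eps / (eps**2 + phi**2)) / np.pi
    # initial guesses for b1, b2 from plain region means, then fixed-point
    # passes of the coupled updates (K1, K2 depend on b1, b2 and vice versa)
    b1 = (P * H).sum() / (H.sum() + 1e-8)
    b2 = (P * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
    for _ in range(2):
        K1 = np.exp(-(P - b1)**2 / sigma**2)
        K2 = np.exp(-(P - b2)**2 / sigma**2)
        b1 = (P * (1 - K1) * H).sum() / (((1 - K1) * H).sum() + 1e-8)
        b2 = (P * (1 - K2) * (1 - H)).sum() / (((1 - K2) * (1 - H)).sum() + 1e-8)
    K1 = np.exp(-(P - b1)**2 / sigma**2)
    K2 = np.exp(-(P - b2)**2 / sigma**2)
    # gradient flow of Eq. (11)
    F = beta1 * ((P - b1)**2 + (1 - K1)) - beta2 * ((P - b2)**2 + (1 - K2))
    phi = phi + dt * delta * F
    # Gaussian smoothing of phi instead of the length term
    G = gaussian_kernel()
    pad = np.pad(phi, 2, mode='edge')
    out = np.zeros_like(phi)
    for i in range(phi.shape[0]):
        for j in range(phi.shape[1]):
            out[i, j] = (pad[i:i + 5, j:j + 5] * G).sum()
    return out
```

The smoothing pass at the end plays the role of the length term: it keeps the level set function regular without the curvature computation, which is why the model converges in very few iterations.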

4 Experimental Results

In this section, we examine the suggested KBCV model on different real and synthetic images of various intensities. The experimental results of the suggested model are also compared with those of the CV, Salah et al. and Wu et al. models. All experimental results were obtained on a Windows 8.1, Core i3 system with 4 GB RAM and a 1.7 GHz processor, using MATLAB R2013a.

4.1 Segmentation results of the suggested KBCV model for images with various intensities
In Fig. 3, the suggested KBCV model is examined on images of different intensities. The image in the first row is intensity-inhomogeneous, while that in the second row is a multi-intensity region image. The first, second and third columns represent the initial contour, the final contour and the segmented result, respectively. The figure clearly reveals that the suggested KBCV model easily segments these images in a few iterations.
Fig. 4 shows the efficiency of the suggested KBCV model when speckle noise is added to the same images shown in Fig. 3. The amount of speckle noise added to these images is 0.01.


4.2 Performance of the suggested KBCV model in noisy and medical images
In Fig. 5, the efficiency of the suggested KBCV model is examined on real, synthetic and noisy images. The figure clearly shows that the suggested model segments all the images in a few iterations.

4.3 Comparison of the suggested KBCV model with the CV, Salah et al. and Wu et al. models
Fig. 1 and Fig. 2 show a comparison among the segmentation results of the CV model [2], the Salah et al. model [5], the Wu et al. model [1] and the suggested KBCV model. Both figures demonstrate that the performance of the suggested model is far better than that of each of these models.


(i) CV model result (j) Salah et al. model result (k) Wu et al. model result (l) suggested model result

Fig. 1: Comparison of the suggested KBCV model result with CV, Salah et al. and Wu et al.
models.



(i) CV model result (j) Salah et al. model result (k) Wu et al. model result (l) suggested model result

Fig. 2: From left to right, the first, second, third and fourth columns represent the CV model results, Salah et al. model results, Wu et al. model results, and suggested KBCV model results, respectively.



Fig. 3: Performance of the suggested KBCV model on images having different intensities; for each image, the first, second and third columns show the initial contour, the final contour and the segmented result, respectively. Parameters used for the images in the 1st and 2nd rows are β1 = 5, β2 = 20, σ = 0.1, iterations = 1 and β1 = 2, β2 = 20, σ = 0.35, iterations = 1, respectively.



Fig. 4: Performance of the suggested KBCV model on images of different intensities to which speckle noise is added. For each image, the first, second and third columns show the initial contour, the final contour and the segmented result, respectively. Parameters used for the images in the 1st and 2nd rows are β1 = 5, β2 = 20, σ = 0.468, iterations = 500 and β1 = 2, β2 = 20, σ = 0.35, iterations = 100, respectively.



Fig. 5: Performance of the suggested KBCV model on medical and severely noisy images; for each image, the first, second and third columns show the initial contour, the final contour and the segmented result, respectively. Parameters used for the images in the 1st, 2nd and 3rd rows are β1 = 25, β2 = 15, σ = 0.45, iterations = 200; β1 = 24, β2 = 25, σ = 0.3, iterations = 1; and β1 = 24, β2 = 25, σ = 0.3, iterations = 1, respectively.


Test problems in Figure →        1                    2
Model used ↓                 itr     cpu(s)      itr     cpu(s)
CV model                     500     752.79     1000    1107.14
Salah et al. model          8000    2739.34    10000    1808.96
Wu et al. model             1100      84.38     4300     480.55
Suggested model                2       1.16        2      1.013

Tab. 1: Quantitative comparison of the suggested KBCV model results with the CV, Salah et al. and Wu et al. models.

5 Conclusion

In this work, we suggested a hybrid model for image segmentation based on Euclidean and non-Euclidean distance metrics. Experimental results show the better efficiency and robustness of the suggested model on real and synthetic images having noise, intensity inhomogeneity and multi-intensity regions. Besides this, the suggested KBCV model gives good segmentation results when speckle noise is added to the given image. The efficiency and computational cost of the suggested model are far better than those of the CV, Salah et al. and Wu et al. models.


References

[1] Wu, Yongfei and He, Chuanjiang, A convex variational level set model for image
segmentation, Signal Processing, 106,pp.123–133,2015.

[2] Chan, Tony F and Vese, Luminita A, Active contours without edges, IEEE Transac-
tions on image processing,10(2),pp.266–277,2001.

[3] Osher, Stanley and Sethian, James A, Fronts propagating with curvature-dependent
speed: algorithms based on Hamilton-Jacobi formulations, Journal of computational
physics,79(1),pp.12–49,1988.

[4] Mumford, David and Shah, Jayant, Optimal approximations by piecewise smooth
functions and associated variational problems, Communications on pure and applied
mathematics, 42(5),pp.577–685,1989.

[5] Salah, Mohamed Ben and Mitiche, Amar and Ayed, Ismail Ben, Effective level set
image segmentation with a kernel induced data term, IEEE Transactions on Image
Processing, 19(1),pp.220–232,2010.

[6] M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision, 1(4), pp. 321–331, 1988.

[7] Paragios, Nikos and Deriche, Rachid, Geodesic active regions and level set meth-
ods for supervised texture segmentation, International Journal of Computer Vision,
46(3),pp223–247,2002.

[8] Tsai, Andy and Yezzi, Anthony and Willsky, Alan S, Curve evolution implementation
of the Mumford-Shah functional for image segmentation, denoising, interpolation, and
magnification, IEEE transactions on Image Processing, 10(8),pp1169–1186,2001.

[9] Hanbury, Allan, Image segmentation by region based and watershed algorithms, Wiley
Encyclopedia of Computer Science and Engineering, 2008.

[10] Horowitz, Steven L, Picture segmentation by a directed split-and-merge procedure,


IJCPR, pp424–433,1974.

[11] Gao, Song and Bui, Tien D, Image segmentation and selective smoothing by us-
ing Mumford-Shah model, IEEE Transactions on Image Processing, 14(10),pp1537–
1549,2005.

[12] Mumford, David and Shah, Jayant, Optimal approximations by piecewise smooth
functions and associated variational problems, Communications on pure and applied
mathematics, 42(5),pp577–685,1989.


[13] Caselles, Vicent and Kimmel, Ron and Sapiro, Guillermo, Geodesic active contours,
International journal of computer vision, 22(1),pp61–79,1997.

[14] Kass, Michael and Witkin, Andrew and Terzopoulos, Demetri, Snakes: Active con-
tour models, International journal of computer vision, 1(4),pp321–331,1988.

[15] Li, Chunming and Liu, Jundong and Fox, Martin D, Segmentation of external force
field for automatic initialization and splitting of snakes, 38(11),pp1947–1960,2005.

[16] Li, Chunming and Xu, Chenyang and Gui, Changfeng and Fox, Martin D, Level set
evolution without re-initialization: a new variational formulation, In IEEE Conference
on Computer Vision and Pattern Recognotion (CVPR),1,pp430–436,2005.

[17] Caselles, Vicent and Catté, Francine and Coll, Tomeu and Dibos, Françoise, A
geometric model for active contours in image processing, Numerische mathematik,
66(1),pp1–31,1993.

[18] Malladi, Ravi and Sethian, James A and Vemuri, Baba C, Shape modeling with front propagation: a level set approach, IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(2), pp. 158–175, 1995.

[19] Cover, Thomas M, Geometrical and statistical properties of systems of linear in-
equalities with applications in pattern recognition, IEEE transactions on electronic
computers, 3,pp326–334,1965.

[20] Dhillon, Inderjit S and Guan, Yuqiang and Kulis, Brian, Weighted graph cuts without
eigenvectors a multilevel approach, IEEE transactions on pattern analysis and machine
intelligence, 29(11),2007.

[21] Jain, Anil K and Duin, Robert P. W. and Mao, Jianchang, Statistical pattern recog-
nition: A review, IEEE Transactions on pattern analysis and machine intelligence,
22(1),pp4–37,2000.

[22] Muller, K-R and Mika, Sebastian and Ratsch, Gunnar and Tsuda, Koji and
Scholkopf, Bernhard, An introduction to kernel-based learning algorithms, IEEE
transactions on neural networks, 12(2),pp181–201,2001.

[23] Schölkopf, Bernhard and Smola, Alexander and Müller, Klaus-Robert, Non-
linear component analysis as a kernel eigenvalue problem, Neural computation,
10(5),pp1299–1319,1998.

[24] Boser, Bernhard E and Guyon, Isabelle M and Vapnik, Vladimir N, A training al-
gorithm for optimal margin classifiers, Proceedings of the fifth annual workshop on
Computational learning theory, pp144–152,1992.

[25] Noble, William Stafford and others, Support vector machine applications in compu-
tational biology, Kernel methods in computational biology, pp71–92,2004.


[26] Vázquez, Carlos and Mitiche, Amar and Ayed, Ismail Ben, Image segmentation as regularized clustering: A fully global curve evolution method, Image Processing, 2004. ICIP'04. 2004 International Conference on, 5, pp. 3467–3470, 2004.

[27] Schölkopf, Bernhard, The kernel trick for distances, Advances in neural information
processing systems, pp301–307,2001.


Joint Image Dehazing and Segmentation


Awal Sher∗, Maryum†, Haider Ali‡

Abstract
In this paper, we propose a joint variational model which is competent to segment and dehaze a given image simultaneously. Most recent segmentation models may not efficiently segment foggy images. To validate the performance of our model, we compare our results with the state of the art.

1 Introduction
Atmospheric particles, mainly water droplets, cause absorption and scattering of light, which degrades the quality of natural images [6]. This degradation is caused by two fundamental phenomena, attenuation and air-light [7, 10], both of which are functions of distance. According to Koschmieder's law, the fog-affected image can be modeled as [1, 2, 3]:

$$I(x, y) = I_0(x, y)\,e^{-k d(x, y)} + I_\infty\big(1 - e^{-k d(x, y)}\big).$$
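For intuition, Koschmieder's law is easy to simulate. The sketch below is our own illustration (with k the extinction coefficient and I∞ the air-light, both assumed constants); it attenuates a clean image I0 according to the per-pixel scene depth d:

```python
import numpy as np

def add_fog(I0, d, k=0.5, I_inf=1.0):
    """Koschmieder's law: I = I0 * e^(-k d) + I_inf * (1 - e^(-k d))."""
    t = np.exp(-k * d)            # transmission along the line of sight
    return I0 * t + I_inf * (1.0 - t)
```

At zero depth the image is unchanged; as d grows, every pixel fades toward the air-light I∞, which is exactly why distant objects lose contrast in fog.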

W. H. Cho et al. [5] proposed the following model:


$$E(A(x, y)) = \int_{\Omega} \big(H(W(x, y) - A(x, y))\big)^2\,dxdy + \lambda \int_{\Omega} \phi\big(\|\nabla A(x, y)\|\big)\,dxdy,$$

where A(x, y) is the air-light or atmospheric veil and W(x, y) = min(I(x, y)) is the image of the minimal components of I(x, y) at each pixel. This algorithm can be applied to color as well as grayscale images.
On the other hand, segmentation is defined as the process of partitioning an image into something more meaningful and easier to analyze [8, 9]. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
The best known method for segmentation was proposed by Chan and Vese (CV) [4] and is given below:

$$F^{CV}(c_1, c_2, C) = \mu\,\mathrm{length}(C) + \lambda_1 \int_{inside(C)} |I(x) - c_1|^2\,dx + \lambda_2 \int_{outside(C)} |I(x) - c_2|^2\,dx. \quad (1)$$


∗ Department of Mathematics, UOP, Peshawar, email: awalsherstd@uop.edu.pk
† Department of Mathematics, UOP, Peshawar, email: maryumuop488@gmail.com
‡ Department of Mathematics, UOP, Peshawar, email: dr.haider@upesh.edu.pk


2 The Proposed Model

From the above-mentioned equations we construct the following energy functional. The aim of the proposed model is to remove fog and segment objects within a given image at the same time, which is a challenging task, so as to obtain a clean, segmented result.
$$F(\phi, A, c_1, c_2) = \mu \int_{\Omega} \delta(\phi)\,|\nabla\phi|\,dxdy + \nu \int_{\Omega} \big(|\nabla A|\big)\,dxdy + \lambda \int_{\Omega} \big(H(W - A)\big)^2\,dxdy$$
$$+\ \lambda_1 \int_{\Omega} \Big(\frac{I(x, y) - A}{1 - \frac{A}{I_\infty}} - c_1\Big)^2 H(\phi)\,dxdy + \lambda_2 \int_{\Omega} \Big(\frac{I(x, y) - A}{1 - \frac{A}{I_\infty}} - c_2\Big)^2 \big(1 - H(\phi)\big)\,dxdy. \quad (2)$$

3 Experimental Results
In this section we present some experimental results to show that our model is novel and performs well compared to other existing models. All parameters are fixed in our model. MATLAB is used for the computations. The segmentation and defogging results are given below.

(a) Initial Contour (b) LBF Result (c) Wu Result (d) Our Result

Fig. 1: The performance of the proposed model, LBF and Wu models on images with fog.

(a) Initial Contour (b) LBF Result (c) Wu Result (d) Our Result

Fig. 2: The performance of the proposed model, LBF and Wu models on images with fog.


(a) Initial Contour (b) Our defog Result (c) Initial Contour (d) Our defog Result

Fig. 3: The performance of the proposed model.

References
[1] S. G. Narasimhan and S. K. Nayar, Contrast restoration of weather degraded images,
IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713-724, June 2003.

[2] S. G. Narasimhan and S. K. Nayar, Vision and the atmosphere, Int. J. Comput. Vis.,
vol. 48, no. 3, pp. 233-254, July 2002.

[3] R. Fattal, Single image dehazing, ACM Trans. Graph., vol. 27, no. 3, pp. 1-9, Aug.
2008.

[4] Chan TF, Vese LA. Active contours without edges. IEEE Transactions on Image Processing. 2001 Feb;10(2):266-77.

[5] Cho, W.H., Na, I.S., Seo, S.C., Kim, S.K. and Park, S.Y., 2013. Single Image Defogging Method Using Variational Approach for Edge-Preserving Regularization. World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, 7(6), pp.829-833.

[6] He, Kaiming, Jian Sun, and Xiaoou Tang. ”Single image haze removal using dark
channel prior.” IEEE transactions on pattern analysis and machine intelligence 33.12
(2011): 2341-2353.

[7] Saggu, M. K., Singh, S. (2015). A review on various haze removal techniques for
image processing. International journal of current engineering and technology, 5(3),
1500-1505.

[8] Pinheiro PO, Lin TY, Collobert R, Dollár P. Learning to refine object segments. In European Conference on Computer Vision 2016 Oct 8 (pp. 75-91). Springer, Cham.

[9] Kaur, Dilpreet, and Yadwinder Kaur. ”Various image segmentation techniques: a
review.” International Journal of Computer Science and Mobile Computing 3.5 (2014):
809-814.

[10] Anwar, Md Imtiyaz, and Arun Khosla. ”Vision enhancement through single image fog
removal.” Engineering Science and Technology, an International Journal 20.3 (2017):
1075-1083.


Segmentation model for texture images by piecewise smooth approximations
Hassan Shah∗, Noor Badshah, Fahim Ullah

Abstract
Image segmentation plays a fundamental role in image processing. An active contour model for the segmentation and smoothing of images using piecewise constant and piecewise smooth approximations was developed by Chan and Vese. A similar technique is also used by Tsai et al. for segmentation and smoothing of images. In both models, the Mumford–Shah variational approach and the level set method are employed in the fidelity and regularization terms. Badshah et al. proposed a model for smoothing and segmentation of texture images using the L0 norm. This model produces good smoothing and segmentation results on images having intensity inhomogeneity, texture or noise. However, it may not efficiently segment some texture images having blurred or unclear boundaries. To resolve this issue, we develop a new model for segmentation of texture images via L0-norm smoothing. In this model, first, in the fidelity term we approximate the image with piecewise smooth functions instead of constant intensity means. Second, a signed pressure force (SPF) function is utilized to stop the contours at minor or blurred boundaries and to speed up the contour evolution. Third, an L0 norm of the image gradient is employed to smooth the texture, which performs well compared to the total variation approach used in many research papers for de-noising. Experimental results demonstrate that our proposed model produces good segmentation results.

Keywords: Active contours, Texture images, Variational model, Fast Fourier transform, Image smoothing, Image segmentation.

1 Introduction
Image segmentation aims at extracting meaningful objects from an image. Variational models based on the active contour approach are mainly divided into edge-based active contour models (EBACMs) [5, 13, 14] and region-based active contour models (RBACMs) [6, 8, 12]. EBACMs use local image information to stop the movement of the contour at object boundaries [6], while RBACMs utilize global image information to drive the contours towards object boundaries [12]. Owing to their effective use of intensity information in the different regions, RBACMs have attracted more attention than EBACMs [12, 14]. The Mumford–Shah (MS) model [19] is a well-known RBACM that has been utilized in [10, 20, 25, 27]. Chan and Vese developed an active contour model (ACM) using a piecewise constant case of the MS model, known as the CV model [6], and a piecewise smooth case called the VC model [27]. A similar technique is also used by Tsai et al. for smoothing and segmentation of images [25]. The level set method, earlier introduced by Osher and Sethian [22], is applied in these models. ACMs such as [1, 7, 15–18, 23, 28, 29, 31, 33] have been developed for the segmentation of synthetic and real-world natural images, but these models are unable to segment texture images efficiently.
Recently, Badshah et al. proposed a smoothing and segmentation model for texture images [3]. In this model, an L0 norm of the image gradient is used for image smoothing, and the Mumford–Shah data terms with the L2 norm are utilized for segmentation. This model produces good smoothing and segmentation

Department of Basic Sciences, University of Engineering and Technology, Peshawar Pakistan


results on images containing intensity inhomogeneity, texture or noise. However, it may not efficiently segment some texture images having blurred or unclear boundaries. To overcome this problem, we develop a model for segmentation of texture images via L0-norm smoothing and an SPF function. In this model, we approximate the image with piecewise smooth functions. An SPF function is utilized to stop the contours at minor or blurred boundaries and also to speed up the contour movement. An L0 norm is employed to smooth the texture, which performs well compared to the total variation approach used in many research papers for de-noising.
The remainder of the paper is organized as follows. In section 2, we briefly review some smoothing and segmentation models. Our proposed model is presented in section 3. In section 4, experimental results and a comparison of our proposed model with the piecewise smooth approximation of the CV model [27], selective local or global segmentation based on an ACM [33], and a smoothing and segmentation model for texture images [3] are given. Finally, some conclusions are drawn in section 5.

2 Related Smoothing and Segmentation Models

An overview of some smoothing and segmentation models is given in this section.

2.1 A Smoothing Model using the L0 Norm

The L0 norm is used for smoothing images [32]. It is defined in the following way:
$$E(S_q) = \#\left\{\, q : \left|\frac{\partial S_q}{\partial x}\right| + \left|\frac{\partial S_q}{\partial y}\right| \neq 0 \,\right\}, \quad (1)$$

where #{·} denotes the cardinality of a set, i.e., the number of its elements. The L0 norm is incorporated as a regularizer, so that the model is formulated as:
$$\min_{S_q}\left\{\sum_q \big(S_q - I_q\big)^2 + \beta_1 E(S_q)\right\}, \quad (2)$$

where β1 is the smoothing parameter. An alternating minimization (AM) algorithm [30] is applied to model (2). In the AM algorithm, a half-quadratic splitting technique is utilized by introducing auxiliary variables rq and hq which replace ∂Sq/∂x and ∂Sq/∂y, respectively:
$$\min_{S_q, r_q, h_q}\left\{\sum_q \Big( \big(S_q - I_q\big)^2 + \beta_2\Big(\Big(\frac{\partial S_q}{\partial x} - r_q\Big)^2 + \Big(\frac{\partial S_q}{\partial y} - h_q\Big)^2\Big) + \beta_1 E(r_q, h_q) \Big)\right\}, \quad (3)$$
where the parameter β2 measures the difference between $(r_q, h_q)$ and $\big(\frac{\partial S_q}{\partial x}, \frac{\partial S_q}{\partial y}\big)$, respectively.
Minimization of Eq. (3) over (rq, hq) and over Sq is computed alternately. The objective function for (rq, hq) is
$$\min_{r_q, h_q}\left\{\sum_q \Big( \frac{\beta_1}{\beta_2}\,E(r_q, h_q) + \Big(\frac{\partial S_q}{\partial x} - r_q\Big)^2 + \Big(\frac{\partial S_q}{\partial y} - h_q\Big)^2 \Big)\right\}. \quad (4)$$

A binary function H (|rq | + |hq |) is incorporated in Eq. (4) as:


$$\min_{r_q, h_q}\left\{\sum_q \Big(\Big(\frac{\partial S_q}{\partial x} - r_q\Big)^2 + \Big(\frac{\partial S_q}{\partial y} - h_q\Big)^2 + \frac{\beta_1}{\beta_2}\, H\big(|r_q| + |h_q|\big)\Big)\right\}. \quad (5)$$


The value of H (|rq | + |hq |) is:


$$H\big(|r_q| + |h_q|\big) = \begin{cases} 1 & \text{if } |r_q| + |h_q| \neq 0, \\ 0 & \text{otherwise.} \end{cases} \quad (6)$$
Eq is defined for each pixel q as:
$$E_q = \Big(\frac{\partial S_q}{\partial x} - r_q\Big)^2 + \Big(\frac{\partial S_q}{\partial y} - h_q\Big)^2 + \frac{\beta_1}{\beta_2}\, H\big(|r_q| + |h_q|\big). \quad (7)$$

The minimum value $E_q^*$ is obtained using the condition:
$$(r_q, h_q) = \begin{cases} (0, 0) & \text{if } (\partial_x S_q)^2 + (\partial_y S_q)^2 < \frac{\beta_1}{\beta_2}, \\ (\partial_x S_q, \partial_y S_q) & \text{otherwise.} \end{cases} \quad (8)$$

Finally, the global optimum is obtained by summing $E_q^*$ over all pixels q. Minimization of Eq. (3) for Sq is obtained from:
$$\min_{S_q}\left\{\sum_q \Big( |S_q - I_q|^2 + \beta_2\Big(\Big(\frac{\partial S_q}{\partial x} - r_q\Big)^2 + \Big(\frac{\partial S_q}{\partial y} - h_q\Big)^2\Big)\Big)\right\}. \quad (9)$$

A global minimum of Eq. (9) is achieved because the objective is quadratic in Sq. Thus, an optimal solution for Sq is obtained when a Fast Fourier Transform (FFT) is applied:
$$S_q = \mathcal{F}^{-1}\left\{\frac{\mathcal{F}(I_q) + \beta_2\,\mathcal{F}\big(\partial_x^T * r_q + \partial_y^T * h_q\big)}{\mathcal{F}(1) + \beta_2\big(\mathcal{F}(\partial_x^T * \partial_x) + \mathcal{F}(\partial_y^T * \partial_y)\big)}\right\}, \quad (10)$$

where F is the FFT operator and F−1 denotes its inverse; F(1) is the FFT of the delta function. Due to the incorporation of the L0 norm, model (2) can retain and enhance all prominent edges and can remove noise from the image.
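The two alternating subproblems — the gradient threshold of Eq. (8) and the FFT solve of Eq. (10) — fit in a short sketch. This is an illustrative NumPy version with periodic boundaries and our own continuation schedule for β2 (the growth factor `kappa` and the cap `beta2_max` are our assumptions, not from the paper):

```python
import numpy as np

def l0_smooth(I, beta1=0.02, kappa=2.0, beta2_max=1e5):
    """Alternating minimization of Eq. (3): threshold (r_q, h_q) by Eq. (8),
    then solve the quadratic subproblem for S_q in the Fourier domain, Eq. (10)."""
    S = np.asarray(I, dtype=float)
    F_I = np.fft.fft2(S)
    # circular forward-difference kernels and their squared transfer functions
    dx = np.zeros_like(S); dx[0, 0], dx[0, -1] = -1.0, 1.0
    dy = np.zeros_like(S); dy[0, 0], dy[-1, 0] = -1.0, 1.0
    denom_grad = np.abs(np.fft.fft2(dx))**2 + np.abs(np.fft.fft2(dy))**2
    beta2 = 2.0 * beta1
    while beta2 < beta2_max:
        # Eq. (8): keep a gradient only where it is strong enough
        gx = np.roll(S, -1, axis=1) - S
        gy = np.roll(S, -1, axis=0) - S
        small = gx**2 + gy**2 < beta1 / beta2
        r = np.where(small, 0.0, gx)
        h = np.where(small, 0.0, gy)
        # Eq. (10): dx^T * r + dy^T * h (the adjoint of the forward difference)
        div = (np.roll(r, 1, axis=1) - r) + (np.roll(h, 1, axis=0) - h)
        S = np.real(np.fft.ifft2((F_I + beta2 * np.fft.fft2(div))
                                 / (1.0 + beta2 * denom_grad)))
        beta2 *= kappa
    return S
```

The threshold β1/β2 shrinks as β2 grows, so weak texture gradients are zeroed early while strong object edges survive every pass — the behavior the text attributes to the L0 regularizer.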

2.2 Piecewise Smooth Approximation of CV Model (M-1)


The piecewise constant approximation of the CV model was developed for the segmentation of binary images [6]. This model can segment and divide homogeneous images into background and foreground; however, it is unable to handle images having intensity inhomogeneity. Segmentation of intensity-inhomogeneous images was addressed by Vese and Chan [27] and Tsai et al. [25], who independently proposed two similar RBACMs for gray-scale images. These models are known as piecewise smooth (PS) models. The energy functional is defined by:
$$E^{PS}(U_1, U_2, \phi) = \vartheta_1 \int_{\Omega} |I - U_1|^2 H_\varepsilon(\phi)\,dxdy + \mu_1 \int_{\Omega} |\nabla U_1|^2 H_\varepsilon(\phi)\,dxdy$$
$$+\ \vartheta_2 \int_{\Omega} |I - U_2|^2 \big(1 - H_\varepsilon(\phi)\big)\,dxdy + \mu_2 \int_{\Omega} |\nabla U_2|^2 \big(1 - H_\varepsilon(\phi)\big)\,dxdy$$
$$+\ \nu \int_{\Omega} \delta_\varepsilon(\phi)\,|\nabla\phi|\,dxdy, \quad (11)$$
where ϑ1, ϑ2, μ1, μ2 and ν are all positive parameters and $H_\varepsilon(\phi) = \frac{1}{2}\big(1 + \frac{2}{\pi}\arctan(\frac{\phi}{\varepsilon})\big)$.
U1 and U2 are smooth functions that approximate the given image I on the inner and outer sides of the contour, respectively. Two damped Poisson equations for U1 and U2 are given by:
 { }
 U1 − I = µ1 ∆U1 in (x, y) : ϕ(x, y) > 0
{ } (12)
 ∂U1
= 0 on (x, y) : ϕ(x, y) = 0 ,
∂⃗n


and
$$\begin{cases} U_2 - I = \mu_2 \Delta U_2 & \text{in } \{(x, y) : \phi(x, y) < 0\}, \\[4pt] \dfrac{\partial U_2}{\partial \vec{n}} = 0 & \text{on } \{(x, y) : \phi(x, y) = 0\}, \end{cases} \quad (13)$$


where the partial derivative ∂⃗ n at the corresponding boundary. U1 and U2 are
n is perpendicular to ⃗
obtained from the solution of damped Poisson Eqs. (12) and (13) respectively. U1 and U2 produces
a smoothing and de-noising effect on the image I at the inner side of homogeneous regions, and not
across edges.
Minimization of Eq. (11) with respect to ϕ is achieved by using the Euler–Lagrange equation and then introducing an artificial time t ≥ 0 to obtain the gradient flow equation:
$$\frac{\partial\phi}{\partial t} = \delta_\epsilon(\phi)\left[\nu\,\nabla\cdot\Big(\frac{\nabla\phi}{|\nabla\phi|}\Big) - \vartheta_1 (I - U_1)^2 - \mu_1 |\nabla U_1|^2 + \vartheta_2 (I - U_2)^2 + \mu_2 |\nabla U_2|^2\right], \quad (14)$$
where $\delta_\varepsilon(\phi) = \frac{1}{\pi}\,\frac{\varepsilon}{\varepsilon^2 + \phi^2}$ is a smooth Dirac delta function, obtained by differentiating Hε(ϕ) with respect to ϕ. The computational cost of the PS model is very high, due to the implementation and extension of both U1 and U2 to the whole image domain at each iteration.

2.3 Active Contours with Selective Local or Global Segmentation (M-2)


Zhang et al. proposed a selective local or global segmentation model [33], in which they combined the merits of the geodesic ACM [5] and the active contours without edges model [6]. An SPF function is utilized that can stop the movement of the contour at significant, minor or blurred edges. The values of the SPF function lie in [−1, 1] [33]. It has the topological properties of shrinking, bending and expanding. The SPF function is constructed as

I − c1 +c 2
SP F (I) = 2 , (15)
max I − c1 +c
2
2

where c1 and c2 are the constant average intensities of the inner and outer sides of the contour, as defined in the CV model [6].
The variational level set formulation of the model is

∂ϕ/∂t = SPF(I) · ( ∇·(∇ϕ/|∇ϕ|) + α ) |∇ϕ| + ∇SPF(I) · ∇ϕ,   (16)

where α is a balloon force term used to increase the propagation speed. For selective segmentation, a Gaussian filter Gσ (with standard deviation σ) is used to regularize ϕ after each iteration, which is unnecessary for global segmentation. The regularization term ∇·(∇ϕ/|∇ϕ|) is therefore neglected. Moreover, since the model can capture a large range and control anti-edge leakage, the term ∇SPF(I) · ∇ϕ is also removed. After removal of these unnecessary terms, Eq. (16) becomes

∂ϕ/∂t = SPF(I) · α |∇ϕ|.   (17)

The model is robust to noise and performs well in segmenting images with weak or missing edges.
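The SPF construction and the simplified evolution of Eq. (17) can be sketched as follows (a NumPy sketch under our own naming; a sharp Heaviside stands in for Hε when computing c1, c2, and a 3×3 binomial blur stands in for the Gaussian filter Gσ):

```python
import numpy as np

def smooth3(phi):
    """3x3 binomial blur, a cheap stand-in for the Gaussian filter G_sigma."""
    P = np.pad(phi, 1, mode='edge')
    t = 0.25 * (P[:, :-2] + 2.0 * P[:, 1:-1] + P[:, 2:])
    return 0.25 * (t[:-2, :] + 2.0 * t[1:-1, :] + t[2:, :])

def spf(image, phi):
    """SPF of Eq. (15), normalized into [-1, 1]."""
    inside = (phi > 0).astype(float)
    c1 = (image * inside).sum() / max(inside.sum(), 1e-10)
    c2 = (image * (1.0 - inside)).sum() / max((1.0 - inside).sum(), 1e-10)
    f = image - 0.5 * (c1 + c2)
    return f / (np.abs(f).max() + 1e-10)

def evolve_step(image, phi, alpha=20.0, dt=0.1):
    """One explicit step of Eq. (17), followed by smoothing of phi."""
    gy, gx = np.gradient(phi)
    phi = phi + dt * alpha * spf(image, phi) * np.sqrt(gx**2 + gy**2)
    return smooth3(phi)
```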

2.4 Smoothing and Segmentation Model for Texture Images (M-3)


Recently Badshah et al. designed a model for smoothing and segmentation of texture images [3]. This model performs smoothing and segmentation jointly, combining the L0-norm smoothing property with a Mumford–Shah data fidelity for segmentation. The model is defined by


min_{S_q, r_q, h_q, ϕ, c1, c2} { Σ_q [ |S_q − I_q|² + β1 E(r_q, h_q) + β2 ( ((∂S/∂x)_q − r_q)² + ((∂S/∂y)_q − h_q)² ) ]
    + ϑ1 Σ_q |S_q − c1|² Hε(ϕ) + ϑ2 Σ_q |S_q − c2|² (1 − Hε(ϕ)) + ν Σ_q |∇Hε(ϕ)| },   (18)

where β1 is the smoothing parameter and β2 is an updating parameter used to balance (r_q, h_q) and their corresponding gradients, S_q is the smoothed image which is to be segmented, and the term E(r_q, h_q) measures the nonzero gradients in the image.
Eq. (18) is minimized alternately for r_q, h_q, c1, c2, S_q and ϕ by using an alternating minimization algorithm [30].
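The (r_q, h_q) subproblem has the closed-form thresholding solution of L0 gradient minimization [32]: the gradient is zeroed wherever it is too small to pay for a nonzero count. A minimal sketch (our function names; periodic forward differences, and β1/β2 plays the role of the threshold as suggested by Eq. (18)):

```python
import numpy as np

def grad(S):
    """Forward differences with periodic wrap."""
    gx = np.roll(S, -1, axis=1) - S
    gy = np.roll(S, -1, axis=0) - S
    return gx, gy

def update_rh(S, beta1, beta2):
    """Closed-form (r, h) step: zero the gradient wherever
    beta2 * (gx^2 + gy^2) < beta1, keep it unchanged otherwise."""
    gx, gy = grad(S)
    keep = beta2 * (gx**2 + gy**2) >= beta1
    return gx * keep, gy * keep
```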

2.4.1 Computing (rq , hq )


Minimization of Eq. (18) for (rq , hq ) is given in [32].

2.4.2 Computing c1 and c2 :


Minimization of Eq. (18) for c1 and c2 is obtained by:
min_{c1, c2} { ϑ1 Σ_q |S_q − c1|² Hε(ϕ) + ϑ2 Σ_q |S_q − c2|² (1 − Hε(ϕ)) },   (19)

that gives

c1 = Σ_q S_q Hε(ϕ) / Σ_q Hε(ϕ),   (20)

and

c2 = Σ_q S_q (1 − Hε(ϕ)) / Σ_q (1 − Hε(ϕ)).   (21)
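Discretely, Eqs. (20)–(21) are just weighted region averages (a NumPy sketch; the function name is ours):

```python
import numpy as np

def region_averages(S, Heps):
    """c1, c2 of Eqs. (20)-(21): averages of S weighted by
    H_eps(phi) and 1 - H_eps(phi) respectively."""
    c1 = (S * Heps).sum() / max(Heps.sum(), 1e-10)
    c2 = (S * (1.0 - Heps)).sum() / max((1.0 - Heps).sum(), 1e-10)
    return c1, c2
```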

2.4.3 Computing Sq :
Minimization of Eq. (18) for S_q is obtained by:

min_{S_q} { Σ_q [ |S_q − I_q|² + β2 ( ((∂S/∂x)_q − r_q)² + ((∂S/∂y)_q − h_q)² ) ]
    + ϑ1 Σ_q |S_q − c1|² Hε(ϕ) + ϑ2 Σ_q |S_q − c2|² (1 − Hε(ϕ)) }.   (22)

Eq. (22) is quadratic in S_q, therefore the global optimum is attained. The Fast Fourier Transform (FFT) is applied, and after some calculation the solution for S_q is obtained:

S_q = F⁻¹{ [ F(I_q) + β2 D1 + ϑ1 c1 F(Hε(ϕ)) + ϑ2 c2 (F(1) − F(Hε(ϕ))) ]
         / [ F(1) + β2 D2 + ϑ1 F(Hε(ϕ)) + ϑ2 (F(1) − F(Hε(ϕ))) ] },   (23)

where
D1 = F(∂x^⊤ ∗ r + ∂y^⊤ ∗ h)
and
D2 = F(∂x^⊤ ∗ ∂x) + F(∂y^⊤ ∗ ∂y).
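The FFT solve of Eq. (23) can be sketched as follows (our function names; for brevity the segmentation terms are dropped, i.e. ϑ1 = ϑ2 = 0, which leaves the pure L0-smoothing step of [32] with periodic boundaries):

```python
import numpy as np

def solve_S_fft(I, r, h, beta2):
    """Exact minimizer of |S - I|^2 + beta2*(|Dx S - r|^2 + |Dy S - h|^2),
    diagonalized by the 2-D FFT: the difference operators Dx, Dy become
    pointwise multiplications in the Fourier domain."""
    H, W = I.shape
    dx = np.zeros((H, W)); dx[0, 0] = -1.0; dx[0, 1] = 1.0   # d/dx kernel
    dy = np.zeros((H, W)); dy[0, 0] = -1.0; dy[1, 0] = 1.0   # d/dy kernel
    Fdx, Fdy = np.fft.fft2(dx), np.fft.fft2(dy)
    num = (np.fft.fft2(I) + beta2 * (np.conj(Fdx) * np.fft.fft2(r)
                                     + np.conj(Fdy) * np.fft.fft2(h)))
    den = 1.0 + beta2 * (np.abs(Fdx)**2 + np.abs(Fdy)**2)
    return np.real(np.fft.ifft2(num / den))
```

If r and h are taken to be the (circular) gradients of I themselves, the solve returns I exactly, which is a handy sanity check.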


2.4.4 Computing ϕ:
Solution of ϕ is obtained through the minimization of Eq. (18) w.r.t ϕ:
min_ϕ { ϑ1 Σ_q |S_q − c1|² Hε(ϕ) + ϑ2 Σ_q |S_q − c2|² (1 − Hε(ϕ)) + ν Σ_q |∇Hε(ϕ)| }.   (24)

The Euler–Lagrange equation is applied to Eq. (24) to find its minimizer, which is the steady state of the following PDE:

∂ϕ/∂t = δε(ϕ)[ ν ∇·(∇ϕ/|∇ϕ|) − ϑ1 (S − c1)² + ϑ2 (S − c2)² ].   (25)

The PDE defined in Eq. (25) is discretized numerically by using the additive operator splitting (AOS) method. This model produces good results for segmenting textured, inhomogeneous or noisy images.

3 Proposed Model (M-4)


A lot of models have been developed for the segmentation of texture images [2, 4, 9, 11, 20, 21, 24, 26]. Yet these models are unable to segment complicated, richly textured images. The smoothing and segmentation model proposed by Badshah et al. is designed for both synthetic and real-world texture images [3] and is able to segment noisy, inhomogeneous or textured images, but it gives unsatisfactory results on images with unclear or diffuse edges. In our proposed method we design a new model that can overcome the problem of segmenting texture images containing blurred or unclear boundaries. In this model, an L0 norm of the image gradient is used to smooth the texture, and the smoothed image is then approximated with piecewise smooth functions instead of constant intensity means. An SPF function is utilized to stop the contour at minor or blurred boundaries and to speed up its movement. Owing to the properties of both the L0 norm and the SPF function, we incorporate them into the model to obtain an efficient segmentation result. First, a smoothed image S is obtained by utilizing the L0 norm [32]; then the smoothed image S is used in the following energy functional to obtain a segmented image:
E^{PM}(U1, U2, ϕ) = ϑ1 ∫_Ω |S − U1|² Hε(ϕ) dxdy + µ1 ∫_Ω |∇U1|² Hε(ϕ) dxdy
                  + ϑ2 ∫_Ω |S − U2|² (1 − Hε(ϕ)) dxdy + µ2 ∫_Ω |∇U2|² (1 − Hε(ϕ)) dxdy
                  + ν ∫_Ω δε(ϕ) |SPF(S) ∇ϕ| dxdy.   (26)


U1 and U2 are smooth functions that approximate the smoothed image S at the inner side and outer
side of the contour respectively. Two damped Poisson equations containing U1 and U2 are given by
 { }
 U1 − S = µ1 ∆U1 in (x, y) : ϕ(x, y) > 0
{ } (27)
 ∂U1
= 0 on (x, y) : ϕ(x, y) = 0 ,
∂⃗
n

and
 { }
 U2 − S = µ2 ∆U2 in (x, y) : ϕ(x, y) > 0
{ } (28)
 ∂U2
=0 on (x, y) : ϕ(x, y) = 0 ,
∂⃗
n



where ∂/∂n⃗ denotes the derivative in the direction of the outward normal n⃗ to the corresponding boundary. U1 and U2 are obtained from the solution of the damped Poisson Eqs. (27) and (28) respectively. U1 and U2 have a smoothing and de-noising effect on the image S only inside homogeneous regions, not across edges; the edges are retained and even enhanced.
Minimization of Eq. (26) for ϕ is obtained by using the Euler–Lagrange equation, which gives the steady state of the following PDE:

∂ϕ/∂t = δε(ϕ)[ ν ∇·( SPF(S) ∇ϕ/|∇ϕ| ) − ϑ1 (S − U1)² − µ1 |∇U1|² + ϑ2 (S − U2)² + µ2 |∇U2|² ].   (29)

The PDE defined in Eq. (29) is discretized numerically by using the AOS method. The segmentation results of our new model overcome the deficiencies of related state-of-the-art existing models.
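A single explicit time step of Eq. (29) might look like the sketch below (NumPy, our function names; the paper uses the semi-implicit AOS scheme instead of this simple explicit Euler update):

```python
import numpy as np

def weighted_curvature(phi, w, tiny=1e-10):
    """div( w * grad(phi) / |grad(phi)| ) via central differences."""
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx**2 + gy**2) + tiny
    return np.gradient(w * gy / mag, axis=0) + np.gradient(w * gx / mag, axis=1)

def evolve_step(S, U1, U2, phi, spf_map, nu=50.0, t1=0.5, t2=0.5,
                m1=10.0, m2=10.0, dt=0.1, eps=1.0):
    """Explicit Euler step of Eq. (29) with the SPF-weighted length term."""
    delta = (eps / np.pi) / (eps**2 + phi**2)        # smooth Dirac delta
    g1y, g1x = np.gradient(U1)
    g2y, g2x = np.gradient(U2)
    force = (nu * weighted_curvature(phi, spf_map)
             - t1 * (S - U1)**2 - m1 * (g1x**2 + g1y**2)
             + t2 * (S - U2)**2 + m2 * (g2x**2 + g2y**2))
    return phi + dt * delta * force
```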

4 Experimental Results and Analysis


In this section, we present experiments on some of the image segmentation models. Our proposed model (M-4) is compared with the piecewise smooth approximation of the CV model (M-1) [27], active contours with selective local or global segmentation (M-2) [33] and the smoothing and segmentation model for texture images (M-3) [3]. We used synthetic and real-world natural texture images of size 256 × 256 pixels, and we mainly focus on edge preservation, noise robustness and the stair-casing artifact. Some parameter values are kept fixed, while the others are adjusted manually and reported in the figure captions. The fixed parameter values in our proposed model are ϑ1 = 0.5, ϑ2 = 0.5, σ = 1, µ1 = µ2 = 10. First the image is smoothed using the L0 norm of the image gradient, and the smoothed image is then segmented by the proposed model. The smoothing performance depends on the smoothing parameter β1 used in Eq. (18) [32], and the segmentation depends on the length parameter ν.
The segmentation results of our proposed model, presented in Figs. 1 to 3, are more satisfactory. It removes noise and texture and also improves the localization accuracy. Owing to the use of the SPF function, the model detects and preserves weak boundaries efficiently. Fig. 1 clearly indicates that our proposed model (M-4) detects the boundary of the sand-green image more accurately than models M-1, M-2 and M-3.
Our proposed model (M-4) preserves the edges of the tree image shown in Fig. 2, while M-1 and M-2 give unsatisfactory segmentation results due to inhomogeneity and M-3 does not detect the edges efficiently. In Fig. 3, model M-4 overcomes the stair-casing effect produced by models M-1, M-2 and M-3. Segmentation results of models M-1, M-2, M-3 and M-4 on a synthetic inhomogeneous texture image are shown in Fig. 4; M-4 gives more satisfactory results than M-1, M-2 and M-3. Fig. 5 shows the successful segmentation results of our new model (M-4) on different types of texture images with inhomogeneity or unclear boundaries.


(a) M-1 (b) M-2

(c) M-3 (d) M-4

Figure 1: Segmentation result of Sand-green image: 1(a) M-1: ϑ1 = 0.5, ϑ2 = 6.5, ν = 100, 1(b) M-2:
α = 1000, 1(c) M-3: ν = 0.01, β1 = 0.03, 1(d) M-4: ν = 50, β1 = 0.02.

5 Conclusion
We have developed a texture segmentation model that uses an L0 norm of the image gradient to smooth the texture, together with an SPF function that stops the contour at minor or blurred boundaries and speeds up its movement. Inspired by the idea of Zhang et al., we approximated the image with piecewise smooth functions instead of constant average intensity means. Experimental results demonstrate that our proposed model produces better segmentation results than other existing segmentation models.


(a) M-1 (b) M-2

(c) M-3 (d) M-4

Figure 2: Segmentation result of tree image: 2(a) M-1: ϑ1 = 0.5, ϑ2 = 2.5,ν = 500, 2(b) M-2:
α = 1000, 2(c) M-3: ν = 0.002, β1 = 0.04, 2(d) M-4: ν = 50, β1 = 0.04.

References
[1] Mohand Saïd Allili and Djemel Ziou. Object tracking in videos using adaptive mixture models
and active contours. Neurocomputing, 71(10):2001–2011, 2008.

[2] Gilles Aubert, Michel Barlaud, Olivier Faugeras, and Stéphanie Jehan-Besson. Image segmenta-
tion using active contours: Calculus of variations or shape gradients. SIAM Journal on Applied
Mathematics, 63(6):2128–2154, 2003.

[3] Noor Badshah and Hassan Shah. Model for smoothing and segmentation of texture images using
L0 norm. IET Image Processing, 12(2):285–291, 2017.

[4] Alan C. Bovik, Marianna Clark, and Wilson S. Geisler. Multichannel texture analysis using local-
ized spatial filters. IEEE transactions on pattern analysis and machine intelligence, 12(1):55–73,
1990.

[5] Vicent Caselles, Ron Kimmel, and Guillermo Sapiro. Geodesic active contours. International
journal of computer vision, 22(1):61–79, 1997.


(a) M-1 (b) M-2

(c) M-3 (d) M-4

Figure 3: Segmentation result of synthetic image: 3(a) M-1: ϑ1 = 0.5, ϑ2 = 0.5, ν = 6000, 3(b) M-2:
α = 6000, 3(c) M-3: ν = 0.0003, β1 = 0.03, 3(d) M-4: ν = 50, β1 = 0.01.

[6] Tony F Chan and Luminita A Vese. Active contours without edges. IEEE Transactions on
image processing, 10(2):266–277, 2001.

[7] Yunjie Chen, Jianwei Zhang, Arabinda Mishra, and Jianwei Yang. Image segmentation and bias
correction via an improved level set method. Neurocomputing, 74(17):3520–3530, 2011.

[8] Daniel Cremers, Mikael Rousson, and Rachid Deriche. A review of statistical approaches to
level set segmentation: integrating color, texture, motion and shape. International journal of
computer vision, 72(2):195–215, 2007.

[9] Itzhak Fogel and Dov Sagi. Gabor filters as texture discriminator. Biological cybernetics,
61(2):103–113, 1989.

[10] Song Gao and Tien D Bui. Image segmentation and selective smoothing by using mumford-shah
model. IEEE Transactions on Image Processing, 14(10):1537–1549, 2005.

[11] Simona E Grigorescu, Nicolai Petkov, and Peter Kruizinga. Comparison of texture features based
on gabor filters. IEEE Transactions on Image processing, 11(10):1160–1167, 2002.


(a) M-1 (b) M-2

(c) M-3 (d) M-4

Figure 4: Segmentation result of synthetic image: 4(a) M-1: ϑ1 = 0.5, ϑ2 = 2.5, ν = 500, 4(b) M-2:
α = 5000, 4(c) M-3:β1 = 0.053, ν = 0.0003, 4(d) M-4: ν = 500, β1 = 0.053.

[12] Lei He, Zhigang Peng, Bryan Everding, Xun Wang, Chia Y Han, Kenneth L Weiss, and William G
Wee. A comparative study of deformable contour methods on medical image segmentation. Image
and Vision Computing, 26(2):141–163, 2008.

[13] Michael Kass, Andrew Witkin, and Demetri Terzopoulos. Snakes: Active contour models. In-
ternational journal of computer vision, 1(4):321–331, 1988.

[14] Ron Kimmel, Arnon Amir, and Alfred M. Bruckstein. Finding shortest paths on surfaces us-
ing level sets propagation. IEEE Transactions on Pattern Analysis and Machine Intelligence,
17(6):635–640, 1995.

[15] Chunming Li, Chiu-Yen Kao, John C Gore, and Zhaohua Ding. Implicit active contours driven
by local binary fitting energy. In 2007 IEEE Conference on Computer Vision and Pattern
Recognition, pages 1–7. IEEE, 2007.

[16] Chunming Li, Chiu-Yen Kao, John C Gore, and Zhaohua Ding. Minimization of region-scalable
fitting energy for image segmentation. IEEE transactions on image processing, 17(10):1940–1949,
2008.


(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

(j) (k)

Figure 5: Segmentation result of proposed model: 5(a) : ν = 50, β1 = 0.001, 5(b) : ν = 50,
β1 = 0.001, 5(c) : ν = 10, β1 = 0.0001,5(d) :ν = 50, β1 = 0.001,5(e) : ν = 20, β1 = 0.05,5(f) : ν = 20,
β1 = 0.01,5(g) : ν = 50, β1 = 0.01,5(h) : ν = 20, β1 = 0.001, 5(i) : ν = 10, β1 = 0.04, 5(j) : ν = 50,
β1 = 0.003, 5(k) : ν = 50, β1 = 0.06.

[17] Maria Lianantonakis and Yvan R Petillot. Sidescan sonar segmentation using texture descriptors
and active contours. IEEE Journal of Oceanic Engineering, 32(3):744–752, 2007.

[18] Akshaya K Mishra, Paul W Fieguth, and David A Clausi. Decoupled active contour (dac)
for boundary detection. IEEE Transactions on Pattern Analysis and Machine Intelligence,
33(2):310–324, 2011.


[19] David Mumford and Jayant Shah. Optimal approximations by piecewise smooth functions and
associated variational problems. Communications on pure and applied mathematics, 42(5):577–
685, 1989.

[20] Nikos Paragios and Rachid Deriche. Geodesic active regions and level set methods for supervised
texture segmentation. International Journal of Computer Vision, 46(3):223–247, 2002.

[21] Berta Sandberg, Tony Chan, and Luminita Vese. A level-set and gabor-based active contour
algorithm for segmenting textured images. In UCLA Department of Mathematics CAM report.
Citeseer, 2002.

[22] James A Sethian et al. Level set methods and fast marching methods. Journal of Computing
and Information Technology, 11(1):1–2, 2003.

[23] Ken Tabb, Neil Davey, Rod Adams, and Stella George. The recognition and analysis of animate
objects using neural networks and active contour models. Neurocomputing, 43(1):145–172, 2002.

[24] TN Tan. Texture edge detection by modelling visual cortical channels. Pattern Recognition,
28(9):1283–1298, 1995.

[25] Andy Tsai, Anthony Yezzi, and Alan S Willsky. Curve evolution implementation of the mumford-
shah functional for image segmentation, denoising, interpolation, and magnification. IEEE trans-
actions on Image Processing, 10(8):1169–1186, 2001.

[26] Mark R Turner. Texture discrimination by gabor functions. Biological cybernetics, 55(2-3):71–82,
1986.

[27] Luminita A Vese and Tony F Chan. A multiphase level set framework for image segmentation
using the mumford and shah model. International journal of computer vision, 50(3):271–293,
2002.

[28] Xiao-Feng Wang and De-Shuang Huang. A novel density-based clustering framework by using
level set method. IEEE Transactions on Knowledge and Data Engineering, 21(11):1515–1531,
2009.

[29] Xiao-Feng Wang, De-Shuang Huang, and Huan Xu. An efficient local chan–vese model for image
segmentation. Pattern Recognition, 43(3):603–618, 2010.

[30] Yilun Wang, Junfeng Yang, Wotao Yin, and Yin Zhang. A new alternating minimization algo-
rithm for total variation image reconstruction. SIAM Journal on Imaging Sciences, 1(3):248–272,
2008.

[31] Qinggang Wu, Jubai An, and Bin Lin. A texture segmentation algorithm based on pca and
global minimization active contour model for aerial insulator images. IEEE Journal of Selected
Topics in Applied Earth Observations and Remote Sensing, 5(5):1509–1518, 2012.

[32] Li Xu, Cewu Lu, Yi Xu, and Jiaya Jia. Image smoothing via L0 gradient minimization. In ACM
Transactions on Graphics (TOG), volume 30, page 174. ACM, 2011.

[33] Kaihua Zhang, Huihui Song, and Lei Zhang. Active contours driven by local image fitting energy.
Pattern recognition, 43(4):1199–1206, 2010.


Efficient Variational Model for Image Segmentation Based on Multi-Scale Low Pass Filtering

Noor Badshah, Ijaz Ullah, Ali Ahmad
Abstract
In image segmentation, intensity inhomogeneity is one of the most challenging and complicated problems, and most variational models fail to segment inhomogeneous images. In this paper, we propose a new region-based variational model whose data term is based on the CoV (coefficient of variation), hybridized with multi-scale low pass filtering. In the CoV hybrid model, the local region window is set in circular shape to obtain more local region information. The model is helpful in segmenting images even with severe intensity inhomogeneity. For better performance and fast convergence, we add an SPF (signed pressure force) function to the energy functional. Experimental results, compared with existing models, show the advantages of our proposed model.
Keywords. Coefficient of Variation, Level Set Method, Multi-Scale, Image Segmentation, SPF Function, Intensity Inhomogeneity.

1 Introduction

Image segmentation is a fundamental and challenging task in image processing. Generally, image segmentation is the process of dividing an image into sub-regions such that each sub-region is homogeneous in some characteristic like intensity, color or texture. In other words, the aim of image segmentation is to change the representation of the given image into something new that can be more easily understood and analyzed. The process of segmentation is carried out until the objects in the image are segmented. Intensity inhomogeneity, which usually arises from imperfections of the image acquisition process, is one of the most challenging and basic problems in image segmentation. Its presence in the intensity distribution greatly influences segmentation performance, because the intensities of background and foreground overlap. Many promising methods have been proposed to overcome this problem [11].
Level set models are divided into two main groups: edge-based models [2, 4, 5] and region-based models [1, 3, 6, 10, 7, 9]. Edge-based models depend on image gradient information to drive the curve evolution, detecting objects through the sharp intensity gradient at each pixel. Edge-based models work well on images with high intensity contrast among regions, but they have some intrinsic limitations: because of their gradient dependence, they fail to segment weak object boundaries, they are sensitive to noise, and for detection of object boundaries the initial contour must be placed near the target object. Among edge-based models, the GAC model [5] is the most popular and successful one.
Region-based image segmentation models are based on the homogeneity of regions, in which the characteristics of each pixel are homogeneous. To detect regions in an image, special terms, called region detectors or fidelity terms, are added. Region-based segmentation models use statistical information such as the mean, standard deviation, coefficient of variation and variance. They have various advantages over edge-based models: for example, they do not need the image gradient but can still detect objects with weak boundaries and noise, and they are not sensitive to the initial position of the evolving contour. The Chan-Vese (CV) model [3], a special case of the piecewise constant Mumford-Shah (MS) image segmentation model [8], is the most representative and common one. It works well on noisy as well as intensity-homogeneous images. However, the CV model [3] assumes that each region in an image is statistically homogeneous; this assumption fails in images with intensity inhomogeneity, which restricts its application.
To improve the performance of region-based models on inhomogeneous regions, several models [7, 9, 10] have been presented in recent years. One of these, discussed in detail below, is the Zhang et al. model [7]. This model works well in segmenting images with homogeneous regions, but it is sensitive to the initial contour, and it uses an SPF (signed pressure force) function based on global information, which may not segment inhomogeneous images. Murtaza et al. [9] proposed a variational model for image segmentation that works well on images with low intensity inhomogeneity, but fails to segment images with severe intensity inhomogeneity. More recently, Wang et al. [10] proposed a variational model that works well under low as well as severe intensity inhomogeneity, but fails to segment noisy images.
In this paper, we propose a new region-based variational model with a data term based on the CoV, hybridized with multi-scale low pass filtering. In our model, we combine the advantages of the Zhang et al., Murtaza et al. and Wang et al. models. To use more local region information, the local region window is set in circular shape instead of rectangular shape. For better performance and fast convergence, we add an SPF (signed pressure force) function to the energy functional. The ability of our proposed CoV-based hybrid model to work on noisy images is due to the use of a Gaussian smoothing filter.
We organize this paper as follows: In Section 2, we discuss the existing models. In Section 3, we present our proposed model in detail. Section 4 gives the experimental results. Finally, in Section 5, the conclusion is given.


2 Related Works

2.1 Zhang Model


Let us suppose that J : Γ ⊂ R² −→ R is a gray-scale image, where Γ is the domain of the given image, and let B(x) : [0, 1] −→ R² be a parameterized curve. The energy functional of the Zhang et al. model is derived from the GAC model [5], which is given by:

∂Φ/∂t = g |∇Φ| ( div(∇Φ/|∇Φ|) + β ) + ∇g · ∇Φ,   (1)

where g is an edge stopping function and β is a balloon force term.


The level set formulation of the Zhang et al. model is given by:

∂Φ/∂t = spf(J(x)) · ( div(∇Φ/|∇Φ|) + β ) |∇Φ| + ∇spf(J(x)) · ∇Φ,   (2)

where spf(J) in Eq. (2) is the SPF function, defined as follows:

spf(J) = ( J(x) − (b1 + b2)/2 ) / max( |J(x) − (b1 + b2)/2| ),   (3)

where b1, b2 are constants, defined as follows:

b1(Φ) = ∫_Γ J(x) Hε(Φ) dx / ∫_Γ Hε(Φ) dx,   (4)

b2(Φ) = ∫_Γ J(x) (1 − Hε(Φ)) dx / ∫_Γ (1 − Hε(Φ)) dx.   (5)
The model utilizes a Gaussian filter for regularization, so the curvature-based term div(∇Φ/|∇Φ|)|∇Φ| is not necessary; also, the term ∇spf · ∇Φ can be eliminated from Eq. (2), because the model utilizes region information statistically, which gives a large segmenting range and the capacity for anti-edge leakage. Finally, the level set formulation is given by:

∂Φ/∂t = β spf(J) · |∇Φ|.   (6)

The Zhang et al. model works well in segmenting images with homogeneous regions. However, it uses an SPF (signed pressure force) function based on global information, which may not segment images with intensity inhomogeneity, and the model is sensitive to the initial contour.
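Discretized, Eqs. (4)–(5) are Hε-weighted averages inside and outside the contour (a NumPy sketch; the function name is ours):

```python
import numpy as np

def region_means(J, phi, eps=1.0):
    """b1, b2 of Eqs. (4)-(5): averages of J weighted by the regularized
    Heaviside H_eps(Phi) and by 1 - H_eps(Phi)."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    b1 = (J * H).sum() / max(H.sum(), 1e-10)
    b2 = (J * (1.0 - H)).sum() / max((1.0 - H).sum(), 1e-10)
    return b1, b2
```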

2.2 Murtaza Model


Murtaza et al. [7] proposed a model in which the coefficient of variation (CoV) is used as the fidelity term instead of the variance, because using the CoV as the fidelity term in the energy functional gives the best results for inhomogeneous objects in an image. The energy functional of the Murtaza et al. [7] model is given by:
F(Γ, s1, s2, t1, t2) = η·length(Γ) + η1 ∫_{inside(Γ)} (J − s1)²/s1² dS + η1 ∫_{outside(Γ)} (J − s2)²/s2² dS
                     + η2 ∫_{inside(Γ)} (J* − t1)²/t1² dS + η2 ∫_{outside(Γ)} (J* − t2)²/t2² dS,   (7)

where S = (x, y), η, η1, η2 are positive constants and J*(x) = g_k ∗ J(x) − J(x). The level set formulation of the Murtaza et al. model is as follows:

F(Φ, s1, s2, t1, t2) = ν ∫_Γ δε(Φ) |∇Φ| dxdy
                     + η1 ∫_Γ (J(x, y) − s1)²/s1² Hε(Φ) dS + η1 ∫_Γ (J(x, y) − s2)²/s2² (1 − Hε(Φ)) dS   (8)
                     + η2 ∫_Γ (J*(x, y) − t1)²/t1² Hε(Φ) dS + η2 ∫_Γ (J*(x, y) − t2)²/t2² (1 − Hε(Φ)) dS.

Keeping Φ fixed and minimizing the above equation with respect to s1, s2, t1, t2 leads to the following solutions:

s1(Φ) = ∫_Γ J²(x, y) Hε(Φ) dS / ∫_Γ J(x, y) Hε(Φ) dS,   (9)

s2(Φ) = ∫_Γ J²(x, y) (1 − Hε(Φ)) dS / ∫_Γ J(x, y) (1 − Hε(Φ)) dS,   (10)

t1(Φ) = ∫_Γ (J*)²(x, y) Hε(Φ) dS / ∫_Γ J*(x, y) Hε(Φ) dS,   (11)

t2(Φ) = ∫_Γ (J*)²(x, y) (1 − Hε(Φ)) dS / ∫_Γ J*(x, y) (1 − Hε(Φ)) dS.   (12)

Keeping s1, s2, t1, t2 fixed and minimizing the above equation with respect to Φ, the associated Euler–Lagrange equation is obtained as follows:

∂Φ/∂t = δε(Φ)[ ν ∇·(∇Φ/|∇Φ|) + η1( −(J − s1)²/s1² + (J − s2)²/s2² ) + η2( −(J* − t1)²/t1² + (J* − t2)²/t2² ) ],   (13)

with the Neumann boundary condition and Φ(x, y, 0) = Φ0(x, y).
This model works well on low contrast images and with slight intensity inhomogeneity, but it fails in segmenting images with severe intensity inhomogeneity.
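Note that the minimizers (9)–(12) are ratios of weighted second moments to first moments rather than plain means; a sketch for s1, s2 (our function name):

```python
import numpy as np

def cov_constants(J, phi, eps=1.0):
    """s1, s2 of Eqs. (9)-(10): H_eps-weighted second moment over first moment
    in each region, the minimizers of the CoV fidelity terms (J - s)^2 / s^2."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    s1 = (J**2 * H).sum() / max((J * H).sum(), 1e-10)
    s2 = (J**2 * (1.0 - H)).sum() / max((J * (1.0 - H)).sum(), 1e-10)
    return s1, s2
```

On a constant region the ratio reduces to the region's intensity, as one would expect.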


2.3 Wang Model


Wang et al. [10] proposed a variational model to segment images with low as well as severe intensity inhomogeneity. The energy functional of the model consists of two main terms, a data term and a regularization term, and is given by:

F_k^D(r1, r2, Φ) + F^R(Φ) = ∫_Γ |Ĵ − r1|² Hε(Φ(x)) dX + ∫_Γ |Ĵ − r2|² (1 − Hε(Φ(x))) dX
                          + ν ∫_Γ δε(Φ(x)) |∇Φ(x)| dX + ∫_Γ (1/2)(|∇Φ(x)| − 1)² dX.   (14)

In Eq. (14), Ĵ is the approximated image of J, free of inhomogeneity, which can be calculated by dividing the normalized weighted image J·A_N by the mean of the multi-scale filters B_k(x):

Ĵ(x) = J(x) A_N / B_k(x),   (15)
where A_N is a normalization constant and B_k(x) is the mean of the multi-scale average filters, given by:

B_k(x) = (1/k) Σ_{i=1}^{k} MSAF_i(x),   (16)

where MSAF_i(x) denotes the i-th multi-scale average filter and k is the number of scales. The value of k should be selected properly, neither too large nor too small: if k is too small, only a few local circular regions are analyzed for each pixel, while if k is too large, too many local circular regions are analyzed for each pixel. The model takes the fixed value k = 32. Here r1 and r2 are positive constants. Keeping Φ fixed and minimizing the energy functional with respect to r1 and r2 gives:

r1(Φ) = ∫_Γ Ĵ Hε(Φ(x)) dX / ∫_Γ Hε(Φ(x)) dX,   (17)

r2(Φ) = ∫_Γ Ĵ (1 − Hε(Φ(x))) dX / ∫_Γ (1 − Hε(Φ(x))) dX.   (18)
Keeping r1 and r2 fixed and minimizing the energy functional with respect to Φ, the following Euler–Lagrange equation for Φ is obtained:

∂Φ/∂t = δε(Φ)[ (Ĵ − r2)² − (Ĵ − r1)² ] + ν δε(Φ) div(∇Φ/|∇Φ|) + ( ∇²Φ − div(∇Φ/|∇Φ|) ).   (19)

The Wang et al. model [10] works well with low as well as severe intensity inhomogeneity, but it fails to segment noisy images.
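The bias correction of Eqs. (15)–(16) can be sketched as follows (our function names; square box windows stand in for the circular MSAF_i windows, only three scales are used instead of the paper's k = 32, and A_N is taken as the image mean, one simple normalization choice):

```python
import numpy as np

def box_mean(J, s):
    """Uniform s-by-s box filter with replicated borders (stand-in for one
    circular multi-scale average filter MSAF_i)."""
    r = s // 2
    P = np.pad(J, r, mode='edge')
    out = np.zeros_like(J)
    for di in range(s):
        for dj in range(s):
            out += P[di:di + J.shape[0], dj:dj + J.shape[1]]
    return out / (s * s)

def corrected_image(J, sizes=(3, 7, 15)):
    """J_hat = J * A_N / B_k(x), with B_k the mean of multi-scale average filters."""
    Bk = np.mean([box_mean(J, s) for s in sizes], axis=0)
    return J * J.mean() / (Bk + 1e-10)
```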


3 Proposed Model

In this section, we present and discuss in detail our proposed coefficient of variation (CoV) based hybrid model. To use more local region information we define the local region window in circular shape. Also, for fast convergence of the evolving curve we add an SPF (signed pressure force) function to the energy functional. For regularization and for controlling the smoothness of the level set function, we use a Gaussian smoothing filter in our proposed model, defined as follows:

G(x) = (1/(√(2π) σ)) exp( −x²/(2σ²) ),   (20)

where σ is the standard deviation of the distribution, and Ĵ in the energy functional is the approximated image of J, free of inhomogeneity. The energy functional of our proposed CoV based hybrid model is as follows:

F(ρ1, ρ2, Γ) = µ1 ∫_Γ (Ĵ(x) − ρ1)²/ρ1² dS + µ2 ∫_Γ (Ĵ(x) − ρ2)²/ρ2² dS,   (21)

where µ1, µ2 are positive constants. To solve this minimization problem, the energy functional F(ρ1, ρ2, Γ) can be reformulated in terms of a level set formulation as follows:

F(ρ1, ρ2, Φ) = µ1 ∫_Γ (Ĵ(x) − ρ1)²/ρ1² Hε(Φ(x)) dS + µ2 ∫_Γ (Ĵ(x) − ρ2)²/ρ2² (1 − Hε(Φ(x))) dS.   (22)
Minimizing the above equation with respect to ρ1 and ρ2, keeping Φ fixed, we have:

ρ1(Φ) = ∫_Γ Ĵ² Hε(Φ(x)) dX / ∫_Γ Ĵ Hε(Φ(x)) dX,   (23)

ρ2(Φ) = ∫_Γ Ĵ² (1 − Hε(Φ(x))) dX / ∫_Γ Ĵ (1 − Hε(Φ(x))) dX.   (24)
Minimization of Eq. (22) with respect to Φ yields the following Euler–Lagrange equation:

∂Φ/∂t = [ −µ1 (Ĵ − ρ1)²/ρ1² + µ2 (Ĵ − ρ2)²/ρ2² ] δε(Φ).   (25)

For better performance and fast convergence we add an SPF (signed pressure force) function to our CoV hybrid model, defined as follows:

spf(Ĵ) = ( Ĵ − (ρ1 + ρ2)/2 ) / max( |Ĵ − (ρ1 + ρ2)/2| ),   (26)

and we get the final Euler–Lagrange equation as follows:

∂Φ/∂t = [ −µ1 (Ĵ − ρ1)²/ρ1² + µ2 (Ĵ − ρ2)²/ρ2² ] δε(Φ) + β spf(Ĵ).   (27)

Our suggested model overcomes the limitations of the Wang et al. [10] model and has the ability to segment both intensity-inhomogeneous and noisy images.
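One explicit update step of Eq. (27), with smoothing of Φ standing in for the Gaussian regularization described above, might look like the following sketch (our function names; a 3×3 binomial blur replaces Gσ):

```python
import numpy as np

def smooth3(phi):
    """3x3 binomial blur standing in for the Gaussian regularization of Phi."""
    P = np.pad(phi, 1, mode='edge')
    t = 0.25 * (P[:, :-2] + 2.0 * P[:, 1:-1] + P[:, 2:])
    return 0.25 * (t[:-2, :] + 2.0 * t[1:-1, :] + t[2:, :])

def evolve_step(Jh, phi, mu1=1.0, mu2=1.0, beta=10.0, dt=0.1, eps=1.0):
    """Explicit step of Eq. (27) on the inhomogeneity-corrected image Jh."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    rho1 = (Jh**2 * H).sum() / max((Jh * H).sum(), 1e-10)                  # Eq. (23)
    rho2 = (Jh**2 * (1.0 - H)).sum() / max((Jh * (1.0 - H)).sum(), 1e-10)  # Eq. (24)
    delta = (eps / np.pi) / (eps**2 + phi**2)
    cov = -mu1 * (Jh - rho1)**2 / rho1**2 + mu2 * (Jh - rho2)**2 / rho2**2
    f = Jh - 0.5 * (rho1 + rho2)
    spf = f / (np.abs(f).max() + 1e-10)                                    # Eq. (26)
    return smooth3(phi + dt * (cov * delta + beta * spf))
```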


4 EXPERIMENTAL RESULTS

In this section we test and compare the experimental results of our proposed model with state-of-the-art models. All experiments for each model were conducted in MATLAB R2013a on a Core i3 PC with 2.00 GB RAM, a 2.40 GHz processor and the Windows 7 operating system. For each experiment with our proposed model we used the following parameters: µ1, µ2, β and σ were set according to the images, and the time step was ∆t = 0.1. Our proposed CoV based hybrid model works well on multi-region images, intensity-inhomogeneous images and noisy images. Experimental results show that it performs better than the state-of-the-art models in segmenting images with noise and intensity inhomogeneity.
For clarity of the comparison of our proposed model with the state-of-the-art models on the experimental results, we denote:
B-1. Zhang et al. model
B-2. Murtaza et al. model
B-3. Wang et al. model
B-4. Proposed CoV hybrid model

Tab. 1: Comparison of proposed model results with the Zhang et al., Murtaza et al., and Wang et al. models (test problems in Figure 1).

Model used   itr    cpu(s)
B-1          1000   20.85
B-2          1000   512.52
B-3          1000   97.91
B-4          200    4.39

In Fig. 1 we tested our proposed model and the other three state-of-the-art models on an ultrasound image. Medical images usually contain a lot of noise and inhomogeneity due to light and other factors, such as movement of the patient and of the devices. It is clear from the figure that all three models fail to segment the given image, as shown in columns one to three (B-1, B-2 and B-3), but our proposed model successfully segments it, which shows the robustness of the proposed model in segmenting inhomogeneous and noisy images.
In Fig. 2 we tested our proposed model on real, medical, intensity-inhomogeneous, and multi-object images with different intensities. This figure shows that the proposed CoV-based hybrid model segments these images very well.

5 CONCLUSION

In this paper, we proposed a novel region-based variational model with a data term based on the coefficient of variation (CoV), hybridized with multi-scale low-pass filtering, for segmenting images with


(a) B-1 (b) B-2 (c) B-3 (d) B-4

(e) B-1 (f) B-2 (g) B-3 (h) B-4

(i) B-1 (j) B-2 (k) B-3 (l) B-4

Fig. 1: From left to right, the first, second, third, and fourth columns represent the B-1 model results, B-2 model results, B-3 model results, and proposed CoV hybrid model results, respectively. For each model, from top to bottom, the first, second, and third rows show the initial contours, final contours, and segmented results, respectively.


(a) (b) (c) (d)

(e) (f) (g) (h)

(i) (j) (k) (l)

Fig. 2: This figure shows the performance of the proposed CoV-based hybrid model on real images, images with different intensities, and intensity-inhomogeneous images. For each image, the first, second, and third rows show the initial contours, final contours, and segmented results, respectively.


intensity inhomogeneity and noise. Experiments on medical and synthetic images showed that our proposed CoV-based hybrid model is robust and more efficient than the existing models.

References

[1] N. Badshah, K. Chen, H. Ali and G. Murtaza. Coefficient of variation based image selective segmentation using active contours. East Asian Journal on Applied Mathematics, 2(2): 150-169, May 2012.

[2] V. Caselles, R. Kimmel and G. Sapiro. Geodesic active contours. International Journal of Computer Vision, 22(1): 61-79, 1997.

[3] T. F. Chan and L. A. Vese. Active contours without edges. IEEE Transactions on Image Processing, 10(2): 266-277, 2001.

[4] C. Gout, C. L. Guyader and L. A. Vese. Segmentation under geometrical conditions with geodesic active contour and interpolation using level set method. Numerical Algorithms, 39: 155-173, 2005.

[5] C. L. Guyader and C. Gout. Geodesic active contour under geometrical conditions: theory and 3D applications. Numerical Algorithms, 48: 105-133, 2008.

[6] C. Li, C. Xu, C. Gui and M. D. Fox. Level set evolution without re-initialization: a new variational formulation. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1: 430-436, 2005.

[7] X.-F. Wang, H. Min and Y.-G. Zhang. Multi-scale local region based level set method for image segmentation in the presence of intensity inhomogeneity. Neurocomputing, 151: 1086-1098, 2015.

[8] D. Mumford and J. Shah. Optimal approximation by piecewise smooth functions and associated variational problems. Communications on Pure and Applied Mathematics, 42: 577-685, 1989.

[9] G. Murtaza, H. Ali and N. Badshah. A robust local model for segmentation based on coefficient of variation. Journal of Information & Communication Technology, 5(1): 30-39, 2011.

[10] K. Zhang, L. Zhang, H. Song and W. Zhou. Active contours with selective local or global segmentation: a new formulation and level set method. Image and Vision Computing, 28(4): 668-676, 2010.

[11] U. Vovk, F. Pernuš and B. Likar. A review of methods for correction of intensity inhomogeneity in MRI. IEEE Transactions on Medical Imaging, 26(3): 405-421, 2007.

[12] S. Osher and J. A. Sethian. Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations. Journal of Computational Physics, 79(1): 12-49, 1988.

[13] S. Gao and T. D. Bui. Image segmentation and selective smoothing by using Mumford-Shah model. IEEE Transactions on Image Processing, 14(10): 1537-1549, 2005.

TOTAL VARIATION REGULARIZATION VIA RADIAL BASIS FUNCTION APPROXIMATION FOR SPECKLE NOISE REMOVAL

Mushtaq Ahmad Khan^1, Wen Chen^2, Asmat Ullah^3, Muhammad Sadiq^4, and Sajad Ali^5

(d2014017@hhu.edu.cn, chenwen@hhu.edu.cn, asmatullah75@gmail.com, sadiqorakzai@hotmail.com, sajadali040@hotmail.com)

^{1,2,3} State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, College of Mechanics and Materials, Hohai University, Nanjing, Jiangsu 210098, P. R. China.
^4 Electrical Department, Government College of Technology, Peshawar, KPK, Pakistan.
^5 College of Science, Hohai University, Nanjing, Jiangsu 210098, P. R. China.

ABSTRACT

In this paper we study a new meshless algorithm for the removal of speckle noise from measured images. The approach combines Total Variation (TV) regularization with Radial Basis Function (RBF) approximation for the numerical solution of a TV-based model for speckle noise removal. The algorithm is built on local collocation with the multiquadric radial basis function. Numerical experiments show that the new algorithm achieves good restoration quality and a better signal-to-noise ratio than a recent traditional variational Partial Differential Equation (PDE)-based method [23].

Keywords — Image denoising, Total variation, Radial basis functions, Mesh-based method, Meshless method.

1. INTRODUCTION

Multiplicative noise often appears in various imaging modalities, such as real images, synthetic aperture radar (SAR) images, and medical images. The noise intensity is related to the gray-level values of the image. Speckle noise is mainly a form of multiplicative noise. In this paper, we are concerned with the speckle noise removal problem. An image containing multiplicative noise can be modeled as

f = g\eta,   (1)

where f is the degraded image, \eta is the multiplicative (speckle) noise, and g is the clean image.
In the literature, many useful models have been used to tackle this problem; variational models using total variation (TV) regularization have had great success in reducing noise, see for instance [2,12,15,16,20,23]. Also, many mesh-based numerical approaches have been utilized to solve such variational models; for more information, see [10,11,12,24,25,26,29]. But there is space for improvement.
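As a concrete illustration of the degradation model (1), the sketch below generates uniform multiplicative speckle, written here as f = g(1 + \eta) with \eta zero-mean and of variance L (the common MATLAB-style convention; the exact noise convention and the function name are our assumptions, not taken from the paper).

```python
import numpy as np

def add_speckle(g, L=0.09, seed=0):
    # f = g * (1 + eta): multiplicative speckle, with eta ~ Uniform(-a, a),
    # a = sqrt(3 L), so that eta has mean 0 and variance L
    rng = np.random.default_rng(seed)
    a = np.sqrt(3.0 * L)
    eta = rng.uniform(-a, a, size=g.shape)
    return g * (1.0 + eta)
```

For L = 0.09 the noise factor stays within (1 - 0.52, 1 + 0.52), so a positive image stays positive.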

During the last few decades, meshless methods have been developed and efficiently applied to solve many problems in science and engineering [1,7,8,19]. Due to their extensive applicability, examples of many kinds of meshless methods can be found in the current literature [5,17]. One of the existing categories is a class of meshless techniques based on Radial Basis Functions (RBFs), which are especially useful for solving PDEs [4,6,8,12,13,14,27]. In 1990, Kansa demonstrated methods utilizing RBFs to deal with multivariate data, both for scattered data approximation and for the solution of PDEs [18]. This method is also called the RBF collocation method (RBFCM) [5,17,18].

2. PREVIOUS WORK

2.1 Total variation (TV) based Rudin-Lions-Osher (RLO) model

The total variation (TV) regularization is one of the essential tools in inverse problems. The TV regularization for an image g : \Omega \subset R^2 \to R is defined as follows:


TV(g) = \int_\Omega |\nabla g|\,dx\,dy, \quad \text{where } |\nabla g| = \sqrt{g_x^2 + g_y^2}.
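A discrete version of this functional is easy to compute; the sketch below uses forward differences with replicated boundaries — one possible discretization, not necessarily the one used in the paper.

```python
import numpy as np

def total_variation(g):
    # Isotropic discrete TV: sum over pixels of |grad g|,
    # with forward differences and a replicated last row/column
    gx = np.diff(g, axis=1, append=g[:, -1:])
    gy = np.diff(g, axis=0, append=g[-1:, :])
    return np.sqrt(gx**2 + gy**2).sum()
```

A constant image has zero TV, while a unit step across a 4-row image contributes a TV of 4.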

The RLO model was presented by Rudin et al. in [23] for the removal of multiplicative noise. The minimization functional for equation (1) is given as

\min_g \Bigl\{ J(g) + \lambda_1 \int_\Omega \frac{f}{g}\,dx\,dy + \lambda_2 \int_\Omega \Bigl(\frac{f}{g} - 1\Bigr)^2 dx\,dy \Bigr\},   (2)

where J(g) = \int_\Omega |\nabla g|\,dx\,dy. In equation (2) the first term is the TV regularization term, which preserves edges, and the last two terms are the data fidelity terms, where \lambda_1 and \lambda_2 are parameters used for image de-noising. The minimization functional (2) leads to the following time-marching Euler--Lagrange equation:

\frac{dg}{dt} = \nabla \cdot \Bigl(\frac{\nabla g}{|\nabla g|}\Bigr) + \lambda_1 \frac{f^2}{g^3} - \lambda_2 \frac{f}{g^2},

or   (3)

\frac{dg}{dt} = \frac{\partial}{\partial x}\Bigl(\frac{g_x}{\sqrt{g_x^2 + g_y^2}}\Bigr) + \frac{\partial}{\partial y}\Bigl(\frac{g_y}{\sqrt{g_x^2 + g_y^2}}\Bigr) + \lambda_1 \frac{f^2}{g^3} - \lambda_2 \frac{f}{g^2},

for given g(x, y, 0) and \partial g / \partial n = 0 on \partial\Omega. For further detail, see [23].
2.2 Radial basis functions

Suppose R^d is the d-dimensional Euclidean space. Let s \in R^d, and let \varphi : R^d \to R be a function whose value at any point x \in R^d depends only on the distance from the fixed point (center) s, so that it can be written as \varphi(\|x - s\|). The function \varphi is a Radial Basis Function (RBF), where s represents the center of the RBF \varphi. The variable r = \|x - s\| used in the RBF is the Euclidean norm. Some commonly used RBFs are listed in Table 1.

Let f(x), x \in \Omega \subset R^n, be a multivariate function with interpolation function values \{y_j\}_{j=1}^N, where \Omega is a bounded domain. For the N data location points (centers) \{x_i\}_{i=1}^N \subset \Omega \subset R^n, the RBF approximation of f(x) is defined as

f(x) = \sum_{j=1}^{N} \alpha_j \varphi(\|x - x_j\|_2), \quad x \in \Omega,   (4)

in which the \alpha_j are unknown coefficients to be determined. By the collocation method the above equation (4) can be rewritten as

y_i = f(x_i) = \sum_{j=1}^{N} \alpha_j \varphi(\|x_i - x_j\|_2), \quad \text{for } i = 1, 2, \ldots, N.   (5)

The above equation (5) can be written as the following N \times N matrix linear system:

C\alpha = e,

in which \alpha = (\alpha_1, \alpha_2, \ldots, \alpha_N)^T is to be determined, and e = (y_1, y_2, \ldots, y_N)^T. The RBF interpolation matrix is defined as

C = [\varphi_{i,j}] = [\varphi(\|x_i - x_j\|_2)]_{1 \le i,j \le N}, \quad \text{with } \varphi_{i,j} = \varphi_{j,i}.

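The interpolation system C\alpha = e can be assembled and solved directly. Below is a small sketch with the multiquadric \varphi(r) = \sqrt{r^2 + c^2}; the function names are ours, and no polynomial augmentation is used (the MQ interpolation matrix is nonsingular without it [21,22]).

```python
import numpy as np

def mq(r, c=1.0):
    # Multiquadric RBF phi(r) = sqrt(r^2 + c^2)
    return np.sqrt(r**2 + c**2)

def rbf_interpolate(centers, values, eval_pts, c=1.0):
    # Solve C alpha = e at the centers, then evaluate
    # f(x) = sum_j alpha_j phi(||x - x_j||) at the evaluation points
    centers = np.atleast_2d(centers)
    eval_pts = np.atleast_2d(eval_pts)
    C = mq(np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1), c)
    alpha = np.linalg.solve(C, values)   # expansion coefficients
    E = mq(np.linalg.norm(eval_pts[:, None, :] - centers[None, :, :], axis=-1), c)
    return E @ alpha
```

By construction, the interpolant reproduces the data at the centers exactly (up to round-off).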

In the above system C is an N \times N matrix, and \alpha and e are N \times 1 matrices. The invertibility of the above system is discussed in [21,22]. Polynomials may be augmented to equation (4) to guarantee that the resultant interpolation matrix is invertible. Such a formulation is expressed as follows:

f(x) = \sum_{j=1}^{N} \alpha_j \varphi(\|x - x_j\|_2) + \sum_{i=1}^{M} \alpha_{N+i} P_i(x),   (6)

with constraints

\sum_{j=1}^{N} \alpha_j P_i(x_j) = 0, \quad i = 1, 2, \ldots, M,   (7)

in which P_i \in \Pi_{m-1}, i = 1, 2, \ldots, M, where \Pi_m represents the space of polynomials of total degree at most m in N variables [11], of dimension \binom{N + m - 1}{m - 1}.

Then, equations (6) and (7) yield an (N + M) \times (N + M) matrix system

\begin{bmatrix} A & P \\ P^T & O \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix},

where \beta = (\alpha_{N+1}, \ldots, \alpha_{N+M})^T collects the polynomial coefficients, the elements of the matrix A are

A_{i,j} = \varphi_{ij} = \varphi(\|x_i - x_j\|_2), \quad 1 \le i, j \le N,

the elements of P are P_{i,j} = P_j(x_i), 1 \le i \le N, 1 \le j \le M, and O is an M \times M zero matrix.


Moreover, details of positive definite RBFs, conditionally positive definite (CPD) RBFs, and RBFs containing the shape parameter c are discussed in [18,21,22] and listed in Table 1.

Table 1: [k] denotes the nearest integer less than or equal to k, N the natural numbers, and c a positive constant known as the shape parameter; CPD denotes the m-order conditionally positive definite.

Name of RBF                  Definition                                      CPD order (m)
Multiquadrics (MQ)           \varphi(r, c) = (r^2 + c^2)^k,  k > 0, k \notin N   [k] + 1
Inverse Multiquadrics (IMQ)  \varphi(r, c) = (r^2 + c^2)^{-k},  k > 0            0
Gaussian (GA)                \varphi(r, c) = \exp(-r^2 / c^2)                    0
Polyharmonic Splines (PS)    \varphi(r) = r^{2k-1},  k \in N                     k
                             \varphi(r) = r^{2k} \ln r,  k \in N                 k + 1
Thin Plate Splines (TPS)     \varphi(r) = r^2 \ln r                              2

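For reference, the definitions in Table 1 can be written directly as functions of r and the shape parameter c. The multiquadric and inverse multiquadric are shown in their k = 1/2 form (the MQ \varphi = \sqrt{r^2 + c^2} is the variant used later in the experiments); fixing k like this is our own simplification.

```python
import numpy as np

def mq(r, c):   # multiquadric with k = 1/2
    return np.sqrt(r**2 + c**2)

def imq(r, c):  # inverse multiquadric with k = 1/2
    return 1.0 / np.sqrt(r**2 + c**2)

def ga(r, c):   # Gaussian
    return np.exp(-r**2 / c**2)

def tps(r):     # thin plate spline r^2 ln r, with tps(0) = 0 by continuity
    r = np.asarray(r, dtype=float)
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)
```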

3. NUMERICAL SCHEMES USED FOR RLO MODEL

3.1 Gradient projection method (GP)

Rudin et al. used a gradient projection scheme in [23] to solve equation (3). The numerical approximation of equation (3) is given as follows:

g^{n+1} = g^n + dt \Bigl[ \frac{g_{xx}^n}{\sqrt{(g_x^n)^2 + (g_y^n)^2}} + \frac{g_{yy}^n}{\sqrt{(g_x^n)^2 + (g_y^n)^2}} + \lambda_1 \frac{(f^0)^2}{(g^n)^3} - \lambda_2 \frac{f^0}{(g^n)^2} \Bigr],   (8)

here the time step dt is set to 0.2. For further details, see [23].
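A rough NumPy sketch of such an explicit time-marching loop is given below. Central differences approximate the curvature term, and a gradient regularization beta is added in the denominator to avoid division by zero — a stabilization of our own, so this illustrates the idea rather than reproducing the exact scheme of [23]; the parameter values are likewise only indicative.

```python
import numpy as np

def gp_denoise(f, lam1=1e-4, lam2=0.09, dt=0.2, iters=50, beta=1.0):
    # Explicit time marching for the RLO model in the spirit of scheme (8)
    g = f.astype(float).copy()
    eps = 1e-8  # guards the divisions by powers of g
    for _ in range(iters):
        gx = (np.roll(g, -1, 1) - np.roll(g, 1, 1)) / 2.0
        gy = (np.roll(g, -1, 0) - np.roll(g, 1, 0)) / 2.0
        gxx = np.roll(g, -1, 1) - 2.0 * g + np.roll(g, 1, 1)
        gyy = np.roll(g, -1, 0) - 2.0 * g + np.roll(g, 1, 0)
        gxy = (np.roll(np.roll(g, -1, 0), -1, 1) - np.roll(np.roll(g, -1, 0), 1, 1)
               - np.roll(np.roll(g, 1, 0), -1, 1) + np.roll(np.roll(g, 1, 0), 1, 1)) / 4.0
        # Curvature div(grad g / |grad g|), regularized by beta
        num = gxx * gy**2 - 2.0 * gx * gy * gxy + gyy * gx**2
        curv = num / (gx**2 + gy**2 + beta**2) ** 1.5
        # Fidelity terms of the time-marching equation
        g = g + dt * (curv + lam1 * f**2 / (g**3 + eps) - lam2 * f / (g**2 + eps))
    return g
```

On a flat image the curvature term vanishes and the fidelity drift is tiny, while on a noisy image the curvature term smooths the fluctuations.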

3.2 Proposed meshless scheme (MS)

In this subsection we propose a novel meshless scheme combining TV regularization and Radial Basis Function approximation for the numerical solution of model (3) for the clean image g. The resultant Euler--Lagrange equation in this case is given as follows:

\frac{dg}{dt} = \frac{(g_{xx} + g_{yy})(g_x^2 + g_y^2) - 2 g_x g_y g_{xy} - (g_x^2 g_{xx} + g_y^2 g_{yy})}{(g_x^2 + g_y^2)^{3/2}} + \lambda_1 \frac{f^2}{g^3} - \lambda_2 \frac{f}{g^2}.   (9)

Assume \{x_i\}_{i=1}^N are N distinct evaluation points in \bar\Omega \subset R^2, where \Omega is a closed domain, so that for any RBF \varphi(r), r = \|(x, y)\|_2, the following equations are satisfied. For N_c given centers \{xc_j\}_{j=1}^{N_c} in \Omega, the RBF expansion without polynomial term may be written as

f(x) = \sum_{j=1}^{N_c} \alpha_j \varphi(\|x - xc_j\|_2),   (10)

where the \alpha_j are the RBF coefficients, resolved by upholding the interpolation condition

f(x_j) = f_j,   (11)

at a set of points that usually coincides with the N_c centers. The interpolation condition at the N_c centers results in an N_c \times N_c linear system

D\alpha = f,

which is solved for the expansion coefficients \alpha, where \alpha = (\alpha_1, \alpha_2, \ldots, \alpha_{N_c})^T and f = (f_1, f_2, \ldots, f_{N_c})^T are N_c \times 1 matrices. The matrix D, called the interpolation or system matrix, is given by

D = [\varphi_{ij}] = [\varphi(\|xc_i - xc_j\|_2)]_{1 \le i, j \le N_c}.

This system matrix D is of order N_c \times N_c and is always invertible [21], because it is always a positive definite matrix [22]. Thus we have

\alpha = D^{-1} f,   (12)

where \alpha is an N_c \times 1 matrix. The interpolant is evaluated using (10) at the N evaluation points \{x_i\}_{i=1}^N by forming the N \times N_c evaluation matrix E, which is given as

E = [\varphi_{ij}] = [\varphi(\|x_i - xc_j\|_2)]_{1 \le i \le N,\, 1 \le j \le N_c}.

202
1st National Conference on Mathematical Sciences in Engineering Applications (NCMSEA - 18), April 18 - 19, 2018

The interpolant is then evaluated at the N points using the matrix-vector product to produce g as follows:

g = E\alpha.   (13)

Combining equations (12) and (13), the following equation is obtained:

g = E D^{-1} f, \quad \text{or} \quad g = F f, \quad \text{where } F = E D^{-1},   (14)

which gives the approximate solution at any point in \Omega, where g is an N \times 1 matrix. Now, from equations (9) and (14) we get a new restoration PDE, which is shown in the following non-linear system of equations:

g n 1
g

n g n
xx  g yy
n
 
  g x2    g y2   2 g xn g yn  g xn g y  g x g yn    g x2  g xxn   g y2  g yyn
n n n n

 g    g  
3
dt 2
n
2
n 2
x y

f 
0
2
f0
1  2 ,
g  g2 
n n
3

or

   
M g n g n 1  M g n g n  dt  g xxn  g yy

n
   g    g 
2
x
n
2
y
n
   2g g  g g  g g    g  g
n
x
n
y
n
x y x
n
y
2
x
n
n
xx  
 g y2
n
g yy
n 


   
0
f2 f0 
dt M g n     2 , (15)
 1 g3
   
g 2 
n n


3


where M  g   g x  g y
2

2 2
, g x  Fx f , g y  Fy f , g xx  Fxx f , g yy  Fyy f , and f  0  0.

As the RBF in the Kansa scheme does not necessarily satisfy the governing non-linear PDE (15), we have more flexibility in choosing an RBF. The best-known RBF in the Kansa method is the multiquadric (MQ) [5,18], which usually shows spectral accuracy if a suitable shape parameter c is chosen. Here, the shape parameter c used in the RBF is also one of the most important parameters for smoothness in our method MS. For the optimal value of c, our proposed methodology gives more accurate and smoother results when denoising images corrupted by multiplicative noise. In this technique, the shape parameter c and the regularization parameters \lambda_1 and \lambda_2 depend upon the size of the image and the noise level in the image.

Algorithm-1: Algorithm for proposed method MS

RBF:
1. Set N = N_c = n, the total number of pixel points (N denotes the image size, i.e., N \times N), where N and N_c are the total numbers of pixel and center pixel points used in the RBF approximation process.
2. Find \alpha according to equation (12) by the multiquadric radial basis function (MQ-RBF), using step 1.
3. Find g according to equation (14) by the MQ-RBF, using steps (1) and (2).

TV filtering:
4. Initialize the values of \lambda_1, \lambda_2, c, \epsilon, f, and dt.
5. Set the n = N_c center pixel points xc_1, xc_2, \ldots, xc_n; set n = 0.
6. Put g as the MQ-RBF in (15) from (14). Here, f^{(0)} = f is chosen.
7. n \leftarrow n + 1. For each center point xc_i, i = 1, 2, \ldots, n, compute g^{n+1} according to (15) by the Kansa method.
8. If \|g^{n+1} - g^n\| / \|g^n\| \le \epsilon = 10^{-5} (stopping criterion), go to step (10).
9. Otherwise, go to step (7).
10. End. Output g = g^{n+1}.

4. EXPERIMENTAL RESULTS AND DISCUSSION

This section is dedicated to the examination of some numerically computed examples to show the performance of our strategy MS on speckle noise (uniform distribution) with mean value 0 and variance L. The obtained results are compared with the results of method GP. The test images are "Image1", "Image2", and "Image3", which appear in Figure 1.
In this article, it is supposed that N = N_c = the size of the image for our method MS, to compare with method GP. Here, the multiquadric radial basis function (MQ-RBF) is utilized for the proposed method MS.
To quantify the denoised image, the peak signal-to-noise ratio (PSNR) is considered. This measure has been commonly used and applied to determine the quality of the restored image. It can be calculated by the following formula:

PSNR = 10 \log_{10} \Bigl( \frac{m \times n \times \max(\hat{g})^2}{\|\hat{g} - g\|^2} \Bigr),   (16)

where g is the given image, \hat{g} is the restored image, and m \times n is the size of the image. Iterations in our algorithm are terminated when the following condition is satisfied:

\frac{\|v^{k+1} - v^k\|}{\|v^k\|} \le \epsilon,   (17)

where \epsilon indicates the maximum permissible error; here, it is set to 10^{-3}. We use the multiquadric (MQ) RBF to test and compare the results of the method MS with the method GP. For each point (x_j, y_j), the MQ-RBF is defined as

\varphi_j(x, y) = \sqrt{c^2 + r_j^2} = \sqrt{c^2 + (x - x_j)^2 + (y - y_j)^2}, \quad \text{where } r_j = \sqrt{(x - x_j)^2 + (y - y_j)^2}.
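The stopping test of the form (17) is a one-liner; a minimal sketch (the function name is ours):

```python
import numpy as np

def rel_change(v_new, v_old):
    # Relative change ||v^{k+1} - v^k|| / ||v^k|| used in the stopping rule (17)
    return np.linalg.norm(v_new - v_old) / np.linalg.norm(v_old)
```

Iteration stops once this value drops below the permissible error.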

Figures 2 and 3 are examined as our first test. The original and noisy images are presented in Figures 2(a)-3(a) and 2(b)-3(b), respectively. The resultant denoised images by the mesh-based method GP and by our meshless method MS are shown in Figures 2(c)-3(c) and 2(d)-3(d), respectively.
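For reproducing the quantitative comparisons, a NumPy sketch of the PSNR formula (16) is given below (with the peak value entering squared, the usual convention):

```python
import numpy as np

def psnr(g, g_hat):
    # PSNR = 10 log10(m * n * max(g_hat)^2 / ||g_hat - g||^2), cf. eq. (16)
    m, n = g.shape
    err = ((g_hat - g) ** 2).sum()
    return 10.0 * np.log10(m * n * g_hat.max() ** 2 / err)
```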


We can notice from Figures 2 and 3 that the visual restoration quality of our approach MS is better than that of GP, because of the mesh-free application of the MQ-RBF associated with MS. In this mesh-free procedure, the shape parameter c plays a vital role in image denoising; the range of optimal values for c in this case is 1.79 \le c \le 1.84. Moreover, the PSNR values for the two images by the two methods GP and MS are listed in Table 2. The bigger the PSNR value, the better the denoising performance. It can be seen from Table 2 that the PSNR values of method MS are greater than those of method GP for the two images, which shows the stronger restoration performance of MS over GP. The CPU time of computation and the number of iterations required for convergence of the two methods GP and MS are also listed in Table 2. It can likewise be seen from Table 2 that the number of iterations and the CPU time of MS are smaller than those of GP, which shows the faster restoration performance of the mesh-free algorithm of the proposed method MS over the mesh-based algorithm of GP. So, it is evident from this example that the performance of our mesh-free method MS is superior to that of the mesh-based technique GP regarding visual restoration quality (PSNR), the number of iterations, and the CPU time of computation.

Next, homogeneity is tested, and its loss (or preservation) is examined for the two procedures GP and MS applied to "Image1". For this purpose, a line of the original image is compared with the corresponding lines of the noisy and restored images, as shown in Figure 4. It is clear that the line restored by the proposed method MS (demonstrated in subfigure 4(c)) is far better than the one acquired using method GP (presented in subfigure 4(b)), due to the meshless MQ-RBF employed in our method MS.

(a) (b) (c)


Fig. 1: Test images; (a) Image1; (b) Image2; (c) Image3.


(a) (b) (c)

(d)
Fig. 2: Reconstructed results on Image1; (a) Original image; (b) Noisy image with L = 0.9; (c) Obtained image by method GP (\lambda_1 = 0.0001, \lambda_2 = 0.09); (d) Obtained image by method MS (\lambda_1 = 0.01, \lambda_2 = 0.02, c = 1.81).

(a) (b) (c)


(d)
Fig. 3: Recovered results on Image2; (a) Original image; (b) Noisy image with L = 0.09; (c) Restored image by method GP (\lambda_1 = 0.0001, \lambda_2 = 0.08); (d) Restored image by method MS (\lambda_1 = 0.001, \lambda_2 = 0.01, c = 1.83).

(a) (b) (c)


Fig. 4: The 107th line comparison of the original image with the noisy image, the image restored by method GP, and the image restored by method MS, for Image1. (a) Original and noisy image lines; (b) Original and GP-restored image lines; (c) Original and MS-restored image lines. The blue line is the original image and the red line is the restored image.

Table 2: Comparison of method GP and proposed method MS in terms of PSNR, number of iterations, and CPU time (in seconds).

Image    Size    Model GP                      Model MS
                 PSNR    It. no   CPU(s)       PSNR    It. no   CPU(s)
Image1   300^2   23.21   422      190.21       24.59   239      101.23
Image2   300^2   25.19   301      120.11       26.02   161      80.63


4.1 Shape parameter analysis

In this section, we compare the image restoration quality (PSNR) of our method MS for different values of the shape parameter on "Image3" with L = 0.05. We can see from Figure 5 and Table 3 that different values of the shape parameter affect the image restoration quality (PSNR). The parameters used in this case are \lambda_1 = 0.003, \lambda_2 = 0.07, c = 1.80.

(a) (b) (c)

(d) (e)

Fig. 5: Experimental results on Image3; (a) Image3 corrupted with speckle noise, L = 0.05; (b) Restored image with the optimal value c = 1.80; (c) Restored image with c = 1.89; (d) Restored image with c = 1.72.

Table 3: Comparison of the image quality (PSNR values) for increased or decreased values of the shape parameter c against the optimal value of c of the proposed method MS for the artificial Image3.

Image    Size    Optimal c   PSNR    Increased c   PSNR    Decreased c   PSNR
Image3   300^2   1.80        26.19   1.89          25.87   1.72          25.68


5. CONCLUSION

In this paper, a novel meshless method, a multiquadric radial basis function combined with TV regularization, is proposed for the removal of speckle noise. The approach is exploited for the solution of the PDE arising from the TV minimization functional. This meshless procedure is mathematically simple compared with the mesh-based method and hence gives more optimal results.
The approach is tested on various images with speckle noise, and the results are compared with the existing method. Experimental results have shown that the restoration quality of the images, the number of iterations, and the CPU times with the proposed method are quite good, and the proposed algorithm is quite efficient. We have also noticed that the performance of our proposed method is far better than that of the existing method regarding restoration quality (PSNR), the number of iterations, and CPU times, due to the meshless application of the MQ-RBF used in our algorithm. The choice of the shape parameter c also plays a significant role in this algorithm, as it affects the image restoration. The shape parameter analysis has also been discussed here.

Appendix:

The derivatives in our method MS for equation (15) are given as follows.
When we evaluate the derivatives at the N evaluation points \{x_i\}_{i=1}^N \subset \Omega and the N_c center points \{xc_j\}_{j=1}^{N_c}, then by RBF interpolation we have

g = \sum_{j=1}^{N_c} \alpha_j \varphi(\|x - xc_j\|_2),
or
g = E\alpha,   (18)

with the N \times N_c evaluation matrix E, i.e.,

E = [\varphi_{ij}] = [\varphi(\|x_i - xc_j\|_2)]_{1 \le i \le N,\, 1 \le j \le N_c}.

Then the first derivative from (18) becomes

\frac{\partial g}{\partial x_i} = g_{x_i} = \sum_{j=1}^{N_c} \alpha_j \frac{\partial}{\partial x_i} \varphi(\|x - xc_j\|_2),
or
g_{x_i} = \frac{\partial E}{\partial x_i} \alpha,   (19)

where

\frac{\partial E}{\partial x_i} = \Bigl[\frac{\partial \varphi_{ij}}{\partial x_i}\Bigr] = \Bigl[\frac{\partial}{\partial x_i} \varphi(\|x_i - xc_j\|_2)\Bigr]_{1 \le i \le N,\, 1 \le j \le N_c}.

Combining equations (12) and (19) we have

g_{x_i} = \frac{\partial E}{\partial x_i} D^{-1} f.   (20)

Define F = E D^{-1}; then the above equation (20) can be re-written as

g_{x_i} = \frac{\partial}{\partial x_i}(F f) = F_{x_i} f.   (21)

The differentiation matrix can be defined as

F_{x_i} = \frac{\partial E}{\partial x_i} D^{-1}.   (22)

For the second derivative, we have

F_{x_i x_i} = \frac{\partial^2 E}{\partial x_i^2} D^{-1}.   (23)

Also

\frac{\partial^2 g}{\partial x_i^2} = g_{x_i x_i} = F_{x_i x_i} f.   (24)

The differentiation matrix is well-defined, since it is known that the system matrix D is invertible.
For any sufficiently differentiable RBF \varphi[r(x)], by the chain rule the first derivative is

\frac{\partial \varphi}{\partial x_i} = \frac{d\varphi}{dr} \frac{\partial r}{\partial x_i}, \quad \text{with} \quad \frac{\partial r}{\partial x_i} = \frac{x_i}{r}.   (25)

The second derivative is calculated as follows:

\frac{\partial^2 \varphi}{\partial x_i^2} = \frac{d\varphi}{dr} \frac{\partial^2 r}{\partial x_i^2} + \frac{d^2\varphi}{dr^2} \Bigl(\frac{\partial r}{\partial x_i}\Bigr)^2, \quad \text{with} \quad \frac{\partial^2 r}{\partial x_i^2} = \frac{1 - \bigl(\frac{x_i}{r}\bigr)^2}{r}.   (26)

For the MQ in particular,

\frac{d\varphi}{dr} = \frac{r}{(c^2 + r^2)^{1/2}} \quad \text{and} \quad \frac{d^2\varphi}{dr^2} = \frac{c^2}{(c^2 + r^2)^{3/2}}.   (27)

6. REFERENCES

[1] S.N. Atluri, S. Shen. The Meshless Method. Forsyth: Tech Science Press, 2002.
[2] G. Aubert, J.F. Aujol. A variational approach for removing multiplicative noise. SIAM Journal on Applied Mathematics, vol. 68, pp. 925-946, 2008.
[3] M.D. Buhmann. Radial Basis Functions: Theory and Implementations. First edition, Cambridge University Press, UK, 2003.
[4] F. Bernal, G. Gutiérrez. Solving delay differential equations through RBF collocation. AAMM, vol. 1(2), pp. 257-272, 2009.
[5] W. Chen, Z.J. Fu, C.S. Chen. Recent Advances in Radial Basis Function Collocation Methods. Springer, 2014.
[6] Y. Chen, S. Gottlieb, A. Heryudono, A. Narayan. A reduced radial basis function method for partial differential equations on irregular domains. J. Sci. Comput., vol. 66(1), pp. 67-90, 2016.
[7] N.M. Duy, T.T. Cong. Numerical solution of Navier-Stokes equations using multiquadric radial basis function networks. Neural Networks, vol. 14, pp. 99-185, 2001.
[8] G.E. Fasshauer. Meshfree Approximation Methods with MATLAB. Interdisciplinary Mathematical Sciences, vol. 6, 2007.
[9] M. Dehghan, M. Abbaszadeh, A. Mohebbi. A meshless technique based on the local radial basis functions collocation method for solving the parabolic-parabolic Patlak-Keller-Segel chemotaxis model. Eng. Anal. Bound. Elem., vol. 56, pp. 129-144, 2015.
[10] T. Goldstein, S. Osher. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci., vol. 2(2), pp. 323-343, 2009.
[11] D. Gabay, B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl., vol. 2, pp. 17-40, 1976.
[12] Y.M. Huang, M.K. Ng, Y.W. Wen. A new total variation method for multiplicative noise removal. SIAM Journal on Imaging Sciences, vol. 2(1), pp. 22-40, 2009.
[13] S.U. Islam, V. Singh, S. Rajput. Estimation of dispersion in an open channel from an elevated source using an upwind local meshless method. Inter. J. Comp. Meth., vol. 2(1), 2017.
[14] S.U. Islam, B. Sarler, R. Vertnik. Local radial basis function collocation method along with explicit time stepping for hyperbolic partial differential equations. Appl. Num. Math., vol. 67, pp. 136-151, 2013.
[15] D.H. Jiang, X. Tan, Y.Q. Liang, S. Fang. A new nonlocal variational bi-regularized image restoration model via split Bregman method. EURASIP J. Image Video Process., vol. 15(1), 2015.
[16] Z. Jin, X. Yang. A variational model to remove the multiplicative noise in ultrasound images. J. Math. Imaging Vis., vol. 39, pp. 62-74, 2011.
[17] K.B. Kazemi. Solving differential equations with least squares and collocation methods. Master's thesis, George Washington University, USA, 2004.
[18] E.J. Kansa. Motivation for using radial basis functions to solve PDEs. Lawrence Livermore National Laboratory, USA, 1999.
[19] G.R. Liu. Mesh Free Methods: Moving Beyond the Finite Element Method. Boca Raton: CRC Press, 2003.
[20] F. Li, C. Shen, J. Fan, C. Shen. Image restoration combining a total variational filter and a fourth-order filter. J. Vis. Commun. Image R., vol. 18, pp. 322-330, 2007.
[21] W.R. Madych, S.A. Nelson. Multivariate interpolation and conditionally positive definite functions, II. Mathematics of Computation, vol. 54(149), pp. 211-230, 1990.
[22] C.A. Micchelli. Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constructive Approximation, vol. 2(1), pp. 11-22, 1986.
[23] L. Rudin, P.L. Lions, S. Osher. Multiplicative denoising and deblurring: Theory and algorithms. In Geometric Level Set Methods in Imaging, Vision, and Graphics, S. Osher and N. Paragios, Eds., pp. 103-120, Springer, Berlin, Germany, 2003.
[24] G. Steidl, T. Teuber. Removing multiplicative noise by Douglas-Rachford splitting methods. J. Math. Imaging Vis., vol. 36, pp. 168-184, 2010.
[25] J. Shi, S. Osher. A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM J. Imaging Sci., vol. 1, pp. 294-321, 2008.
[26] B. Shi, L. Huang, Z.F. Pang. Fast algorithm for multiplicative noise removal. J. Vis. Commun. Image R., vol. 23, pp. 126-133, 2012.
[27] S. Sajavicius. Optimization, conditioning and accuracy of radial basis function method for partial differential equations with nonlocal boundary conditions. Eng. Anal. Bound. Elem., vol. 37(4), pp. 788-804, 2013.
[28] A. Ullah, W. Chen, M.A. Khan. A new variational approach for restoring images with multiplicative noise. Comput. Math. Appl., vol. 71, pp. 2034-2050, 2016.
[29] X.L. Zhao, F. Wang, M.K. Ng. A new convex optimization model for multiplicative noise and blur removal. SIAM J. Imaging Sci., vol. 7, pp. 456-475, 2014.


Selective Segmentation of Images Via Local Gaussian Distribution

Noor Badshah^1, Muhammad Naveed Khan^1, and Hadia Atta^1
^1 Department of Basic Sciences, UET Peshawar, Pakistan
noor2knoor@gmail.com, naveedkhanmaths@gmail.com, hadiaatta@gmail.com

Abstract: This paper presents a new region-based active contour model in a variational level set formulation for selective segmentation of images. In this model we introduce markers into the Local Gaussian Distribution Fitting energy to obtain efficient selective segmentation results in noisy images and in images having intensity inhomogeneity as well as texture.

Keywords: Image Segmentation, Selective Segmentation, Local Gaussian Distribution, Intensity Inhomogeneity.

1 Introduction

Image segmentation is one of the most significant tasks in image processing. To segment an image means to partition it into regions that are similar in some characteristic but represent different objects or parts of the image; either a targeted object is extracted or the image is divided into many parts. The basic aim of segmentation is to convert the image into a form that is more expressive, meaningful and easier to analyze than the original. Many techniques are used for segmentation, such as thresholding, region-based methods, edge detection, clustering, histogram-based methods, graph partitioning and partial differential equation-based methods. Every segmentation method has its own strengths and drawbacks. Active contour models are well-established methods for obtaining accurate segmentation results. They are used either in edge-based or in region-based segmentation: edge-based models use the image gradient for curve evolution, while region-based models use region descriptors such as color, intensity or texture to guide the contour motion. Since a region covers more pixels than an edge, region-based methods exploit more image information than edge-based ones; consequently, edge-based models give weak results on noisy images and on images with intensity inhomogeneity or texture. These segmentation models are useful in many applications, but many real-life images require segmenting only a particular part of the image or a particular targeted object, so the models are not directly applicable to such problems. These problems can be handled by adding selectivity, i.e. segmenting only a particular region among regions sharing the same features, using some additional geometric information about the targeted object.
In many medical images segmentation is required where most of the objects have similar pixel intensities. For this purpose many models have been designed which work well on many images.
The BC model [1] combines edge-based and region-based methods and achieves more robustness to noise than earlier models. However, in many cases it cannot segment the desired object properly, in particular in the presence of severe noise, intensity inhomogeneity or texture. The reason is that it relies on an edge function: under these conditions it is difficult for the edge function to detect the edges of the targeted object or region of the image. To overcome this, filters are used to sharpen the edges or to smooth a textured image, which adds an extra, time-consuming preprocessing step.
The LGDF model [2] is region-based and performs very well on textured images and on images with intensity inhomogeneity. Therefore in our new model we use neither a filter nor an edge-detecting function. In this work we combine the idea of markers for object selection with the LGDF energy [2] to develop a new model. The remaining paper is organized as follows: Section 2 provides a literature review on segmentation, Section 3 introduces the proposed model, Section 4 presents experimental results, and Section 5 gives the conclusions drawn from them.

2 Literature Review

2.1 BC Model:
To perform selective segmentation of an image $Z_o$, the Badshah-Chan model solves the minimization problem
$$\min_{(\gamma, e_1, e_2)} F(\gamma, e_1, e_2),$$
where
$$F(\gamma, e_1, e_2) = \mu \int_\gamma d(x, y)\, g(|\nabla Z_o|)\, ds + \lambda_1 \int_{inside(\gamma)} |Z_o(x, y) - e_1|^2\, dxdy + \lambda_2 \int_{outside(\gamma)} |Z_o(x, y) - e_2|^2\, dxdy. \quad (1)$$
Here $\lambda_1$ and $\lambda_2$ are constants and $\mu$ is a positive parameter. For the given image $Z_o$, $e_1$ and $e_2$ are the average intensities inside and outside $\gamma$, respectively. The distance function $d(x, y)$ is given in [3] for all $(x, y) \in \Omega$ as
$$d(x, y) = \prod_{i=1}^{m} \left(1 - e^{-\frac{(x - x_i)^2}{2\sigma^2}}\, e^{-\frac{(y - y_i)^2}{2\sigma^2}}\right),$$
where $A = \{(x_i, y_i) : i = 1, 2, 3, \dots, m\}$ is the marker set providing the geometric constraints. The desired region is to be segmented near $A$; indeed, in the neighborhood of $A$, $d \approx 0$. The function $g(|\nabla Z_o|)$ is used to detect the edges of the object and is defined as
$$g(|\nabla Z_o|) = \frac{1}{1 + |\nabla Z_o|^2}.$$
In the Badshah-Chan model the first term, $\mu \int_\gamma d(x, y) g(|\nabla Z_o|)\, ds$, is the same as in [3] and [4]; it is used to detect the unknown boundary curve $\gamma$. Because the model uses $g(|\nabla Z_o|)$ to detect edges, it cannot detect objects with fuzzy edges. Isotropic Gaussian smoothing is needed to smooth $Z_o$, but it also smooths the edges, so the concept of the geodesic active contour alone is not enough. To overcome this problem in noisy images, region information is added to the edge information through the other two terms of the Badshah-Chan model, $\lambda_1 \int_{inside(\gamma)} |Z_o(x, y) - e_1|^2\, dxdy + \lambda_2 \int_{outside(\gamma)} |Z_o(x, y) - e_2|^2\, dxdy$. The level set formulation [6, 7, 8] is used to obtain an implicit representation of the boundary and to distinguish the interior and exterior of the objects. The interior of $\gamma$ is denoted by $\Omega^+$, the set of points $(x, y)$ with $\Phi(x, y) > 0$, and the exterior by $\Omega^-$, the set of points $(x, y)$ with $\Phi(x, y) < 0$, while
$\gamma = \{(x, y) : \Phi(x, y) = 0\}$, where $\Phi : \Omega \to \mathbb{R}$ is the level set function. The quantities in equation (1) may now be expressed as
$$\mathrm{length}\{\gamma\} = \int_\Omega |\nabla H(\Phi)|\, dxdy = \int_\Omega \delta(\Phi)|\nabla \Phi|\, dxdy,$$
$$\int_{inside(\gamma)} |Z_o - e_1|^2\, dxdy = \int_\Omega |Z_o - e_1|^2 H(\Phi)\, dxdy,$$
$$\int_{outside(\gamma)} |Z_o - e_2|^2\, dxdy = \int_\Omega |Z_o - e_2|^2 (1 - H(\Phi))\, dxdy.$$
Here
$$H(x) = \begin{cases} 1 & \text{if } x \geq 0, \\ 0 & \text{if } x < 0, \end{cases} \qquad \delta(x) = H'(x),$$
which are replaced by their regularized forms [5, 6, 9]
$$H_\epsilon(\omega) = \frac{1}{2}\left(1 + \frac{2}{\pi}\arctan\frac{\omega}{\epsilon}\right), \qquad \delta_\epsilon(\omega) = \frac{\epsilon}{\pi(\epsilon^2 + \omega^2)}.$$

Thus equation (1) becomes
$$F_\epsilon(\Phi, e_1, e_2) = \mu \int_\Omega d(x, y)\, g(|\nabla Z_o|)\, \delta_\epsilon(\Phi)|\nabla \Phi|\, dxdy + \lambda_1 \int_\Omega |Z_o(x, y) - e_1|^2 H_\epsilon(\Phi)\, dxdy + \lambda_2 \int_\Omega |Z_o(x, y) - e_2|^2 (1 - H_\epsilon(\Phi))\, dxdy.$$
Keeping $\Phi$ fixed and minimizing $F_\epsilon(\Phi, e_1, e_2)$ with respect to $e_1$ and $e_2$, we get
$$e_1 = \frac{\int_\Omega Z_o(x, y) H_\epsilon(\Phi)\, dxdy}{\int_\Omega H_\epsilon(\Phi)\, dxdy} \qquad \text{and} \qquad e_2 = \frac{\int_\Omega Z_o(x, y)(1 - H_\epsilon(\Phi))\, dxdy}{\int_\Omega (1 - H_\epsilon(\Phi))\, dxdy},$$
provided that the interior and exterior of the curve in $\Omega$ are both non-empty. Keeping $e_1$ and $e_2$ fixed and minimizing $F_\epsilon$ with respect to $\Phi$ gives the Euler-Lagrange equation for $\Phi$:
$$\begin{cases} \delta_\epsilon(\Phi)\left[\mu\, \mathrm{div}\!\left(G(x, y)\dfrac{\nabla \Phi}{|\nabla \Phi|}\right) - \lambda_1 (Z_o(x, y) - e_1)^2 + \lambda_2 (Z_o(x, y) - e_2)^2\right] = 0 & \text{in } \Omega, \\[4pt] \dfrac{G(x, y)\, \delta_\epsilon(\Phi)}{|\nabla \Phi|}\, \dfrac{\partial \Phi}{\partial \vec{n}} = 0 & \text{on } \partial\Omega, \end{cases} \quad (2)$$
where $G(x, y) = d(x, y)\, g(|\nabla Z_o|)$. The partial differential equation above can be viewed as the steady state of the evolution equation
$$\frac{\partial \Phi}{\partial t} = \delta_\epsilon(\Phi)\left[\mu\, \mathrm{div}\!\left(G(x, y)\frac{\nabla \Phi}{|\nabla \Phi|}\right) - \lambda_1 (Z_o - e_1)^2 + \lambda_2 (Z_o - e_2)^2\right] \quad \text{in } \Omega, \quad (3)$$
with $\Phi(x, y, 0) = \Phi_o(x, y)$ in $\Omega$.
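To make the two geometric ingredients of the BC model concrete, the following Python sketch (an illustration written for this text, not the authors' MATLAB code; the grid size, marker position and $\sigma$ are made-up values) evaluates the distance function $d(x, y)$ for a marker set and the edge detector $g(|\nabla Z_o|)$:

```python
import numpy as np

def distance_map(shape, markers, sigma):
    """d(x, y) = prod_i [1 - exp(-(x-x_i)^2/(2 sigma^2)) exp(-(y-y_i)^2/(2 sigma^2))]."""
    x, y = np.meshgrid(np.arange(shape[1]), np.arange(shape[0]))
    d = np.ones(shape)
    for xi, yi in markers:
        d *= 1.0 - np.exp(-(x - xi) ** 2 / (2 * sigma ** 2)) * \
                   np.exp(-(y - yi) ** 2 / (2 * sigma ** 2))
    return d

def edge_detector(Z):
    """g(|grad Z_o|) = 1 / (1 + |grad Z_o|^2), with central-difference gradients."""
    gy, gx = np.gradient(Z.astype(float))
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)

# d vanishes near the marker (20, 30) and tends to 1 far from it
d = distance_map((64, 64), markers=[(20, 30)], sigma=5.0)
```

As the paper notes, $d \approx 0$ in the neighborhood of the marker set, so the length term is cheap to grow there and the contour is encouraged to settle on the selected object.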

2.2 LGDF Model:

The LGDF model [2] obtains comparatively convincing and accurate segmentation results on images with noise and intensity inhomogeneity. The model involves more elaborate statistics of the local intensities, describing the local intensity information provided by a partition of the neighborhood of each pixel; it is expressed as
$$E^{LGDF} = \int_\Omega \left( \sum_{i=1}^{M} \int_{\Omega_i} -k(x - y) \log p_{i,x}(Z_o(y))\, dy \right) dx. \quad (4)$$

The basic objective is to minimize $E_x^{LGDF}$ for every center $x$ in the image domain $\Omega$. The pixel point $x$ defines a local circular region of radius $\rho$; the image domain is split into $M$ disjoint regions $\{\Omega_i\}_{i=1}^{M}$, and $p_{i,x}(Z_o(y))$ is the a posteriori probability of the intensity $Z_o(y)$ at a pixel point $y$ in the $i$th sub-region. Its spatial weighting is provided by $k(x - y)$, which depends on the distance between $x$ and $y$. They are usually expressed as
$$p_{i,x}(Z_o(y)) = \frac{1}{\sqrt{2\pi}\, \sigma_i(x)} \exp\left(-\frac{(v_i(x) - Z_o(y))^2}{2\sigma_i(x)^2}\right), \quad (5)$$
where the $v_i(x)$ are the local intensity means and the $\sigma_i(x)$ are the local intensity standard deviations. The term $k(x - y)$ is a non-negative weighting function with $k(x - y) = 0$ for $|x - y| > \rho$ and $\int k(x - y)\, dy = 1$ on its defined neighborhood. Here $k$ is chosen as a truncated Gaussian kernel, so that $k(d)$ falls off and approaches zero as $|d|$ increases:
$$k(d) = \begin{cases} \frac{1}{a} \exp\left(-\frac{|d|^2}{2\sigma^2}\right) & \text{if } |d| \leq \rho, \\ 0 & \text{if } |d| > \rho. \end{cases}$$
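The local Gaussian density of Eq. (5) and the truncated kernel $k$ can be sketched in Python as follows (a hedged illustration: the normalization constant $a$ is replaced by an explicit renormalization so that the discrete kernel sums to 1, and all parameter values are made up):

```python
import numpy as np

def truncated_gaussian(d, sigma, rho):
    """k(d): Gaussian in |d|, truncated to |d| <= rho, renormalized to sum to 1
    (the discrete analogue of the unit-integral condition)."""
    k = np.where(np.abs(d) <= rho, np.exp(-d ** 2 / (2 * sigma ** 2)), 0.0)
    return k / k.sum()

def local_gaussian_pdf(z, v, sigma):
    """p_{i,x}(z) of Eq. (5) with local mean v = v_i(x) and std sigma = sigma_i(x)."""
    return np.exp(-(v - z) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

offsets = np.arange(-10, 11)                    # 1-D neighborhood for illustration
k = truncated_gaussian(offsets, sigma=3.0, rho=6)
```

The kernel peaks at zero offset and is exactly zero beyond the truncation radius $\rho$, so only pixels inside the local circular region contribute to the energy at $x$.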
In the level set formulation the image domain is divided into two sub-regions, the foreground $\Omega_1$ and the background $\Omega_2$, defined through the zero level set of $\Phi$: $\Omega_1$ contains the points for which $\Phi(x, y) > 0$ and $\Omega_2$ the points for which $\Phi(x, y) < 0$. Using the Heaviside function, the energy
$$E_x^{LGDF} = \sum_{i=1}^{M} \int_{\Omega_i} -k(x - y) \log p_{i,x}(Z_o(y))\, dy$$
may be represented in terms of $\Phi$, $v_i$ and $\sigma_i^2$ as
$$E_x^{LGDF}(\Phi, v_1(x), v_2(x), \sigma_1(x)^2, \sigma_2(x)^2) = -\int k(x - y) \log p_{1,x}(Z_o(y)) N_1(\Phi(y))\, dy - \int k(x - y) \log p_{2,x}(Z_o(y)) N_2(\Phi(y))\, dy, \quad (6)$$
where $N_1(\Phi(y)) = H(\Phi(y))$ and $N_2(\Phi(y)) = 1 - H(\Phi(y))$. Hence the energy $E^{LGDF}$ in Eq. (4) may be expressed as
$$E^{LGDF}(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) = \int E_x^{LGDF}(\Phi, v_1(x), v_2(x), \sigma_1(x)^2, \sigma_2(x)^2)\, dx. \quad (7)$$
Further, for more accurate results the regularizing terms $P(\Phi) = \int \frac{(|\nabla \Phi(x)| - 1)^2}{2}\, dx$ and $L(\Phi) = \int |\nabla H(\Phi(x))|\, dx$ are used; the entire energy functional is then
$$F(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) = E^{LGDF}(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) + \nu L(\Phi) + \mu P(\Phi), \quad (8)$$
where $\nu > 0$ and $\mu > 0$ are weighting constants. $H$ is replaced by its smooth version $H_\epsilon$,
$$H_\epsilon(x) = \frac{1}{2} + \frac{1}{\pi}\arctan\left(\frac{x}{\epsilon}\right), \quad (9)$$
whose derivative is
$$\delta_\epsilon(x) = \frac{1}{\pi} \cdot \frac{\epsilon}{\epsilon^2 + x^2}. \quad (10)$$
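As a quick consistency check, $\delta_\epsilon$ in Eq. (10) is indeed the derivative of $H_\epsilon$ in Eq. (9):

```latex
\frac{d}{dx} H_\epsilon(x)
  = \frac{d}{dx}\left[\frac{1}{2} + \frac{1}{\pi}\arctan\frac{x}{\epsilon}\right]
  = \frac{1}{\pi}\cdot\frac{1/\epsilon}{1 + x^2/\epsilon^2}
  = \frac{1}{\pi}\cdot\frac{\epsilon}{\epsilon^2 + x^2}
  = \delta_\epsilon(x).
```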
Therefore $F(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2)$ in Eq. (8) becomes
$$F_\epsilon(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) = E_\epsilon^{LGDF}(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) + \nu L_\epsilon(\Phi) + \mu P(\Phi). \quad (11)$$
The parameters $v_i$ and $\sigma_i^2$ minimizing the energy in Eq. (11) satisfy the Euler-Lagrange equations
$$\int k(y - x)\, (v_i(x) - Z_o(y))\, N_{i,\epsilon}(\Phi(y))\, dy = 0 \quad (12)$$
and
$$\int k(y - x)\left(\sigma_i(x)^2 - (v_i(x) - Z_o(y))^2\right) N_{i,\epsilon}(\Phi(y))\, dy = 0, \quad (13)$$
where $N_{1,\epsilon}(\Phi(y)) = H_\epsilon(\Phi(y))$ and $N_{2,\epsilon}(\Phi(y)) = 1 - H_\epsilon(\Phi(y))$. From Eq. (12) and Eq. (13) we obtain the values of $v_i(x)$ and $\sigma_i(x)^2$ that minimize the energy functional $F_\epsilon(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2)$ for a fixed $\Phi$. Minimization of $F_\epsilon$ in Eq. (11) with respect to $\Phi$ is obtained by solving the gradient descent flow equation
$$\frac{\partial \Phi}{\partial t} = -\delta_\epsilon(\Phi)(\eta_1 - \eta_2) + \nu\, \delta_\epsilon(\Phi)\, \mathrm{div}\!\left(\frac{\nabla \Phi}{|\nabla \Phi|}\right) + \mu\left(\nabla^2 \Phi - \mathrm{div}\!\left(\frac{\nabla \Phi}{|\nabla \Phi|}\right)\right), \quad (14)$$
where
$$\eta_1(x) = \int_\Omega k(y - x)\left(\log \sigma_1(y) + \frac{(v_1(y) - Z_o(x))^2}{2\sigma_1(y)^2}\right) dy$$
and
$$\eta_2(x) = \int_\Omega k(y - x)\left(\log \sigma_2(y) + \frac{(v_2(y) - Z_o(x))^2}{2\sigma_2(y)^2}\right) dy.$$
In this method the local intensity means and variances are treated as spatially varying functions so as to handle noise and intensity inhomogeneity in an image; the model is also very helpful for texture images during segmentation, and is effective on synthetic as well as real images.
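Equations (12) and (13) have closed-form solutions: $v_i$ is the $k$-weighted local average of $Z_o$ over region $i$, and $\sigma_i^2$ the corresponding local variance. A minimal Python sketch of this computation (assuming a small truncated Gaussian kernel with zero padding at the borders; `conv2`, `local_stats` and all parameter values are illustrative, not from the paper):

```python
import numpy as np

def conv2(img, kern):
    """'Same'-size 2-D convolution with zero padding (plain loops; a sketch, not optimized)."""
    kh, kw = kern.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kern[::-1, ::-1])
    return out

def local_stats(Z, Ni, kern, eps=1e-8):
    """Closed-form minimizers of Eqs. (12)-(13):
    v_i(x)       = [k * (Z N_i)](x) / [k * N_i](x)
    sigma_i(x)^2 = [k * (Z^2 N_i)](x) / [k * N_i](x) - v_i(x)^2,
    where * is convolution and N_i the (smoothed) region indicator."""
    w = conv2(Ni, kern) + eps            # normalizer k * N_i
    v = conv2(Z * Ni, kern) / w          # local mean
    var = conv2(Z ** 2 * Ni, kern) / w - v ** 2
    return v, np.maximum(var, eps)       # clamp the variance away from zero

# small normalized Gaussian kernel (illustrative sigma and radius)
ax = np.arange(-3, 4)
g1 = np.exp(-ax ** 2 / (2 * 2.0 ** 2))
kern = np.outer(g1, g1)
kern /= kern.sum()
```

On a constant region this recovers the constant as the local mean with (near-)zero variance, which is the expected degenerate case of the local Gaussian fit.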

3 Selective Segmentation of Images via Local Gaussian Distribution

In this section we develop a new model for selective image segmentation by introducing markers into the Local Gaussian Distribution Fitting energy [2], so as to obtain accurate results on different classes of images. Markers are used to select the desired object, as in the selective image segmentation under geometrical constraints approach of [1]. We start from the LGDF energy and then introduce markers into this energy functional to develop the new model for selective segmentation. The LGDF energy functional is
$$E^{LGDF} = \int_\Omega \left( \sum_{i=1}^{M} \int_{\Omega_i} -k(x - y) \log p_{i,x}(Z_o(y))\, dy \right) dx. \quad (15)$$

For selective image segmentation, introducing the distance function
$$d(x, y) = \prod_{i=1}^{m}\left(1 - e^{-\frac{(x - x_i)^2}{2\sigma^2}}\, e^{-\frac{(y - y_i)^2}{2\sigma^2}}\right)$$
into the above functional, we get
$$E = \int_\Omega \left( \sum_{i=1}^{M} \int_{\Omega_i} -k(x - y)\, d(x, y) \log p_{i,x}(Z_o(y))\, dy \right) dx. \quad (16)$$

We have to minimize the energy functional $E$ in Eq. (16) for all $x$ in the image domain $\Omega$. The point $x$ defines a local circular region; the image domain is divided into $M$ disjoint sub-regions $\{\Omega_i\}_{i=1}^{M}$, $p_{i,x}(Z_o(y))$ represents the a posteriori probability of the intensity $Z_o(y)$, and its spatial weighting is expressed by $k(x - y)$, which depends on the distance from $x$ to $y$. They are expressed as
$$p_{i,x}(Z_o(y)) = \frac{1}{\sqrt{2\pi}\, \sigma_i(x)} \exp\left(-\frac{(v_i(x) - Z_o(y))^2}{2\sigma_i(x)^2}\right)$$
and
$$k(d) = \begin{cases} \frac{1}{a} \exp\left(-\frac{|d|^2}{2\sigma^2}\right) & \text{if } |d| \leq \rho, \\ 0 & \text{if } |d| > \rho. \end{cases}$$
In the level set formulation the image domain is divided into two sub-regions, the foreground $\Omega_1$ and the background $\Omega_2$, defined through the zero level set of $\Phi$: $\Omega_1$ contains the points for which $\Phi(x, y) > 0$ and $\Omega_2$ the points for which $\Phi(x, y) < 0$. Using the Heaviside function, Eq. (16) can be written in terms of $\Phi$, $\sigma_i^2$ and $v_i$ as
$$E_x(\Phi, v_1(x), v_2(x), \sigma_1(x)^2, \sigma_2(x)^2) = -\int d(x, y)\, k(x - y) \log p_{1,x}(Z_o(y)) N_1(\Phi(y))\, dy - \int d(x, y)\, k(x - y) \log p_{2,x}(Z_o(y)) N_2(\Phi(y))\, dy, \quad (17)$$
where $N_1(\Phi(y)) = H(\Phi(y))$ and $N_2(\Phi(y)) = 1 - H(\Phi(y))$. Thus Eq. (17) may be written as
$$E(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) = \int E_x(\Phi, v_1(x), v_2(x), \sigma_1(x)^2, \sigma_2(x)^2)\, dx. \quad (18)$$
After adding the regularizing terms $P(\Phi) = \int \frac{1}{2}(|\nabla \Phi(x)| - 1)^2\, dx$ and $L(\Phi) = \int |\nabla H(\Phi(x))|\, dx$, the energy functional becomes
$$F(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) = E(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) + \nu L(\Phi) + \mu P(\Phi), \quad (19)$$
where $\nu > 0$ and $\mu > 0$ are weighting constants. Using the smooth version $H_\epsilon$ of the Heaviside function $H$, the energy functional is
$$F_\epsilon(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) = E_\epsilon(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2) + \nu L_\epsilon(\Phi) + \mu P(\Phi). \quad (20)$$
The minimizers $v_i$ and $\sigma_i^2$ satisfy the Euler-Lagrange equations
$$\int d(x, y)\, k(y - x)(v_i(x) - Z_o(y))\, N_{i,\epsilon}(\Phi(y))\, dy = 0 \quad (21)$$
and
$$\int d(x, y)\, k(y - x)\left(\sigma_i(x)^2 - (v_i(x) - Z_o(y))^2\right) N_{i,\epsilon}(\Phi(y))\, dy = 0, \quad (22)$$
where $N_{1,\epsilon}(\Phi(y)) = H_\epsilon(\Phi(y))$ and $N_{2,\epsilon}(\Phi(y)) = 1 - H_\epsilon(\Phi(y))$. From Eq. (21) and Eq. (22) we obtain the $v_i(x)$ and $\sigma_i(x)^2$ that minimize $F_\epsilon(\Phi, v_1, v_2, \sigma_1^2, \sigma_2^2)$ for a fixed $\Phi$. Minimization of $F_\epsilon$ in Eq. (20) with respect to $\Phi$ is obtained from the gradient descent flow equation
$$\frac{\partial \Phi}{\partial t} = -\delta_\epsilon(\Phi)(\eta_1 - \eta_2) + \nu\, \delta_\epsilon(\Phi)\, \mathrm{div}\!\left(\frac{\nabla \Phi}{|\nabla \Phi|}\right) + \mu\left(\nabla^2 \Phi - \mathrm{div}\!\left(\frac{\nabla \Phi}{|\nabla \Phi|}\right)\right), \quad (23)$$
where $\eta_1(x)$ and $\eta_2(x)$ are
$$\eta_1(x) = \int_\Omega d(x, y)\, k(y - x)\left(\log \sigma_1(y) + \frac{(v_1(y) - Z_o(x))^2}{2\sigma_1(y)^2}\right) dy$$
and
$$\eta_2(x) = \int_\Omega d(x, y)\, k(y - x)\left(\log \sigma_2(y) + \frac{(v_2(y) - Z_o(x))^2}{2\sigma_2(y)^2}\right) dy.$$
In Eq. (23) the spatial derivatives $\frac{\partial \Phi}{\partial x}$ and $\frac{\partial \Phi}{\partial y}$ are discretized by central finite differences and the temporal derivative by a forward difference; an iteration scheme then follows by discretizing the PDE in Eq. (23).
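The discretization just described can be sketched as one explicit Euler step of the flow (illustrative Python, not the authors' MATLAB implementation; $\eta_1$ and $\eta_2$ are assumed precomputed, and `curvature` and `step` are hypothetical helper names):

```python
import numpy as np

def delta_eps(phi, eps=1.0):
    """Regularized delta function, Eq. (10)."""
    return eps / (np.pi * (eps ** 2 + phi ** 2))

def curvature(phi, tiny=1e-8):
    """div(grad(phi)/|grad(phi)|) via central differences (np.gradient)."""
    py, px = np.gradient(phi)            # gradients along axis 0 (y) and axis 1 (x)
    norm = np.sqrt(px ** 2 + py ** 2) + tiny
    nyy, _ = np.gradient(py / norm)
    _, nxx = np.gradient(px / norm)
    return nxx + nyy

def step(phi, eta1, eta2, nu, mu, dt):
    """One forward-Euler step of the gradient descent flow, Eq. (23)."""
    kappa = curvature(phi)
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)  # 5-point Laplacian
    d = delta_eps(phi)
    return phi + dt * (-d * (eta1 - eta2) + nu * d * kappa + mu * (lap - kappa))
```

The $\mu(\nabla^2\Phi - \kappa)$ term is the distance-regularizing force from $P(\Phi)$, which keeps $|\nabla\Phi| \approx 1$ and avoids periodic reinitialization of the level set function.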

4 Experimental Results

Here the proposed model is compared with the Badshah-Chan model. Images with noise, intensity inhomogeneity and texture were used in the selective segmentation experiments. All experiments were conducted in MATLAB.

Fig. 1: Original image. Fig. 2: BC model. Fig. 3: Proposed model.

Fig. 4: Original image. Fig. 5: BC model. Fig. 6: Proposed model.

In the above, Fig. 1 is the original image, in which the pixel intensities are not homogeneous. Fig. 2 is the segmentation result of the BC model with parameters beta = 1.0e−10, alpha = −0.00100, iterations = 200, epsilon = 1, mu = 100, delta = 4, lambda1 = 0.1, lambda2 = 0.1; it shows that the BC model has not segmented this image correctly. Fig. 3 is the result of the proposed model with parameters µ = 0.01/timestep, ε = 1, λ1 = 1, λ2 = 1, ν = 0.001 ∗ 255 ∗ 255, σ = 3 and α = 35; here the proposed model has segmented the targeted portion very well. Fig. 4 is a textured image; Fig. 5 is the segmentation result of the BC model with parameters alpha = −0.001490, beta = 1.0e−10, epsilon = 1, mu = 1*m*n/1000, delta = 4, lambda1 = 0.00951, lambda2 = 0.00951. Fig. 6 is the result of the proposed model with
parameters µ = 0.01/timestep, ε = 1, λ1 = 1, λ2 = 1, ν = 0.01 ∗ 255 ∗ 255, σ = 15 and α = 350. Here a textured image was chosen, and it is quite clear from the above images that the BC model cannot perform the segmentation well, whereas the proposed model segments textured images properly.

Fig. 7: Ultrasound image. Fig. 8: BC model. Fig. 9: Proposed model.

Fig. 10: Synthetic image. Fig. 11: BC model. Fig. 12: Proposed model.

Fig. 13: Image having texture. Fig. 14: BC model. Fig. 15: Proposed model.

Fig. 16: Image having texture. Fig. 17: BC model. Fig. 18: Proposed model.

The figures above make the efficiency of the proposed model clear: unlike the BC model, it segmented the targeted object in every chosen image, giving efficient results on images with textures, inhomogeneous intensities or noise.

5 Conclusion

This paper presented a new model for selective segmentation of images using an active contour driven by the LGDF energy. In the proposed model, markers are used to select the desired object or region. Its output was compared experimentally with the selective segmentation output of the BC model. From these experiments on different images we observe that the new model works well, without any filtering, for selective segmentation of images with noise, texture or intensity inhomogeneity.

References

[1] Badshah, Noor and Chen, Ke: ‘Image selective segmentation under geometrical constraints using
an active contour approach’ ,Communications in Computational Physics, 2010, pp. 759–778

[2] Li Wang, Lei He, Arabinda M., Li C.: ‘Active contours driven by local Gaussian distribution fitting energy’, Signal Processing, 2009, pp. 2435–2447

[3] C. L. Guyader and C. Gout: ‘Geodesic active contour under geometrical conditions: Theory and 3D applications’, Numerical Algorithms, 2008, 48, pp. 105-133

[4] C. Gout, C. L. Guyader and L. A. Vese: ‘Segmentation under geometrical conditions with geodesic active contour and interpolation using level set method’, Numerical Algorithms, 2005, 39, pp. 155-173

[5] T. F. Chan and L. A. Vese: ‘Active contours without edges’, IEEE Transactions on Image Processing, 2001, 10(2), pp. 266-277

[6] S. Osher and R. Fedkiw: ‘Level Set Methods and Dynamic Implicit Surfaces’, Springer-Verlag, 2003

[7] S. Osher and J. A. Sethian: ‘Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations’, J. Comput. Phys., 1988, 79(1), pp. 12-49

[8] J. A. Sethian.: ‘Level set methods and fast marching methods. Evolving interfaces in Computa-
tional Geometry’, Fluid Mechanics, Computer Vision and Material Science, Cambridge University
Press, 1999.

[9] K. Chen.: ‘Matrix Preconditioning Techniques and Applications’, Cambridge University Press,
2005.

Digital Image Processing Applications for Monitoring & Mapping Soil Salinity

Lubna Rafiq*, Sapna Tajbar§, Summaya§, Sidra Malik§, Nabia Gul§

§ Physics Department, Shaheed Benazir Bhutto Women University Peshawar
* Applied Geoinformatics, University of Salzburg

Abstract:

Digital image processing (DIP) has proved to be an effective analysis tool in various fields and applications in the agriculture, irrigation and soil sciences sectors. The aim of this paper is to apply various DIP techniques to multispectral Landsat 8 OLI/TIRS images. In this study we monitor and map saline areas in a selected portion of the Larkana District, Pakistan. The approach applied here analyses remote sensing indicators (three salinity indices) derived from multispectral, multitemporal Landsat images, which exploit the different spectral characteristics of different kinds of surfaces. It is observed that the proposed indices provide a more prominent view of the saline area; in particular, the difference between salt and salt-free land is clearer in the index images than in the original images.

Key words: Soil Salinity, Digital Image Processing (DIP), Salinity Indices, Landsat 8 OLI/TIRS
Images, Larkana District-Pakistan

1. Introduction
1.1 Soil Salinity
Soil salinity is the increase of the salt concentration in soil, which results in the formation of saline soil areas. It can be natural or human-induced through poor management practices

(Wu et al., 2008). Salt-affected soils occur all over the world, particularly in semi-arid, arid and some sub-humid regions (Dehni and Lounis, 2012). Salinity influences the physical and chemical properties of soil, causing loss of crop yield (Dehaan and Taylor, 2002; Metternicht and Zinck, 2003; Nawar et al., 2014). Irrigation changes the soil hydration balance by providing a supplementary supply of water, and this water always carries added salts, which cause soil salinity and render the land unproductive. Salinity is a major threat to food security as well as to the farmers who rely on agricultural production from the salt-affected areas (Umali, 1993; Wu et al., 2008). The proportions of irrigated land affected by soil salinity in different countries include Pakistan (28%), Egypt (30%), India (27%), Iraq (50%) and Australia (20%) (Nawar et al., 2014). In Pakistan, owing to population growth, endeavors are being made to escalate agricultural production, in most cases by making use of waste lands, or of lands formerly under water, for cultivation in response to limited water supplies. Reports show that approximately 10% of the lands currently suitable for growing crops are affected by salinity (Tabet et al., 1997; Khan et al., 2001).

1.2 Spectral Indicators of Remote Sensing Data for Soil Salinity

Surface salinity can be detected from remotely sensed data either directly on bare soils or
indirectly through vegetation type (affected by salinity) (Mougenot and Pouget, 1993). Salt
mineralogy (e.g. carbonates, sulphates, chlorides) determines the presence (or absence) of
absorption bands in the electromagnetic spectrum as shown in figure 1. For instance, pure halite
(NaCl) is transparent and its chemical composition and structure preclude absorption in the
visible and near to thermal infrared bands (Hunt et al., 1972). On the contrary, carbonates present
absorption features in the thermal range (between 11 and 12µm) whereas sulphate anions have
an absorption band near 10.2µm caused by overtones or combination tones of internal vibrations
of constitutional water molecules, as reported by Mulders (1987) and Siegal and Gillespie
(1980). Carbonate absorption features are also reported at 2.34µm wavelength (Siegal and
Gillespie, 1980). Middle infrared bands, reflecting water and OH absorption, allow
differentiation between chlorides (as halite) and sulphates when both are dry. Mulders (1987)
reports the 1.50–1.73µm range as one of the absorption bands for soil surface features containing
gypsum (CaSO4.H2O).

Identification of saline areas using remote sensing data has been proved to be the most effective
in many recent studies (Khan et al., 2001; Metternicht and Zinck, 2003; Wu et al., 2008; Shahid
et al., 2010; Dehni and Lounis, 2012; Rimjhim et al., 2013; Nawar et al., 2014; Lhissou et al.,
2014).
This study is devoted to mapping salinity-affected soil in Pakistan. The technique of remote sensing indicators (different salinity indices), based on the different spectral features of different types of surfaces, is practiced here. The work uses digital image processing (DIP) techniques together with a geographical information system (GIS) as tools for remote sensing data assessment and processing.
The structure of this paper is as follows: section 2.1 gives a general description of the study area, section 2.2 describes the data used, and section 2.3 presents the methodology adopted for this study, while section 3 presents the results and conclusion.

Figure 1. (a) Spectra of gypsum, halite, calcium carbonate, sodium bicarbonate and sodium sulphate in the visible, near- and mid-infrared (0.4–2.5 µm), as recorded by the GER 3700 spectroradiometer. (b) Spectra of carbonates, chlorides and sulphates (gypsum, anhydrite, apatite and halite) in the thermal infrared (6–20 µm). (Source: Lane and Christensen, 1997, 1998)

2. Study Area, Data Used, General Description and Methodology

2.1 Study Area

Figure 2 shows location of the study area which is Larkana district. It is a district of Sindh
province of Pakistan and is located within 68°7’E to 68°30’E longitude and 27°6’N to 27°58’N
latitude with an average elevation of 494 m above mean sea level. The average temperatures
recorded within the region as maximum and minimum are 42°C and 31°C, respectively, during
the summer season (Kharif) from June to September and 21°C and 11°C during the winter
season (Rabi) from November to March. The annual precipitation in the district recorded is
approximately 130 mm, which is inadequate to meet the water needs of the crops. Agricultural activities depend mainly on two irrigation canals, the Rice canal and the Dadu canal, which flow through the district. A rice-wheat crop rotation is common here. During the Kharif season, paddy rice is grown on more than 80% of the agricultural land, whereas wheat is the dominant crop during the Rabi season on approximately 30% of the area. Growing of paddy rice begins each year after mid-June and continues until mid-August. Harvesting of the rice crop starts in the middle of October and continues until the end of November (Siyal et al., 2015).

Figure 2. Location of the Study Area

2.2 Data used & Data Description


 Landsat 8 OLI/TIRS

Landsat 8 carries two push-broom instruments: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). The OLI sensor records image data for 9 shortwave spectral bands over a 190 km swath, with a 30 m spatial resolution for all bands except the 15 m panchromatic band. The TIRS sensor, which has a three-year design life, records image data for two thermal bands with a 100 m spatial resolution over the same 190 km swath. Landsat 8 images the whole Earth every 16 days, in an 8-day offset from Landsat 7 (LDUHB, 2015). The major characteristics of Landsat 8 imagery are summarized in table 1.
Two Landsat 8 images from two different years (April 2013 and March 2018, for Larkana district) were selected to observe the changes in the saline areas.

Sensor: Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS)

  Band                                   Wavelength (µm)   Resolution (m)
  Band 1 - Coastal aerosol               0.43-0.45         30
  Band 2 - Blue                          0.45-0.51         30
  Band 3 - Green                         0.53-0.59         30
  Band 4 - Red                           0.64-0.67         30
  Band 5 - Near Infrared (NIR)           0.85-0.88         30
  Band 6 - SWIR 1                        1.57-1.65         30
  Band 7 - SWIR 2                        2.11-2.29         30
  Band 8 - Panchromatic                  0.50-0.68         15
  Band 9 - Cirrus                        1.36-1.38         30
  Band 10 - Thermal Infrared (TIRS) 1    10.60-11.19       100
  Band 11 - Thermal Infrared (TIRS) 2    11.50-12.51       100

Table 1. Characteristics of Landsat 8 OLI/TIRS

2.3 Methodology

Remote sensing spectral indicators based on Landsat data were used to delineate soil surface salinity. Along with an image enhancement technique, three indices were calculated

before the application of classification to each image. Figure 3 shows the graphical presentation

of the methodology applied in this study. The study area, pertaining to a small portion of

Larkana covering about 7533.980 hectares, was extracted from two Landsat images for April

2013 and March 2018.

Figure 3. Graphical presentation of the methodology

In order to highlight the saline area against the other features present in the study area, the following three indices, namely SI(1), SI(2) and SI(3), were calculated from three bands of the Landsat data (B3, B4 and B5), as follows:

SI(1) = √(B4 × B5)

SI(2) = √((B3)² × (B4)²)

SI(3) = (B4 × B5) / B3
Color composite images (for both years) were produced by combining all calculated indices.

Both color composite images were then classified by using supervised classification techniques.

3. Results and Conclusion

For precise signature selection of saline and non-saline areas, it was necessary to have a clear color differentiation between the two types of land.

Figure 4. Color composite based on calculated indices for 2013

Figure 5. Color composite based on calculated indices for 2018

Within the color composite images shown in figures 4 & 5, saline areas are depicted in a bright white tone. A clear differentiation among the various surface features, particularly between salt and salt-free land, is also observed in the color composite images. The supervised classification was based on the values that distinguish saline from non-saline areas, as determined by the output of the color composite of the three calculated indices. For this purpose it was initially adequate to have just two classes, 'saline area' and 'non-saline area' (Figures 6 & 7). Finally, both classified images were used for change detection.
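A two-class supervised classification of the index composite can be sketched as a minimum-distance-to-mean classifier (an illustrative stand-in for the classifier in the GIS software used; the class signatures below are made-up numbers, which in practice come from training pixels):

```python
import numpy as np

def classify(composite, signatures):
    """Assign each pixel of an (H, W, B) index composite to the nearest
    class-mean signature (Euclidean minimum distance)."""
    names = list(signatures)
    means = np.stack([signatures[n] for n in names])                 # (C, B)
    dist = np.linalg.norm(composite[..., None, :] - means, axis=-1)  # (H, W, C)
    return np.array(names, dtype=object)[dist.argmin(axis=-1)]

# made-up signatures: saline ground is bright in all three indices
sigs = {"saline": np.array([0.9, 0.8, 0.9]),
        "non-saline": np.array([0.2, 0.3, 0.2])}
```

Comparing the two resulting label maps pixel by pixel yields the change-detection map of the following section.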

Figure 6. Classified image of study area (2013)

Figure 7. Classified image of study area ( 2018)

Figure 8 reveals the changes in salinity extent from 2013 to 2018.

Figure 8. Change in salinity from 2013 to 2018

Table 2 reveals that ~189.769 hectares out of 7533.980 hectares were classified as saline in the year 2013, whereas ~483.505 hectares out of 7533.980 hectares were classified as saline in the year 2018. An increase in saline area of approximately 3.9% of the total selected area has been observed over the five years between 2013 and 2018.

Area (Hectares) Area %

Soil Salinity in ‘2013’ 189.769 2.52%

Soil Salinity in ‘2018’ 483.505 6.42%

Total Selected Area 7533.980

Increase in saline area between the years ‘2013’ & ‘2018’ 3.9% of the total selected area

Table 2. Changes in saline area from 2013 to 2018
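The percentages in Table 2 follow directly from the classified areas; a quick arithmetic check (variable names are ours):

```python
total = 7533.980                          # hectares in the selected study area
saline_2013, saline_2018 = 189.769, 483.505
pct_2013 = 100.0 * saline_2013 / total    # about 2.52 %
pct_2018 = 100.0 * saline_2018 / total    # about 6.42 %
increase = pct_2018 - pct_2013            # about 3.9 percentage points
```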


It is concluded that this simple approach based on specialized indices has shown promising potential for monitoring and mapping saline areas using Landsat 8 satellite data. Although remote sensing and Landsat images are undoubtedly good tools for soil salinity monitoring and mapping, the accuracy of the results needs verification through an extensive ground truth survey of the area.

4. References
Dehni, A. and Lounis, M. (2012) Remote Sensing Techniques for Salt Affected Soil Mapping:
Application to the Oran Region of Algeria. Procedia Engineering, 33, pp. 188-198.
Dehaan, R.L. and Taylor, G.R. (2002) Field-derived spectra of salinized soils and vegetation as
indicators of irrigation-induced soil salinization. Remote Sensing of Environment, 80 (3), pp.
406-417.
Hunt, G.R., Salisbury, J.W. and Lenhoff, C.J. (1972) Visible and near infrared spectra of
minerals and rocks: V. Halides, phosphates, arsenates, vanadates and borates. Modern Geology,
3, pp. 121–132.
Khan, N.M., Rastoskuev, V.V., Shalina, E.V. and Sato, Y. (2001) Mapping salt-affected Soils
Using Remote Sensing Indicators-A Simple Approach with the use of GIS IDRISI. 22nd Asian
Conference on Remote Sensing, 5-9 November 2001, Singapore.
Lhissou, R., Harti, A.E. and Chokmani, K. (2014) Mapping soil salinity in irrigated land using
optical remote sensing data. Eurasian Journal of Soil Science, 3, pp. 82-88.
Landsat 8 Data Users Handbook. (2015) Department of the Interior, U.S. Geological Survey,
EROS, Sioux Falls, South Dakota.
Lane, M. D. and Christensen, P. R. (1997) Thermal infrared emission spectroscopy of anhydrous
carbonates. Journal of Geophysical Research, 102, pp. 25581–25592.
Lane, M. D. and Christensen, P. R. (1998) Thermal infrared emission spectroscopy of salt
minerals predicted for Mars. Icarus, 135, pp. 528–536.
Mougenot, B. and Pouget, M. (1993) Remote Sensing of Salt Affected Soils. Remote Sensing
Reviews, 7, pp. 241–259.
Mulders, M.A. (1987) Remote sensing in Soil Science. Development in Soil Science.
Amsterdam, The Netherlands: Elsevier, pp. 379.


Metternicht, G.I. and Zinck, J.A. (2003) Remote sensing of soil salinity: potentials and
constraints. Remote Sensing of Environment, 85 (1), pp. 1-20.
Nawar, S., Buddenbaum, H., Hill, J. and Kozak, J. (2014) Modeling and Mapping of Soil
Salinity with Reflectance Spectroscopy and Landsat Data Using Two Quantitative Methods
(PLSR and MARS). Remote Sensing, 6, pp. 10813-10834.
Rimjhim, K., Sushmita, B. and Malaya, C. (2013) Application of Remote Sensing in Soil
Mapping: A Review. North East Students Geo-Congress on Advances in Geotechnical
Engineering (NES Geo-Congress 2013), 28th September 2013, Guwahati.
Siegal, B.S. and Gillespie, A.R. (1980) Remote sensing in geology. New York: Wiley.
Shahid, S.A., Abdelfattah, M.A., Omar, S.A. and Mahmoudi, H. (2010) Mapping and
Monitoring of Soil Salinization-Remote Sensing, GIS, Modeling, Electromagnetic Induction and
Conventional Methods Case Studies. International Conference on Soils and Groundwater
Salinization in Arid Countries, Sultan Qaboos University, Oman.
Siyal, A.A., Dempewolf, J. and Reshef, I.B. (2015) Rice yield estimation using Landsat ETM+
Data. Journal of Applied Remote Sensing, 9, 095986, pp. 1-16.
Tabet, D., Vidal, A., Zimmer, D., Asif, S., Aslam, M., Kuper, M. and Strosser, P. (1997) Soil
salinity characterization in SPOT images, A case study in one irrigation system of the Punjab,
Pakistan. Physical Measurements and Signatures in Remote Sensing, Guyot & Phulp in Eds-
Balkema, pp. 795-800.
Umali, D.L. (1993) Irrigation-Induced Salinity A Growing Problem for Development and the
Environment. World Bank Technical Paper Number 215. The World Bank Washington, D.C.
Wu, J., Vincent, B., Yang, J., Bouarfa, S. and Vidal, A. (2008) Remote Sensing Monitoring of
Changes in Soil Salinity: A Case Study in Inner Mongolia, China. Sensors, 8, pp. 7035-7049.


A New Variational Model for Segmentation of Texture Images via L0 Norm

Tahir Zaman∗, Noor Badshah, Hassan Shah, Fahim Ullah

Abstract
Segmentation of texture images is one of the fundamental problems in the field of image processing. In this paper, we develop a variational model to segment images with texture. Our proposed model uses the L0 norm for image smoothing. This new approach unifies the L0 gradient minimization (LGM) model and a fast global minimization (FGM) model, so that texture images are smoothed and segmented jointly. Using the L0 gradient norm, the LGM model smooths the image while preserving edges, and the FGM model segments the image, solving the problem of propagating the active contour toward object boundaries through a dual formulation. Our new model does not depend on the selection of the initial contour. Experimental results of the proposed model are compared with other recent texture segmentation models and show that the proposed method achieves better results.

Keywords: Active contours, Texture images, Variational model, Fast Fourier transform, Image smoothing, Image segmentation.

1 Introduction
Image segmentation is a basic problem in image processing. It is the process of partitioning an image into multiple distinct segments, representing the image as a finite number of meaningful regions that are easier to analyze. Models based on edges (EACMs) [8, 22, 23] and on regions of the active contour (RACMs) [11, 13, 21] have been used to solve the image segmentation problem. EACMs use edge information [11], while region-based models use information such as variation of texture and color [21]. A well-known EACM is the snake model [22], which is based on evolving a curve around the object. An enhanced version is the geodesic active contour (GAC)
model [8]. The GAC model is defined by the following partial differential equation (PDE):

\[ \frac{\partial C}{\partial t} = \big( \kappa_1\, g - \langle \nabla g, \mathcal{N} \rangle \big)\,\mathcal{N}, \]

where κ_1 and \(\mathcal{N}\) denote the curvature and the normal of the curve C, respectively. The curve C moves in the direction of its interior normal \(\mathcal{N}\) and stops on the boundaries of the object. The edge indicator function g is defined as:

\[ g\big(|\nabla I(C(s))|\big) = \frac{1}{1 + \beta\,|\nabla I(C(s))|^{2}}, \]
where I is the given image and β is an arbitrary positive number. However, this model has limited ability to segment medical images or natural images with texture. To overcome these difficulties, RACMs have been proposed; their results are much better on noisy images, they are independent of the initial contour's location, and they can detect weak boundaries more reliably. The Mumford-Shah (MS) model [29]

∗Department of Basic Sciences, University of Engineering and Technology, Peshawar, Pakistan


is the basic region-based model, utilized in [19, 30, 38, 40]. The energy functional of the MS model is:

\[ E^{MS}(U, K) = \eta \int_{\Omega} |U - I|^{2}\, dx\,dy + \int_{\Omega \setminus K} |\nabla U|^{2}\, dx\,dy + \mu \cdot \mathrm{length}(K), \]

where η and µ are positive parameters, Ω is a bounded domain in R², and K is the set of edges. The first term is the fidelity term, which minimizes the difference between the observed image U and the original image I. The second term minimizes the variation of the input image and promotes its smoothness. The last term minimizes the length of the interfaces. This model works well for de-noising and segmentation of images. However, it is computationally complex to implement. Its first variant, the piecewise constant MS (PCMS) model, is defined by setting U = c_i inside each connected region Ω_i. With this definition of U, E^{MS} becomes:

\[ E^{PCMS}(c_i, K) = \sum_{i=1}^{n} \eta_i \int_{\Omega_i} |I - c_i|^{2}\, dx\,dy + \mu \cdot \mathrm{length}(K). \]

Since K is unknown, the MS model is difficult to minimize, and the algorithms used for its solution are complicated and computationally expensive. In 2001, Chan et al. [11] proposed the CV model. It performs well for gray-scale images with intensity homogeneity, but may fail on images with intensity inhomogeneity, and it also depends on the location of the initial contour. To overcome these limitations, Li et al. [25] developed a local binary fitting (LBF) model, which can segment images with slight intensity inhomogeneity. However, it fails on images with strong inhomogeneity or heavy noise, because its fitting functions cannot describe local image features comprehensively. Much further work has addressed these limitations [1, 12, 26–28, 35, 41, 42, 48, 50]. An efficient local CV (LCV) model was proposed by Wang et al. for image segmentation [42]. Using local information and an extended structure tensor (EST), it accurately segments inhomogeneous images and can segment some texture images. Bresson et al. proposed a fast global minimization (FGM) model to resolve the initial-contour problem [7]. However, these models are unable to segment hard real-world texture images.
Various approaches have been used for smoothing and de-noising, such as bilateral filtering [37], the anisotropic diffusion model of Perona and Malik [31], the modified Perona-Malik model of Black et al. [5], and the local structure tensor (LST) for orientation estimation of Bigun et al. [4]. Weickert proposed a diffusion model based on the structure tensor [46], which uses a filter to strengthen coherent structures in vector-valued images. Rudin et al. [33] proposed a model based on the nonlinear total variation (TV) norm, which removes noise while preserving edges. This model uses the L1 norm of the image gradient, which is more stable than anisotropic diffusion [31]; the TV model is therefore widely used for image smoothing [9, 10, 17, 18, 43, 47]. However, it has some limitations, such as stair-case effects and blurring in the result when the parameters are not chosen properly. Various Gabor filters are used for texture extraction in [6, 15, 20, 36, 39].
For the segmentation of images with texture, different approaches have been developed, such as CV models based on Gabor filters [2, 30, 34], the LST [13, 32], and the LCV model based on the EST [42]. These models [2, 13, 30, 32, 34, 42] show good results but have drawbacks as well: Gabor filters produce a large number of channels, and the LCV model fails to capture texture information. In 2011, Xu et al. developed the L0 gradient minimization (LGM) model to smooth a given image [49]. The LGM model efficiently smooths various patterns in an image without disturbing edge information; in texture images, it smooths the texture while preserving edges. Utilizing L0 norm smoothing, Badshah et al. [3] developed a joint smoothing and segmentation model for synthetic and natural texture images. However, this model produces a stair-casing effect in inhomogeneous texture images. To overcome this

limitation, we design a novel global segmentation model for texture images. In the proposed model we make the Badshah et al. [3] model convex by utilizing the FGM model [7]. The resulting model overcomes the stair-casing effect and is also independent of the initial guess.
The rest of the paper is organized as follows. In the next section, related work on texture smoothing is discussed briefly. Image segmentation models are discussed in Section 3, and state-of-the-art texture segmentation models in Section 4. In Section 5, our proposed model and its minimization are presented in detail. Section 6 gives experiments and discussion of the proposed model, including comparisons with some state-of-the-art models.

2 Related Texture Smoothing Models


Smoothing of texture is an elementary problem in this research area, for which different techniques have been developed. Here we briefly discuss some of these methods.

2.1 Extended Structured Tensor (EST)


To estimate the orientation and analyze the local structure, a second-moment matrix known as the structure tensor T_σ is used. The orientation of edges can be determined by using the image gradient in the local structure tensor. It has helped in the solution of filtering problems such as anisotropic filtering [14, 45] and motion detection [24]. For a given image I, T_σ is given by:

\[ T_\sigma = K_\sigma * (v v^{T}) = \begin{pmatrix} K_\sigma * I_x^2 & K_\sigma * I_x I_y \\ K_\sigma * I_x I_y & K_\sigma * I_y^2 \end{pmatrix}, \tag{1} \]

where \( v = [I_x \; I_y]^T \), K_σ denotes the Gaussian kernel, and \( \nabla I = I_x\hat{i} + I_y\hat{j} \) is the image gradient. From Eq. (1), T_σ contains three feature channels and is used as a feature detector for edges [16]. However, it is unable to segment texture images with distinct intensity inhomogeneity [42]. For this purpose, intensity information is introduced in the EST, which can then segment texture images. For a scalar image I, the EST is given by:

\[ T_\sigma^{E} = K_\sigma * (w w^{T}) = \begin{pmatrix} K_\sigma * I_x^2 & K_\sigma * I_x I_y & K_\sigma * I_x I \\ K_\sigma * I_x I_y & K_\sigma * I_y^2 & K_\sigma * I_y I \\ K_\sigma * I_x I & K_\sigma * I_y I & K_\sigma * I^2 \end{pmatrix}, \tag{2} \]

where \( w = [I_x \; I_y \; I]^T \). The EST has six feature channels at every scale, three of which carry intensity information. It smooths texture in the given image.

2.2 Smoothing of Image via L0 Gradient Minimization (LGM)


Various models have been used to remove noise and smooth texture in an image, such as bilateral filtering [37], the Perona-Malik model [31], the robust anisotropic diffusion model [5], the LST [4] and TV models [33]. However, these models have some limitations: they can produce blurring and stair-casing effects in the image. To handle these drawbacks, Xu et al. in 2011 developed a new smoothing model via the L0 norm, known as the LGM model [49]. This model uses the L0 gradient norm of the image, which measures gradient sparsity and is defined as:

\[ E(S) = \#\left\{ p \;:\; \left|\frac{\partial S_p}{\partial x}\right| + \left|\frac{\partial S_p}{\partial y}\right| \neq 0 \right\}, \tag{3} \]

which counts the number of pixels p with nonzero gradient. The energy functional of the LGM model [49] is defined as:

\[ \min_{S} \left\{ \sum_p \big(S_p - I_p\big)^2 + \lambda E(S) \right\}, \tag{4} \]

where the parameter λ controls the smoothing. Eq. (4) is minimized with the alternating minimization (AM) algorithm [44], introducing auxiliary variables h_p and v_p in place of ∂S_p/∂x and ∂S_p/∂y respectively:

\[ \min_{S, h_p, v_p} \left\{ \sum_p \big(S_p - I_p\big)^2 + \beta_1\left( \Big(\frac{\partial S_p}{\partial x} - h_p\Big)^2 + \Big(\frac{\partial S_p}{\partial y} - v_p\Big)^2 \right) + \lambda E(h_p, v_p) \right\}, \tag{5} \]

where β_1 > 0 controls the difference between (h_p, v_p) and (∂S_p/∂x, ∂S_p/∂y). Using the L0 norm, this model smooths texture and noise while preserving salient edges, and shows better results than other smoothing models.
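As a concrete illustration, the alternating scheme of Eqs. (4)-(5) can be sketched in a few lines of NumPy: the (h, v) step is a hard threshold on the gradients, and the S step is a quadratic problem solved exactly in the Fourier domain. The increasing-β schedule and the periodic boundary handling below are our own assumptions, not taken from [49]:

```python
import numpy as np

def l0_smooth(I, lam=0.02, beta_max=1e5, kappa=2.0):
    """Sketch of L0 gradient smoothing via alternating minimization.

    I: 2-D float array. lam: smoothing weight of Eq. (4). beta is the
    splitting weight of Eq. (5), increased geometrically (an assumed
    schedule). Forward differences with periodic wrap are used."""
    S = I.astype(np.float64).copy()
    # Fourier transforms of the forward-difference kernels (periodic BCs).
    dx = np.zeros_like(S); dx[0, 0], dx[0, -1] = -1.0, 1.0
    dy = np.zeros_like(S); dy[0, 0], dy[-1, 0] = -1.0, 1.0
    Fdx, Fdy = np.fft.fft2(dx), np.fft.fft2(dy)
    grad_spec = np.abs(Fdx) ** 2 + np.abs(Fdy) ** 2
    FI = np.fft.fft2(S)
    beta = 2.0 * lam
    while beta < beta_max:
        # (h, v) subproblem: keep a gradient only where it is strong enough.
        gx = np.roll(S, -1, axis=1) - S
        gy = np.roll(S, -1, axis=0) - S
        keep = gx ** 2 + gy ** 2 >= lam / beta
        h, v = gx * keep, gy * keep
        # S subproblem: quadratic, solved exactly in the Fourier domain.
        num = FI + beta * (np.conj(Fdx) * np.fft.fft2(h)
                           + np.conj(Fdy) * np.fft.fft2(v))
        S = np.real(np.fft.ifft2(num / (1.0 + beta * grad_spec)))
        beta *= kappa
    return S
```

On a noisy piecewise-constant image this drives small gradients to zero while keeping the main edges.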

3 Related Segmentation Models


In this section we discuss some models for image segmentation.

3.1 Chan Vese (CV) Model


CV-model is developed by Chan et al. [11] for gray images. It is a special case of MS-model [29] in
which they restrict the domain into two sub domains. The energy functional of the CV-model is:
Z Z Z
E CV (c1 , c2 , Ψ) = µ δ (Ψ)|∇Ψ|dxdy+ϑ1 |I−c1 |2 H (Ψ)dxdy+ϑ2 |I−c2 |2 (1−H (Ψ))dxdy, (6)
Ω Ω Ω

1 2 Ψ 
H (Ψ) =
1 + arctan( ) . (7)
2 π 
For fixed Ψ(x, y), minimizing Eq. (6) for c1 and c2 we obtained:
R
IH (Ψ)dxdy
c1 (Ψ) = RΩ , (8)
Ω H (Ψ)dxdy

and R
I(1 − H (Ψ))dxdy
c2 (Ψ) = RΩ . (9)
Ω (1 − H (Ψ))dxdy
Keeping c1 , c2 fixed, minimization of Eq. (6) for Ψ is obtained by Euler Lagrange equation and
gradient flow equation by using artificial time t ≥ 0 as:
∂Ψ h  ∇Ψ  i
= δ (Ψ) µ∇ · − ϑ1 (I − c1 )2 + ϑ2 (I − c2 )2 , (10)
∂t |∇Ψ|
 
where δ (Ψ) = π1 2 +Ψ
2 , is Dirac Delta function. This model has good segmentation results for

homogeneous images due to its robustness and large convergence rate. However, this model has some
drawbacks as well, that it cannot segment images with intensity in-homogeneity and is sensitive to
initial guess selection.
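The averages (8)-(9) discretize directly; a minimal NumPy sketch (the function and variable names are ours, and eps is the arctan regularization of Eq. (7)):

```python
import numpy as np

def cv_region_means(I, psi, eps=1.0):
    """Discrete form of Eqs. (8)-(9): region averages of I weighted by the
    regularized Heaviside of the level set psi."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(psi / eps))
    c1 = (I * H).sum() / H.sum()                   # average inside  (psi > 0)
    c2 = (I * (1.0 - H)).sum() / (1.0 - H).sum()   # average outside (psi < 0)
    return c1, c2
```

For a sharp level set (|psi| large relative to eps), c1 and c2 approach the plain means of the two regions.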


3.2 Local CV (LCV) Model


The CV model segments images with intensity homogeneity, since it is designed for gray-scale images, but it may fail under intensity inhomogeneity, and its dependence on the location of the initial contour makes it very sensitive; the re-initialization of the contour also makes it time consuming. Many models have been proposed to overcome these limitations. A well-known one is the LCV model of Wang et al. [42], which uses both local and global information for image segmentation. The energy functional of the LCV model is defined as:

\[ E^{LCV} = \alpha_1 E^G + \alpha_2 E^L + E^R, \tag{11} \]

where E^G, E^L and E^R represent the global, local and regularization terms respectively, and α_1, α_2 are two positive constants. The global term E^G is defined as:

\[ E^G = \vartheta_1 \int_\Omega |I - c_1|^2\, H_\epsilon(\Psi)\,dx\,dy + \vartheta_2 \int_\Omega |I - c_2|^2\, \big(1 - H_\epsilon(\Psi)\big)\,dx\,dy. \tag{12} \]

The local term E^L is given as:

\[ E^L = \vartheta_1 \int_\Omega |g_k * I - I - d_1|^2\, H_\epsilon(\Psi)\,dx\,dy + \vartheta_2 \int_\Omega |g_k * I - I - d_2|^2\, \big(1 - H_\epsilon(\Psi)\big)\,dx\,dy, \tag{13} \]

where g_k is a filter of size k × k, and d_1 and d_2 are the mean intensities of (g_k * I − I) inside and outside the contour, respectively. The regularizing term E^R is defined as:

\[ E^R = \mu \int_\Omega \delta_\epsilon(\Psi)\,|\nabla\Psi|\,dx\,dy + \int_\Omega \frac{1}{2}\big(|\nabla\Psi| - 1\big)^2\,dx\,dy. \tag{14} \]

The minimization of model (11) by the gradient descent method is given by:

\[ \frac{\partial\Psi}{\partial t} = \delta_\epsilon(\Psi)\Big[ \mu\,\nabla\cdot\Big(\frac{\nabla\Psi}{|\nabla\Psi|}\Big) + \alpha_1\big( -\vartheta_1 (I - c_1)^2 + \vartheta_2 (I - c_2)^2 \big) + \alpha_2\big( -\vartheta_1 (g_k * I - I - d_1)^2 + \vartheta_2 (g_k * I - I - d_2)^2 \big) \Big] + \nabla\cdot(\nabla\Psi) - \nabla\cdot\Big(\frac{\nabla\Psi}{|\nabla\Psi|}\Big). \tag{15} \]

This model can efficiently segment images with intensity inhomogeneity, since it uses both local and global information, but it is non-convex and unable to segment some hard real-world texture images.

3.3 Fast global minimization of the snake (FGM) Model


The active contour is one of the most fundamental and successful models in image segmentation. Its success is based on strong mathematical properties and efficient numerical schemes. However, the existence of local minima in its non-convex energy functional makes it very sensitive to the location of the initial contour. To solve this problem, Bresson et al. proposed a new convex variational model that determines the global minimum of the active contour model, with energy functional:

\[ E^{FGM}(c_1, c_2, u) = TV_g(u) + \int_\Omega \Big( \vartheta_1 |I - c_1|^2 - \vartheta_2 |I - c_2|^2 \Big)u\; dx\,dy, \tag{16} \]

where u ∈ [0, 1], c_1 and c_2 are the average intensities inside and outside the contour, and ϑ_1, ϑ_2 are two positive parameters balancing the approximation errors in the fitting term. The first term is the weighted total variation. The convex, unconstrained minimization of the above problem is given by:

\[ \min_u \left\{ TV_g(u) + \int_\Omega \Big( \vartheta\big( |I - c_1|^2 + |I - c_2|^2 \big)u + \alpha\,\nu(u) \Big)\, dx \right\}, \tag{17} \]

where ϑ_1 = ϑ_2 = ϑ. The minimization of the above equation with respect to u, using the dual formulation of the TV norm, is given by:

\[ \min_{u, u_1} \left\{ TV_g(u) + \frac{1}{2\Theta}\|u - u_1\|^2 + \int_\Omega \Big( \vartheta\big( |I - c_1|^2 + |I - c_2|^2 \big)u_1 + \alpha\,\nu(u_1) \Big)\, dx \right\}, \tag{18} \]

where Θ > 0 is a small parameter and u_1 carries the texture information. Keeping u_1 fixed and minimizing (18) with respect to u, we get:

\[ \min_u \left\{ TV_g(u) + \frac{1}{2\Theta}\|u - u_1\|^2 \right\}, \tag{19} \]

whose solution is:

\[ u = u_1 - \Theta\,\mathrm{div}\,p, \tag{20} \]

where p = (p_1, p_2) satisfies:

\[ g(x)\,\nabla(\Theta\,\mathrm{div}\,p - u_1) - \big|\nabla(\Theta\,\mathrm{div}\,p - u_1)\big|\,p = 0. \tag{21} \]

Minimizing (18) with respect to u_1 while keeping u fixed, we get:

\[ \min_{u_1} \left\{ \sum_p \Big( \vartheta\big( |I - c_1|^2 + |I - c_2|^2 \big)u_1 + \frac{1}{2\Theta}\|u - u_1\|^2 \Big) \right\}, \tag{22} \]

whose solution is:

\[ u_1 = \min\Big\{ \max\Big\{ u(x) - \Theta\,\vartheta\big( |I - c_1|^2 + |I - c_2|^2 \big),\; 0 \Big\},\; 1 \Big\}. \tag{23} \]
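The fixed-point iteration for the dual variable p (the same scheme reappears as Eq. (42) in our model) can be sketched as one semi-implicit update; the difference discretization and step size below are our own choices, in the spirit of Chambolle's projection algorithm:

```python
import numpy as np

def dual_tv_step(p1, p2, u1, g, theta, dt=0.125):
    """One semi-implicit update of the dual field p = (p1, p2) for the
    weighted-TV subproblem; u is then recovered as u = u1 - theta * div(p)."""
    # div p with backward differences (adjoint of the forward differences).
    div_p = (p1 - np.roll(p1, 1, axis=1)) + (p2 - np.roll(p2, 1, axis=0))
    r = div_p - u1 / theta
    rx = np.roll(r, -1, axis=1) - r   # forward-difference gradient of r
    ry = np.roll(r, -1, axis=0) - r
    denom = 1.0 + (dt / g) * np.sqrt(rx ** 2 + ry ** 2)
    return (p1 + dt * rx) / denom, (p2 + dt * ry) / denom
```

Iterating this update and then setting u = u1 − theta * div(p) realizes the solution (20); the semi-implicit division keeps |p| bounded by g at every pixel.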

4 Texture Segmentation Models


In this section, we present some other models for texture images.

4.1 LCV-model with EST for texture images


The CV model cannot give any information about texture because it uses only the mean intensities inside and outside the segmented objects; consequently it may segment only part of an object, because of the differences in mean intensity between neighboring textured parts. So, for textured image segmentation, some extra information is needed.
For segmentation of textured images, the texture is first extracted from the given image and then, using the texture features, segmentation is carried out as a minimization problem. This model uses the EST within the LCV model to segment texture images. The original image is replaced by the average of all nine channels, and Eq. (15) of the LCV model is used for texture image segmentation. The nine-channel average is denoted \( u_\sigma^E \) and defined as \( u_\sigma^E = \frac{1}{9}\sum_i J_{\sigma,i}^{E} \), where i = 1, 2, 3, ..., 9. The model can segment images with texture and intensity inhomogeneity, since it uses local image information and the EST within the LCV model.


4.2 A Smoothing and segmentation model for texture images


The LCV model fails to extract intensity and texture information jointly from the image. In 2017, Badshah et al. [3] proposed a new variational model whose energy functional is:

\[ \min_{S, h_p, v_p, \Psi, c_1, c_2} \Big\{ \sum_p |S_p - I_p|^2 + \lambda E(h_p, v_p) + \beta_1\Big( \big(\tfrac{\partial S_p}{\partial x} - h_p\big)^2 + \big(\tfrac{\partial S_p}{\partial y} - v_p\big)^2 \Big) + \vartheta_1 \sum_p |S_p - c_1|^2\, H_\epsilon(\Psi) + \vartheta_2 \sum_p |S_p - c_2|^2\, \big(1 - H_\epsilon(\Psi)\big) + \mu \sum_p \big|\nabla H_\epsilon(\Psi)\big| \Big\}, \tag{24} \]

where λ and β_1 are the smoothing and updating parameters, and h_p, v_p are auxiliary variables. Minimizing Eq. (24) for (h_p, v_p) while keeping S, Ψ, c_1 and c_2 fixed, we get:

\[ (h_p, v_p) = \begin{cases} (0, 0) & (\partial_x S_p)^2 + (\partial_y S_p)^2 < \lambda/\beta_1, \\[2pt] (\partial_x S_p,\, \partial_y S_p) & \text{otherwise.} \end{cases} \tag{25} \]

Now keeping h_p, v_p, S and Ψ fixed, minimizing Eq. (24) gives c_1 and c_2 as follows:

\[ c_1 = \frac{\sum_p S_p H_\epsilon(\Psi)}{\sum_p H_\epsilon(\Psi)}, \tag{26} \]

and

\[ c_2 = \frac{\sum_p S_p \big(1 - H_\epsilon(\Psi)\big)}{\sum_p \big(1 - H_\epsilon(\Psi)\big)}. \tag{27} \]

Now minimizing Eq. (24) while keeping (h_p, v_p), Ψ, c_1 and c_2 fixed, the required smooth image S is given by:

\[ S = \mathcal{F}^{-1}\left( \frac{\mathcal{F}(I_p) + \beta_1 D_1 + \vartheta_1 c_1\,\mathcal{F}(H_\epsilon(\Psi)) + \vartheta_2 c_2\big(\mathcal{F}(1) - \mathcal{F}(H_\epsilon(\Psi))\big)}{\mathcal{F}(1) + \beta_1 D_2 + \vartheta_1\,\mathcal{F}(H_\epsilon(\Psi)) + \vartheta_2\big(\mathcal{F}(1) - \mathcal{F}(H_\epsilon(\Psi))\big)} \right), \tag{28} \]

where \(\mathcal{F}\) denotes the fast Fourier transform,

\[ D_1 = \mathcal{F}\big( \partial_x^T * h_p + \partial_y^T * v_p \big), \]

and

\[ D_2 = \mathcal{F}\big( \partial_x^T * \partial_x \big) + \mathcal{F}\big( \partial_y^T * \partial_y \big). \]

Finally, minimizing Eq. (24) for Ψ while keeping h_p, v_p, S, c_1 and c_2 fixed, and then applying the gradient descent method, we get the PDE:

\[ \frac{\partial\Psi}{\partial t} = \delta_\epsilon(\Psi)\left[ \mu\,\nabla\cdot\Big(\frac{\nabla\Psi}{|\nabla\Psi|}\Big) + \vartheta_1 (S - c_1)^2 - \vartheta_2 (S - c_2)^2 \right]. \tag{29} \]

This model performs well when segmenting images with texture, but it produces a staircase effect, does not work well when boundaries are not sharp, and is sensitive to the contour selection. For these reasons, we develop here a fast global segmentation model for texture images.

239
1st National Conference on Mathematical Sciences in Engineering Applications (NCMSEA - 18), April 18 - 19, 2018

5 Proposed Segmentation Model


The LCV model with EST does not work for segmentation of richly textured images. Badshah et al. [3] recently developed a new model to segment texture images which works well, but it has issues when segmenting images with intensity inhomogeneity, produces a staircase effect, depends on the choice of the initial contour, and does not perform well when boundaries are not sharp. To overcome these drawbacks, we develop a new variational model for segmentation of texture images. Our model is the unified form of the LGM and FGM models, which smooths and segments texture images jointly, and whose energy functional is:

\[ \min_{0 \le u \le 1} \Big\{ \sum_p |S_p - I_p|^2 + \lambda E(h_p, v_p) + \beta_1\Big( \big(\tfrac{\partial S_p}{\partial x} - h_p\big)^2 + \big(\tfrac{\partial S_p}{\partial y} - v_p\big)^2 \Big) + \vartheta \sum_p \big( |S_p - c_1|^2 + |S_p - c_2|^2 \big)u + TV_g(u) \Big\}. \tag{30} \]

The constrained problem of Eq. (30) is converted to an unconstrained optimization problem:

\[ \min_{u} \Big\{ \sum_p |S_p - I_p|^2 + \lambda E(h_p, v_p) + \beta_1\Big( \big(\tfrac{\partial S_p}{\partial x} - h_p\big)^2 + \big(\tfrac{\partial S_p}{\partial y} - v_p\big)^2 \Big) + \vartheta \sum_p \Big( \big( |S_p - c_1|^2 + |S_p - c_2|^2 \big)u + \alpha\,\nu(u) \Big) + TV_g(u) \Big\}. \tag{31} \]

Minimizing Eq. (31) with respect to u, using the dual formulation of the TV norm:

\[ \min_{u, u_1} \Big\{ \sum_p |S_p - I_p|^2 + \lambda E(h_p, v_p) + \beta_1\Big( \big(\tfrac{\partial S_p}{\partial x} - h_p\big)^2 + \big(\tfrac{\partial S_p}{\partial y} - v_p\big)^2 \Big) + \vartheta \sum_p \Big( \big( |S_p - c_1|^2 + |S_p - c_2|^2 \big)u_1 + \alpha\,\nu(u_1) \Big) + TV_g(u) + \frac{1}{2\Theta}\|u - u_1\|^2 \Big\}. \tag{32} \]

Minimizing over (h_p, v_p), S, c_1 and c_2 using the AM algorithm [44], we get four subproblems.

5.0.1 Computing (h_p, v_p)


Keeping S, u, u_1, c_1 and c_2 fixed, minimizing Eq. (32) for (h_p, v_p) we have:

\[ \min_{h_p, v_p} \left\{ \sum_p \frac{\lambda}{\beta_1} E(h_p, v_p) + \left( \Big(\frac{\partial S_p}{\partial x} - h_p\Big)^2 + \Big(\frac{\partial S_p}{\partial y} - v_p\Big)^2 \right) \right\}. \tag{33} \]

Solving Eq. (33) we get:

\[ (h_p, v_p) = \begin{cases} (0, 0) & (\partial_x S_p)^2 + (\partial_y S_p)^2 < \lambda/\beta_1, \\[2pt] (\partial_x S_p,\, \partial_y S_p) & \text{otherwise.} \end{cases} \tag{34} \]
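In the discrete setting, Eq. (34) is a pixel-wise hard threshold on the gradient of S; a minimal sketch (forward differences with periodic wrap are our discretization choice):

```python
import numpy as np

def update_hv(S, lam, beta1):
    """Hard-thresholding step of Eq. (34): the gradient is kept only at
    pixels whose squared gradient magnitude reaches lam / beta1."""
    gx = np.roll(S, -1, axis=1) - S   # forward difference in x
    gy = np.roll(S, -1, axis=0) - S   # forward difference in y
    keep = gx ** 2 + gy ** 2 >= lam / beta1
    return gx * keep, gy * keep
```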

5.0.2 Computing c_1 and c_2


Keeping h_p, v_p, S and Ψ fixed, minimizing Eq. (32) for c_1 and c_2 we get:

\[ \min_{c_1, c_2} \left\{ \vartheta \Big( \sum_p |S_p - c_1|^2\, H_\epsilon(\Psi) + \sum_p |S_p - c_2|^2\, \big(1 - H_\epsilon(\Psi)\big) \Big) \right\}, \tag{35} \]

which gives

\[ c_1 = \frac{\sum_p S_p H_\epsilon(\Psi)}{\sum_p H_\epsilon(\Psi)}, \tag{36} \]

and

\[ c_2 = \frac{\sum_p S_p \big(1 - H_\epsilon(\Psi)\big)}{\sum_p \big(1 - H_\epsilon(\Psi)\big)}. \tag{37} \]

5.0.3 Computing u and u_1


Keeping h_p, v_p, S, c_1 and c_2 fixed, minimizing Eq. (32) for u and u_1 we have:

\[ \min_u \left\{ TV_g(u) + \frac{1}{2\Theta}\|u - u_1\|^2 \right\}, \tag{38} \]

and

\[ \min_{u_1} \left\{ \sum_p \Big( \vartheta\big( |S_p - c_1|^2 + |S_p - c_2|^2 \big)u_1 + \frac{1}{2\Theta}\|u - u_1\|^2 \Big) \right\}. \tag{39} \]

Solving Eq. (38) we get:

\[ u = u_1 - \Theta\,\mathrm{div}\,p, \tag{40} \]

where p = (p_1, p_2) satisfies:

\[ g(x)\,\nabla(\Theta\,\mathrm{div}\,p - u_1) - \big|\nabla(\Theta\,\mathrm{div}\,p - u_1)\big|\,p = 0. \tag{41} \]

Solving Eq. (41) by a fixed-point method we get:

\[ p^{n+1} = \frac{p^n + \delta t\,\nabla\big(\mathrm{div}(p^n) - u_1/\Theta\big)}{1 + \dfrac{\delta t}{g(x)}\,\big|\nabla\big(\mathrm{div}(p^n) - u_1/\Theta\big)\big|}. \tag{42} \]

The solution of Eq. (39) is given by:

\[ u_1 = \min\Big\{ \max\Big\{ u(x) - \Theta\,\vartheta\big( |S_p - c_1|^2 + |S_p - c_2|^2 \big),\; 0 \Big\},\; 1 \Big\}. \tag{43} \]
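The u_1 update of Eq. (43) is a pointwise clipped gradient step; a one-line sketch (names are ours):

```python
import numpy as np

def update_u1(u, S, c1, c2, theta, vartheta):
    """Clipped pointwise update of Eq. (43)."""
    r = vartheta * ((S - c1) ** 2 + (S - c2) ** 2)
    return np.clip(u - theta * r, 0.0, 1.0)
```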

5.0.4 Computing S_p

Keeping h_p, v_p, u, u_1, c_1 and c_2 fixed, minimizing Eq. (32) for S_p we have:

\[ \min_{S_p} \left\{ \sum_p |S_p - I_p|^2 + \beta_1\Big( \big(\tfrac{\partial S_p}{\partial x} - h_p\big)^2 + \big(\tfrac{\partial S_p}{\partial y} - v_p\big)^2 \Big) + \vartheta \sum_p \big( |S_p - c_1|^2 + |S_p - c_2|^2 \big)u_1 \right\}. \tag{44} \]

Differentiating with respect to S_p:

\[ 2(S_p - I_p) + \beta_1\Big( 2\,\partial_x^T(\partial_x S_p - h_p) + 2\,\partial_y^T(\partial_y S_p - v_p) \Big) + 2\vartheta\Big( (S_p - c_1) + (S_p - c_2) \Big)u_1 = 0. \tag{45} \]

So Eq. (45) gives:

\[ (S_p - I_p) + \beta_1\Big( \partial_x^T(\partial_x S_p - h_p) + \partial_y^T(\partial_y S_p - v_p) \Big) + \vartheta\Big( (S_p - c_1) + (S_p - c_2) \Big)u_1 = 0. \tag{46} \]

After simplification we have

\[ \Big( 1 + 2\vartheta u_1 + \beta_1\big(\partial_x^T * \partial_x + \partial_y^T * \partial_y\big) \Big) S = I + \vartheta(c_1 + c_2)u_1 + \beta_1\big(\partial_x^T * h + \partial_y^T * v\big). \tag{47} \]

Applying the convolution property and the fast Fourier transform to Eq. (47) we get:

\[ \Big( \mathcal{F}(1) + 2\vartheta\,\mathcal{F}(u_1) + \beta_1\,\mathcal{F}\big(\partial_x^T * \partial_x + \partial_y^T * \partial_y\big) \Big)\,\mathcal{F}(S) = \mathcal{F}(I) + \vartheta(c_1 + c_2)\,\mathcal{F}(u_1) + \beta_1\,\mathcal{F}\big(\partial_x^T * h + \partial_y^T * v\big), \tag{48} \]

so that

\[ \mathcal{F}(S) = \frac{\mathcal{F}(I) + \vartheta(c_1 + c_2)\,\mathcal{F}(u_1) + \beta_1\,\mathcal{F}\big(\partial_x^T * h + \partial_y^T * v\big)}{\mathcal{F}(1) + 2\vartheta\,\mathcal{F}(u_1) + \beta_1\,\mathcal{F}\big(\partial_x^T * \partial_x + \partial_y^T * \partial_y\big)}. \tag{49} \]

Applying the inverse Fourier transform we get:

\[ S = \mathcal{F}^{-1}\left( \frac{\mathcal{F}(I) + \vartheta(c_1 + c_2)\,\mathcal{F}(u_1) + \beta_1\,\mathcal{F}\big(\partial_x^T * h + \partial_y^T * v\big)}{\mathcal{F}(1) + 2\vartheta\,\mathcal{F}(u_1) + \beta_1\,\mathcal{F}\big(\partial_x^T * \partial_x + \partial_y^T * \partial_y\big)} \right). \tag{50} \]
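The Fourier-domain solve (50) can be sketched as follows; to keep the linear system diagonal in the Fourier domain we replace the spatially varying weight u_1 by its mean, a simplification of ours (the kernel construction and periodic boundary handling are also assumptions):

```python
import numpy as np

def update_S(I, h, v, u1, c1, c2, beta1, vartheta):
    """Sketch of Eq. (50): closed-form S update in the Fourier domain,
    with u1 approximated by its mean value."""
    ub = float(np.mean(u1))
    dx = np.zeros_like(I); dx[0, 0], dx[0, -1] = -1.0, 1.0  # forward diff in x
    dy = np.zeros_like(I); dy[0, 0], dy[-1, 0] = -1.0, 1.0  # forward diff in y
    Fdx, Fdy = np.fft.fft2(dx), np.fft.fft2(dy)
    num = (np.fft.fft2(I + vartheta * (c1 + c2) * ub)
           + beta1 * (np.conj(Fdx) * np.fft.fft2(h)
                      + np.conj(Fdy) * np.fft.fft2(v)))
    den = 1.0 + 2.0 * vartheta * ub + beta1 * (np.abs(Fdx) ** 2 + np.abs(Fdy) ** 2)
    return np.real(np.fft.ifft2(num / den))
```

As a sanity check, when u1 = 0 and (h, v) equal the gradients of I, the update returns I unchanged, consistent with Eq. (47).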

6 Experiments on texture models


In this section, experiments with our proposed model on different texture images are compared with the LCV model with EST [42] and the smoothing and segmentation model of Badshah et al. [3]. The proposed model is tested on different types of natural and synthetic texture images. The parameters used in the proposed model are λ for smoothing, β_1 for updating and Θ for the segmentation result.
Fig. 1 shows that the segmentation result of our proposed model is better than those of the LCV model with EST and the Badshah et al. model. Here, (a) is the given synthetic image, (b) is the result of the LCV texture model, (c) is the result of the Badshah et al. model and (d) is the segmented result of our proposed model.
In Fig. 2, (a) is the original image and (b) shows the LCV segmentation result. In the next row, (c) is the result of the Badshah et al. model and (d) shows the result of our proposed model. Once again the proposed model performs better.
In Fig. 3, (a) is the original real image, (b) is the result of the LCV texture model, (c) is the result of the Badshah et al. model and (d) shows the result of our proposed model. These results show that the proposed model can efficiently segment real images.


Figure 1: 1(a) is the input image, 1(b) LCV-model with EST: σ = 5.0, µ = 65, iterations=500, 1(c) Badshah et al. model: λ = 0.09, µ = 0.0002, iterations=100, and 1(d) Proposed model: λ = 0.00002, Θ = 0.14, iterations=100.



Figure 2: Texture image segmentation result: 2(a) Original image, 2(b) LCV-model with EST: σ = 4.0, µ = 65, iterations=500, 2(c) Badshah et al. model: λ = 0.002, µ = 0.2, iterations=60, 2(d) Proposed model: λ = 0.00002, Θ = 0.17, iterations=40.



Figure 3: Texture image segmentation result: 3(a) Original image, 3(b) LCV-model with EST: σ = 4.0, µ = 650, iterations=500, 3(c) Badshah et al. model: λ = 0.9, µ = 0.01, iterations=20, 3(d) Proposed model: λ = 0.000002, Θ = 9, iterations=100.



Figure 4: Texture image segmentation result: 4(a) Original image, 4(b) LCV-model with EST: σ = 3.5, µ = 0.65, iterations=500, 4(c) Badshah et al. model: λ = 0.9, µ = 0.001, iterations=100, 4(d) Proposed model: λ = 0.000002, Θ = 2, iterations=100.


In Fig. 4, (a) is the original image and (b) shows the LCV result. In the next row, (c) is the result of the Badshah et al. model and (d) shows the result of our proposed model. Once again the proposed model performs better.
Some further experiments with the proposed model are shown in Fig. 5.



Figure 5: Results of the proposed model are given in Fig. 5(a) to Fig. 5(j). The parameter values are: Fig. 5(a): λ = 0.00002, Θ = 0.14, iterations=30; Fig. 5(b): λ = 0.00002, Θ = 1, iterations=55; Fig. 5(c): λ = 0.000002, Θ = 4, iterations=200; Fig. 5(d): λ = 0.000002, Θ = 4, iterations=175; Fig. 5(e): λ = 0.00002, Θ = 1, iterations=80; Fig. 5(f): λ = 0.00002, Θ = 0.14, iterations=100; Fig. 5(g): λ = 0.000002, Θ = 0.5, iterations=110; Fig. 5(h): λ = 0.00002, Θ = 0.9, iterations=100; Fig. 5(i): λ = 0.00002, Θ = 1, iterations=60; Fig. 5(j): λ = 0.0002, Θ = 0.14, iterations=10.


7 Conclusion
The LCV model with EST and the smoothing and segmentation model via the L0 norm perform well, but both are sensitive to the selection of the initial contour. The Badshah et al. model also produces a staircase effect and does not work when boundaries are not sharp. We therefore developed a new convex variational model, a combination of the LGM and FGM models, which does not depend on the selection of the initial guess. The L0 gradient norm is used for smoothing, and the FGM fitting term is used for segmentation of the smoothed image. The alternating minimization algorithm is applied to solve the model efficiently. Experiments show that the new model works better than the LCV model with EST [42] and the Badshah et al. model [3].

References
[1] Mohand Saı̈d Allili and Djemel Ziou. Object tracking in videos using adaptive mixture models
and active contours. Neurocomputing, 71(10):2001–2011, 2008.

[2] Gilles Aubert, Michel Barlaud, Olivier Faugeras, and Stéphanie Jehan-Besson. Image segmenta-
tion using active contours: Calculus of variations or shape gradients. SIAM Journal on Applied
Mathematics, 63(6):2128–2154, 2003.

[3] Noor Badshah and Hassan Shah. Model for smoothing and segmentation of texture images using
L0 norm. IET Image Processing, 12(2):285–291, 2017.

[4] Josef Bigun, Goesta H. Granlund, and Johan Wiklund. Multidimensional orientation estimation
with applications to texture analysis and optical flow. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 13(8):775–790, 1991.

[5] Michael J Black, Guillermo Sapiro, David H Marimont, and David Heeger. Robust anisotropic
diffusion. IEEE Transactions on image processing, 7(3):421–432, 1998.

[6] Alan C. Bovik, Marianna Clark, and Wilson S. Geisler. Multichannel texture analysis using local-
ized spatial filters. IEEE transactions on pattern analysis and machine intelligence, 12(1):55–73,
1990.

[7] Xavier Bresson, Selim Esedoğlu, Pierre Vandergheynst, Jean-Philippe Thiran, and Stanley Osher.
Fast global minimization of the active contour/snake model. Journal of Mathematical Imaging
and Vision, 28(2):151–167, 2007.

[8] Vicent Caselles, Ron Kimmel, and Guillermo Sapiro. Geodesic active contours. International
journal of computer vision, 22(1):61–79, 1997.

[9] Antonin Chambolle. An algorithm for total variation minimization and applications. Journal of
Mathematical imaging and vision, 20(1-2):89–97, 2004.

[10] Antonin Chambolle and Pierre-Louis Lions. Image recovery via total variation minimization and
related problems. Numerische Mathematik, 76(2):167–188, 1997.

[11] Tony F Chan and Luminita A Vese. Active contours without edges. IEEE Transactions on
image processing, 10(2):266–277, 2001.

[12] Yunjie Chen, Jianwei Zhang, Arabinda Mishra, and Jianwei Yang. Image segmentation and bias
correction via an improved level set method. Neurocomputing, 74(17):3520–3530, 2011.


[13] Daniel Cremers, Mikael Rousson, and Rachid Deriche. A review of statistical approaches to
level set segmentation: integrating color, texture, motion and shape. International journal of
computer vision, 72(2):195–215, 2007.

[14] José-Jesús Fernández and Sam Li. An improved algorithm for anisotropic nonlinear diffusion for
denoising cryo-tomograms. Journal of structural biology, 144(1):152–161, 2003.

[15] Itzhak Fogel and Dov Sagi. Gabor filters as texture discriminator. Biological cybernetics,
61(2):103–113, 1989.

[16] Wolfgang Förstner and Eberhard Gülch. A fast operator for detection and precise location
of distinct points, corners and centres of circular features. In Proc. ISPRS intercommission
conference on fast processing of photogrammetric data, pages 281–305, 1987.

[17] S Fu and C Zhang. Adaptive non-convex total variation regularisation for image restoration.
Electronics Letters, 46(13):1, 2010.

[18] Shujun Fu and Caiming Zhang. Fringe pattern denoising via image decomposition. Optics letters,
37(3):422–424, 2012.

[19] Song Gao and Tien D Bui. Image segmentation and selective smoothing by using mumford-shah
model. IEEE Transactions on Image Processing, 14(10):1537–1549, 2005.

[20] Simona E Grigorescu, Nicolai Petkov, and Peter Kruizinga. Comparison of texture features based
on gabor filters. IEEE Transactions on Image processing, 11(10):1160–1167, 2002.

[21] Lei He, Zhigang Peng, Bryan Everding, Xun Wang, Chia Y Han, Kenneth L Weiss, and William G
Wee. A comparative study of deformable contour methods on medical image segmentation. Image
and Vision Computing, 26(2):141–163, 2008.

[22] Michael Kass, Andrew Witkin, and Demetri Terzopoulos. Snakes: Active contour models. In-
ternational journal of computer vision, 1(4):321–331, 1988.

[23] Ron Kimmel, Arnon Amir, and Alfred M. Bruckstein. Finding shortest paths on surfaces us-
ing level sets propagation. IEEE Transactions on Pattern Analysis and Machine Intelligence,
17(6):635–640, 1995.

[24] Gerald Kuhne, Joachim Weickert, Oliver Schuster, and Stephan Richter. A tensor-driven active
contour model for moving object segmentation. In Image Processing, 2001. Proceedings. 2001
International Conference on, volume 2, pages 73–76. IEEE, 2001.

[25] Chunming Li, Chiu-Yen Kao, John C Gore, and Zhaohua Ding. Implicit active contours driven
by local binary fitting energy. In 2007 IEEE Conference on Computer Vision and Pattern
Recognition, pages 1–7. IEEE, 2007.

[26] Chunming Li, Chiu-Yen Kao, John C Gore, and Zhaohua Ding. Minimization of region-scalable
fitting energy for image segmentation. IEEE transactions on image processing, 17(10):1940–1949,
2008.

[27] Maria Lianantonakis and Yvan R Petillot. Sidescan sonar segmentation using texture descriptors
and active contours. IEEE Journal of Oceanic Engineering, 32(3):744–752, 2007.

[28] Akshaya K Mishra, Paul W Fieguth, and David A Clausi. Decoupled active contour (dac)
for boundary detection. IEEE Transactions on Pattern Analysis and Machine Intelligence,
33(2):310–324, 2011.

[29] David Mumford and Jayant Shah. Optimal approximations by piecewise smooth functions and
associated variational problems. Communications on pure and applied mathematics, 42(5):577–
685, 1989.

[30] Nikos Paragios and Rachid Deriche. Geodesic active regions and level set methods for supervised
texture segmentation. International Journal of Computer Vision, 46(3):223–247, 2002.

[31] Pietro Perona and Jitendra Malik. Scale-space and edge detection using anisotropic diffusion.
IEEE Transactions on pattern analysis and machine intelligence, 12(7):629–639, 1990.

[32] Mikaël Rousson, Thomas Brox, and Rachid Deriche. Active unsupervised texture segmentation
on a diffusion based feature space. In Computer vision and pattern recognition, 2003. Proceedings.
2003 IEEE computer society conference on, volume 2, pages II–699. IEEE, 2003.

[33] Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal
algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992.

[34] Berta Sandberg, Tony Chan, and Luminita Vese. A level-set and gabor-based active contour
algorithm for segmenting textured images. In UCLA Department of Mathematics CAM report.
Citeseer, 2002.

[35] Ken Tabb, Neil Davey, Rod Adams, and Stella George. The recognition and analysis of animate
objects using neural networks and active contour models. Neurocomputing, 43(1):145–172, 2002.

[36] TN Tan. Texture edge detection by modelling visual cortical channels. Pattern Recognition,
28(9):1283–1298, 1995.

[37] Carlo Tomasi and Roberto Manduchi. Bilateral filtering for gray and color images. In Computer
Vision, 1998. Sixth International Conference on, pages 839–846. IEEE, 1998.

[38] Andy Tsai, Anthony Yezzi, and Alan S Willsky. Curve evolution implementation of the mumford-
shah functional for image segmentation, denoising, interpolation, and magnification. IEEE trans-
actions on Image Processing, 10(8):1169–1186, 2001.

[39] Mark R Turner. Texture discrimination by gabor functions. Biological cybernetics, 55(2-3):71–82,
1986.

[40] Luminita A Vese and Tony F Chan. A multiphase level set framework for image segmentation
using the mumford and shah model. International journal of computer vision, 50(3):271–293,
2002.

[41] Xiao-Feng Wang and De-Shuang Huang. A novel density-based clustering framework by using
level set method. IEEE Transactions on Knowledge and Data Engineering, 21(11):1515–1531,
2009.

[42] Xiao-Feng Wang, De-Shuang Huang, and Huan Xu. An efficient local chan–vese model for image
segmentation. Pattern Recognition, 43(3):603–618, 2010.

[43] Y Wang, W Chen, S Zhou, T Yu, and Y Zhang. Mtv: modified total variation model for image
noise removal. Electronics Letters, 47(10):592–594, 2011.

[44] Yilun Wang, Junfeng Yang, Wotao Yin, and Yin Zhang. A new alternating minimization algo-
rithm for total variation image reconstruction. SIAM Journal on Imaging Sciences, 1(3):248–272,
2008.

[45] Joachim Weickert. Anisotropic diffusion in image processing, volume 1. Teubner Stuttgart, 1998.

[46] Joachim Weickert. Coherence-enhancing diffusion of colour images. Image and Vision Comput-
ing, 17(3):201–212, 1999.

[47] Chunlin Wu and Xue-Cheng Tai. Augmented lagrangian method, dual methods, and split breg-
man iteration for rof, vectorial tv, and high order models. SIAM Journal on Imaging Sciences,
3(3):300–339, 2010.

[48] Qinggang Wu, Jubai An, and Bin Lin. A texture segmentation algorithm based on pca and
global minimization active contour model for aerial insulator images. IEEE Journal of Selected
Topics in Applied Earth Observations and Remote Sensing, 5(5):1509–1518, 2012.

[49] Li Xu, Cewu Lu, Yi Xu, and Jiaya Jia. Image smoothing via L0 gradient minimization. In ACM
Transactions on Graphics (TOG), volume 30, page 174. ACM, 2011.

[50] Kaihua Zhang, Huihui Song, and Lei Zhang. Active contours driven by local image fitting energy.
Pattern recognition, 43(4):1199–1206, 2010.


An Efficient Hybrid Kernel Metric Energy Segmentation Model

Fatima Shoaib, Noor Badshah, Ali Ahmed


Department of Basic Sciences and Islamiat,
University of Engineering and Technology, Peshawar, Pakistan
Fatima.syed12@yahoo.com, noor2knoor@gmail.com, aliahmadmath@gmail.com

ABSTRACT

In this paper, we develop a novel hybridized kernel metric based model. The hybridized kernel
metric is constructed from local and global averages. Introducing the hybrid kernel metric not
only reduces the sensitivity to initial contour placement, it also makes the model more effective
than the LBF, SBGFRLS and KIDS models in segmenting images with intensity inhomogeneity.
The level set technique is used for the evolution of the active contour. To obtain faster
convergence and to enable the active contour to stop at weak or blurred edges, a new SPF
function is added to the evolution equation. Experiments on medical images demonstrate the
advantages of the proposed model over the LBF, SBGFRLS and KIDS models in terms of
efficiency.
Keywords: Segmentation, level set, Active contour, Signed Pressure Force (SPF).

I. Introduction.

Image segmentation is a fundamental problem in image processing and has been the subject
of many theoretical and practical studies [1-3]. Generally, the aim of image segmentation is to
partition an image into non-intersecting, non-overlapping regions according to properties
such as texture or color. Segmentation can therefore simplify the image
representation for image understanding and analysis. The active contour model introduced by Kass
et al. [4] is one of the most successful methods for image segmentation. The basic idea is to
represent a contour as the zero level set of a level set function, and to evolve the curve under
some constraints to extract the required objects. Over the past few decades, active contour
models using level set strategies have shown promising results in image segmentation
[5-12]. Active contour models based on level set methods can be broadly categorized into two basic
types: edge-based methods [5-9] and region-based methods [10-23]. Chan and Vese [24] first
proposed the piecewise constant (PC) model for two-phase image segmentation, based
on a simplified Mumford-Shah functional [25] and the variational level set method. In the PC
model, the edges are delineated not by the gradient but by the Mumford-Shah
functional. The PC model assumes that an image consists of statistically homogeneous regions
whose intensities are clearly different, and segmentation is performed using the intensity
difference of the homogeneous regions. Through the variational level
set method, the energy functional is minimized under the assumption that each homogeneous
region is represented by a constant that approximates its image intensity. The PC model is successful
for images with homogeneous regions. The piecewise smooth (PS) model overcomes this flaw of the C-V model,


but at the cost of heavy computation. Moreover, the PS model cannot handle intensity-inhomogeneous
images in which one part of the object is lighter than the background and another part is darker.
Recently, Li et al. [26] proposed the local binary fitting (LBF) model to overcome the difficulty of
segmenting intensity-inhomogeneous images, improving on the PC model. The LBF model
solves the intensity inhomogeneity and weak boundary segmentation issues by replacing the global
binary fitting energy functional of the PC model with a local binary fitting energy
functional that uses a Gaussian function as the kernel. Intensity-inhomogeneous
medical images are especially common, e.g. X-ray, MR and CT images [27, 28].
The Local Image Fitting (LIF) model [17] introduces a local fitted image energy functional to
extract local information, and then uses Gaussian filtering to regularize the level set
function. Compared with the global region-based models, the local region-based models are
more complex and time-consuming [16].
Local image information is crucial for the accurate segmentation of images with intensity
inhomogeneity. However, local image information is not embedded in popular region-based
active contour models such as the piecewise constant models. The LIF model has three
characteristics. First, the local fitted image (LFI) formulation is defined to extract the local
image information. Second, a Gaussian filter is used to regularize the level set function,
eliminating the need for re-initialization at every iteration. Last, the traditional regularization
term is removed because the level set function of the LIF model is already smoothed by
the Gaussian filter. In this way, the LIF model completes image segmentation with less
computational effort than other local region-based models [15, 18]. However, the LIF model is
sensitive to the local region scale and the initial contour, and easily produces segmentation
errors. Regarding the time complexity of the LIF model: each iteration involves updating the
region parameters, the level set, and the regularization; since the LIF model uses a Gaussian
kernel to regularize the level set, the traditional regularization term can be removed.
This paper is organized as follows. In Section II, we review some classical models and point out
their limitations. The proposed model and its level set flow are presented in Section III. Section IV
demonstrates experimental results on synthetic and real images. The conclusion is given in
Section V.

II. Background
Many image segmentation techniques have been developed in the past, such as threshold-, edge- and
region-based segmentation. Here we discuss the LBF model [29], the SBGFRLS model [30] and the
KIDS model [32].

A. Local Binary Fitting Energy (LBF) Model

Li et al. [29] proposed the LBF model, which embeds local image information and is able to deal
with intensity-inhomogeneous images.
The basic idea is to introduce a kernel function into the energy functional of the LBF model
[29], defined as follows:


E^{LBF}(C, g_1, g_2) = \lambda_1 \int \Big[ \int K_\sigma(x-y)\, |I(y) - g_1(x)|^2 \, dy \Big] dx
                     + \lambda_2 \int \Big[ \int K_\sigma(x-y)\, |I(y) - g_2(x)|^2 \, dy \Big] dx,   (1)

where \lambda_1, \lambda_2 are fixed parameters, I is an input image, K_\sigma is a Gaussian kernel with
standard deviation \sigma, and g_1 and g_2 are two smooth functions that approximate the local image
intensities inside and outside the contour C.
The Gaussian kernel K_\sigma is defined as:

K_\sigma(u) = \frac{1}{(2\pi)^{n/2} \sigma^n} \, e^{-|u|^2 / 2\sigma^2}.   (2)

Minimizing the energy functional in Eq. (1) gives the gradient descent flow:

\frac{\partial \phi}{\partial t} = -\delta_\varepsilon(\phi)\, (\lambda_1 e_1 - \lambda_2 e_2),   (3)

where e_1, e_2 are defined as follows:

e_1(x) = \int K_\sigma(y - x)\, |I(x) - g_1(y)|^2 \, dy,
e_2(x) = \int K_\sigma(y - x)\, |I(x) - g_2(y)|^2 \, dy.   (4)

The functions g_1, g_2 are local image average intensities, given as:

g_1(x) = \frac{K_\sigma * [H_\varepsilon(\phi)\, I(x)]}{K_\sigma * H_\varepsilon(\phi)}, \qquad
g_2(x) = \frac{K_\sigma * [(1 - H_\varepsilon(\phi))\, I(x)]}{K_\sigma * (1 - H_\varepsilon(\phi))}.   (5)
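The local fitting functions g_1, g_2 of Eq. (5) are ratios of Gaussian convolutions, so they can be computed with a standard smoothing filter. A minimal sketch (the array names `I`, `phi` and the smoothed Heaviside form are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_averages(I, phi, sigma=3.0, eps=1.0):
    """Local fitting functions g1, g2 of the LBF model (Eq. (5)).

    I   : 2-D grayscale image array
    phi : level set function, same shape as I
    """
    # Smoothed Heaviside H_eps(phi) selects the region inside the contour.
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    K = lambda f: gaussian_filter(f, sigma)          # K_sigma * f
    tiny = 1e-10                                      # avoid division by zero
    g1 = K(H * I) / (K(H) + tiny)                     # local average inside
    g2 = K((1.0 - H) * I) / (K(1.0 - H) + tiny)       # local average outside
    return g1, g2
```

On a piecewise-constant image, g1 and g2 approach the two region intensities away from the contour.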

A limitation of the LBF model is that it is sensitive to the initial contour [31] and also
computationally costly.

B. Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) Model

The level set formulation of the SBGFRLS model can be written as follows:

\frac{\partial \phi}{\partial t} = spf(I(x))\, \alpha\, |\nabla \phi|.   (6)

The SPF function has values in the range [-1, 1]. It modulates the signs of the pressure
forces inside and outside the region of interest so that the contour shrinks when outside the
object and expands when inside it. The SPF function is defined as:

spf(I(x)) = \frac{I(x) - \frac{c_1 + c_2}{2}}{\max\big(\big| I(x) - \frac{c_1 + c_2}{2} \big|\big)},   (7)

where c_1, c_2 are the average intensities inside and outside the contour. This model may not work
on images having intensity inhomogeneity [33].
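The SPF function of Eq. (7) needs only the two region averages c_1, c_2. A minimal sketch (the region mask derived from the sign of the level set is an illustrative assumption):

```python
import numpy as np

def spf(I, phi):
    """Signed pressure force of Eq. (7): values in [-1, 1], with the sign
    switching across the average of the two region means, so the contour
    shrinks outside the object and expands inside it."""
    inside = phi > 0
    c1 = I[inside].mean()             # average intensity inside the contour
    c2 = I[~inside].mean()            # average intensity outside
    d = I - (c1 + c2) / 2.0
    return d / (np.abs(d).max() + 1e-10)
```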


C. Kernel induced data segmentation (KIDS) Model

Image data is mostly nonlinearly separable. The basic idea of using a kernel function to
transform the image data for segmentation is as follows: instead of seeking accurate
image models, the image data is transformed implicitly via a kernel function, so that the
piecewise constant model becomes applicable.
The energy functional of the model can be expressed in the form:

E(\Gamma, \mu_1, \mu_2) = \int_{R_1} \| \Phi(I(x)) - \Phi(\mu_1) \|^2 \, dx
                        + \int_{R_2} \| \Phi(I(x)) - \Phi(\mu_2) \|^2 \, dx
                        + \lambda \oint_\Gamma ds,   (8)

where R_1, R_2 are two regions, I(x) is the image, \Phi(\cdot) is a nonlinear mapping from the
observation space to a higher-dimensional space, and \Gamma : [0,1] \to \Omega is a closed planar
parametric curve that divides the image domain into the two regions. The first two terms in Eq. (8)
are data terms whereas the third term is the regularization term. The evolution equation for a
two-region segmentation in the kernel-induced space is as follows:

\frac{\partial \Gamma}{\partial t} = -\big( J_K(I(x), \mu_1) - J_K(I(x), \mu_2) + \lambda \kappa \big)\, \vec{n},   (9)

where \kappa is the mean curvature function of \Gamma and J_K(I(x), \mu) is given by

J_K(I(x), \mu) = K(I(x), I(x)) - 2 K(I(x), \mu) + K(\mu, \mu).   (10)

The two averages are given in Eq. (11):

\mu_1 = \frac{\int_{R_1} I(x)\, K(I(x), \mu_1)\, dx}{\int_{R_1} K(I(x), \mu_1)\, dx}, \qquad
\mu_2 = \frac{\int_{R_2} I(x)\, K(I(x), \mu_2)\, dx}{\int_{R_2} K(I(x), \mu_2)\, dx}.   (11)

The KIDS model [32] can segment numerous types of images, including images with slight intensity
inhomogeneity. However, it is not suitable for segmenting images with severe intensity
inhomogeneity, such as computed tomography (CT) and magnetic resonance (MR) images [31].
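The region averages of Eq. (11) are fixed points and can be computed by simple iteration. A minimal sketch, assuming an RBF kernel K(u, v) = exp(-(u - v)^2 / sigma^2) as in [32] (function and array names are illustrative):

```python
import numpy as np

def kids_mean(I_region, sigma=0.5, iters=20):
    """Fixed-point iteration for the kernel-weighted region average mu
    of Eq. (11), over the pixel intensities of one region."""
    K = lambda u, v: np.exp(-((u - v) ** 2) / sigma ** 2)   # RBF kernel
    mu = I_region.mean()                                     # plain average as start
    for _ in range(iters):
        w = K(I_region, mu)                                  # kernel weights
        mu = (I_region * w).sum() / (w.sum() + 1e-10)        # Eq. (11) update
    return mu
```

Unlike the plain average of the PC model, the kernel weights downweight pixels far from the current estimate, making the average robust to outlying intensities.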
III. Proposed Model

We develop a hybridized kernel metric based on the global and local averages defined in Eqs.
(11) and (5). Based on this kernel metric, we develop a new model for the segmentation of
images having intensity inhomogeneity. To speed up the proposed model we design a
new SPF function based on local and global averages.


The energy functional of our proposed model is as follows:

F(\phi, c_1, c_2) = \int \big(1 - K(I(x), c_1)\big)\, H(\phi)\, dx
                  + \int \big(1 - K(I(x), c_2)\big)\, \big(1 - H(\phi)\big)\, dx,   (12)

where

K(u, v) = e^{-|u - v|^2 / \sigma^2}.   (13)

After putting the values of the global averages c_1, c_2 and the local averages g_1, g_2 in Eq. (13)
we get a novel kernel metric:

K(I(x), c_1) = \exp\Big( -\frac{|I(x) - c_1|^2 + |I(x) - g_1(x)|^2}{\sigma^2} \Big),

K(I(x), c_2) = \exp\Big( -\frac{|I(x) - c_2|^2 + |I(x) - g_2(x)|^2}{\sigma^2} \Big),   (14)

where the local averages g_1, g_2 are taken from the LBF model.

\sigma is the bandwidth parameter. It can be estimated from the variance of the distances d_i and is
given by Eq. (15):

\sigma = \Big( \frac{1}{n-1} \sum_{i=1}^{n} (d_i - \bar d)^2 \Big)^{1/2},   (15)

where d_i is the absolute distance from I(x_i) to \bar I, defined as d_i = |I(x_i) - \bar I|. The
average value \bar d can be computed by Eq. (16):

\bar d = \frac{1}{n} \sum_{i=1}^{n} d_i.   (16)

The average intensity of the image, \bar I, is given below:

\bar I = \frac{1}{n} \sum_{i=1}^{n} I(x_i).
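The bandwidth estimate of Eqs. (15)-(16) is simply the sample standard deviation of the absolute distances d_i = |I(x_i) - Ī|. A minimal sketch:

```python
import numpy as np

def bandwidth(I):
    """Bandwidth parameter sigma of Eq. (15): sample standard deviation
    of the absolute distances of the pixel intensities from the image mean."""
    I = np.asarray(I, dtype=float).ravel()
    d = np.abs(I - I.mean())                    # d_i = |I(x_i) - I_bar|
    d_bar = d.mean()                            # Eq. (16)
    n = d.size
    return np.sqrt(((d - d_bar) ** 2).sum() / (n - 1))
```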

So the energy functional is given in Eq. (17):

F(\phi) = \int \big(1 - K(I(x), c_1)\big)\, H(\phi)\, dx
        + \int \big(1 - K(I(x), c_2)\big)\, \big(1 - H(\phi)\big)\, dx.   (17)

We take the global averages from Eq. (11). We minimize the energy functional F with respect to
\phi. The Euler-Lagrange equation is as follows:

\frac{\partial F}{\partial \phi} = \int \big(1 - K(I(x), c_1)\big)\, \delta(\phi)\, dx
                                 - \int \big(1 - K(I(x), c_2)\big)\, \delta(\phi)\, dx,   (18)

\big[ \big(1 - K(I(x), c_1)\big) - \big(1 - K(I(x), c_2)\big) \big]\, \delta(\phi) = 0.   (19)

The steepest descent equation is given as

\frac{\partial \phi}{\partial t} = -\frac{\partial F}{\partial \phi},   (20)

\frac{\partial \phi}{\partial t} = -\big[ \big(1 - K(I(x), c_1)\big) - \big(1 - K(I(x), c_2)\big) \big]\, \delta(\phi).   (21)

The novel SPF function, built from the hybrid kernels of Eq. (14), is given below:

spf(I(x)) = \frac{K(I(x), c_1) - K(I(x), c_2)}{\max\big( \big| K(I(x), c_1) - K(I(x), c_2) \big| \big)}.   (22)

Finally, we add the novel SPF function to obtain faster convergence of the level set. The final
evolution equation becomes:

\frac{\partial \phi}{\partial t} = -\big[ \big(1 - K(I(x), c_1)\big) - \big(1 - K(I(x), c_2)\big) \big]\, \delta(\phi)
                                 + spf(I(x))\, |\nabla \phi|.   (23)

It is important to note that we do not use a length term for the regularization of the level set
function; instead we use a Gaussian smoothing filter.
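Regularizing the level set with a Gaussian filter instead of a length term reduces each iteration to an explicit update followed by a smoothing pass. A minimal sketch of this step (the update term is left abstract; names are illustrative):

```python
from scipy.ndimage import gaussian_filter

def evolve(phi, update, dt=0.1, sigma=1.0):
    """One evolution step: explicit update of the level set followed by
    Gaussian smoothing, which replaces the usual length/penalty terms
    and removes the need for re-initialization."""
    phi = phi + dt * update             # gradient-descent step
    return gaussian_filter(phi, sigma)  # keeps phi smooth
```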

IV. Numerical Results


To validate the effectiveness of the proposed model, experiments on medical images are
given. This section analyzes the performance and effectiveness of the proposed kernel
model and compares it with the LBF, SBGFRLS and KIDS models on a number of different
medical images. In each case the parameters are kept fixed in the proposed model, whereas
\sigma can change according to the image.

Image 1 LBF Model SBGFRLS Model KIDS Model Proposed Model

Fig. (a) Given image and segmentation results in a row by the LBF, SBGFRLS, KIDS and
proposed models.



Image 2 LBF Model SBGFRLS Model KIDS Model Proposed Model

Fig. (b) Given image and segmentation results in a row by the LBF, SBGFRLS, KIDS and
proposed kernel models.

Clearly, it can be seen from the segmented results that the proposed, LBF, SBGFRLS
and KIDS models are appropriate for images having intensity inhomogeneity, but our model is
more efficient because it takes fewer iterations, and the existing models fail to segment some of
the images. The proposed model is applied to Figs. (a)-(e) with the parameters for
Images 1-5. Table 1 shows the improvement of the proposed strategy in speeding up
convergence. The segmented results show that the proposed kernel model performs
satisfactorily on images having intensity inhomogeneity.
The proposed kernel model is much more efficient on different medical images, taking fewer
iterations and less CPU time than the LBF, SBGFRLS and KIDS models. All iterations and CPU
times are provided in Table 1. Experiments are done on different skin lesion images.

Image 3 LBF Model SBGFRLS Model KIDS Model Proposed Model

Fig. (c) Given image and segmentation results in a row by the LBF, SBGFRLS, KIDS and
proposed models.


Image 4 LBF Model SBGFRLS Model KIDS Model Proposed Model

Fig. (d) Given image and segmentation results in a row by the LBF, SBGFRLS, KIDS and
proposed models. The SBGFRLS and KIDS models do not perform well on Image 4.

Image 5 LBF Model SBGFRLS Model KIDS Model Proposed Model

Fig. (e) Given image and segmentation results in a row by the LBF, SBGFRLS, KIDS and
proposed models. The SBGFRLS and KIDS models do not perform well on Image 5.

Table 1. Results of the proposed model and the LBF, SBGFRLS and KIDS models on different
medical images.

          LBF Model          SBGFRLS Model      KIDS Model         Proposed Model
Images    Iter.   CPU(s)    Iter.   CPU(s)     Iter.    CPU(s)    Iter.   CPU(s)
Image1    3000    136       400     17         3000     45        300     12
Image2    5000    315       600     19         20,000   328       400     16
Image3    600     18        500     21         9000     158       100     3
Image4    1900    143       1500    121        8000     120       450     50
Image5    3500    295       511     39         10000    279       300     22

V. Conclusion

In this paper, we have developed a novel hybridized kernel metric based model. The hybridized
kernel metric is constructed from local and global averages. We observed that introducing the
hybrid kernel metric reduces the sensitivity to initial contour placement. The method can
accurately detect objects having blurred or broken boundaries. It is more efficient in segmenting

images having intensity inhomogeneity than the LBF, SBGFRLS and KIDS models, and it has a
faster convergence rate than these existing models: the comparison shows that the proposed model
segments images in fewer iterations and less time than the LBF, SBGFRLS and KIDS models. To
obtain faster convergence and to enable the active contour to stop at weak or blurred
edges, a new SPF function is added to the evolution equation. Experimental results on medical
images show that the proposed model is more efficient than the LBF, SBGFRLS and KIDS
models.
References

[1] J.M. Morel, S. Solimini, Variational Methods in Image Segmentation, Birkhäuser Boston, Boston, MA, 1995 .
[2] G. Aubert, P. Kornprobst , Mathematical Problems in Image Processing, Springer, New York, 2006 .
[3] D. Cremers, M. Rousson , R. Deriche , A review of statistical approaches to level set segmentation: integrating color,
texture, motion and shape, Int. J. Comput. Vis. 72 (2) (2007) 195-215.
[4] M. Kass, A. Witkin , D. Terzopoulos , Snakes: active contour models, Int. J. Com- put. Vis. 1 (4) (1988) 321-331.
[5] V. Caselles, F. Catté, T. Coll, F. Dibos, A geometric model for active contours in image processing, Numerische
Math. 66 (1) (1993) 1-31.
[6] R. Kimmel, A . Amir, A .M. Bruckstein , Finding shortest paths on surfaces using level sets propagation, IEEE
Trans. Pattern Anal.Mach. Intell. 17 (6) (1995) 635-640.
[7] C. Xu , J.L. Prince , Snakes, shapes, and gradient vector flow, IEEE Trans. Image Process. 7 (3) (1998) 359-369.
[8] A. Vasilevskiy , K. Siddiqi , Flux maximizing geometric flows, IEEE Trans. Pattern Anal. Mach. Intell. 24 (12) (2002)
1565-1578.
[9] C. Li , C. Xu , C. Gui , M.D. Fox , Level set evolution without re-initialization: a new variational formulation, in:
Proceeding of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1(2005)
430-436.
[10] R. Ronfard, Region-based strategies for active contour models, Int. J. Comput. Vis. 13 (2) (1994) 229-251.
[11] C. Samson, L. Blanc-Féraud, G. Aubert , J. Zerubia , A variational model for im- age classification and restoration,
IEEE Trans. Pattern Anal. Mach. Intell. 22 (5) (20 0 0) 460-472.
[12] L.A. Vese , T.F. Chan , A multiphase level set framework for image segmentation using the Mumford and Shah
model, Int. J. Comput. Vis. 50 (3) (2002) 271-293.
[13] T.F. Chan, L.A. Vese, Active contours without edges, IEEE Trans. Image Process. 10 (2) (2001) 266-277.
[14] J. Lie, M. Lysaker, X.-C. Tai, A binary level set model and some applications to Mumford-Shah image
segmentation, IEEE Trans. Image Process. 15 (5) (2006) 1171-1181.
[15] C. Li , C.Y. Kao , J.C. Gore , Z. Ding , Minimization of region-scalable fitting energy for image segmentation, IEEE
Trans. Image Process. 17 (10) (2008) 1940-1949.
[16] M. Ben Salah , A. Mitiche , I. Ben Ayed , Effective level set image segmentation with a kernel induced data term,
IEEE Trans. Image Process. 19 (1) (2010) 220-232.
[17] K. Zhang , H. Song , L. Zhang , Active contours driven by local image fitting energy, Pattern Recognit. 43 (4) (2010)
1199-1206 .
[18] C. Li , R. Huang , Z. Ding , J.C. Gatenby , D.N. Metaxas , J.C. Gore ,A level set method for image segmentation in
the presence of intensity inhomogeneities with application to MRI, IEEE Trans. Image Process.
20 (7) (2011) 2007-2016.
[19] S. Liu , Y. Peng , A local region-based Chan–Vese model for image segmentation, Pattern Recognit. 45 (7) (2012)
2769-2779.
[20] Y. Wang , S. Xiang , C. Pan , L. Wang , G. Meng , Level set evolution with locally linear classification for image
segmentation, Pattern Recognit. 46 (6) (2013) 1734–1746.
[21] H. Min , W. Jia , X.-F. Wang , Y. Zhao , R.-X. Hu , Y.-T. Luo, F. Xue , J.-T. Lu, An in- tensity–texture model based
level set method for image segmentation, Pattern Recognit. 48 (4) (2015) 1547–1562.
[22] C. Huang , L. Zeng , An active contour model for the segmentation of images with intensity inhomogeneities and
bias field estimation, PloS One 10 (4) (2015) 120-399 .
[23] C. Li, C.-Y. Kao, J.C. Gore, Z. Ding, Implicit active contours driven by local binary fitting energy, IEEE
Conference on Computer Vision and Pattern Recognition (CVPR) (2007) 1-7.


[24] Chan T., Vese L., Active contours without edges, IEEE Trans. Image Process. 10 (2) (2001) 266-277.
[25] Mumford D., Shah J., Optimal approximation by piecewise smooth functions and associated variational problems,
Commun. Pure Appl. Math. 42 (1989) 577-685.
[26] C.M.Li, C.Kao, J.Gore, Z.Ding, Implicit active contours driven by local binary fitting energy,
IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2007) 1-7.
[27] Wang Li, LiChunming, Sun Quansen, et al, Active contours driven by local and global intensity
fitting energy with application to brain MR image segmentation, Computerized Medical Imaging
and Graphics 33 (2009) 520-531.
[28] Wang X., Huang D., Xu H., An efficient local Chan-Vese model for image
segmentation, Pattern Recognition 43 (2010) 603-618.
[29] Chunming, L., Chiu-Yen, K., John, C., Zhaohua, D., Implicit Active Contours Driven by Local Binary Fitting
Energy,Computer Vision and Pattern Recognition, IEEE Conference on (2007) 1-7.
[30] Kaihua, Z., Lei, Z., Huihui, S., Wengang, Z., Active Contours with Selective Local or Global Segmentation: A New
Formulation and Level Set Method, Image and Vision Computing. 28(4) (2010) 668-676.
[31] Na, S., Jinxiao, P., The improved local binary fitting model, International Journal of Digital
Content Technology and its Applications (JDCTA). 6(1) (2012) 15-22.
[32] Salah, M.,Mitiche, A. Effective Level Set Image Segmentation with a Kernel Induced Data Term, IEEE Transactions
on Image Processing. 19 (1) (2010) 220-232.
[33] Thi-Thao, T., Van-Truong, P., Yun-Jen, C.,Kuo-Kai, S.,Active Contour with Selective Local or Global
Segmentation for Intensity Inhomogeneous Image,IEEE. 1 (2010) 306-310.
[34] Ying, C. Z., He, G., Feng, C., Hongji, Y., Weighted kernel mapping model with spring simulation based watershed
transformation for level set image segmentation, Neurocomputing 249 (2017) 1-18.


Compact higher order scheme for solving the 1-D fractional diffusion equation
Fazal Ghaffar∗1 and Noor Badshah†2
1 Department of Mathematics, Govt: Jahanzeb P.G. College, Saidu Sharif, Swat
2 Department of Basic Sciences, UET Peshawar

Abstract In this paper, a higher-order compact finite difference scheme with a multigrid algorithm is applied to solve
the one-dimensional time fractional diffusion equation. The second-order derivative with respect to space is approximated
by a higher-order compact difference scheme, and then the Grunwald-Letnikov approximation is used for the Riemann-Liouville
time derivative to obtain an implicit scheme. The proposed numerical scheme is based on a heptadiagonal matrix with
eighth-order accurate local truncation error. Fourier analysis is used to analyze the stability of the compact higher-order
finite difference scheme, and matrix analysis is used to show that the scheme is convergent with eighth-order
accuracy in space. Numerical experiments confirm our theoretical analysis and demonstrate the performance and accuracy
of our proposed scheme.
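The Grunwald-Letnikov approximation mentioned above replaces the Riemann-Liouville derivative of order α by a weighted history sum, D^α u(t_n) ≈ h^{-α} Σ_k g_k u(t_{n-k}), with weights g_k = (-1)^k (α choose k) computed by a simple recurrence. A short illustration of the weight computation (not the authors' code):

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights g_k = (-1)^k * C(alpha, k), k = 0..n,
    via the recurrence g_k = (1 - (alpha + 1)/k) * g_{k-1}, g_0 = 1."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append((1.0 - (alpha + 1.0) / k) * g[-1])
    return g
```

For α = 1 the weights reduce to the first-order backward difference [1, -1, 0, ...], as expected.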

Keywords: Compact iterative scheme, Fractional diffusion equation, Stability, Convergence, Multigrid
method.

1 Background

The fractional derivative is a generalization of the classical integer-order derivative, and fractional-order differential equations (FDEs) are equations containing fractional-order derivatives. In recent years FDEs have attracted researchers, who have developed effective solution techniques and applications in different fields. Recently, FDEs have been used in image processing [1], in dynamical systems in mathematical biology [2, 3], and in the fractional Schrödinger equation for many quantum processes in physics [4]. The fractional diffusion equation can be used for predicting market behavior [5], for speculative option valuation [6], and for modeling cancer tumors [7].
Although several methods have been proposed to solve FDEs theoretically [8, 9], such as the Green function method and the Fourier and Laplace transform methods, most problems cannot be solved analytically by these methods. It is therefore important to develop efficient numerical techniques for solving fractional differential equations; a brief discussion of FDEs is given in [9].

Many numerical schemes have been proposed for solving time- or space-fractional differential equations. For instance, finite difference schemes for the time-fractional diffusion equation were presented in [10], and Lynch et al. [11] developed explicit and semi-implicit schemes for the solution of PDEs. Meerschaert and Tadjeran [12] introduced a new model and showed that fractional derivatives can be more accurate than ordinary derivatives. Acedo and Yuste [13, 14] developed a finite difference scheme for fractional

∗ Email: fghaffarmaths@gmail.com
† Email: noor2knoor@gmail.com


reaction-subdiffusion equations and analyzed the stability condition. Moreover, Chen et al. [15] developed a numerical scheme for the fractional subdiffusion equation and analyzed its accuracy by a local Fourier analysis method; Chen et al. [16] introduced explicit and implicit finite difference methods for the fractional reaction-subdiffusion equation; and an implicit numerical technique for the modified anomalous subdiffusion equation was developed by Liu et al. [17]. Liao et al. [18] introduced a compact scheme for space derivatives up to second order. Recently, Mingrong Cui developed a compact finite difference method for the fractional diffusion equation [19, 20], whereas Gao and Sun [21] developed a compact difference scheme for the one-dimensional subdiffusion equation. Hao, Lin and Sun obtained a high-order scheme for the fractional subdiffusion equation [22]. This research work aims to develop a higher-order compact difference (HOC) scheme, based on a compact eighth-order discretization, for the solution of the fractional diffusion equation.
It is therefore of great interest to investigate higher-order compact finite difference methods for FDEs. Compact methods are higher-order methods whose coefficient matrices are tridiagonal, block-diagonal, pentadiagonal, or similarly banded, so the resulting linear systems can be solved efficiently by the multigrid method. We consider the one-dimensional time-fractional diffusion equation with a non-homogeneous term as the model problem [13, 23]:

$$\frac{\partial u(x,t)}{\partial t} = {}_0D_t^{1-\gamma}\left(K_\gamma\,\frac{\partial^2 u}{\partial x^2}\right) + f(x,t), \qquad x\in(L_1,L_2),\ 0<t<T, \qquad (1)$$

where $K_\gamma>0$ and ${}_0D_t^{1-\gamma}u$ is the Riemann-Liouville fractional derivative of the function $u(x,t)$ for $0<\gamma<1$, defined as [8]

$${}_0D_t^{1-\gamma}u = \frac{1}{\Gamma(\gamma)}\,\frac{\partial}{\partial t}\int_0^t \frac{u(x,\tau)}{(t-\tau)^{1-\gamma}}\,d\tau. \qquad (2)$$

In the limiting case $\gamma \to 1$, equation (1) reduces to the integer-order diffusion equation. The initial and boundary conditions for equation (1) are

$$u(x,0)=\phi(x), \qquad u(L_1,t)=\varphi(t), \qquad u(L_2,t)=\psi(t), \qquad t\geq0. \qquad (3)$$

2 Compact higher order scheme

The model problem (1)-(3) is solved numerically in the following way.

The standard second-order central finite difference scheme is defined as

$$\delta_x^2 u_i^j = \frac{u_{i+1}^j - 2u_i^j + u_{i-1}^j}{h^2}; \qquad (4)$$

equation (4) is used for the second-order approximation of $u_{xx}$. For the compact high-order operator we use the same steps/equations as in [24]:

$$\left(\frac{\partial u}{\partial x}\right)_i = \frac{2\mu_x \sinh^{-1}(\delta_x/2)}{h\,(1+\delta_x^2/4)^{1/2}}\,u_i = \frac{\mu_x}{h}\left(\delta_x - \frac{1^2}{6}\,\delta_x^3 + \frac{1^2\cdot3^2}{120}\,\delta_x^5 - \cdots\right)u_i, \qquad (5)$$

where $\mu_x$ is the mean operator defined as $\mu_x u_i = \frac{1}{2}\left(u_{i+\frac{1}{2}} + u_{i-\frac{1}{2}}\right)$.

$$\left(\frac{\partial^2 u}{\partial x^2}\right)_i = \left(\frac{2}{h}\sinh^{-1}\frac{\delta_x}{2}\right)^2 u_i = \frac{1}{h^2}\left(\delta_x^2 - \frac{1}{12}\delta_x^4 + \frac{1}{90}\delta_x^6 - \frac{1}{560}\delta_x^8 + \frac{1}{3150}\delta_x^{10} - \frac{1}{16632}\delta_x^{12} + \cdots\right)u_i^j. \qquad (6)$$

Since we know that

$$\frac{\delta_x^2 u_i^j}{h^2\left(1+\frac{\delta_x^2}{12}\right)} = \frac{1}{h^2}\left(\delta_x^2 - \frac{1}{12}\delta_x^4 + \frac{1}{144}\delta_x^6 - \frac{1}{1728}\delta_x^8 + \cdots\right)u_i^j = \frac{\partial^2 u_i^j}{\partial x^2} - \frac{1}{240h^2}\,\delta_x^6 u_i^j + O(h^6), \qquad (7)$$

we have

$$\frac{\partial^2 u_i^j}{\partial x^2} = \frac{1}{h^2}\left(1+\frac{\delta_x^2}{12}\right)^{-1}\delta_x^2 u_i^j + O(h^4). \qquad (8)$$
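As a sanity check on equation (8), the sketch below applies the fourth-order Padé operator $(1+\delta_x^2/12)^{-1}\delta_x^2/h^2$ on a periodic grid to $u=\sin x$ and estimates the observed order from two grid sizes. The periodic setup, test function, and grid sizes are illustrative assumptions for this check, not part of the paper's boundary-value scheme.

```python
import numpy as np

def pade_uxx(u, h):
    """Fourth-order compact (Pade) approximation of u_xx from Eq. (8):
    solve (I + delta_x^2/12) w = delta_x^2 u / h^2 on a periodic grid."""
    n = u.size
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D[0, -1] = D[-1, 0] = 1.0                 # periodic wrap-around
    return np.linalg.solve(np.eye(n) + D / 12.0, D @ u / h**2)

def max_err(n):
    # Error against the exact second derivative of sin(x), which is -sin(x)
    x = 2.0 * np.pi * np.arange(n) / n
    return np.max(np.abs(pade_uxx(np.sin(x), 2.0 * np.pi / n) + np.sin(x)))

order = np.log2(max_err(32) / max_err(64))    # should be close to 4
```

Halving the grid size reduces the error by roughly a factor of 16, consistent with the $O(h^4)$ truncation error of (8).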

Now we approximate the second-order spatial derivative by a higher-order compact finite difference scheme using the following lemma.

Lemma 1. Define the operator

$$\mathcal{L} = \frac{1}{h^2}\left(1+\frac{1}{560}\delta_x^6\right)^{-1}\delta_x^2\left(1-\frac{1}{12}\delta_x^2+\frac{1}{90}\delta_x^4\right); \qquad (9)$$

then

$$\frac{\partial^2 u_i^j}{\partial x^2} = \mathcal{L}u_i^j + O(h^8) \quad \text{holds}. \qquad (10)$$

Proof. Using the above approximation in equation (6), we obtain

$$\frac{1}{h^2}\left(1+\frac{1}{560}\delta_x^6\right)^{-1}\delta_x^2\left(1-\frac{1}{12}\delta_x^2+\frac{1}{90}\delta_x^4\right)u_i^j = \frac{1}{h^2}\left(\delta_x^2 - \frac{1}{12}\delta_x^4 + \frac{1}{90}\delta_x^6 - \frac{1}{560}\delta_x^8 + \frac{1}{6720}\delta_x^{10} + \cdots\right)u_i^j$$
$$= \frac{\partial^2 u_i^j}{\partial x^2} + \frac{197}{28672000\,h^2}\,\delta_x^{10}u_i^j + O(h^{10}) = \frac{\partial^2 u_i^j}{\partial x^2} + \frac{197\,h^8}{28672000}\,\frac{\partial^{10}u_i^j}{\partial x^{10}} + O(h^{10}), \qquad (11)$$

that is,

$$\frac{\partial^2 u_i^j}{\partial x^2} = \frac{1}{h^2}\left(1+\frac{1}{560}\delta_x^6\right)^{-1}\delta_x^2\left(1-\frac{1}{12}\delta_x^2+\frac{1}{90}\delta_x^4\right)u_i^j + O(h^8). \qquad (12)$$

This leads to an approximation of eighth-order accuracy. In the next section we first define the fractional derivative in the discrete sense.
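Lemma 1 can also be checked numerically. The sketch below applies the operator $\mathcal{L}$ of (9) on a periodic grid to $u=\sin x$ and estimates the observed order from two grid sizes; the periodic setup and the test function are illustrative assumptions for the check.

```python
import numpy as np

def d2_periodic(n):
    """Matrix of the central second difference delta_x^2 on a periodic grid."""
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D[0, -1] = D[-1, 0] = 1.0
    return D

def compact8_uxx(u, h):
    """Apply L from (9): (1/h^2)(I + delta^6/560)^(-1) delta^2 (I - delta^2/12 + delta^4/90)."""
    n = u.size
    D = d2_periodic(n)
    rhs = D @ (np.eye(n) - D / 12.0 + (D @ D) / 90.0) @ u / h**2
    return np.linalg.solve(np.eye(n) + (D @ D @ D) / 560.0, rhs)

def max_err(n):
    # Error against the exact second derivative of sin(x), which is -sin(x)
    x = 2.0 * np.pi * np.arange(n) / n
    return np.max(np.abs(compact8_uxx(np.sin(x), 2.0 * np.pi / n) + np.sin(x)))

order = np.log2(max_err(16) / max_err(32))   # should be close to 8
```

Halving the grid size reduces the error by roughly a factor of $2^8=256$, consistent with the $O(h^8)$ claim of the lemma.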

Discrete fractional derivatives

The discrete form of the fractional derivative can be obtained using the Riemann-Liouville and Grunwald-Letnikov fractional-order derivative formulas [9], which can be written in the following way:

$${}_0D_t^{1-\gamma}f(t) = \frac{1}{\tau^{1-\gamma}}\sum_{\ell=0}^{\lfloor t/\tau\rfloor}\xi_\ell^{1-\gamma}\,f(t-\ell\tau) + O(\tau^{q}), \qquad (13)$$

where the $\xi_\ell^{1-\gamma}$ are the coefficients of the generating function, defined by

$$\xi(w,\alpha) = \sum_{\ell=0}^{\infty}\xi_\ell^{\alpha}w^{\ell}.$$

In particular we may take $\xi(w,\alpha) = (1-w)^{\alpha}$; then the coefficients are $\xi_0^{(1-\gamma)} = 1$ and $\lambda_\ell \equiv \xi_\ell^{(1-\gamma)} = (-1)^{\ell}\binom{1-\gamma}{\ell}$, $\ell = 0, 1, 2, \ldots, j$, as in [13, 25, 27]. We use these coefficients and $q = 1$ in equation (13). The higher-order compact scheme for equation (1) can be written as in [19]:
 ( 4 )

2

 j
−U j−1
j
δ 2 (1 − δx + δx )
 i i = Kγ 2 1−γ

U 1
λℓ x 12 90
Uij−ℓ + fij ,
 τ h τ δx6
ℓ=0 (1 + 560 )
(14)

 U 0 = ϕ(x ), i = 1, 2, ..., M − 1

 i i
 j j
U0 = φ(tj ), UM = ψ(tj ), j = 1, 2, ..., N,
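The weights $\lambda_\ell$ above satisfy a simple recurrence, $\lambda_0=1$ and $\lambda_\ell = \lambda_{\ell-1}\left(1-\frac{\alpha+1}{\ell}\right)$ with $\alpha=1-\gamma$, which avoids evaluating binomial coefficients directly. The sketch below computes them and checks the first-order GL approximation (13) against the exact Riemann-Liouville derivative of $t^{1+\gamma}$; the test function, value of $\gamma$, and step size $\tau$ are illustrative choices.

```python
import math
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients lambda_l = (-1)^l * C(alpha, l),
    computed via the recurrence lambda_l = lambda_{l-1} * (1 - (alpha + 1)/l)."""
    lam = np.empty(n + 1)
    lam[0] = 1.0
    for l in range(1, n + 1):
        lam[l] = lam[l - 1] * (1.0 - (alpha + 1.0) / l)
    return lam

def gl_derivative(f, t, alpha, tau):
    """First-order GL approximation (13) of the Riemann-Liouville derivative 0D_t^alpha f."""
    n = int(round(t / tau))
    lam = gl_weights(alpha, n)
    return sum(lam[l] * f(t - l * tau) for l in range(n + 1)) / tau**alpha

gamma = 0.75
alpha = 1.0 - gamma
# Exact: 0D_t^(1-gamma) t^(1+gamma) = Gamma(2+gamma)/Gamma(1+2*gamma) * t^(2*gamma), at t = 1
exact = math.gamma(2 + gamma) / math.gamma(1 + 2 * gamma)
approx = gl_derivative(lambda t: t ** (1 + gamma), 1.0, alpha, tau=1e-3)
```

With $\tau = 10^{-3}$ the GL sum reproduces the exact value to a few parts in $10^4$, consistent with the $O(\tau)$ accuracy used in the scheme.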

where $U_i^j$ is the discrete approximation of $u_i^j$, that is, $u_i^j = U_i^j + O(h^6)$. Multiplying both sides of the first equation in (14) by $\left(1+\frac{\delta_x^6}{560}\right)$, we have

$$\begin{cases} \left(1+\dfrac{\delta_x^6}{560}\right)\left(U_i^j - U_i^{j-1}\right) = \mu\displaystyle\sum_{\ell=0}^{j}\lambda_\ell\left(\delta_x^2 - \dfrac{\delta_x^4}{12} + \dfrac{\delta_x^6}{90}\right)U_i^{j-\ell} + \tau\left(1+\dfrac{\delta_x^6}{560}\right)f_i^j,\\ \quad 1 \le i \le M-1, \quad 1 \le j \le N,\\ U_i^0 = \phi(x_i), \quad i = 1, 2, \ldots, M-1,\\ U_0^j = \varphi(t_j), \quad U_M^j = \psi(t_j), \quad j = 1, 2, \ldots, N, \end{cases} \qquad (15)$$

where $\mu = \dfrac{K_\gamma\,\tau^{\gamma}}{h^2}$.

Putting $\ell = 0$, we have

$$\begin{cases} \left(1 - \mu\delta_x^2 + \mu\dfrac{\delta_x^4}{12} + \dfrac{\delta_x^6}{5040}(9 - 56\mu)\right)U_i^j - \left(1+\dfrac{\delta_x^6}{560}\right)U_i^{j-1} = \mu\displaystyle\sum_{\ell=1}^{j}\lambda_\ell\left(\delta_x^2 - \dfrac{\delta_x^4}{12} + \dfrac{\delta_x^6}{90}\right)U_i^{j-\ell} + \tau\left(1+\dfrac{\delta_x^6}{560}\right)f_i^j,\\ \quad 1 \le i \le M-1, \quad 1 \le j \le N,\\ U_i^0 = \phi(x_i), \quad i = 1, 2, \ldots, M-1,\\ U_0^j = \varphi(t_j), \quad U_M^j = \psi(t_j), \quad j = 1, 2, \ldots, N. \end{cases} \qquad (16)$$

After some manipulation we have, for $j=1$,

$$\begin{aligned} &a_3\left(U_{i+3}^1 + U_{i-3}^1\right) + a_2\left(U_{i+2}^1 + U_{i-2}^1\right) + a_1\left(U_{i+1}^1 + U_{i-1}^1\right) + a_0U_i^1\\ &\quad = b_3\left(U_{i+3}^0 + U_{i-3}^0\right) + b_2\left(U_{i+2}^0 + U_{i-2}^0\right) + b_1\left(U_{i+1}^0 + U_{i-1}^0\right) + b_0U_i^0\\ &\qquad + \frac{\tau}{560}\left(\left(f_{i+3}^1 + f_{i-3}^1\right) - 6\left(f_{i+2}^1 + f_{i-2}^1\right) + 15\left(f_{i+1}^1 + f_{i-1}^1\right) + 540f_i^1\right), \end{aligned}$$

and, for $2 \le j \le N$,

$$\begin{aligned} &a_3\left(U_{i+3}^j + U_{i-3}^j\right) + a_2\left(U_{i+2}^j + U_{i-2}^j\right) + a_1\left(U_{i+1}^j + U_{i-1}^j\right) + a_0U_i^j\\ &\quad = b_3\left(U_{i+3}^{j-1} + U_{i-3}^{j-1}\right) + b_2\left(U_{i+2}^{j-1} + U_{i-2}^{j-1}\right) + b_1\left(U_{i+1}^{j-1} + U_{i-1}^{j-1}\right) + b_0U_i^{j-1}\\ &\qquad + \mu\sum_{\ell=0}^{j-2}\lambda_{j-\ell}\left(d_3\left(U_{i+3}^\ell + U_{i-3}^\ell\right) + d_2\left(U_{i+2}^\ell + U_{i-2}^\ell\right) + d_1\left(U_{i+1}^\ell + U_{i-1}^\ell\right) + d_0U_i^\ell\right)\\ &\qquad + \frac{\tau}{560}\left(\left(f_{i+3}^j + f_{i-3}^j\right) - 6\left(f_{i+2}^j + f_{i-2}^j\right) + 15\left(f_{i+1}^j + f_{i-1}^j\right) + 540f_i^j\right), \end{aligned} \qquad (17)$$

with $U_i^0 = \phi(x_i)$, $U_0^j = \varphi(t_j)$, $U_M^j = \psi(t_j)$, $j = 0, 1, 2, \ldots, N$, where

$$a_0 = \frac{4860 + 13720\mu}{5040}, \qquad b_0 = \frac{-4860 + 13720\mu\lambda_1}{5040}, \qquad d_0 = -\frac{49}{18},$$
$$a_1 = \frac{135 - 7560\mu}{5040}, \qquad b_1 = \frac{135 + 7560\mu\lambda_1}{5040}, \qquad d_1 = \frac{27}{18},$$
$$a_2 = \frac{756\mu - 54}{5040}, \qquad b_2 = \frac{54 - 756\mu\lambda_1}{5040}, \qquad d_2 = -\frac{27}{180},$$
$$a_3 = \frac{9 - 56\mu}{5040}, \qquad b_3 = \frac{56\mu\lambda_1 - 9}{5040}, \qquad d_3 = \frac{1}{90}.$$
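A quick way to see the heptadiagonal structure of the resulting linear system is to assemble the stencil coefficients above into a matrix. The values of $\mu$ and $\lambda_1$ below are illustrative assumptions ($\lambda_1 = -(1-\gamma)$ follows from the recurrence for the GL weights); this is a structural sketch, not the paper's solver.

```python
import numpy as np

def scheme_coeffs(mu, lam1):
    """Stencil coefficients a_k (implicit side) and b_k (previous level)."""
    a = [(4860 + 13720 * mu) / 5040, (135 - 7560 * mu) / 5040,
         (756 * mu - 54) / 5040, (9 - 56 * mu) / 5040]
    b = [(-4860 + 13720 * mu * lam1) / 5040, (135 + 7560 * mu * lam1) / 5040,
         (54 - 756 * mu * lam1) / 5040, (56 * mu * lam1 - 9) / 5040]
    return a, b

def heptadiagonal(coeffs, m):
    """Assemble a symmetric banded matrix of order m from stencil values
    [c0, c1, c2, c3] placed on diagonals 0, +/-1, +/-2, +/-3."""
    A = np.zeros((m, m))
    for k, c in enumerate(coeffs):
        A += c * np.eye(m, k=k)
        if k:
            A += c * np.eye(m, k=-k)
    return A

gamma = 0.75
a, b = scheme_coeffs(mu=0.5, lam1=-(1.0 - gamma))  # illustrative mu, lambda_1
A = heptadiagonal(a, 8)
B = heptadiagonal(b, 8)
```

The matrix `A` has exactly seven nonzero diagonals, which is the heptadiagonal structure solved by the multigrid method in Section 4.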


equation (17) can be written as

$$\begin{aligned} &a_3U_{i+3}^j + a_2U_{i+2}^j + a_1U_{i+1}^j + a_0U_i^j + a_1U_{i-1}^j + a_2U_{i-2}^j + a_3U_{i-3}^j\\ &\quad = b_3U_{i+3}^{j-1} + b_2U_{i+2}^{j-1} + b_1U_{i+1}^{j-1} + b_0U_i^{j-1} + b_1U_{i-1}^{j-1} + b_2U_{i-2}^{j-1} + b_3U_{i-3}^{j-1}\\ &\qquad + \mu\sum_{\ell=0}^{j-2}\lambda_{j-\ell}\left(d_3U_{i+3}^\ell + d_2U_{i+2}^\ell + d_1U_{i+1}^\ell + d_0U_i^\ell + d_1U_{i-1}^\ell + d_2U_{i-2}^\ell + d_3U_{i-3}^\ell\right)\\ &\qquad + \frac{\tau}{560}\left(f_{i+3}^j - 6f_{i+2}^j + 15f_{i+1}^j + 540f_i^j + 15f_{i-1}^j - 6f_{i-2}^j + f_{i-3}^j\right). \end{aligned} \qquad (18)$$

Equation (18) is the higher-order compact scheme for the one-dimensional fractional diffusion equation.

3 Matrix form of our numerical scheme

In matrix form, the discrete scheme (18) can be expressed in the following way:

$$\mathring{A}U^1 = B_0U^0 + F^1, \qquad \mathring{A}U^j = \sum_{\ell=0}^{j-1}B_\ell U^\ell + F^j, \quad j = 2, 3, 4, \ldots, N, \qquad (19)$$

where $\mathring{A}$, $B_0$ and the $B_\ell$ are heptadiagonal matrices of order $(M-1)\times(M-1)$.


 
$$F^1 = \begin{pmatrix} b_1U_0^0 - a_1U_0^1 + \frac{\tau}{560}\left(15f_0^1+540f_1^1+15f_2^1-6f_3^1\right)\\ b_2U_0^0 - a_2U_0^1 + \frac{\tau}{560}\left(15f_1^1+540f_2^1+15f_3^1-6f_4^1\right)\\ \vdots\\ b_2U_M^0 - a_2U_M^1 + \frac{\tau}{560}\left(15f_{M-4}^1+540f_{M-3}^1+15f_{M-2}^1-6f_{M-1}^1\right)\\ b_1U_M^0 - a_1U_M^1 + \frac{\tau}{560}\left(15f_{M-3}^1+540f_{M-2}^1+15f_{M-1}^1-6f_M^1\right) \end{pmatrix},$$

and, for $j \ge 2$,

$$F^j = \begin{pmatrix} \mu\sum_{\ell=0}^{j-2}\lambda_{j-\ell}\,d_1U_0^\ell + b_1U_0^{j-1} - a_1U_0^j + \frac{\tau}{560}\left(15f_0^j+540f_1^j+15f_2^j-6f_3^j\right)\\ \mu\sum_{\ell=0}^{j-2}\lambda_{j-\ell}\,d_2U_0^\ell + b_2U_0^{j-1} - a_2U_0^j + \frac{\tau}{560}\left(15f_1^j+540f_2^j+15f_3^j-6f_4^j\right)\\ \vdots\\ \mu\sum_{\ell=0}^{j-2}\lambda_{j-\ell}\,d_2U_M^\ell + b_2U_M^{j-1} - a_2U_M^j + \frac{\tau}{560}\left(15f_{M-4}^j+540f_{M-3}^j+15f_{M-2}^j-6f_{M-1}^j\right)\\ \mu\sum_{\ell=0}^{j-2}\lambda_{j-\ell}\,d_1U_M^\ell + b_1U_M^{j-1} - a_1U_M^j + \frac{\tau}{560}\left(15f_{M-3}^j+540f_{M-2}^j+15f_{M-1}^j-6f_M^j\right) \end{pmatrix}.$$

4 Multigrid method

The multigrid method is one of the fastest and most efficient methods for the solution of partial differential equations. Many iterative techniques can be applied to the linear systems arising from the discretization of such schemes, but the convergence rate of classical iterative methods for these banded systems is very slow. The multigrid method avoids this deficiency: its convergence rate does not depend on the grid size. The main idea of the multigrid method is to use a relaxation method to reduce the high-frequency error components on a hierarchy of coarser grid levels and to apply coarse-grid corrections until the error is sufficiently reduced. After correction, the results are transferred back to the finest grid level by an interpolation operator. The multigrid method thus has three crucial components: restriction, interpolation, and relaxation operators.

The multigrid method has been used extensively for elliptic problems such as the Poisson and Helmholtz equations [38]-[43]. We use the multigrid V-cycle to solve the linear system arising from the discretization of the higher-order compact difference scheme. To demonstrate the performance of the HOC scheme, we use the full-weighting projection operator on uniform grids, as proposed by Fiorentino and Serra [31, 34, 36, 37] for Toeplitz matrices. Let $Au = b$ be the linear system with $u, b \in \mathbb{R}^n$. The smoother is

$$u^{(j+1)} = Su^{(j)} + b_1 =: \mathcal{S}(u^{(j)}, b_1),$$

where $S = I - M^{-1}A$ is the iteration matrix and $b_1 = M^{-1}b \in \mathbb{R}^n$. With $P$ the projection operator, the two-grid multigrid algorithm (TGM) is given by

TGM$(\mathcal{S}, P)$:
$$\begin{aligned} &u^{(j,v)} \approx \mathcal{S}^{v}(u^{(j)}, b_1);\\ &d_n = Au^{(j,v)} - b;\\ &d_k = P d_n;\\ &A_k = PAP^{T};\\ &\text{solve } A_k y = d_k;\\ &u^{(j+1,v)} = u^{(j,v)} - P^{T}y. \end{aligned} \qquad (20)$$

Recall that the global iteration matrix of the TGM is

$$G = \left[I - P^{T}\left(PAP^{T}\right)^{-1}PA\right]S^{v}.$$

The two-grid method is used to reduce the computational cost and improve the convergence rate.
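The steps in (20) can be sketched for a simple 1-D Poisson system. The smoother choice (weighted Jacobi), grid size, and right-hand side below are illustrative assumptions, while the coarse-grid correction uses exactly the Galerkin form $A_k = PAP^T$ of (20).

```python
import numpy as np

def poisson1d(n):
    """1-D Poisson matrix (Dirichlet BCs, n interior points, h = 1/(n+1))."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def full_weighting(n):
    """Full-weighting restriction P from n fine points to (n - 1)//2 coarse points."""
    nc = (n - 1) // 2
    P = np.zeros((nc, n))
    for i in range(nc):
        j = 2 * i + 1                      # fine-grid index of coarse point i
        P[i, j - 1:j + 2] = [0.25, 0.5, 0.25]
    return P

def two_grid(A, b, u, P, v=3, omega=2.0 / 3.0):
    """One TGM iteration as in (20): v weighted-Jacobi sweeps, then a
    coarse-grid correction with the Galerkin operator A_k = P A P^T."""
    d_inv = 1.0 / np.diag(A)
    for _ in range(v):                     # smoother step of S = I - omega D^{-1} A
        u = u + omega * d_inv * (b - A @ u)
    d_k = P @ (A @ u - b)                  # restricted defect d_k = P(Au - b)
    y = np.linalg.solve(P @ A @ P.T, d_k)  # coarse-grid solve A_k y = d_k
    return u - P.T @ y                     # correction u - P^T y

n = 31
A, b, P = poisson1d(n), np.ones(n), full_weighting(n)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(A, b, u, P)
res = np.linalg.norm(b - A @ u) / np.linalg.norm(b)
```

A handful of TGM iterations drives the relative residual down by many orders of magnitude, and the convergence factor does not degrade as `n` grows, which is the grid-independence property used in the text.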

5 Numerical experiments

Example 1. We consider the following one-dimensional PDE with the source function as in [15]:

$$\begin{cases} \dfrac{\partial u(x,t)}{\partial t} = {}_0D_t^{1-\gamma}\left(\dfrac{\partial^2 u}{\partial x^2}\right) + e^x\left((1+\gamma)t^{\gamma} - \dfrac{\Gamma(2+\gamma)}{\Gamma(1+2\gamma)}\,t^{2\gamma}\right), \quad 0 < x < 1,\ 0 < t \le 1,\\ u(0,t) = t^{1+\gamma}, \quad u(1,t) = e\,t^{1+\gamma}, \quad t \in [0,1],\\ u(x,0) = 0, \quad x \in [0,1]. \end{cases} \qquad (21)$$


The exact solution of the above problem is $u(x,t) = e^x t^{1+\gamma}$. Applying the Riemann-Liouville definition of the fractional derivative we have

$${}_0D_t^{\alpha}\,t^{v} = \frac{\Gamma(1+v)}{\Gamma(v+1-\alpha)}\,t^{v-\alpha}, \qquad \text{for } \alpha > 0. \qquad (22)$$

Applying equation (22), the problem can be written as

$$\begin{cases} \dfrac{\partial u(x,t)}{\partial t} = {}_0D_t^{1-\gamma}\left(\dfrac{\partial^2 u}{\partial x^2}\right) + e^x\left((2+\gamma)t^{\gamma} - t^{1+\gamma}\right), \quad 0 < x < 1,\ 0 < t \le 1,\\ u(0,t) = t^{1+\gamma}, \quad u(1,t) = e\,t^{1+\gamma}, \quad t \in [0,1],\\ u(x,0) = 0, \quad x \in [0,1]. \end{cases} \qquad (23)$$

The $l_2$ error norm of the proposed numerical scheme is of eighth order with respect to space and first order with respect to time; the error of our numerical scheme satisfies

$$\|e^j\|_{l_2} = \left(\sum_{i=1}^{M-1}|e_i^j|^2\,h\right)^{1/2} = a_1(u)\,\tau + a_2(u)\,h^8,$$

where $e_i^j = u_i^j - U_i^j$ and $a_1(u)$, $a_2(u)$ depend on the exact solution $u$. When we halve the grid size $h$ and reduce $\tau$ to $\tau/8$ and $\tau/16$, we observe a corresponding decrease in the error. The order of convergence is computed by the formula

$$\text{Order} = \log_2\left(\|e(16\tau, 2h)\| / \|e(8\tau, h)\|\right). \qquad (24)$$

To check the order of convergence of the numerical scheme defined in (14), we start with $\tau = h = 1/4$ and decrease the mesh size; the error decreases continuously. The formula for $\|e\|_{l_2}$ is used to compute the orders of convergence shown in Tables 1 and 2 for two different values of $\gamma$, namely $\gamma = 0.75$ and $\gamma = 0.25$, respectively. Generally, for $\gamma \in (1/2, 1]$ the experimental orders of convergence are better than those for $\gamma \in (0, 1/2]$.
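To see why the observed orders in Tables 1 and 2 fall between the temporal order 1 and the spatial order 8, one can evaluate (24) on the mixed error model above. The coefficients `a1`, `a2` and the coupling $\tau \sim h^8$ below are illustrative assumptions, not fitted values.

```python
import math

def model_error(tau, h, a1=1.0, a2=2.0):
    """Hypothetical mixed error model ||e|| = a1*tau + a2*h^8 from the text."""
    return a1 * tau + a2 * h ** 8

def observed_order(tau, h):
    """Equation (24): Order = log2(||e(16 tau, 2h)|| / ||e(8 tau, h)||)."""
    return math.log2(model_error(16 * tau, 2 * h) / model_error(8 * tau, h))

# With tau comparable to h^8, the first-order temporal term holds the
# observed order below the spatial order 8, as in Tables 1 and 2.
order = observed_order(tau=(1 / 8) ** 8, h=1 / 8)   # about 5.72
```

Only when the temporal error is negligible does the observed order approach the full spatial order 8, which matches the trend of the tabulated values.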
Table 1. The accuracy order, error norms and CPU time of the HOC scheme for Example 1, γ = 0.75.

h      τ        ‖e‖_l∞      ‖e‖_l2      CPU (s)   Order
1/4    1/4      6.8205e−2   1.3600e−2   0.900     –
1/8    1/64     5.6124e−3   8.7200e−4   1.910     3.6032
1/8    1/512    8.4612e−5   6.8194e−6   3.600     6.0516
1/8    1/1024   5.4600e−6   2.1984e−6   4.600     5.1041
1/8    1/8      1.4122e−2   4.1102e−3   1.800     –
1/16   1/128    4.1187e−4   5.2861e−4   1.940     4.3102
1/32   1/512    4.071e−5    2.1382e−5   2.880     4.1607
1/32   1/2048   1.2040e−6   1.1056e−6   4.880     4.6319

Example 2. We consider the following one-dimensional PDE with the source function as in [44]:

$$\begin{cases} \dfrac{\partial u(x,t)}{\partial t} = {}_0D_t^{1-\gamma}\left(\dfrac{\partial^2 u}{\partial x^2}\right) + \left(2t + \dfrac{8\pi^2}{\Gamma(2+\gamma)}\,t^{1+\gamma}\right)\sin(2\pi x), \quad 0 < x < 1,\ 0 < t \le 1,\\ u(0,t) = 0, \quad u(1,t) = 0, \quad 0 \le t \le 1,\\ u(x,0) = 0, \quad 0 \le x \le 1. \end{cases} \qquad (25)$$

The exact solution for this problem is $u(x,t) = t^2\sin(2\pi x)$, using $\gamma = 0.5$ for computation. The accuracy and convergence rate are shown in Table 3 and Table 4.


Table 2. The accuracy order, error norms and CPU time of the HOC scheme for Example 1, γ = 0.25.

h      τ        ‖e‖_l∞      ‖e‖_l2      CPU (s)   Order
1/4    1/4      3.2570e−2   1.3446e−2   0.920     –
1/8    1/64     2.3980e−3   5.7442e−4   1.980     3.7636
1/8    1/256    1.2083e−4   3.0050e−5   2.630     4.3108
1/8    1/1024   3.4618e−6   2.0198e−6   4.200     5.1253
1/8    1/8      7.0216e−2   4.1251e−3   1.860     –
1/16   1/128    4.1187e−4   4.2871e−4   1.910     4.0915
1/32   1/256    2.3028e−5   2.0182e−5   2.894     4.1607
1/32   1/2048   1.2070e−6   1.0189e−6   4.830     4.2539

Fig. 1: (a) Approximate solution (b) Exact solution

Table 3. The accuracy order, error norms and CPU time of the HOC scheme for Example 2, γ = 0.25.

h      τ        ‖e‖_l∞      ‖e‖_l2      CPU (s)
1/4    1/4      2.2720e−2   1.4286e−2   1.970
1/8    1/8      2.0251e−2   6.4511e−3   2.880
1/8    1/64     5.8920e−3   5.4253e−4   2.990
1/8    1/128    5.1098e−4   1.1475e−4   3.120
1/8    1/256    1.8193e−4   5.5970e−5   3.345
1/8    1/512    7.0247e−5   9.9032e−6   3.385
1/8    1/1024   5.1823e−6   3.1898e−6   4.110
1/16   1/32     2.8103e−3   2.5200e−3   1.890
1/16   1/64     1.0011e−3   8.8871e−4   1.960
1/16   1/128    7.8337e−4   3.6451e−4   2.190
1/32   1/256    2.0098e−5   2.1267e−5   2.494
1/32   1/512    8.1980e−6   6.1647e−6   3.346
1/32   1/1024   5.0023e−6   2.5410e−6   4.760
1/32   1/2048   3.2570e−6   1.0642e−6   4.925


Table 4. Error norms ‖e‖_l2, the accuracy order and CPU time of the HOC scheme for Example 2, γ = 0.5.

t       h=1/4, τ=1/16   h=1/8, τ=1/256   h=1/16, τ=1/1024   CPU (s)   Order
1/16    1.1820e−2       2.6030e−5        3.6960e−6          1.900     4.042
1/8     1.4256e−2       3.8612e−4        1.3338e−6          1.920     4.189
3/16    1.8491e−2       5.2872e−4        2.7838e−5          1.940     4.193
4/16    3.8010e−2       7.8945e−4        2.6045e−5          2.119     4.212
5/16    4.2013e−2       8.5970e−4        2.7533e−5          2.320     4.350
6/16    4.4126e−2       3.9194e−3        2.8762e−5          2.643     4.365
7/16    5.4690e−2       4.8234e−3        2.9916e−5          2.500     4.455
8/16    6.1183e−2       4.9636e−3        1.4821e−4          2.590     4.542
9/16    7.1773e−2       5.2981e−3        2.0872e−4          2.510     4.547
10/16   7.8714e−2       5.3217e−3        2.5532e−4          2.540     4.504
11/16   8.3080e−2       5.3820e−3        2.6378e−4          2.580     4.571
12/16   8.7120e−2       5.4187e−3        2.8589e−4          2.632     4.603
13/16   9.4564e−2       6.5610e−3        3.1189e−4          2.700     4.734
14/16   1.1140e−1       6.8065e−3        4.3386e−4          2.850     4.781
15/16   1.2740e−1       7.5976e−3        5.8812e−5          3.780     4.839
1       1.6970e−1       8.5874e−3        6.8765e−5          3.810     4.851

Fig. 2: (a) Approximate solution (b) Exact solution


6 Conclusion

A higher-order compact finite difference scheme has been applied to discretize a one-dimensional fractional diffusion equation. High accuracy and good efficiency are the main advantages of compact finite difference schemes. The numerical scheme is based on a heptadiagonal matrix that is solved by the multigrid method. It is observed that for 0 < γ < 1 the compact scheme is unconditionally stable. Our numerical results were compared with existing results and show the high accuracy of the introduced numerical method.

References

[1] Blackledge, J: Diffusion and fractional diffusion based image processing. Theory and Practice of Computer Graphics, 233–240 (2009).

[2] Debnath, L: Recent applications of fractional calculus to science and engineering. Int. J. Maths
and Mathematical Sciences. 3413-3442, (2003).

[3] Edelman, M: Fractional dynamical systems. Nonlinear sciences, chaotic dynamics.(2013).

[4] Naber, M: Time fractional Schrödinger equation. J. Math. Phys. 45(8) (2004).

[5] Blackledge, J: Application of the fractional diffusion equation for predicting market behaviour.
Int. J. appli mathematics, 40, 130–158 (2010).

[6] Enrico, S, Gorenflo, R, Mainardi, F, Meerschaert, M: Speculative option valuation and the frac-
tional diffusion equation, Proceedings of the IFAC workshop on fractional differentiation and its
applications, Bordeaux. 234–238 (2004) .

[7] Olaniyi, S. I, Zaman, F. D: A fractional diffusion equation model for cancer tumor. AIP Advances 4, 107–121 (2014).

[8] Kilbas, A, Srivastava, H, Trujillo, J: Theory and applications of fractional differential equations.
Elsevier science and technology (2006).

[9] Podlubny, I: Fractional differential equations. Academic Press, New York (1999).

[10] Langlands, T, Henry, B. The accuracy and stability of an implicit solution method for the
fractional diffusion equation, J. Comput. Phys. 205 719-736 (2005).

[11] Lynch, VE, Carreras, BA, Ferreira-Mejias, KM, Hicks, HR: Numerical methods for the solution
of partial differential equations of fractional order. J. Comput. Phys. 192, 406–421(2003).

[12] Meerschaert, M, Tadjeran, C: Finite difference approximations for fractional advection-dispersion


flow equations. J. Comput. Appl. Math. 172, 65–77 (2004).

[13] Yuste, B, Acedo, L: An explicit finite difference method and a new Von Neumann-type stability
analysis for fractional diffusion equations. SIAM J. Numer. Anal. 42, 1862–1874 (2005).

[14] Zhuang, P, Liu, F, Anh, V, Turner, I: New solution and analytical techniques of the implicit
numerical method for the anomalous subdiffusion equation. SIAM J. Numer. Anal. 46, 1079–1095
(2008).

[15] Chen, CM, Liu, F, Turner, Anh, V: A Fourier method for the fractional diffusion equation
describing sub-diffusion. J. Comput. Phys. 227, 886–897 (2007).


[16] Chen, CM, Liu, F, Burrage, K: Finite difference methods and a fourier analysis for the fractional
reaction-subdiffusion equation. Appl. Math. Comput. 198, 754–769 (2008).

[17] Liu, F, Yang, C, Burrage, K: Numerical method and analytical technique of the modified anoma-
lous subdiffusion equation with a nonlinear source term. J. Comput. Appl. Math. 231, 160–176
(2009).

[18] Liao, W, Zhu, J, Khaliq, AM: An efficient high-order algorithm for solving systems of reaction-
diffusion equations. Numer. Meth. Partial Diff. Eq. 18, 340–354 (2002).

[19] Cui, M: Compact finite difference method for the fractional diffusion equation. J. Comp. Phys.
228, 7792-7804 (2009).

[20] Cui, M: Compact alternating direct implicit method for two dimensional fractional diffusion
equation. J. Comp. Phys. 231, 2621-2633 (2012).

[21] Gao, G, Sun, Z: A compact finite difference scheme for the fractional sub-diffusion equations. J. Comput. Phys. 230, 586–595 (2011). Available at http://dx.doi.org/10.1016/j.jcp.2010.10.007.

[22] Hao, Z, Lin, G, Sun, Z: A high-order difference scheme for the fractional sub-diffusion equation. Int. J. Comput. Math. (2015). DOI: 10.1080/00207160.2015.1109642.

[23] Metzler, R, Klafter, J: The random walk's guide to anomalous diffusion: a fractional dynamics approach. Phys. Rep. 339, 1–77 (2000).

[24] Ames, W: Numerical methods for partial differential equations. Academic Press, New York
(1977).

[25] Thomas, W: Numerical partial differential equations. Finite difference methods, Springer-Verlag
(1995).

[26] Mattheij, R. M. M, Rienstra, S. W, Ten Thije Boonkkamp, J. H. M: Partial Differential Equations: Modeling, Analysis, Computation. Technische Universiteit Eindhoven, Eindhoven, The Netherlands (2005).

[27] Lubich, CH: Discretized fractional calculus. SIAM J. Math. Anal.17, 704-719 (1986).

[28] Quarteroni, A, Valli, A: Numerical approximation of partial differential equations. Springer-


Verlag (1994).

[29] Horn, R. A, Johnson, C. R: Matrix Analysis. Cambridge University Press (1986).

[30] Chu, MT, Diele, F, Ragnion, S: On the inverse problem of constructing symmetric pentadiagonal
Toeplitz matrices from three largest eigenvalues. Inv. Problems. 21, 1879-1894 (2005).

[31] Serra-Capizzano, S: Multi-iterative methods, Computers Math. Applic. 2 (4), 65-87 (1993).

[32] Arico, A, Donatelli, M, Serra-Capizzano, S: V-cycle optimal convergence for certain (multilevel)
structured linear systems, SIAM J. Matrix Anal. Appl., 26, 186-214 (2004).

[33] Serra-Capizzano, S, Possio, C. T: Multigrid Methods for Multilevel Circulant Matrices, SIAM J.
Sci. Comp. 26, 55-85 (2004).

[34] Fiorentino, G, Serra-Capizzano, S: Multigrid method for Toeplitz matrices. Calcolo 28 (1998).


[35] Elouafi, M: An eigenvalue localization theorem for pentadiagonal symmetric Toeplitz matrices.
Linear Algeb. Appl. 435, 2986–2998 (2011).

[36] Chan, R. H, Chang, Q.-S, Sun, H.-W: Multigrid method for ill-conditioned symmetric Toeplitz systems. SIAM J. Sci. Comput. 19(2), 516–529 (1998).

[37] Sun, H.-W, Chan, R. H, Chang, Q.-S: A note on the convergence of the two-grid method for Toeplitz systems. Comput. Math. Appl. 34(1), 11–18 (1997).

[38] Ghaffar, F, Badshah, N, Islam, I: Multigrid method for solution of 3D Helmholtz equation based
on HOC schemes. J. Abstract and Appl. Analy. 2014, (2014) Article ID 954658.

[39] Ghaffar, F, Badshah, N, Khan, M. A, Islam, I: Multigrid method for 2D Helmholtz equation using higher order finite difference scheme accelerated by Krylov subspace. J. Appl. Environ. Biol. Sci. 4, 169–179 (2014).

[40] Ghaffar, F, Islam, I, Badshah, N: Multigrid method based on transformation free higher order
scheme for solving 3D Helmholtz equation on nonuniform grids. J. Appl. Environ. Biol. Sci. 5,
85–97 (2015).

[41] Ghaffar, F, Badshah, N, Islam, I, Khan, M: Multigrid method based on transformation free
higher order scheme for solving 2D Helmholtz equation on nonuniform grids. J. Advac in Diff.
Eqns. (2016) 2016:19, DOI 10.1186/s13662-016-0745-2.

[42] Gupta, M, Kouatchou, J, Zhang, J: Comparison of second and fourth order discretizations for multigrid Poisson solvers. J. Comput. Phys. 132, 226–232 (1997).

[43] Zhang, J: Fast and high accuracy multigrid solution for 3D Poisson equation. J. Comput. Phys.
143, 449–461 (1998).

[44] Lin, YM, Xu, CJ: Finite difference/spectral approximations for the time-fractional diffusion
equation. J. Comput. Phys. 225, 1533–1552 (2007).


OPTICALLY-DETECTED LONG-LIVED SPIN COHERENCE IN MULTILAYER SYSTEMS: DOUBLE AND TRIPLE QUANTUM WELLS

S. Ullah1*, G. M. Gusev1, A. K. Bakarov2, F. G. G. Hernandez1

1 Instituto de Física, Universidade de São Paulo, Caixa Postal 66318 - CEP 05315-970, São Paulo, SP, Brazil
2 Institute of Semiconductor Physics and Novosibirsk State University, Novosibirsk 630090, Russia
* To whom correspondence should be addressed: saeedullah.phy@gmail.com

ABSTRACT

In this contribution, we investigated the spin coherence of high-mobility, dense two-dimensional electron gases confined in multilayer systems. The dynamics of optically induced spin polarization was studied experimentally employing the time-resolved Kerr rotation and resonant spin amplification techniques. For both the double and triple quantum wells doped beyond the metal-insulator transition, where the spin coherence is greatly suppressed, we found remarkably long spin lifetimes limited by the Dyakonov-Perel mechanism, the spin hopping process between donor sites, and the spread of the ensemble g-factor. The double quantum well structure yields a spin lifetime of 6.25 ns at T = 5 K, while the triple quantum well shows a spin lifetime exceeding 25 ns at T = 8 K.

Index Terms— Spintronics, Kerr rotation, spin-orbit coupling, quantum wells, g-factor, spin relaxation

1. INTRODUCTION

The device concepts in semiconductor nanostructures rely mainly on the efficient generation of spin polarization, its
manipulation, and detection [1]. The fabrication of these future devices could benefit from the long-lived spin coherence [2-
4]. A number of experimental techniques have been developed to study the spin polarization dynamics and spin relaxation
mechanisms in semiconductor nanostructures. Among those techniques, the double-pulsed pump-probe technique is one of
the most reliable tools [5-10]. The principle of this technique is as follows: a circularly polarized laser pulse (the pump) incident normal to the sample structure creates spin-polarized electrons with the spin vector along the sample growth direction, and a relatively weak linearly polarized probe pulse, from the same laser, is used to detect the spin polarization dynamics of the electrons in the two-dimensional electron gas (2DEG). Using this technique, the spin dynamics can be studied on time intervals shorter than the repetition period of the pump pulses, which is usually ~13 ns for a commonly used Ti:sapphire laser.

It has been reported in bulk semiconductors [7], quantum dots (QDs) [11], and quantum wells (QWs) [2,3,5] that the spin lifetime (T2*) can exceed the laser repetition period and attain several nanoseconds in a transverse magnetic field. When T2* becomes equal to or greater than the laser repetition period, direct determination by time-resolved methods becomes inapplicable. In such a situation, the procedure of resonant spin amplification (RSA) [7] can be used to determine T2*. For RSA measurements, the pump-probe delay (∆t) is kept fixed while the dependence of the Kerr rotation (KR) signal on the experimental parameters is studied by sweeping the external magnetic field in the millitesla range. When the laser repetition time is an integer multiple of the spin precession period, a series of sharp resonance peaks is observed as a function of magnetic field. The spacing of these resonance peaks allows one to calculate the carrier g-factor, while their linewidth indicates the spin lifetime.
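The peak-spacing relation quoted above can be made concrete: resonances occur when the precession frequency gμB B/ħ is an integer multiple of the repetition rate 2π/t_rep, so adjacent RSA peaks are separated by ΔB = h/(|g| μB t_rep). The sketch below evaluates this with illustrative numbers (|g| = 0.44, typical of GaAs, and t_rep = 13 ns); it is a consistency check, not the authors' analysis code.

```python
H = 6.626e-34      # Planck constant, J*s
MU_B = 9.274e-24   # Bohr magneton, J/T

def rsa_peak_spacing(g, t_rep):
    """Magnetic-field spacing of RSA resonances: Delta B = h / (|g| mu_B t_rep)."""
    return H / (abs(g) * MU_B * t_rep)

def g_from_spacing(delta_B, t_rep):
    """Invert the spacing to estimate |g| from a measured RSA peak separation."""
    return H / (delta_B * MU_B * t_rep)

dB = rsa_peak_spacing(-0.44, 13e-9)   # roughly 12.5 mT for these assumed values
```

Inverting a measured peak spacing in this way is how the g-factor is extracted from an RSA trace; the linewidth of each peak then bounds T2*.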

The aim of the present work is to study long-lived spin coherence in GaAs/AlGaAs double (DQW) and triple (TQW) quantum wells. These structures were selected for the present investigation because such multilayer quantum well systems have led to the discovery of fascinating phenomena such as current-induced spin polarization [12], the spin Hall effect [13], large anisotropic spin relaxation [14], and macroscopic transport and transverse drift of long-lived current-induced spin coherence [15,16]. We observed a remarkably long T2* exceeding 25 ns in the TQW. The obtained values are among the longest T2* reported in structures with similar doping levels [17,18] and are comparable to those reported in undoped GaAs QWs [19] and in low-density two-dimensional electron gases confined in CdTe QWs [5].

Fig. 1 Scheme of the time-resolved pump-probe technique: The spin polarization was induced by a circularly polarized pump and was monitored by a relatively weak linearly polarized probe pulse.

The manuscript is organized as follows. Section 2 is devoted to the materials and experiment. Section 3 presents the experimental results on the spin dynamics of the two samples, and concluding remarks are given in Section 4.

2. MATERIALS AND EXPERIMENT

The structures used in this study are GaAs/AlGaAs double and triple quantum wells grown by molecular-beam epitaxy (MBE) on a (001)-oriented GaAs substrate. Both samples are symmetrically δ-doped beyond the metal-insulator transition (5 × 10^10 cm^−2 for GaAs QWs [17]). The DQW structure is a 45-nm wide GaAs well with total electron sheet density ns = 9.2 × 10^11 cm^−2 and low-temperature mobility μ = 1.9 × 10^6 cm²/Vs. Due to the large well width and high electron density, the Coulomb repulsion of the electrons results in a DQW configuration with two occupied subbands with a subband separation of ∆12 = 1.4 meV [20]. The TQW structure has a 22-nm thick central well and two 10-nm thick side wells separated by 2-nm thick AlGaAs barriers. The tunneling of electron states through the thin barriers results in three populated subbands with separation energies ∆12 = 1.0 meV, ∆23 = 2.4 meV, and ∆13 = 3.4 meV [21]. Because the electron density mostly concentrates in the side wells due to electron repulsion and confinement, the central well was made wider than the lateral wells to keep it populated. The estimated density in the lateral wells is 35% larger than in the central well. The calculated band structure and charge density of the occupied subbands for both the DQW and TQW are presented in Ref. [2].

We used time-resolved Kerr rotation (TRKR) [22] and resonant spin amplification [7] to study the coherent spin dynamics in the multilayer structures. For the optical excitation, we used a Ti:sapphire laser with 100 fs pulse duration and a repetition period of trep = 13 ns. The samples were kept in a He-flow cryostat and exposed to an external magnetic field applied normal to the light beam direction (Voigt geometry), as shown in Fig. 1. The spin polarization along the quantum well growth direction was induced by circularly polarized pulses focused onto a spot of ~50 μm. For most of the experiments, except the power dependence, we used a pump power of 1 mW, which gives rise to a photogenerated density of 2 × 10^11 cm^−2. The time evolution of the optically generated spins was studied through the rotation of the linearly polarized probe reflected by the sample and detected with a balanced bridge.

276
1st National Conference on Mathematical Sciences in Engineering Applications (NCMSEA - 18), April 18 - 19, 2018

Fig. 2 (a) TRKR traces fitted to Eq. 1 at various magnetic fields measured at T = 5 K for the DQW. (b) Spin precession
frequency and lifetime versus the applied magnetic field. (c) RSA for the DQW as a function of optical pump power. (d) The
corresponding T2* and amplitude of the zero-field peak. The inset shows the Lorentzian fit of the zero-field peaks.

3. RESULTS AND DISCUSSION

Fig. 2 (a) shows a series of TRKR curves recorded at different magnetic fields ranging from 0.25 T to 1.5 T. The
experimental conditions were selected for maximum Kerr signal. The observed oscillations are associated with the precession
of excited electron spins about an applied external magnetic field. To retrieve information about the spin coherence time and
precession frequency (ωL = gμB B/ħ), the curves were fitted to an exponentially damped harmonic function:

ΘK = A exp(-∆t/T2*) cos(gμB B∆t/ħ + φ)                (1)

where A is the initial spin polarization induced by the pump, g is the electron g-factor, μB is the Bohr magneton, ħ is the
reduced Planck constant, B is the applied magnetic field, and φ is the oscillation phase. The obtained T2* (half-filled circles)
and ωL (half-filled diamonds) as a function of the applied magnetic field are shown in Fig. 2(b).
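As a numerical illustration of Eq. (1) and the Larmor relation ωL = gμB B/ħ, the model can be evaluated directly; this is a minimal sketch (not the authors' analysis code), with parameter values taken as illustrative assumptions close to those reported in the text:

```python
import math

MU_B = 9.274e-24   # Bohr magneton (J/T)
HBAR = 1.0546e-34  # reduced Planck constant (J s)

def larmor_frequency(g, B):
    """Angular spin precession frequency omega_L = g * mu_B * B / hbar (rad/s)."""
    return g * MU_B * B / HBAR

def kerr_signal(dt, A, T2_star, g, B, phi=0.0):
    """Exponentially damped cosine of Eq. (1): Theta_K at pump-probe delay dt (s)."""
    return A * math.exp(-dt / T2_star) * math.cos(larmor_frequency(g, B) * dt + phi)

# Illustrative values close to the DQW result: g = 0.453 at B = 1 T
w_L = larmor_frequency(0.453, 1.0)   # ~4e10 rad/s
period = 2 * math.pi / w_L           # precession period of ~0.16 ns
```

Fitting measured TRKR traces to `kerr_signal` (e.g. by nonlinear least squares) would then yield T2* and ωL as in Fig. 2(b).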

As expected, ωL versus B follows a linear dependence, which is typical for electrons [2]; for holes, however, band mixing
may result in non-linearities, as reported for InxGa1-xAs/GaAs QWs [23]. A linear fit of the data yields a g-factor (absolute
value) of 0.453 ± 0.001, which is close to the value reported for bulk GaAs. Increasing the magnetic field strongly reduces
T2*, following a 1/B-like dependence [7,24]. At higher fields, the spin dephasing is dominated by the Dyakonov-Perel (DP)
mechanism [25] together with variations of the ensemble g-factor. For an ensemble g-factor spread ∆g, the spin lifetime is
given by T2* = √2ħ/(∆gμB B), which allows the size of the inhomogeneity to be estimated by fitting the data. Such a fit,
shown by the solid red curve in Fig. 2(b), gives ∆g = 0.0026.
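The inhomogeneous-dephasing estimate can be checked numerically. A minimal sketch, taking the fitted ∆g = 0.0026 from the text (the few-nanosecond result at 1 T and the 1/B scaling are what the formula predicts, not values quoted by the paper):

```python
import math

MU_B = 9.274e-24   # Bohr magneton (J/T)
HBAR = 1.0546e-34  # reduced Planck constant (J s)

def t2_star_from_spread(delta_g, B):
    """Ensemble dephasing time T2* = sqrt(2) * hbar / (delta_g * mu_B * B)."""
    return math.sqrt(2) * HBAR / (delta_g * MU_B * B)

# With the fitted spread delta_g = 0.0026, at B = 1 T the formula gives a
# few-nanosecond T2*, consistent with the 1/B-like trend of Fig. 2(b).
t2_at_1T = t2_star_from_spread(0.0026, 1.0)
```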

In the magnetic field dependence of TRKR, we observed that the oscillations at positive delay were accompanied by
oscillations at negative delay, which reflect long-lived spin coherence persisting between successive pulses. Both sets of
oscillations have the same origin, the negative-delay signal being the remnant of the previous pulse, and


Fig. 3 (a) RSA measurements for various pump-probe wavelengths recorded at T = 5 K and ∆t = 0.2 ns. (b) Spin lifetime and
amplitude of zero-field peaks extracted from (a).

both share the same decay time. Thus, to obtain a realistic T2*, we turned to the RSA technique at a short negative delay
(which corresponds to the longest possible positive delay). Fig. 2(c) shows the RSA curves for the DQW recorded at various
pump powers, obtained by sweeping the magnetic field from -200 mT to +200 mT while keeping ∆t = -0.2 ns fixed. One can
clearly see that the RSA peaks decrease in amplitude and become broader with increasing field, due to the g-factor variation
in the measured ensemble. The half-width (B1/2) of these peaks, which obey the periodicity condition ∆B = hfrep/gμB, allows
us to retrieve T2* using the Lorentzian (Hanle) model [7]:

ΘK = A / [1 + (ωL T2*)²]                (2)

where T2* = ħ/(gμB B1/2). Lorentzian fits (solid lines) to the peaks centered at zero magnetic field are displayed in the inset
of Fig. 2(c). The obtained amplitude and T2* are depicted in Fig. 2(d) as a function of excitation power. Both quantities
decrease with increasing excitation power; T2*, in particular, follows an exponential decay (solid curve). The observed T2*
reduction is possibly due to heating caused by the optical excitation [2].
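The two RSA relations above, T2* = ħ/(gμB B1/2) and the peak spacing ∆B = hfrep/gμB, can be sketched numerically. The g-factor and repetition period are from the text; the 2 mT half-width below is a hypothetical example value, not a measured one:

```python
import math

MU_B = 9.274e-24        # Bohr magneton (J/T)
HBAR = 1.0546e-34       # reduced Planck constant (J s)
H = 2 * math.pi * HBAR  # Planck constant (J s)

def t2_star_from_halfwidth(g, B_half):
    """T2* from the Lorentzian (Hanle) half-width: T2* = hbar / (g * mu_B * B_1/2)."""
    return HBAR / (g * MU_B * B_half)

def rsa_peak_spacing(g, t_rep):
    """Field spacing of RSA peaks, Delta_B = h * f_rep / (g * mu_B), with f_rep = 1/t_rep."""
    return H / (g * MU_B * t_rep)

# With g = 0.453 and t_rep = 13 ns (values from the text), peaks repeat every ~12 mT.
dB = rsa_peak_spacing(0.453, 13e-9)
# A hypothetical half-width of 2 mT corresponds to a T2* on the order of 10 ns.
t2 = t2_star_from_halfwidth(0.453, 2e-3)
```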

Fig. 4 (a) TRKR curves of the TQW as a function of external magnetic field measured at T = 8 K. (b) Spin precession
frequency and spin lifetime as a function of applied magnetic field.


The spectral dependence of the Kerr signal, with the pump-probe wavelength ranging from 811 nm to 817 nm, recorded at
∆t = 0.2 ns for the DQW structure, is shown in Fig. 3(a). The recorded RSA pattern, with all peaks of comparable height,
corresponds to the regime of isotropic spin relaxation. In the anisotropic case, by contrast, the in-plane and out-of-plane spin
components of the carriers relax at different rates, which affects the amplitude of the zero-field RSA peak. The retrieved T2*
and amplitude, obtained by fitting the data to the Hanle model, increase with the excitation wavelength, as plotted in Fig.
3(b). Changing the excitation energy by about 3 meV (~2∆12), by increasing the pump-probe wavelength from 815 nm to
817 nm, results in a T2* increase of less than 10%. This small difference is attributed to the relatively similar charge density
distributions of the electrons in the two subbands.

We now turn to the spin dynamics in the TQW. Fig. 4(a) shows a series of TRKR curves recorded at T = 8 K and Ppump = 1
mW for various magnetic fields in the range 0.4 T to 2.0 T. The pump-probe energy was tuned to the exciton bound to
neutral donor transition [14] for maximum Kerr signal. As evidenced by the negative-delay oscillations, long-lived spin
coherence was observed on a time scale longer than the laser repetition period. To extract T2* and ωL, the oscillations at
positive ∆t were fitted to Eq. (1). The fitting results are displayed in Fig. 4(b). The linear increase of the spin precession
frequency with magnetic field indicates that the observed g value is constant over the measured field range (0.4-2 T). From
the slope of the linear fit (solid red line), we evaluated g = 0.452 ± 0.002. We obtained T2* = 12.7 ns at 0.4 T, which
decreases with further increase of the magnetic field. At low temperature, the observed T2* reduction is attributed to spin
hopping between donor sites as well as to the g-factor inhomogeneity. 1/T2* as a function of applied field (not shown here)
increases linearly, which is a well-known signature of an ensemble spread of the g-factor giving rise to inhomogeneous spin
relaxation rates. From the slope, given by ∆gμB/√2ħ, we obtained ∆g = 4.9 × 10⁻⁴ (0.1% of the observed g value), which is
very small compared to values reported for quantum dots. This small variation highlights the high structural uniformity of
the studied sample.
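The ∆g extraction from the slope of 1/T2* versus B can be sketched as a simple least-squares fit. The data below are synthetic and noiseless, generated from the quoted ∆g = 4.9 × 10⁻⁴ and T2*(0) anchored to the 12.7 ns value (the paper's raw data are not reproduced here), just to show that inverting the slope recovers the spread:

```python
import math
import numpy as np

MU_B = 9.274e-24   # Bohr magneton (J/T)
HBAR = 1.0546e-34  # reduced Planck constant (J s)

def inv_t2_star(B, delta_g, rate0):
    """1/T2* = rate0 + delta_g * mu_B * B / (sqrt(2) * hbar): linear in B."""
    return rate0 + delta_g * MU_B * B / (math.sqrt(2) * HBAR)

# Synthetic noiseless data mimicking the TQW analysis (assumed parameters).
true_dg = 4.9e-4
rate0 = 1.0 / 12.7e-9                    # field-independent rate, anchored to 12.7 ns
B = np.linspace(0.4, 2.0, 9)             # field range used in the experiment (T)
rates = inv_t2_star(B, true_dg, rate0)   # 1/T2* at each field

slope, intercept = np.polyfit(B, rates, 1)    # linear fit of 1/T2* versus B
dg_fit = math.sqrt(2) * HBAR * slope / MU_B   # invert slope = delta_g*mu_B/(sqrt(2)*hbar)
```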

Taking into account the negative-delay oscillations, we carried out RSA measurements, in analogy to the DQW, to evaluate
the spin lifetime. The magnetic field was scanned from -200 mT to +200 mT while adjusting ∆t such that the probe pulse
arrives 240 ps before the pump pulse. Fig. 5(a) shows multiple RSA peaks recorded for different pump powers. The observed
RSA spectrum differs from that of the DQW: the resonance peak centered at zero magnetic field is smaller in amplitude than
those at finite field. The suppression of the zero-field peak reveals the existence of spin relaxation anisotropy caused by the
presence of an internal magnetic field. The spin lifetime and amplitude, extracted from the

Fig. 5 (a) RSA pattern of the TQW sample measured for various pump powers at T = 8 K. (b) Spin lifetime and (c)
amplitude extracted from (a).


Lorentzian fit to the zero-field peak, are plotted in Fig. 5(b) and (c) as a function of pump power. We observed a remarkably
long T2* exceeding 25 ns. As evidenced by the RSA spectrum, the amplitude of the zero-field peak increases with pump
power and becomes equal to that of the finite-field peaks at 7 mW. In contrast, T2* decreases with pump power, following
an exponential decay. The reduction of T2* with increasing pump power is mainly due to heating caused by the optical
excitation [2] as well as the increased efficiency of the Bir-Aronov-Pikus mechanism induced by the high density of photo-
created carriers [26]. The obtained results are among the longest T2* reported for samples with similar doping levels [17,18]
and comparable to those of undoped GaAs QWs [19].

Fig. 6(a) presents the energy dependence of the RSA signal, with the excitation wavelength ranging from 816 nm to 823 nm,
measured at ∆t = -0.24 ns for the TQW structure. Fig. 6(b) compares the normalized zero-field peaks, fitted to the Lorentzian
model, for several wavelengths. The T2* and amplitude obtained from (b) are plotted in Fig. 6(c). Changing the pump-probe
wavelength from 816 nm to 819 nm has no effect on the spin lifetime or the amplitude of the zero-field peak. Increasing the
wavelength beyond 820 nm results in rapid variation of the amplitude and T2*, pointing to the origin of the spin relaxation
anisotropy. Here, the same energy variation (3 meV ≅ ∆13), obtained by changing the wavelength from 821 nm to 823 nm,
results in a strong T2* reduction of almost 35%. This large variation is possibly due to the different charge density
distributions of the first and third occupied subbands.

Fig. 6 (a) Magnetic field scan of the KR signal recorded for various excitation wavelengths at T = 5 K and ∆t = -0.24 ns. (b)
Comparison of normalized zero-field peaks fitted to the Lorentzian model. (c) Spin dephasing time and amplitude of zero-
field peaks as a function of excitation wavelength.
4. CONCLUSIONS

In conclusion, we carried out a detailed investigation of spin relaxation in two-dimensional electron gases confined in
multilayer quantum wells, employing the TRKR and RSA techniques. The spin lifetime was studied as a function of
experimental parameters such as magnetic field, optical pump power, and excitation wavelength. In the TQW sample, T2*
extends to very long times at T = 8 K, while the DQW structure yields a comparatively short T2*. We believe that the long-
lived spin coherence and the wavefunction engineering achievable in multilayer structures will open a practical path for
spintronic devices.


ACKNOWLEDGMENT

F.G.G.H. acknowledges financial support from Grants No. 2009/15007-5, 2013/03450-7, 2014/25981-7, and 2015/16191-5
of the São Paulo Research Foundation (FAPESP). S.U. acknowledges TWAS/CNPq for financial support.

REFERENCES

[1] S. Datta, and B. Das, “Electronic analog of the electro-optic modulator,” Appl. Phys. Lett. vol. 56, pp. 665, 1990.
[2] S. Ullah, G. M. Gusev, A. K. Bakarov, and F. G. G. Hernandez, “Long-lived nanosecond spin coherence in high-
mobility 2DEGs confined in double and triple quantum wells,” J. Appl. Phys. vol. 119, pp. 215701, 2016.
[3] E. A. Zhukov, D. R. Yakovlev, M. Bayer, G. Karczewski, T. Wojtowicz, and J. Kossut, “Spin coherence of two-
dimensional electron gas in CdTe/(Cd,Mg)Te quantum wells,” Phys. Stat. Sol. (b) vol. 243, pp. 878, 2006.
[4] M. W. Wu, J. H. Jiang, and M. Q. Weng, “Spin dynamics in semiconductors,” J. Phys. Rep. vol. 493, pp. 61,
2010.
[5] E. A. Zhukov, D. R. Yakovlev, M. Bayer, M. M. Glazov, E. L. Ivchenko, G. Karczewski, T. Wojtowicz, and J.
Kossut, “Spin coherence of a two-dimensional electron gas induced by resonant excitation of trions and excitons
in CdTe/(Cd,Mg)Te quantum wells,” Phys. Rev B vol. 76, pp. 205310, 2007.
[6] M. Griesbeck, M. M. Glazov, E. Ya. Sherman, D. Schuh, W. Wegscheider, C. Schüller, and T. Korn, “Strongly
anisotropic spin relaxation revealed by resonant spin amplification in (110) GaAs quantum wells,” Phys. Rev. B
vol. 85, pp. 085313, 2012.
[7] J. M. Kikkawa, and D. D. Awschalom, “Resonant spin amplification in n-type GaAs,” Phys. Rev. Lett. vol. 80,
pp. 4313, 1998.
[8] J. M. Kikkawa, I. P. Smorchkova, N. Samarth, and D. D. Awschalom, “Room-temperature spin memory in two-
dimensional electron gases,” Science, vol. 277, pp. 1284, 1997.
[9] K. Biermann, A. Hernández-Mínguez, R. Hey, and P. V. Santos, “Electrically tunable electron spin lifetimes in
GaAs (111)B quantum wells,” J. Appl. Phys. vol. 112, pp. 083913, 2012.
[10] V. Sih, and D. D. Awschalom, “Electrical manipulation of spin-orbit coupling in semiconductor
heterostructures,” J. Appl. Phys. vol. 101, pp. 081710, 2007.
[11] A. Greilich, R. Oulton, E. A. Zhukov, I. A. Yugova, D. R. Yakovlev, M. Bayer, A. Shabaev, A. L. Efros, I. A.
Merkulov, V. Stavarache, D. Reuter, and A. Wieck, “Optical control of spin coherence in singly charged
(In,Ga)As/GaAs quantum dots,” Phys. Rev. Lett. vol. 96, pp. 227401, 2006.
[12] F. G. G. Hernandez, G. M. Gusev, and A. K. Bakarov, “Resonant optical control of electrically induced spin
polarization by periodic excitation,” Phys. Rev. B vol. 90, pp. 041302(R), 2014.
[13] F. G. G. Hernandez, G. M. Gusev, and A. K. Bakarov, “Observation of the intrinsic spin Hall effect in a two-
dimensional electron gas,” Phys. Rev. B vol. 88, pp. 161305(R), 2013.
[14] S. Ullah, G. M. Gusev, A. K. Bakarov, and F. G. G. Hernandez, “Large anisotropic spin relaxation time of
exciton bound to donor states in triple quantum wells,” J. Appl. Phys. vol. 121, pp. 205703, 2017.
[15] S. Ullah, G. J. Ferreira, G. M. Gusev, A. K. Bakarov, and F. G. G. Hernandez, “Macroscopic transport of a
current-induced spin polarization,” Journal of Physics: Conf. Series. vol. 864, pp. 012060, 2017.
[16] F. G. G. Hernandez, S. Ullah, G. J. Ferreira, N. M. Kawahala, G. M. Gusev, and A. K. Bakarov, “Macroscopic
transverse drift of long current-induced spin coherence in two-dimensional electron gases,” Phys. Rev. B vol. 94,
pp. 045305, 2016.
[17] J. S. Sandhu, A. P. Heberle, J. J. Baumberg, and J. R. A. Cleaver, “Gateable suppression of spin relaxation in
semiconductors,” Phys. Rev. Lett. vol. 86, pp. 2150, 2001.
[18] A. V. Larionov, and A. S. Zhuravlev, “Coherent spin dynamics of different density high mobility two-
dimensional electron gas in a GaAs quantum well,” JETP Lett. vol. 97, pp. 137, 2013.
[19] R. I. Dzhioev, V. L. Korenev, B. P. Zakharchenya, D. Gammon, A. S. Bracker, J. G. Tischler, and D. S. Katzer,
“Optical orientation and the Hanle effect of neutral and negatively charged excitons in GaAs/AlxGa1-xAs quantum
wells,” Phys. Rev. B vol. 66, pp. 153409, 2002.
[20] S. Wiedmann, G. M. Gusev, O. E. Raichev, A. K. Bakarov, and J. C. Portal, “Nonlinear transport phenomena in a
two-subband system,” Phys. Rev. B vol. 84, pp. 165303, 2011.
[21] S. Wiedmann, N. C. Mamani, G. M. Gusev, O. E. Raichev, A. K. Bakarov, and J. C. Portal, “Magnetoresistance
oscillations in multilayer systems: Triple quantum wells,” Phys. Rev. B vol. 80, pp. 245306, 2009.
[22] J. J. Baumberg, D. D. Awschalom, N. Samarth, H. Luo, and J. K. Furdyna, “Spin beats and dynamical
magnetization in quantum structures,” Phys. Rev. Lett. vol. 72, pp. 717, 1994.


[23] N. J. Traynor, R. T. Harley, and R. J. Warburton, “Zeeman splitting and g factor of heavy-hole excitons in
InxGa1-xAs/GaAs quantum wells,” Phys. Rev. B vol. 51, pp. 7361, 1995.
[24] R. Bratschitsch, Z. Chen, S. T. Cundiff, E. A. Zhukov, D. R. Yakovlev, M. Bayer, G. Karczewski, T. Wojtowicz,
and J. Kossut, “Electron spin coherence in n-doped CdTe/CdMgTe quantum wells,” Appl. Phys. Lett. vol. 89, pp.
221113, 2006.
[25] M. I. Dyakonov, and V. I. Perel, “Spin orientation of electrons associated with the interband absorption of light in
semiconductors,” Sov. Phys. JETP vol. 33, pp. 1053, 1971.
[26] G. Wang, A. Balocchi, A. V. Poshakinskiy, C. R. Zhu, S. A. Tarasenko, T. Amand, B. L. Liu, and X. Marie,
“Magnetic field effect on electron spin dynamics in (110) GaAs quantum wells,” New J. Phys. vol. 16, pp.
045008, 2014.


OUTCOME BASED EDUCATION SYSTEM: A PILOT STUDY IN INDUSTRIAL ENGINEERING

Shahzad A. Jamshid, *Prof. Dr. Iftikhar Hussain, *Hamidullah, Muhammad Nafees Khan


Department of Engineering Management, CECOS University, Peshawar; *University of Engineering and
Technology, Peshawar.
shahzadjamshid@gmail.com

ABSTRACT

With the recent acceptance of the Pakistan Engineering Council (PEC) into the Washington Accord as a permanent
signatory, the need to inculcate graduate attributes in engineering students has been felt all the more. The current study
makes a tentative analysis of the results obtained from the assessment of students of the fall 2016 semester in the subject
“Optimization” in the Industrial Engineering Department of UET Peshawar. Based on this analysis, we examine the
prospects of engineering graduates under OBE in light of PEC’s admission to the Washington Accord. The results show that
although engineering institutes have a future in an OBE environment, there are many loose ends, including the proper
definition and implementation of the outcomes (PLOs and CLOs), the assessment tools, and the inclusion of all stakeholders
in the process.

Index Terms— Program learning outcomes (PLOs), outcome based education (OBE), Pakistan Engineering Council (PEC),
course learning outcomes (CLOs).

1. INTRODUCTION

Technical education received a boost and high market value after the industrial revolution. Accordingly, the procedures of
education also saw a great deal of variation: different models and approaches were proposed and implemented, and the
resulting outcomes varied accordingly. The expanding technology market has ever since demanded a wide range of skills
and capabilities, compelling the responsible institutions to instill such traits in their graduates. One result of this increased
globalization and greater interaction between engineers of many countries is the perceived need to define a set of core
competences that specify what an engineer is, regardless of where he or she is trained. This situation has compelled
institutions to adopt pre-specified notions such as ‘competencies’, ‘standards’, ‘benchmarks’, and ‘attainment targets’ [1].
One of the approaches for inculcating the desired traits in engineers is so-called outcome based education. OBE is concerned
with developing an expedient and all-encompassing layout for grooming a student into a suitable engineering graduate
according to the expectations of modern industry. A graduate also has to be equipped with tools for addressing the concerns
of the members of the society in which he works about the repercussions of his projects. In the words of Spady, OBE means
“clearly focusing and organizing everything in an educational system around what is essential for all students to be able to
do successfully at the end of their learning experiences. This means starting with a clear picture of what is important for
students to be able to do, then organizing curriculum, instruction, and assessment to make sure this learning ultimately
happens” [2]. In Pakistan, the educational paradigm started shifting towards OBE after PEC accepted provisional
membership of the Washington Accord following the Ottawa meeting of the International Engineering Alliance (IEA) in
2010. After acquiring provisional membership, PEC included in its accreditation criteria the graduate attributes adopted by
the IEA in 2013. This provisional membership changed to permanent after the June 2017 visit of the international body. This
paper presents a candid analysis of the results of OBE implementation in the Industrial Engineering Department of UET
Peshawar. The department introduced OBE tentatively, in parallel with the existing system of schooling. The paper discusses
the results obtained from the performance of students in the taught subject and analyzes their various aspects. The layout of
the paper is as follows: the next section gives a view of OBE implementation in engineering institutes; the subsequent three
sections comprise the technique applied in OBE implementation, an analysis of the performance of the students under OBE,
and suggestions for further work in this regard; the sixth section concludes the paper.


2. OBE IN ENGINEERING INSTITUTES

Pakistani universities have been pondering the implementation of OBE in their degree programs for quite some time now,
encouraged by PEC and the Higher Education Commission (HEC) in order to inculcate the desired attributes in graduates.
This has drawn a great deal of attention to OBE implementation in national institutes. A great number of seminars and
conferences on the understanding and implementation of OBE have been held across the country under the auspices of HEC
and PEC. Many institutes have initiated undergraduate programs under OBE, e.g., NUST, COMSATS, IQRA National
University, UET Lahore, GIK Institute of Technology, and UET Peshawar. The current study is based on one such pilot
program, implemented under OBE in parallel with traditional teaching and assessment techniques in the Industrial
Engineering Department of UET Peshawar.

3. OBE IMPLEMENTATION TECHNIQUE

The Industrial Engineering Department of UET Peshawar implemented OBE in parallel with the traditional education
system, as a tentative attempt to find out the outcomes of the system in the institute. This implementation was aimed at
bringing out the discrepancies in the conventional tools of teaching and assessment and at developing a suitable design for
an OBE system. The OBE model implemented here comprised the conventional steps of the process, i.e., defining the
mission statement of the department, the program educational objectives (PEOs), PLOs, and CLOs. The second step was the
selection of the learning and assessment tools, i.e., lectures, assignments, exams, etc. These tools are designed according to
the curriculum and the students’ grooming requirements. Care was taken that these tools and the learning and assessment
process fulfill the spirit of OBE, i.e., that they bring the students at par with the desired outcomes by satisfying the vital
elements of the CLOs, PLOs, and institutional goals at large. The practical picture of these outcomes is determined through
the PEOs established in field surveys.

3.1 Mission Statement of Industrial Engineering

To produce industrial engineers who have professional knowledge, research and problem solving skills to play leading role
for the economic well-being, safety and productivity of an organization and society.

3.2 PEOS, PLOS, AND CLOS

In the following subsections, the PEOs and PLOs for the Industrial Engineering Department and the CLOs for the selected
subject are discussed.

3.2.1 PEOS

The PEOs for the Industrial Engineering Department can be summarized as follows. Graduates of Industrial Engineering
will:
1) Have the ability to be involved in decision making processes regarding designing and improving complex systems, both
in the manufacturing and service sectors.
2) Have the ability to be engaged in continuous learning through knowledge and skill development that further enhances
their technical abilities and career growth.
3) Demonstrate professional and ethical responsibilities towards their profession, society and the environment, as well as
respect for diversity.
4) Have the ability to effectively lead, work and communicate in cross functional teams, or to operate their own businesses.

3.2.2 PLOS

The PLOs defined are essentially the engineer attributes expounded by IEA, and they are as follows:
(i) Engineering Knowledge.
(ii) Problem Analysis.
(iii) Design/Development of Solutions.
(iv) Investigation of complex engineering problems.
(v) Modern Tool Usage.
(vi) The Engineer and Society.
(vii) Environment and Sustainability.
(viii) Ethics.

(ix) Individual and Team Work.
(x) Communication.
(xi) Project Management.
(xii) Lifelong Learning.

3.2.3 CLOS

The CLOs for the selected subject were defined along the same lines; the CLOs for “Optimization” are:
1) To formulate real life problems as optimization problems and to enter and solve them in different optimization
software packages (Analysis).
2) To apply different optimization methods, especially to linear programming, transportation, networking, and
queuing problems (Application).
3) To interpret and take decisions on solutions obtained from different optimization methods and software (Evaluation).

4. RESULTS OBTAINED THROUGH TRADITIONAL AND OBE ASSESSMENTS

Table 1 presents the results from the assessments of 48 students. These results show the scores recorded under the
traditional and OBE assessment tools. The students are from the fifth semester of 2016, assessed in the subject
“Optimization”.

Table 1: Assessment results under both systems for the subject

Serial no. | Name of the particular assessed | Average score | Sum (Average) of OBE score | Average of normalized values
1          | CLO1                            | 48.89         | 48.89                      | 48.89
2          | CLO2                            | 32.43         | 32.43                      | 32.43
3          | CLO3                            | 37.27         | 37.27                      | 37.27
4          | Sum of OBE score                |               | 111.60                     | 58.20
5          | Sum of traditional score        | 48.88         |                            | 69.83

The first column of the table shows the serial number. The second column names the items being assessed; the first three
rows contain the CLOs defined for the course. The values shown are the respective scores after normalization. The scores
under OBE were initially recorded individually for each CLO out of one hundred, so the total score of a student was out of
three hundred. For a convenient comparison of the two systems, the values have been normalized. The fourth row is the sum
of the scores under OBE, and the fifth row is the sum of the scores obtained under the traditional schooling system.
Figure 1 shows a graphical comparison of the students’ scores under the two systems of learning. The graph depicts the
average normalized scores of all 48 students assessed: the green bars show the OBE assessment scores, and the blue dotted
graph shows the scores under the conventional system. The horizontal axis carries the registration numbers of the students
and the vertical axis the scores, from 0 to 100, which are the normalized values of the marks obtained.
The results shown in the Table and Figure display the discrepancies in the scores of the students. CLO 1 for the subject
under discussion defines the skills needed for a graduate to know the parameters and tools involved in understanding the
subject. CLO 2 is defined for inculcating the skills of comprehension and application of the concepts introduced in CLO 1.
Similarly, CLO 3 is aimed at providing the students with insight into the different optimization methods applied to various
problems. This package of curricular learning was aimed at bringing the students at par with international standards of
learning. The results obtained were normalized, as is the procedure, to obtain a uniform result curve and to fulfill the
criterion of the prevailing relative system of grading.
The average score of the students in the traditional learning process was 69.83, while their average under OBE was 58.20.
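The normalization described above can be sketched as a short routine. This is a minimal illustration under the stated assumption that each CLO is marked out of 100 and the combined score is rescaled to 0-100; the per-student marks used here are made up, since the paper does not reproduce the raw data:

```python
def normalize_obe(clo_marks, max_per_clo=100):
    """Scale a student's summed CLO marks (each out of max_per_clo) to a 0-100 score."""
    total = sum(clo_marks)
    return 100 * total / (max_per_clo * len(clo_marks))

# A hypothetical student scoring 49, 32 and 37 on the three CLOs:
score = normalize_obe([49, 32, 37])   # (49 + 32 + 37) / 300 * 100
```

Averaging such per-student scores over the class would yield class-level figures comparable to those reported in Table 1.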


While there is a difference in the average scores between the two education systems, the real picture becomes clear when the
scores in the individual CLOs are considered. The average scores of the 48 students in CLO 1, CLO 2, and CLO 3 were
41.90, 32.43, and 37.27, respectively. The basis of OBE’s implementation rests on Bloom’s taxonomy, which, especially in
the cognitive domain, remains invaluable for OBE assessment [3]. In the OBE learning model, Bloom’s taxonomy serves as
a frame of reference [4]. In this case, CLO 1 is solely concerned with learning the subject material of a particular program,
the cognitive domain in Bloom’s taxonomy. Apart from the technical side, an industrial engineer must also have soft skills,
such as motivating and leading subordinates and coworkers, and should be responsive to the cultural and environmental
impacts of his projects in performing various tasks [5]. The traditional learning system also puts major emphasis on the
cognitive aspect of learning. In OBE, on the other hand, equal if not more emphasis is given to the skills of practical usage
and to the social viability of these concepts. An engineering program is supposed to enable the engineer to develop the soft
skills required in the field and targeted by the program objectives, i.e., ethics, knowledge of current developments, and
lifelong learning [6]. The results in CLO 2 and CLO 3, which are concerned with the psychomotor and affective domains of
Bloom’s taxonomy, are lower than that in CLO 1.

Figure 1: The comparison of the normalized scores for a subject

5. ANALYSIS OF THE TWO SCHOOLING SYSTEMS

In this section, a succinct account of the observations from the comparison of the OBE and traditional
schooling systems is given. The discussion starts with the key aspects of an engineering education system,
under both OBE and the existing, so-called traditional system. The conventional schooling system in
engineering starts with the specification of the curriculum to be covered in a specified period of time, while
OBE flows down from the institute’s mission statement, through the PEOs and PLOs, to the CLOs. These
outcomes define the pathway for the academic development of the graduates. The design of outcomes for OBE
involves, besides students and faculty, the participation of all stakeholders, including employers, parents, and
administration. Curriculum-specific learning focuses only on the increment of knowledge, but OBE takes into
account all the important aspects of the education of the graduate, i.e., the cognitive, psychomotor, and affective
domains of learning. Another important element of education is the assessment of the acquired knowledge. While the
traditional

system use pre-defined tools e.g. assignments, quizzes, and exams, OBE encourage self-study, solution and analysis
of real life problems, and research oriented tasks. Grading system is also a touchstone for the carrier of an engineer.
Passion for higher grades is mainly motivated by a student’s lust for recognition, parental expectations, and chances
for employability [7]. The attainment of higher grade has become the yard stick for measuring a graduate’s
performance. In traditional learning the procedure of instruction encourages self-study, which in its way is a good
practice but the results shown by adopting cooperative or group tasks has been proposed to be much more expedient
[8]. In traditional system transcript is the blue print of the previous performance of a student, whereas in OBE the
provision of multiple opportunities, flexibility, and pencil grading gives a student a great deal of motivation. Passion
for grades is mainly the concern of the students. Apart from students, teachers are another and perhaps the most
important pillar of any learning environment. The faculty should be given proper training in how to design the
outcomes of a program and the tools needed for them, and should also be made familiar with current assessment and
instruction techniques. Besides the structure of the OBE curriculum and its outcomes, teachers should be equipped with
knowledge of the relation between learning outcomes and the opportunities and methodologies applicable to them.
Furthermore, during training teachers should be given demonstrations of worked examples from their own field of
expertise [9]. Continual quality improvement (CQI) is another salient feature of OBE which is lacking in the
traditional system. CQI is brought about at many levels, i.e. in the definition of PEOs, PLOs, and CLOs, and the
improvements are made based on surveys of the alumni, the final performance level after graduation, and the course
outcomes of a student, respectively. A practical example of the above-mentioned discrepancies between the two systems
is evident from the results obtained during this study from the assessment of the test students. These discrepancies
can be rectified by yet another tool utilized in OBE learning, i.e. the OBE matrix. The OBE matrix is a table used in
outcome-based schooling systems to tally the CLOs defined for a subject with the tools and activities that would make
the desired competencies possible in students. Table 2 shows the matrix for the outcomes of subject 1. In this table
the first column shows the CLOs, the second column contains the activities involved in the learning process of the
CLOs, and the last column shows the assessment tools to be implemented in teaching those CLOs. The results of the OBE
assessment discussed above show that, though OBE still has a long way to go to achieve perfection, it promises a
bright future for engineering programs. The grades and GPA shown above do exhibit discrepancies, but these are not so
large as to discard OBE as a viable schooling system, and OBE is seen as a way forward towards taking full advantage
of the perks offered by the Washington Accord [10].


Table 2. OBE matrix for the subject

CLO 1: To formulate real-life problems into optimization problems and to enter and solve them in different
optimization software.
    Learning activities: classroom lectures, online study, and assignments of different small software-related tasks.
    Assessment tools: quizzes, midterm exam, lab reports, and assignments.

CLO 2: To apply different optimization methods, especially to linear programming, transportation, networking, and
queuing problems.
    Learning activities: internet study, field visits, individual real-life problem-solving tasks, and classroom
    discussions.
    Assessment tools: assignments, quizzes, midterm exam, lab tasks, final term exam, and presentations.

CLO 3: To interpret and take decisions on solutions obtained from different optimization methods and software.
    Learning activities: decision-making tasks, tentative problem-solving projects, and classroom demonstrations via
    software simulation.
    Assessment tools: quizzes, midterm exam, lab tasks, presentations, and final term exam.
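The OBE matrix described above is essentially a mapping from each CLO to its learning activities and assessment tools. A minimal illustrative sketch follows; the data structure and the `tools_for` helper are our own invention, with entries mirroring Table 2:

```python
# Illustrative OBE matrix: each CLO maps to the learning activities that
# develop it and the assessment tools that measure its attainment.
obe_matrix = {
    "CLO1": {
        "activities": ["classroom lectures", "online study", "software tasks"],
        "assessments": ["quizzes", "midterm exam", "lab reports", "assignments"],
    },
    "CLO2": {
        "activities": ["internet study", "field visits",
                       "problem-solving tasks", "classroom discussions"],
        "assessments": ["assignments", "quizzes", "midterm exam", "lab tasks",
                        "final term exam", "presentations"],
    },
    "CLO3": {
        "activities": ["decision-making tasks", "tentative projects",
                       "software demonstrations"],
        "assessments": ["quizzes", "midterm exam", "lab tasks",
                        "presentations", "final term exam"],
    },
}

def tools_for(clo):
    """Return the assessment tools mapped to a given CLO."""
    return obe_matrix[clo]["assessments"]

print(tools_for("CLO2"))
```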

5. CONCLUSION AND FUTURE WORK

Based on the discussion and analysis of the obtained results, the inference drawn is summed up in the following lines.
Engineering programs have an existing schooling system, i.e. the traditional education system, which covers the cognitive
and, to some extent, the psychomotor and affective domains of a graduate's learning experience. With certain modifications,
as prescribed above, these programs could easily adopt OBE. This would inculcate in the graduates the vital market traits
desired globally. PEC has already aligned its accreditation criteria with those defined by the Washington Accord members.
Now, with PEC's permanent membership, if OBE is implemented in its true spirit, the graduates of engineering disciplines
in Pakistan could have a promising international career through the Washington Accord. The study conducted here (Industrial
department, UET Peshawar) was tentative and showed discrepancies in the alignment of OBE with the traditional education
system in engineering institutes. These flaws can be addressed by more such studies. This study was based on one subject of
the fall semester. A future study could be conducted on all running semesters for one subject, or on all subjects of one
batch. Fully incorporating Bloom's taxonomy in the curriculum design could also yield better results. Similarly,
techniques other than official data collection, such as questionnaires and surveys of the alumni, faculty, and
administration, could be utilized.

6. REFERENCES

[1] Brindley, Geoff. (2001). Outcomes-based assessment in practice: some examples and emerging insights. Language
Testing, journals.sagepub.com.
[2] Spady, William G. (1994). Outcome-Based Education: Critical Issues and Answers. American Association of School
Administrators, Arlington, VA. ISBN 0-87652-183-9.
[3] Malan, S. P. T. (2000). The new paradigm of outcome-based education in perspective. Journal of Family Ecology
and Consumer Sciences.


[4] Eng, Tang Howe, Akir, Oriah, Malie, Senian. (2012). Implementation of Outcome-based Education Incorporating
Technology Innovation. WC-BEM 2012, Procedia - Social and Behavioral Sciences.
[5] Heywood, John. (1997). Outcomes Based Engineering Education I: Theory and Practice in the Derivation of
"Outcomes": A European Historical Perspective. Frontiers in Education Conference, 0-7803-4086-8, ©1997 IEEE.
[6] Nordin, Rosdiadee, Bakar, A. Ashrif A., Zainal, Nashruddin, Husain, Hafizah. (2012). Preliminary Study on the
Impact of Industrial Talks and Visits towards the Outcome Based Education of Engineering Students. UKM Teaching and
Learning Congress, Procedia - Social and Behavioral Sciences 60, pp. 271-276.
[7] Khan, Muhammad Asif. (2014). Students' Passion for Grades in Higher Education Institutions of Pakistan.
International Conference on Education & Educational Psychology 2013 (ICEEPSY 2013), Procedia - Social and Behavioral
Sciences.
[8] Ahmad, Zaheer and Mahmood, Nasir. (2010). Effects of Cooperative Learning vs. Traditional Instruction on
Prospective Teachers' Learning Experience and Achievement. Ankara University Journal of Faculty of Educational
Sciences, vol. 43, no. 1, pp. 151-164.
[9] Najjar, Jad, Klobucar, Tomaž, Nguyen-Ngoc, Anh Vu, Totschnig, Michael, Müller, Franz, Simon, Bernd, Karlsson,
Mikael, Eriksson, Henning. (2011). Towards Outcome Based Learning: An Engineering Education Case. IEEE Global
Engineering Education Conference (EDUCON), "Learning Environments and Ecosystems in Engineering Education".
[10] Mahmood, Khalid, Khan, Khalil Muhammad, Khan, Komal Saifullah and Kiani, Saad. (2015). Implementation of Outcome
Based Education in Pakistan: A Step towards Washington Accord. IEEE 7th International Conference on Engineering
Education (ICEED).


A HYBRID APPROACH FOR AUTOMATIC AORTA SEGMENTATION IN ABDOMINAL 3D CT SCAN IMAGES

Arfa Ali1,2, *Iftikhar Ahmad1,2, Hussain Rahman3, Sami Ur Rahman4
1,2 Department of Computer Science & Information Technology, University of Engineering and Technology, Peshawar,
Pakistan
1,2 Faculty of Computer Science & Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology,
Topi, Pakistan
3,4 Department of Computer Science & Information Technology, University of Malakand, Pakistan
4 School of Computer Science & IT, Stratford University, USA
Email: 1arfaali05@gmail.com, 2ia@uetpeshawar.edu.pk, 3hrahman@uom.edu.pk, 4srahman@stratford.edu
Abstract: This article proposes a hybrid approach for aorta segmentation in 3D computed tomography (CT) scan
images of abdomen. Aorta segmentation is generally achieved by using anatomical knowledge about shape and
position of aorta along with a specific segmentation algorithm. The knowledge of human anatomy serves as a
prior to segmentation algorithms on which they update their posterior knowledge about aorta boundaries. These
segmentation algorithms normally belong to one of the two broad categories; fast algorithms which exploit
intensity properties and spatial position of aorta in the images; and iterative algorithms which apply optimization
of some cost function to track aorta boundaries. The cost function might either use the anatomical knowledge
prior or shape prior and serves to converge on position or shape of aorta. Both of these categories have their
pros and cons. Fast algorithms offer lower computation cost and small processing time while maintaining an
acceptable segmentation performance. On the other hand, iterative algorithms offer better segmentation
performance at the expense of high computational cost and low speed. Therefore, there is always an inherent
trade-off between segmentation accuracy and computational cost. The proposed approach intends to increase the
segmentation accuracy of intensity-based (fast) approaches while inheriting their low computational cost and high
speed. It basically combines two intensity-based fast algorithms, region growing and connected thresholds, into one
for enhanced performance. The region growing part of the proposed approach incorporates the anatomy of the aorta by
limiting the field of search to a defined region, while the connected thresholds part forces it to stick to the
aorta boundaries. It can be used as a quick standalone segmentation procedure. On the other hand, it can also be
used as pre-segmentation stage for iterative and more accurate approaches. Its implementation on 3D abdominal
CT scan images shows promising results.
Keywords: Aorta segmentation, CT scan images, region growing, connected threshold, false color processing.

1. INTRODUCTION
Medical science has received its fair share of technological advances. Human anatomy is now observable in finer
details owing to advanced medical imaging technology. It has made it possible to clearly see the internal organs
with comparable level of details as external organs can be observed.
One of the most practical and useful outcomes of medical imaging is the ability to visualize organs as
standalone units, which is achieved by isolating an organ in the images by tracing its boundaries. This process is
called segmentation. It can be vital in the diagnosis stage, as segmenting the organ of interest gives physicians an
extra set of information to make more informed decisions. For instance, heart surgeons can have pre-surgery
knowledge about the size and state of the heart. Segmentation can also help identify cancerous cells in the body. It
can help visualize damage done by internal injuries.
The usefulness of segmentation can be extrapolated to the entire human body and related medical conditions. It is
particularly valuable when designing strategies for invasive treatment, such as the insertion of a catheter. In
such cases, segmentation can provide information which is usually not accessible by other methods. For catheter
insertion or fat removal, prior knowledge about the size and structure of the arteries, especially the aorta, is
necessary. Due to its unique shape and location, segmentation of the aorta becomes critical for invasive treatment.
As the aorta has a distinct anatomy, with considerable differences in tissue density from its surroundings, it is
often visualized by Computed Tomography (CT) scan imaging. CT scan imaging is cheaper and faster than 3D MRI.
However, the speed comes at a cost: CT has lower resolution than MRI, which can prove problematic for segmentation.
Most of the time, physicians need a quick estimate of the size and shape of the aorta, so both speed and
segmentation accuracy are desired. Although segmentation of the aorta can be problematic, the aorta does have some
features which help its segmentation. For instance, it has a distinct vascular anatomy: it resembles a tube with an
almost uniform cross section. Moreover, its spatial proximity to other, non-circular anatomies makes it easily
distinguishable from neighboring organs. Despite these helpful features, there is still a trade-off between speed
and accuracy: segmentation algorithms which are more accurate are computationally more expensive, and vice versa. In
order to achieve reasonable segmentation accuracy while keeping the computational cost low, we propose a hybrid
segmentation algorithm which relies on the spatial and intensity properties of pixels. We validate the proposed
algorithm on 3D abdominal CT scan images for the purpose of aorta segmentation. Results show that the proposed
approach achieves reasonable accuracy.
The rest of the paper is organized as follows. Section 2 presents a literature review of automatic segmentation
algorithms. Section 3 presents the proposed approach, and results are given in Section 4. The discussion is covered
in Section 5, while Section 6 concludes the paper.

2. LITERATURE REVIEW
Many aorta segmentation techniques are reported in the literature [1]. These techniques can be broadly classified
into four categories: intensity-based approaches, geometric shape-based approaches, active contours, and
statistical shape-based models.
Intensity-based segmentation methods rely on the assumption that different anatomies are characterized
by different Hounsfield Unit (HU) values. Their main criterion of segmentation is contrast between different
anatomies. Higgins et al. [2] and Niki et al. [3] proposed thresholding methods followed by connected component
analysis. The resultant algorithms have excellent run time performance but limited robustness against variations in
contrast. Revol-Muller et al. [4] proposed a region-growing centric intensity based segmentation algorithm. Their
core concept is to perform region growing iteratively while increasing threshold value in each step. Once the steps
are completed, the most suitable result from the previous stage (region growing) is determined by an assessment
function. These algorithms have good speed but limited robustness.
The geometric shape-based approaches primarily attempt to detect geometric shapes inside the image.
Davies [5] proposed the Hough Transform as a circular edge detector for aorta segmentation, which falls under the
geometric models. This approach works on the assumption that the aorta is a near-perfect circle when seen from the
top view. It suffers setbacks due to the presence of other tubular anatomies. Sato et al. [6] and Frangi et al. [7]
proposed matching shape filters which are convolved with the image to detect shapes of interest. The main drawback
of this method is the requirement of multiple shape filters at various scales and orientations.
Active contours find object contours using parametric curves which deform under the influence of
internal and external forces [8]. These active contours are often referred to as snakes. Lorigo et al. [9] proposed
an energy criterion based on intensity values and local smoothness properties of the object boundary (vessel wall).
Nain et al. [10] combined image statistics and a shape prior in their energy function. Snakes are quite robust but
computationally expensive because they rely on iterative optimization of a cost function.


Statistical active shape models (ASM) [11] iteratively deform to find an object in a new image. Models
based on this approach fall loosely under supervised learning. Lekadir et al. [12] use statistical shape metrics
based on inter-model landmark-based distances. Active Appearance Models (AAM) try to avoid cases of wrong
segmentation by using training-set images to estimate the relationship between model parameter displacements
and the residual errors [13].
All of the above-mentioned approaches are either fast but less robust, or robust but slow (computationally
expensive). There is an inherent trade-off between speed (computational cost) and accuracy.

3. MATERIALS AND METHODS


The aorta is a circular anatomy which is easily distinguished by human operators owing to its distinct
circular cross section. This fact can be exploited, along with the contrast of aortic regions, for automatic aorta
segmentation. Sample aorta images are shown in Fig. 1.

Fig. 1: Sample aorta images. Left; descending aorta visible in lower abdomen. Right; both ascending and
descending aorta visible in upper abdomen.

We propose the following pipeline for automatic aorta segmentation:


1) Pre-process each volume slice for contrast enhancement.
2) Use Hough Transform for automatic seed selection.
3) Use connected threshold assisted by region growing algorithm for slice by slice segmentation.
4) Reconstruct the segmented volume by spline interpolation.
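The four stages above can be sketched as a minimal, self-contained skeleton (NumPy only). The `preprocess` and `select_seed` stand-ins below are illustrative placeholders, not the paper's actual contrast-enhancement or Hough-based routines, and spline reconstruction is omitted:

```python
import numpy as np

def preprocess(slice_2d):
    # Stage 1 stand-in: simple min-max normalization as a placeholder
    # for the paper's (unspecified) contrast enhancement.
    lo, hi = slice_2d.min(), slice_2d.max()
    return (slice_2d - lo) / max(hi - lo, 1)

def select_seed(slice_2d):
    # Stage 2 stand-in: the paper uses Hough-circle peaks; here we take
    # the centroid of the brightest pixels as an illustrative seed.
    bright = np.argwhere(slice_2d > 0.5 * slice_2d.max())
    return tuple(np.round(bright.mean(axis=0)).astype(int))

def segment_slice(slice_2d, seed, radius, t_lo, t_hi):
    # Stage 3: hybrid criterion -- a pixel belongs to the ROI if it lies
    # within `radius` of the seed AND its intensity is in [t_lo, t_hi].
    ys, xs = np.indices(slice_2d.shape)
    near = (ys - seed[0]) ** 2 + (xs - seed[1]) ** 2 <= radius ** 2
    in_range = (slice_2d >= t_lo) & (slice_2d <= t_hi)
    return near & in_range

def segment_volume(volume, radius=5, t_lo=0.8, t_hi=1.0):
    # Stage 4 (spline interpolation of the stack) is omitted here.
    return np.stack([
        segment_slice(p, select_seed(p), radius, t_lo, t_hi)
        for p in (preprocess(s) for s in volume)
    ])

# Synthetic demo: a bright disc ("aorta") inside three dark 32x32 slices.
vol = np.zeros((3, 32, 32))
ys, xs = np.indices((32, 32))
vol[:, (ys - 16) ** 2 + (xs - 16) ** 2 <= 9] = 100.0
mask = segment_volume(vol)
print(mask.sum(axis=(1, 2)))
```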

3.1. Segmentation
We propose a union of the region growing and connected threshold algorithms. Both are fast compared with other
contemporary algorithms. We unite the spatial property of region growing and the pixel-similarity property of
connected thresholds to exploit their strong points. Mathematically speaking, the proposed approach can be
summarized as follows.
Let I = I(x, y), I(x, y) ∈ G, be the image in question, and let R_s be the ROI, defined by

    R_s = { I(x, y) | x ∈ X, y ∈ Y, P(R_s) = true }                 (1)
    (X, Y) = { (x, y) | ((x − x0)^2 + (y − y0)^2)^(1/2) ≤ r }       (2)

where I(x, y) is the gray value of the pixel at location (x, y), (X, Y) is the set of pixel locations in the region
of interest, (x0, y0) is the location of the seed pixel, and r is the expected radius of the ROI. As the proposed
algorithm is a union of the region growing and connected threshold algorithms, P(R_s) is defined as the union of two
properties:

    P1(R_s) = true iff I(x_s, y_s) ∈ T ∧ I(x_{s±1}, y_{s±1}) ∈ T    (3)
    P2(R_s) = true iff (x_s, y_s) ∈ (X, Y)                          (4)
    P1 ∩ P2 = true                                                  (5)

where T is the threshold range and R_s is the region R being segmented. Eqs. (1), (2), and (4) define the region
growing part of the proposed approach, while Eq. (5) formulates the connected threshold part. For the region growing
algorithm, the property P(R_s) can be similar gray levels, shapes, color, or texture. In the connected threshold
part, the main criterion is to examine whether all the connected pixels are in the desired threshold range.
For region growing, we use spatial proximity, based on the assumption that the aorta is a circular vessel, so
its top view is almost a perfect circle. In other words, all pixels in the region of interest are located within a
radius range of the central pixel.
Intra-anatomy tissue densities are uniform while inter-anatomy tissue densities are non-uniform. In terms
of CT scan imaging, this inter-anatomy density difference translates into different intensity ranges for different
organs. To exploit this fact, we use the connected thresholds property along with the spatial proximity property in
the proposed approach: a pixel must have an intensity value in a specific threshold range, and the intensities of
all its immediate neighbors must also be in that range. Once this criterion is satisfied, the pixel is assigned to
the ROI.
The proposed approach is thus a hybrid technique based on the region growing and connected threshold algorithms. It
satisfies the spatial similarity property of the region growing algorithm and, by enforcing neighboring pixels to be
in a specific range, it satisfies the fundamental criterion of the connected thresholds algorithm. The pseudo-code
for the proposed approach is given in Algorithm 1.

Algorithm 1

    Pre-process the input 3-D volume
    Select a seed point, I(x0, y0)
    While (stopping criteria)
        Calculate the distance between the seed point and the current pixel
        If the distance is less than the threshold and I(x_{s±1}, y_{s±1}) ∈ T
            I(x_s, y_s) ∈ ROI
        Else
            Break
        End if
    End while
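Algorithm 1 can be realized as a breadth-first flood fill from the seed. The sketch below is illustrative: the 4-connectivity, the demo image, and the threshold values are our own assumptions, not the paper's settings:

```python
from collections import deque

def hybrid_region_grow(img, seed, radius, t_lo, t_hi):
    """Flood fill from `seed`, accepting a pixel only if it lies within
    `radius` of the seed (region-growing constraint) AND its value lies
    in [t_lo, t_hi] (connected-threshold constraint)."""
    h, w = len(img), len(img[0])
    y0, x0 = seed
    region, queue = set(), deque([seed])
    while queue:
        y, x = queue.popleft()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if (y - y0) ** 2 + (x - x0) ** 2 > radius ** 2:
            continue                      # outside the spatial region
        if not (t_lo <= img[y][x] <= t_hi):
            continue                      # outside the threshold range
        region.add((y, x))
        # Grow toward the 4-connected neighbours.
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region

# Tiny demo: a 5x5 image with a bright plus-shaped vessel cross section.
img = [[0, 0, 0, 0, 0],
       [0, 0, 9, 0, 0],
       [0, 9, 9, 9, 0],
       [0, 0, 9, 0, 0],
       [0, 0, 0, 0, 0]]
print(sorted(hybrid_region_grow(img, (2, 2), radius=2, t_lo=8, t_hi=10)))
```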
The input 3-D volume needs a seed which acts as the initial reference point. The seed can be selected either
manually or automatically. The near-perfect circular cross section of the aorta makes it easy to select the seed
automatically. First we apply the Hough Transform to the 3D volume, slice by slice. Then we evaluate the Hough peaks
in the potential ROI; this gives us candidate initial seed points. We select two points in the slice with the
highest Hough peaks: one corresponds to the ascending aorta while the other corresponds to the descending aorta.
Once we have located the seed points, we estimate the radii of the two corresponding circular regions. These
radii give us an estimate of the region boundary for the region growing algorithm. By evaluating the intensity range
of these circular regions, we also fix the threshold ranges for the connected threshold algorithm. Once we have the
seed points, radii, and threshold ranges, we implement the proposed algorithm in both directions. For instance, if
I(x1, y1) and I(x2, y2) are the seed points on slice k, the algorithm is implemented in the following manner.
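Assuming the scikit-image library (the paper does not name its toolchain), Hough-based seed selection for the two aortic cross sections could look like this sketch on a synthetic slice:

```python
import numpy as np
from skimage.draw import disk
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

# Synthetic slice with two bright circles standing in for the ascending
# and descending aorta (a real input would be a CT slice).
img = np.zeros((128, 128))
img[disk((40, 40), 12)] = 1.0
img[disk((90, 80), 10)] = 1.0

edges = canny(img, sigma=2)
radii = np.arange(8, 16)
acc = hough_circle(edges, radii)
# Keep the two strongest, well-separated peaks: one seed per aorta.
_, cx, cy, found_r = hough_circle_peaks(
    acc, radii, min_xdistance=20, min_ydistance=20, total_num_peaks=2)
seeds = list(zip(cy, cx))       # (row, col) seed points
print(seeds, found_r)
```

The returned radii (`found_r`) double as the region-boundary estimate that the region growing stage needs.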

Algorithm 2


1. Start with slice k (i ← k, j ← k)
2. Segment the regions of the ascending and descending aorta using the seed points
3. For the next slice, the seed point is the center of mass of the currently segmented region
4. For the next iteration, i ← i + 1 and j ← j − 1

The algorithm works in both directions from the seed slice: it tracks the aorta on one side by incrementing the
counter i and on the other side by decrementing the counter j.
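The bidirectional, center-of-mass-driven tracking of Algorithm 2 can be sketched as follows. The `segment_slice` stand-in here is a simplified proximity-plus-threshold test, not the full hybrid segmenter, and the demo volume is synthetic:

```python
import numpy as np

def center_of_mass(mask):
    # Mean position of the segmented pixels, used as the next seed.
    pts = np.argwhere(mask)
    return tuple(np.round(pts.mean(axis=0)).astype(int))

def segment_slice(volume, k, seed, radius=3, t_lo=0.5):
    # Stand-in segmenter: pixels near `seed` whose value exceeds t_lo.
    ys, xs = np.indices(volume[k].shape)
    near = (ys - seed[0]) ** 2 + (xs - seed[1]) ** 2 <= radius ** 2
    return near & (volume[k] >= t_lo)

def track_aorta(volume, seed_slice, seed):
    masks = {seed_slice: segment_slice(volume, seed_slice, seed)}
    # Track upward (counter incremented) and downward (decremented)
    # from the seed slice, reseeding from the last center of mass.
    for step in (+1, -1):
        k = seed_slice
        while 0 <= k + step < len(volume):
            m = masks[k]
            if not m.any():
                break               # lost the vessel; stop this direction
            s = center_of_mass(m)
            k += step
            masks[k] = segment_slice(volume, k, s)
    return masks

# Demo: a slightly tilted bright tube through a 5-slice synthetic volume.
vol = np.zeros((5, 16, 16))
ys, xs = np.indices((16, 16))
for k in range(5):
    vol[k][(ys - (6 + k)) ** 2 + (xs - 8) ** 2 <= 4] = 1.0
masks = track_aorta(vol, 2, (8, 8))
print(sorted(masks))
```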

4. RESULTS
4.1. Dataset
The proposed approach is tested on a set of 3D volumes of abdominal CT scan images. Each 3D volume consists of 69 or
73 individual slices, depending upon the patient. Each slice is 512 × 512 pixels, saved in 16-bit unsigned integer
format.
4.2. Results
Seed selection is automatic, so the initial seed value and the respective radius and threshold-range values depend
entirely on the initial seed point. There is therefore no fixed seed point or radius: the seed point can lie in the
starting slice or a middle slice, and likewise the radius can be 20 pixels or 200 pixels. Once every slice has been
processed by the segmentation algorithm, the segmented ROIs of all slices are reconstructed into a 3D volume for
comparison. We manually segmented a 3-D volume by tracing the aorta boundaries in every slice, then segmented it
with the proposed approach for comparison. The manually and automatically segmented 3-D volumes are shown in Fig. 2.

Fig. 2: Segmented Volumes after 3-D reconstruction (a) Manually segmented volume (b) Automatic segmented
volume

5. DISCUSSIONS
In order to quantify our visual results, we generated a ground truth data set of our own and then compared it to the
automated segmentation results discussed in the previous section. The accuracy is measured in two ways: one, by
counting hits and misses for the aorta by comparing it to the manually annotated volumes; and two, by finding the
overall (average) overlap between the annotated and detected aorta.
For the purpose of finding standard precision, recall, and accuracy, we have defined the corresponding
terms as follows.


Jaccard Index: ratio of the intersection to the union of two sets (here, the automatically segmented aorta and the
manually segmented aorta) [14].
True positive: the Jaccard Index is greater than 80%.
False positive: aorta detected in a slice where there is none.
False negative: the Jaccard Index is less than 80%, or no aorta is detected at all in a slice where there
should be one.
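Using the definitions above, the per-slice Jaccard index and the TP/FP/FN labeling can be sketched as follows (the masks here are synthetic, for illustration only):

```python
import numpy as np

def jaccard(a, b):
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def classify_slice(auto_mask, manual_mask, thresh=0.8):
    """Label one slice as TP / FP / FN / TN per the 80% Jaccard criterion."""
    if manual_mask.any() and auto_mask.any():
        return "TP" if jaccard(auto_mask, manual_mask) >= thresh else "FN"
    if auto_mask.any():       # detection where there is no aorta
        return "FP"
    if manual_mask.any():     # aorta missed entirely
        return "FN"
    return "TN"

manual = np.zeros((8, 8), bool); manual[2:6, 2:6] = True   # 16 px ground truth
auto   = np.zeros((8, 8), bool); auto[2:6, 2:5]   = True   # 12 px detection
print(round(jaccard(auto, manual), 2), classify_slice(auto, manual))
```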
Performance measures of the proposed pipeline are summarized in Table 1.

    Precision: 98.5 %    Recall: 98 %    F1: 98.25 %

Table 1: Performance parameters
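The F1 value in Table 1 is the harmonic mean of the reported precision and recall, which can be checked directly:

```python
# Consistency check of Table 1: F1 = 2PR / (P + R).
p, r = 0.985, 0.98
f1 = 2 * p * r / (p + r)
print(round(100 * f1, 2))
```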
The proposed segmentation algorithm is also fast in terms of computation time. On an Intel Core i7
processor with 8 GB RAM running Windows 10, it took only 1.5 seconds to segment a 3D volume of 69
individual slices, each of 512 × 512 16-bit unsigned integer pixels.
The proposed approach performs well on the two criteria it was designed for: to deliver acceptable
segmentation performance and to be faster than computationally expensive algorithms. Its comparison to the
level-set algorithm (accurate but computationally expensive) and to region growing (fast but not very accurate)
indicates that its precision and recall are close to those of the level-set, while its computational
time is comparable to that of region growing. Table 2 summarizes this comparison.

                          Region Growing    Level-set    Proposed
    Precision             96.5 %            100 %        98.5 %
    Recall                97.5 %            100 %        98 %
    Time per frame (sec)  2.16              57.4         0.49

Table 2: Comparison with level-set based segmentation and region growing segmentation

6. CONCLUSION
A hybrid approach for aorta segmentation in 3D abdominal CT scan images is proposed and validated in this
article. The approach falls under the general category of intensity-based approaches. It has the spatial properties
of region growing and the speed of thresholding algorithms. Without parameter optimization, it gives 91-95%
segmentation accuracy on CT scan images.

7. FUTURE WORK


We have found that the contrast of a volume frame can be increased, which in turn helps produce a better
segmentation. We are working on a dedicated pre-processing stage to increase inter-organ contrast as much as
possible, and we will evaluate the effect of the proposed approach at each step. Moreover, we will explore the use
of supervised and semi-supervised learning approaches for better segmentation.

8. REFERENCES

[1] C. Kirbas and F. Quek, 2004, "A review of vessel extraction techniques and algorithms," ACM Computing Surveys
36(2), 81-121.
[2] W.E. Higgins, W.J.T. Spyra, and E.L. Ritman, 1989, "Automatic extraction of the arterial tree from 3-d
angiograms," in IEEE Conference Engineering in Medicine and Biology Society, vol. 2, 563-564.
[3] N. Niki, Y. Kawata, H. Satoh, and T. Kumazaki, 1993, "3D imaging of blood vessels using x-ray rotational
angiographic system," in IEEE Conference Nuclear Science Symposium and Medical Imaging, vol. 3, 1873-1877.
[4] C. Revol-Muller, F. Peyrin, Y. Carrillon, and C. Odet, 2002, "Automated 3D region growing algorithm based on an
assessment function," Pattern Recognition Letters, vol. 23, 137-150.
[5] E. Davies, 1987, "A high speed algorithm for circular object detection," Pattern Recognition Letters, vol. 6,
323-333.
[6] Y. Sato, S. Nakajima, H. Atsumi, T. Koller, G. Gerig, S. Yoshida, and R. Kikinis, 1997, "3D multi-scale line
filter for segmentation and visualization of curvilinear structures in medical images," in J. Troccaz, E. Grimson,
and R. Mosges, eds., Proc. CVRMed-MRCAS97, LNCS, 213-222.
[7] A. Frangi, W. Niessen, K. Vincken, and M. Viergever, 1998, "Multiscale vessel enhancement filtering," in W.M.
Wells III, A. Colchester, S. Delp, eds., Proc. of the 1st International Conference on Medical Image Computing and
Computer-Assisted Intervention, MICCAI98, vol. 1496 of LNCS, 130-137.
[8] M. Kass, A. Witkin, and D. Terzopoulos, 1988, "Snakes: Active contour models," International Journal of Computer
Vision 1(4), 321-331.
[9] L. Lorigo, O. Faugeras, W. Grimson, R. Keriven, R. Kikinis, A. Nabavi, and C. Westin, 2001, "Curves: Curve
evolution for vessel segmentation," Medical Image Analysis, vol. 5, 195-206.
[10] D. Nain, A. Yezzi, and G. Turk, 2004, "Vessel segmentation using a shape driven flow," in Proc. of the 7th
International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI04, vol. 3216 of LNCS,
51-59.
[11] T. Cootes, C. Taylor, D. Cooper, and J. Graham, 1995, "Active shape models - their training and application,"
Computer Vision and Image Understanding 61(1), 38-59.
[12] K. Lekadir, R. Merrifield, and Y. Guang-Zhong, 2007, "Outlier detection and handling for robust 3-D active
shape models search," IEEE Transactions on Medical Imaging 26(2), 212-222.
[13] T. Cootes, G. Edwards, and C. Taylor, 2001, "Active appearance models," IEEE Transactions on Pattern Analysis
and Machine Intelligence 23, 681-685.
[14] Jaccard, Paul, 1901, "Étude comparative de la distribution florale dans une portion des Alpes et des Jura,"
Bulletin de la Société Vaudoise des Sciences Naturelles, 37: 547-579.


On the performance of digital image processing technique for modeling human actions

Faisal Imtiaz, N. Minallah, Muniba Ashfaq, Waleed Khan, M. Jawwad
Department of Computer Systems Engineering (DCSE)
University of Engineering and Technology, Peshawar (UET Peshawar)
ABSTRACT

Automatic action recognition is one of the challenging tasks in recent research. Complex human actions are composed
of simpler motion patterns. This research gives a new direction for identifying human actions by comparing different
classifiers in a parent-child node model system. Each human action originates from basic human movements: actions
such as running and walking originate from leg movements, while hand waving and hand clapping originate from hand
movements. In our proposed parent-child node model system, basic human movements reside at the top (parent) level and
human actions reside at the lower (child) level. In this research, Spatio-Temporal Interest Points (STIP) are
extracted using SIFT features from 50 consecutive frames of each action. The covariance of the STIP features among
action frames is used as the feature vector for classification using KNN, SVM, and Naïve Bayes. The analysis
concludes that each classifier's behavior and performance differ per action. Results in Table 1 show that the KNN
classifier outperforms the SVM and Naïve Bayes classifiers, with accuracies of 96.7%, 95%, and 71.7%, respectively,
at the parent level. Table 2 shows that, at the child level, SVM performs better for leg-originated movements, i.e.
walking and running, while KNN performs better for hand-originated movements, i.e. hand waving and clapping.

Index Terms— Spatio-Temporal Interest Points (STIP), SIFT features, KNN, SVM, Naïve Bayes

1. INTRODUCTION

As the development of science has helped in various aspects of human life, it is also playing a key role in the field
of computer vision. Computer vision is an enormous field of research, and visual surveillance in particular is
currently a focus of researchers, owing to its compatibility with many areas of life. Visual surveillance is mostly
used for monitoring purposes and is employed in many domains, such as traffic monitoring, educational institutions,
prison surveillance systems, monitoring of restricted areas, city monitoring departments, airport monitoring systems,
and many more. Visual surveillance systems have a centralized monitoring area where the feeds from all the monitored
areas are fed into a display, which is watched by people authorized for those areas [10]. These authorized personnel
have to monitor many screens at a time, each displaying a different amount of traffic, and react to a specific event
after noticing it. Due to the limitations of human nature, such surveillance may sometimes cause errors in detecting
specific activities, known as human error. Such monitoring systems also incur an enormous labor cost. Errors of this
kind are common in visual surveillance systems and can lead to mishaps whose magnitude depends on the environment
under surveillance. To overcome such errors, researchers have proposed incorporating artificial intelligence into
visual surveillance systems. These systems work by analyzing different types of movements of the subject and
categorizing these movements for further decisions. Such visual surveillance systems are capable of detecting a
specific action or object in a stream of video. Variations in the environment of visual surveillance also play a key
role in categorizing the behavior of an individual or group of individuals. The methodology of behavior detection in
visual surveillance is subdivided into two steps: the first step is the detection or extraction of features from the
visual feed; the second step is the classification of the extracted features into different types of actions using
classification algorithms, also known as classifiers.
This paper is divided into different sections where section 1 is introduction, 2 discusses some of the related work in this field,
3,4 and 5 is combination of feature extraction from the visual feed, the proposed methodology of current research work and
technical details of it, respectively. In Section 6 results of the conducted simulations are discussed. Section 7 concludes this
paper and literates the future work applicable in this field.


2. RELATED WORK

Qian et al. [1] performed feature extraction using background subtraction and blob detection; using motion energy features
and a multiclass SVM classifier with a binary tree architecture, they recognized human activities. Chakraborty et al. [2] used a novel
selective STIP feature detector, with SVM used for classification. Chen [3] proposed an approach for
human behavior detection based on limb motion and moving body parts, where behavior is detected with a trained SVM
classifier. Wang and Yeh [4] used a bag-of-words model with features extracted by SIFT in combination with a CNN; they used
SVM (linear, polynomial and additive chi-square kernels), KNN, random forest and their combination with CaffeNet for further
accuracy. Schuldt et al. [5] implemented complex motion pattern recognition using space-time invariant features and
SVM classification. Ouanane et al. [6] proposed SURF feature extraction and classification based on SVM. Bagheri
et al. [7] performed action recognition using the motion of skeleton joint locations. Farzad and Niaraki Asli [8] performed recognition
and classification based on hidden Markov models. Our approach to detecting human actions is based on comparing
different classifiers.

3. FEATURE EXTRACTION FOR HUMAN ACTION RECOGNITION

The STIP function is selected for feature extraction in action recognition. Spatio-temporal interest points are based on spatio-
temporal corners of the image, and the points are found through scale-invariant SIFT features. SIFT features are used to extract features from
images of each action; they can detect changes in an image independently of brightness, noise, orientation and scale.
SIFT features are scale invariant and are extracted from the difference of Gaussians at maxima and minima key locations in scale space.
Scale-invariant feature transform points are extracted by constructing a scale space using Gaussians. By calculating the difference of
Gaussians and locating the extrema of this difference with sub-pixel localization, we find the key point locations.
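As an illustration of this construction, the Gaussian scale space and its differences can be sketched with NumPy and SciPy (a minimal sketch only; the paper does not specify an implementation, and sigma, k and the number of levels here are assumed values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, sigma=1.6, k=2 ** 0.5, levels=4):
    """Build a Gaussian scale space L(x, y, sigma) and return the
    difference-of-Gaussian images D = L(k*sigma) - L(sigma)."""
    L = [gaussian_filter(image.astype(float), sigma * k ** i) for i in range(levels)]
    return [L[i + 1] - L[i] for i in range(levels - 1)]

# A single bright blob produces its strongest DoG response at its centre.
img = np.zeros((32, 32))
img[16, 16] = 255.0
D = dog_pyramid(img)
peak = np.unravel_index(np.abs(D[0]).argmax(), D[0].shape)
print(peak)  # extremum located at the blob centre (16, 16)
```

In a full detector this extremum search would run over all scales and be followed by the sub-pixel refinement of equation (3).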
For an image I(x,y) and a variable-scale Gaussian G(x,y,σ), the scale space L(x,y,σ) is obtained by convolving the image with the
variable-scale Gaussian.
L(𝑥, 𝑦, σ) = G(𝑥, 𝑦, σ) ∗ I(𝑥, 𝑦) (1)

where key points are located through the difference of Gaussians. By taking the difference of two smoothed images and locating
scale-space extrema, D(x,y,σ) is calculated as
D(𝑥, 𝑦, σ) = L(𝑥, 𝑦, kσ) − L(𝑥, 𝑦, σ) (2)

where k is the scale factor between adjacent images. To find the maxima and minima of D(x,y,σ), each point is compared with its 8
neighbors at the same scale and its 9 neighbors at the scales one above and one below. A maximum or minimum value at such a point defines
an extremum. Key points are then localized to sub-pixel accuracy and low-contrast points are removed:

z = −(∂²D/∂x²)⁻¹ (∂D/∂x) (3)

where z is the location of the extremum. If the value at z is lower than a threshold value, the point is removed. In our experimental
setup the top 40 extrema are selected among all features. The orientation of these 40 extrema is calculated after further smoothing the
image with a Gaussian. The gradient magnitude m and direction θ are calculated as

m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²] (4)

θ(x, y) = tan⁻¹[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))] (5)

Key point descriptors are aligned using their orientation and weighted by a Gaussian of width 1.5 times the key point scale. Using 16
histograms, arranged in a 4×4 grid with 8 orientation bins each, the main direction vectors form the feature vector. These top 40 features
and their direction vectors help in identifying the action. Features are matched using the nearest-neighbor relation with
minimum Euclidean distance. SIFT features are used to extract the feature data for training and testing.
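Equations (4) and (5) can be sketched directly with NumPy finite differences (a minimal illustration, not the paper's implementation; np.arctan2 is used here as the quadrant-aware form of tan⁻¹):

```python
import numpy as np

def gradient_mag_ori(L):
    """Gradient magnitude and orientation of a smoothed image L,
    using the finite differences of equations (4) and (5)."""
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[1:-1, :] = L[2:, :] - L[:-2, :]   # L(x+1, y) - L(x-1, y)
    dy[:, 1:-1] = L[:, 2:] - L[:, :-2]   # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.arctan2(dy, dx)           # quadrant-aware tan^-1(dy / dx)
    return m, theta

# A ramp L(x, y) = x has constant gradient 2 along x and orientation 0.
L = np.tile(np.arange(8.0), (8, 1)).T
m, theta = gradient_mag_ori(L)
print(m[4, 4], theta[4, 4])  # 2.0 0.0
```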


4. KTH DATASET
The KTH dataset contains 4 different human actions: walking, running, hand waving and hand clapping. These actions are
performed by 25 people in 4 scenarios: outdoors, outdoors with different clothing, outdoors with different zoom factors, and indoors.
All videos have a homogeneous background and a static camera position. The frame rate of the camera is 25 frames per second. The
provided frame sequences are downsampled to a resolution of 160×120 pixels. The average video length is 4 seconds. Both test
data and training data are selected randomly from the dataset. With 25 persons, 4 scenarios and 4 actions, the
total set is 25×4×4 = 400 videos. Most of the features of an action are covered within 50 frames of each video.
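The dataset bookkeeping above, together with the 70%/30% split applied later, works out as follows (a small sanity check; the split ratio is the one stated in the proposed approach):

```python
# KTH dataset counts as used in this work.
persons, scenarios, actions = 25, 4, 4
videos = persons * scenarios * actions      # 400 clips in total
per_action = videos // actions              # 100 clips per action

# Assumed 70/30 split per action, as stated in the text.
train, test = int(per_action * 0.7), int(per_action * 0.3)

# Each parent class (legs / hands movement) groups two actions.
parent_test = 2 * test
print(videos, per_action, train, test, parent_test)  # 400 100 70 30 60
```

The 60 test clips per parent class and 30 per child action match the counts reported in tables 1 and 2.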

5. PROPOSED APPROACH

For the last few decades, research on detecting human actions has mainly been based on feature extraction
algorithms applied to motion patterns and body skeleton motions. Our approach to micro-behavior detection is based on
SIFT features. Extraction of features for action recognition depends on many factors such as orientation, camera scaling, body
motion and external environmental factors. The STIP algorithm solves the problem of temporal alignment, shows
outstanding invariance to geometric transformations, and is independent of the orientation, scale, alignment, rotation and
viewpoint of the image. As the detected SIFT features are local, they are independent of the segmentation problem, and SIFT gives
significant results under illumination variation and background clutter. The popular Harris feature detector can detect high
intensity variation in both space and time by using spatio-temporal corners. STIP features are extracted from the first 50 frames
for action recognition. Each action contains 50 frames and each frame contains the top 40 SIFT features. The SIFT features from the 50
frames of each action clip are further processed using the covariance of features among the 50 frames. For a single action clip, we
obtain a single covariance matrix, as shown in figure 1.

Figure 1: Top row shows 40 feature points per frame, 50 frames per action, and the cascaded 40×50 feature set,
respectively; bottom row shows the 40×50 SIFT feature vector for a complete action

These features could be taken directly as the feature vector, but to make the process more effective we generate a covariance
matrix for each activity. In past research, covariance was taken among features within a single frame but not among consecutive
frames of a single activity; a further difference in this research is that we take the covariance among the 50 consecutive frames
of a single activity. These covariance matrices, one per activity, are then used as input features to the classification
system for recognition. The main contribution of our proposed approach is the use of a parent-child node model
of human actions, which recognizes human actions by first categorizing them into two main categories, i.e., leg movement
and hand movement. Based on these parent categories, human actions are refined by falling into one of them.
Parent nodes are further subdivided into child nodes: leg movement is sub-categorized as running and walking, while hand
movement is sub-categorized as hand waving and hand clapping. An action is not recognized until it reaches a child node.
The covariance matrices created for each action are divided into a 70% training dataset and a 30% testing dataset. SVM, KNN and Naïve
Bayes classifiers are used for classification.
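The covariance descriptor described above can be sketched as follows (a minimal illustration assuming one 50-frame by 40-feature array per clip; using np.cov with frames as the variables is our reading of "covariance among 50 frames"):

```python
import numpy as np

def action_descriptor(features):
    """Covariance descriptor for one action clip.

    `features` is the 50 x 40 array of per-frame SIFT feature values
    (50 frames, top 40 interest points each). The covariance is taken
    among frames, giving one 50 x 50 matrix per clip."""
    assert features.shape == (50, 40)
    return np.cov(features)  # rows (frames) are treated as the variables

rng = np.random.default_rng(0)
clip = rng.normal(size=(50, 40))   # stand-in for real SIFT features
C = action_descriptor(clip)
print(C.shape)  # (50, 50)
```

The 50×50 matrix (or its flattened form) is what would then be fed to KNN, SVM or Naïve Bayes.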

6. SVM, KNN AND NAÏVE BAYES CLASSIFICATION


In this research we observe that each human action originates from the movement of some body part. The basic
movements are leg movement and hand movement. The KTH dataset is used for walking, running, hand waving and hand
clapping; all four actions fall under one of the basic body movements. In our research, we explored a new direction
for recognizing human actions in which each action is identified by first recognizing its class: child
categories are identified after recognizing their parent categories. Actions are recognized using KNN, SVM and Naïve Bayes
classifiers. KNN is a lazy classifier in which the function is only approximated locally and all computation is deferred until
classification; nearer neighbors contribute more than farther ones, and the neighbors of a new data point determine its class.
SVM is a supervised learning algorithm that classifies data and can perform regression
analysis; it is a non-probabilistic linear classifier, and apart from linear classification SVM can perform
non-linear classification by mapping inputs to a high-dimensional space. Naïve Bayes is a probabilistic classifier with a
strong independence assumption between features; it is highly scalable, requiring a number of parameters linear in the number of
features, and it learns in linear time rather than by expensive iterative approximation. Our research project is divided into
different phases. To extract STIP points from the 50 frames, SIFT features are taken. For action recognition, only the top 40
SIFT features and their STIP points are selected from each frame, as shown in figure 2. Each activity contains 50 frames with 40
features each. We divided the feature vectors into two parts: all feature vectors for hand waving and hand clapping are grouped
together in the hand movement category, and similarly running and walking are grouped together in the leg movement category.
At the parent level two categories are recognized: leg movement or hand movement. KNN, SVM and Naïve Bayes classifiers
are trained for each parent category using the training data and evaluated with the testing data. Once the parent categories are
recognized, we have two child sub-categories each: hand movement splits into hand waving and hand clapping, while leg movement
splits into walking and running. The main purpose of using different
classifiers is to analyze the behavior of each classifier in recognizing a specific action. The dataset is analyzed with each classifier,
and the results show variation in accuracy for each category. The final strategy is to integrate the best classifier, based on the results
at each level, to obtain more accurate recognition.
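The parent-child routing described above can be sketched with scikit-learn on toy stand-in data (illustrative only; the cluster centers, feature dimension and classifier settings here are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Toy stand-in features: one well-separated cluster per action.
rng = np.random.default_rng(1)
centers = {"walk": 0.0, "run": 2.0, "wave": 6.0, "clap": 8.0}
X = np.vstack([rng.normal(c, 0.3, size=(40, 5)) for c in centers.values()])
y = np.repeat(list(centers), 40)
parent = np.where(np.isin(y, ["walk", "run"]), "legs", "hands")

# Parent node: legs vs hands movement.
root = KNeighborsClassifier(3).fit(X, parent)
# Child nodes: one classifier per parent category.
legs = SVC().fit(X[parent == "legs"], y[parent == "legs"])
hands = KNeighborsClassifier(3).fit(X[parent == "hands"], y[parent == "hands"])

def predict(x):
    """Route a sample through the parent node, then the matching child node."""
    x = x.reshape(1, -1)
    node = legs if root.predict(x)[0] == "legs" else hands
    return node.predict(x)[0]

print(predict(np.full(5, 2.0)))  # "run"
```

The same routing lets a different classifier be plugged in at each node, which is the comparison this paper carries out.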

(a) Hand Clapping Frames

(b) Running Frames

(c) Walking Frames

(d) Hand Waving Frames


Figure 2: STIP features (magnitude and direction) shown by red lines for frames of different actions
7. EXPERIMENTAL RESULTS


Classification results are generated by implementing SVM, KNN and Naïve Bayes. The second phase is the analysis phase, in
which the results are compared and analyzed to find the best classifier. Four actions from the KTH dataset are selected for our project.
Experiments are performed using 50 frames from each video; 50 frames show one complete action. We created the
covariance matrix of each video considering 50 frames with 40 features each, the covariance being checked among the 50 frames of a video.
So for 400 videos (100 videos for each action), 400 covariance matrices are generated. These covariance matrices are further
sub-divided into a 70% training dataset and a 30% testing dataset. Two main categories are created (hand movement and leg
movement). Hand movement is further subdivided into two child categories, i.e., hand waving and hand clapping, and leg
movement is further subdivided into two child categories, i.e., walking and running. Hand waving and clapping form the subset
of the training dataset for hand movement, and walking and running form the subset of the training dataset for leg movement.
Results are generated at each node. Details of the accuracy of SVM, KNN and Naïve Bayes at the parent level and child level,
achieved using the single classifier approach, are given in tables 1 and 2 respectively.

Table 1: Accuracy using KNN, SVM and Naïve Bayes (single classifier) at the parent level

Classifier            Category Type    Total no.   Correct           Accuracy for   Classifier
                                       of videos   identifications   each action    accuracy
KNN (Parent)          Legs movement    60          58                96.7%          96.7%
                      Hands movement   60          58                96.7%
SVM (Parent)          Legs movement    60          58                96.7%          95%
                      Hands movement   60          56                93.3%
Naïve Bayes (Parent)  Legs movement    60          60                100%           71.7%
                      Hands movement   60          26                43.3%

Table 2: Accuracy using KNN, SVM and Naïve Bayes (single classifier) at the child level

Classifier             Category Type   Total no.   Correct           Accuracy for   Classifier
                                       of videos   identifications   each action    accuracy
KNN (Child 1)          Walking         30          26                86.7%          73.3%
                       Running         30          18                60%
KNN (Child 2)          Hand Waving     30          13                43.3%          51.7%
                       Hand Clapping   30          18                60%
SVM (Child 1)          Walking         30          30                100%           75%
                       Running         30          15                50%
SVM (Child 2)          Hand Waving     30          24                80%            51.6%
                       Hand Clapping   30          7                 23.3%
Naïve Bayes (Child 1)  Walking         30          1                 3.3%           50%
                       Running         30          29                96.7%
Naïve Bayes (Child 2)  Hand Waving     30          30                100%           50%
                       Hand Clapping   30          0                 0%

[Bar chart: classifier accuracy at each level — Parent: KNN 96.70%, SVM 95%, Naïve Bayes 71.70%; Child 1: KNN 73.30%, SVM 75%, Naïve Bayes 50%; Child 2: KNN 51.70%, SVM 51.60%, Naïve Bayes 50%]
Figure 3: Results of using single classifier approach

8. CONCLUSION


Results show that KNN accuracy is better than that of SVM and Naïve Bayes for the parent category, so KNN is selected at the parent level.
At child node 1, for walking and running, the SVM result is much better than KNN and Naïve Bayes, so it can be selected at
this node. At child node 2, for hand waving and hand clapping, either KNN or SVM can be selected, as shown in table 2. At the child level
the overall accuracy of a classifier may vary, but for a specific action at the child level the most accurate classifier is used for
correct identification. Figure 3 shows a summary of the results of the single classifier approach.

9. FUTURE WORK

Human actions are composed of simple motion patterns. In this research we modeled a parent-child structure for
modeling human actions that originate from simple body movements. This structure is first modeled with a single classifier
approach, using different classifiers standalone; each classifier has different performance and accuracy for each action.
In future work we will use a multi-classifier approach and compare the results with the single classifier approach.
More efficient techniques can also be implemented in the hierarchical structure, using a hierarchical multi-classifier approach, to
further improve the results.
10. REFERENCES

[1] Qian, Huimin, et al. "Recognition of human activities using SVM multi-class classifier." Pattern Recognition Letters
31.2 (2010): 100-111.
[2] Chakraborty, Bhaskar, et al. "Selective spatio-temporal interest points." Computer Vision and Image Understanding
116.3 (2012): 396-410.
[3] Chen, Yanhua. "Human Behavior Recognition Method based on Image Sequences." International Journal of Signal
Processing, Image Processing and Pattern Recognition 9.2 (2016): 189-202.
[4] Wang, Max, and Ting-Chun Yeh. "Human Action Recognition Using CNN and BoW Methods."
[5] Schuldt, Christian, Ivan Laptev, and Barbara Caputo. "Recognizing human actions: a local SVM approach." Pattern
Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on. Vol. 3. IEEE, 2004.
[6] Ouanane, A., A. Serir, and N. Djelal. "Recognition of aggressive human behavior based on SURF and SVM."
Systems, Signal Processing and their Applications (WoSSPA), 2013 8th International Workshop on. IEEE, 2013.
[7] Bagheri, Mohammad Ali, et al. "A framework of multi-classifier fusion for human action recognition." Pattern
Recognition (ICPR), 2014 22nd International Conference on. IEEE, 2014.
[8] Farzad, Adeleh, and Rahebeh Niaraki Asli. "Recognition and classification of human behavior in Intelligent
surveillance systems using Hidden Markov Model." International Journal of Image, Graphics and Signal
Processing 7.12 (2015): 31.
[9] http://www.nada.kth.se/cvap/actions/
[10] Rajpoot Q.M., Jensen C.D. (2014) Security and Privacy in Video Surveillance: Requirements and Challenges. In:
Cuppens-Boulahia N., Cuppens F., Jajodia S., Abou El Kalam A., Sans T. (eds) ICT Systems Security and Privacy
Protection. SEC 2014. IFIP Advances in Information and Communication Technology, vol 428. Springer, Berlin,
Heidelberg


Calibration of a Numerical Model of an RCC Tee Beam Bridge Deck with a Scaled
Physical Model for Fatigue Analysis
Muhammad Arsalan Khattak1, Muhammad Fahad2, Muhammad Adeel Arshad3, Mohammad Adil4 and
Adil Rafiq5

1. Researcher, Department of Civil Engineering, University of Engineering and Technology, Peshawar, Pakistan
2. Assistant Professor, Department of Civil Engineering, University of Engineering and Technology, Peshawar, Pakistan
3. Assistant Professor, Department of Civil Engineering, University of Engineering and Technology, Peshawar, Pakistan
4. Assistant Professor, Department of Civil Engineering, University of Engineering and Technology, Peshawar, Pakistan
5. Lecturer, City University of Science and Information Technology

Abstract

Most bridges in Pakistan are constructed of reinforced concrete (RC), which deteriorates with
time, mainly due to fatigue. The fatigue test is a time-consuming and expensive procedure, especially when
applied to large elements such as RC bridge decks. Studies using numerical models solve this issue to some
extent, but calibration of numerical models of RC structures has always been a challenge. This paper
presents the experimental and numerical studies performed in order to develop a calibrated model which
can be used to obtain the fatigue life of an RC Tee beam bridge deck by applying load cycles to obtain stress
ranges for critical regions and then using S-N curves to find the fatigue life. This approach is a fast and
cost-effective alternative to fatigue tests.

Keywords— Numerical modeling, Abaqus, Reinforced concrete bridge, Fatigue, Nonlinear finite
element analysis

1. Introduction

Whenever a structure is subjected to repeated cyclic loading, fracture occurs at the most vulnerable
location. These repeated cyclic loads are usually well below the static limits, causing a phenomenon
called fatigue. Depending on the usage of the structure, fatigue is induced by a variety of loading
conditions; heavy traffic loading on bridges, wind loading on high-rise buildings and ocean waves on
offshore structures are a few of many examples. The number of cycles that a structure sustains before
undergoing this type of failure is called the fatigue life of that structure. The fatigue life of a structure
depends upon the frequency of the cycles, the amplitude of the cycles and the material composition of the
structure [1, 2].

Bridges are important and among the most costly components of roadway infrastructure. Many existing
bridges are nowadays required to carry larger loads than they were originally designed for. This
essentially means that bridges today are exposed to many load cycles with significant amplitudes.
Structural damage in bridges may occur if they are subjected to such cyclic loading below static strength


limits. Fatigue life evaluation is an important step both in the design of new structures and in the
evaluation of existing structures.

1.1 Problem Description

An important share of highway bridges in Pakistan is fatigue relevant, for the following reasons:
• The traffic volume has increased, so the traffic conditions are no longer those for which the
bridges were initially designed.

• Vehicle weights have increased because of the introduction of new large vehicles that carry
more goods at a time. These vehicles with greater carrying capacity were not considered during
the initial design of the bridges.

• Some vehicles do not follow the limits set by the National Highway Authority.

• Aging and environmental factors such as corrosion reduce the fatigue resistance of existing
bridges.

2. Methodology

The work in this paper is divided into the following stages:

• Selecting a specific reinforced concrete bridge for the case study and acquiring its geometric and
material properties. Bagh e Naran Bridge, a roadway bridge situated at Hayatabad, Peshawar,
Khyber Pakhtunkhwa, Pakistan, is selected in this case.

• Only a single T-beam girder of the bridge is considered. A reduced scale model of the T-beam
girder of the actual bridge is made in the laboratory and a static load test is performed on it to
find its load-deformation curve.

• For finite element modeling, a static test model is built in Abaqus and calibrated with the
experimental static test results.

• The fatigue life of the calibrated static model can be found by applying a specific load cycle of
interest and obtaining stress ranges for the critical regions. These stress ranges and S-N curves
from the literature can be used to find the number of cycles to failure.
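The last step can be sketched with a Basquin-type S-N relation (a generic illustration, not the paper's specific curve; the constants C and m below are hypothetical and must be replaced by those of the S-N curve adopted from the literature):

```python
def cycles_to_failure(stress_range_mpa, C=2.0e12, m=3.0):
    """Basquin-type S-N relation: N = C / (delta_sigma)^m.

    C and m are hypothetical curve constants for illustration; real
    values come from the S-N curve of the material (reinforcement or
    concrete) taken from the literature."""
    return C / stress_range_mpa ** m

# With m = 3, doubling the stress range cuts fatigue life by a factor of 8.
n1 = cycles_to_failure(100.0)   # cycles at a 100 MPa stress range
n2 = cycles_to_failure(200.0)   # cycles at a 200 MPa stress range
print(n1 / n2)  # 8.0
```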

3. Description of Bridge


Bagh e Naran Bridge, a roadway bridge situated at Hayatabad, Peshawar, Khyber Pakhtunkhwa, Pakistan, is
selected in this case. It has a simply supported 12-meter span. The cross-sectional dimensions
obtained for this bridge are shown in figure 1:

Figure 1: Cross-sectional dimensions of Bagh e Naran Bridge. All dimensions are in millimeters

3.1 Prototype

Prototype girder of Bagh e Naran Bridge is shown in figure 2:


Figure 2: Prototype girder of Bagh e Naran Bridge

3.2 Reduced Scale Model

For laboratory purposes, a 1:4 reduced scale model was made. The linear dimensions are reduced by a factor of
1/4; since area is the square of the linear dimension, the reinforcement area is reduced by (1/4)², which is
1/16. This is shown in table 1:

Tee-Beam                         Prototype                   Model

Scale                            1                           1/4
Beam Length 12,000 mm 3,000 mm
Beam Depth 1,070 mm 267.5 mm
Flange Width 1,850 mm 462.5 mm
Flange Thickness 190 mm 47.5 mm
Stem Thickness 470 mm 117.5 mm
Beam Long. Bar (Bottom) Ø25mm - 6 bars Ø7.4mm - 4 bars
Shear Stirrups Ø12mm @ 150mm c/c Ø5.06mm @ 115mm c/c
Flange Long. Bar (T&B) Ø10mm @ 150mm c/c Ø5.06mm @ 154mm c/c
Flange Trans. Bar (T&B) Ø16mm @ 150mm c/c Ø5.06mm @ 59mm c/c
Reinforcement Yield Strength 414 MPa (≈60 ksi) 414 MPa (≈60 ksi)
Concrete Compressive Strength 16.6 MPa (≈2400 psi) 16.6 MPa (≈2400 psi)
Maximum Coarse Aggregate Size 50 mm 12.5 mm
Concrete Cover 50 mm (2 in.) 12.5 mm (0.5 in.)
Table 1: ¼ Reduced scale model T-beam details
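The similitude arithmetic behind table 1 can be checked in a few lines (an illustration of the 1/4 length and 1/16 area scaling; `scaled_bar_area` is a helper introduced here, not part of the paper):

```python
import math

# Geometric similitude for the 1:4 model (lengths / 4, areas / 16).
SCALE = 4

def scaled_length(mm):
    """Model length corresponding to a prototype length."""
    return mm / SCALE

def scaled_bar_area(diameter_mm, n_bars):
    """Total prototype bar area scaled by 1/SCALE**2; the model then
    uses whatever bar size and count best approximate this area."""
    area = n_bars * math.pi / 4 * diameter_mm ** 2
    return area / SCALE ** 2

print(scaled_length(12000))              # 3000.0 mm beam length, as in table 1
print(round(scaled_bar_area(25, 6), 1))  # target ~184.1 mm^2; 4 bars of Ø7.4 mm give ~172 mm^2
```

The slight shortfall (172 vs 184 mm²) reflects the practical choice of available bar sizes for the model.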


Concrete and reinforcement test results for the reduced scale model are given in tables 2, 3 and 4:

Specimen  Sample  Nominal Diameter  Yield Strength  Avg. Yield Strength  Ultimate Strength  Avg. Ultimate Strength  Percentage  Avg. Percentage
                  mm (in)           MPa (ksi)       MPa (ksi)            MPa (ksi)          MPa (ksi)               Elongation  Elongation
1         1       6.35 (0.25)       426 (61.77)                          600 (87)                                   16.4
1         2       6.35 (0.25)       511 (74.095)    453 (65.6)           663 (96.135)       630 (91.35)             18.0        15.9
1         3       6.35 (0.25)       421 (61.045)                         625 (90.625)                               13.3
2         1       3.175 (0.125)     503 (72.95)                          565 (81.93)                                15.6
2         2       3.175 (0.125)     521 (75.62)     505 (73.24)          599 (86.94)        572 (83.01)             11.3        13.2
2         3       3.175 (0.125)     491 (71.26)                          553 (80.15)                                12.7
Table 2: Reinforcement test results for the model

Specimen  Sample  Nominal Diameter  Mass             Avg. Mass        Effective Diameter
                  mm (in)           kg/m (lb/ft)     kg/m (lb/ft)     mm (in)
1         1       6.35 (0.25)       0.3452 (0.2320)
1         2       6.35 (0.25)       0.3552 (0.2387)  0.3425 (0.2302)  7.40 (0.29)
1         3       6.35 (0.25)       0.3273 (0.2200)
2         1       3.175 (0.125)     0.1552 (0.1043)
2         2       3.175 (0.125)     0.1607 (0.1080)  0.1582 (0.1063)  5.06 (0.20)
2         3       3.175 (0.125)     0.1585 (0.1065)
Table 3: Reinforcement properties of the model


Sample  Mix Design   W/C Ratio  28 Days Compressive   Avg. 28 Days Compressive
                                Strength MPa (psi)    Strength MPa (psi)
1       1 : 1.5 : 3  0.60       16.94 (2457)
2       1 : 1.5 : 3  0.60       14.10 (2045)          16.41 (2381)
3       1 : 1.5 : 3  0.60       18.22 (2642)

Table 4: Model concrete test results

Formwork was prepared using the above dimensions and concreting was done using the above mix
design. The slump noted was 3.25 inches.

Figure 3: (a) Formwork Preparation and (b) Concreting of reduced scale RC T-Beam Girder

4. Laboratory Static Test

A static test was performed on the reduced scale T-beam girder in order to find its load-deformation curve.
For this purpose, the T-beam was placed in the straining frame. The total length of the beam is 3 meters (3000
mm). Roller and hinge supports made of steel were provided at the ends so that the beam becomes

simply supported. They are provided 150 mm from each end of the beam. Two steel
rollers are placed on the top surface of the beam, near its mid span, such that the distance between
them is 2 feet (610 mm). A steel I-beam was placed above them so that the load is symmetrically
distributed to the T-beam. In this way we get two applied loads and two reactions, creating a four-point
loading situation. The schematic diagram can be seen in figure 4:

Figure 4: Four Point Loading Scenario in Static Test
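For an elastic sanity check of this setup, the classical four-point-bending deflection formula can be applied (standard beam theory, not the paper's calibration method; the load P and the moment of inertia I below are placeholder values):

```python
def midspan_deflection(P, a, L, E, I):
    """Elastic mid-span deflection for symmetric four-point bending:
    two point loads P, each at distance a from its support,
    delta = P * a * (3 L^2 - 4 a^2) / (24 E I)."""
    return P * a * (3 * L ** 2 - 4 * a ** 2) / (24 * E * I)

# Test geometry: 3000 mm beam, supports 150 mm from each end -> L = 2700 mm;
# load points 610 mm apart -> a = (2700 - 610) / 2 = 1045 mm.
L, a = 2700.0, 1045.0
E = 20690.0   # MPa, the concrete elastic modulus used later in the Abaqus model
I = 1.0e9     # mm^4, placeholder; the transformed-section value should be used
delta = midspan_deflection(10e3, a, L, E, I)  # mm, for P = 10 kN per load point
print(round(delta, 3))
```

This only bounds the initial, uncracked response; the measured curve in figure 6 departs from it once cracking begins.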

A displacement transducer was placed at the mid span of the T-beam so that the mid-span
displacement could be recorded over time. The data acquisition system was connected to the load cell and the
displacement transducer to collect the required data. The test setup can be seen in figure 5:

Figure 5: Static Test Setup

As this is a displacement-controlled test, displacement is applied through the straining frame, and the
reaction force and mid-span deflection of the T-beam girder are acquired through the data acquisition system
connected to the load cell and displacement transducer, respectively. The mid-span deflection and load-carrying
capacity of the T-beam are plotted as shown in figure 6:


Figure 6: Load versus mid span displacement graph for Laboratory Static Test

5. Finite Element Modeling using Abaqus

Finite Element Analysis (FEA) is a numerical method that provides solutions to problems
that would otherwise be difficult to obtain [3]. The same reduced scale T-beam girder is now
modeled in the finite element software Abaqus 6.14 and calibrated against the static test performed in
the laboratory. The Concrete Damage Plasticity material model available in Abaqus is chosen for the present
study and is described briefly in the following section. The explanation is mainly based upon [4, 5].

5.1 Concrete Damage Plasticity Model

The Concrete Damage Plasticity model is implemented in both Abaqus/Standard and Abaqus/Explicit.
It provides a general capability for the analysis of concrete and is also suitable for masonry and other
quasi-brittle materials under monotonic, cyclic or other types of dynamic loading. Two main failure
mechanisms are assumed in this model, i.e., tensile cracking and compressive crushing of the concrete. The
evolution of the yield surface is controlled by the tensile and compressive equivalent plastic strains [6]. The
response of concrete in tension and compression as characterized by the concrete damage plasticity model is
shown in figures 7 and 8:


Figure 7: Behavior of concrete under uniaxial compression [4]

Figure 8: Behavior of concrete under uniaxial tension [4]

Starting from any point on the strain-softening branch of the tensile or compressive stress-strain curve, the
response of concrete and similar materials is weakened, owing to damage, i.e., degradation of the elastic
stiffness of the material. The elastic stiffness degradation on the strain-softening branch of the stress-strain
curves seen in figures 7 and 8 is characterized by two damage variables, dt
and dc, which can take values from zero to one: zero represents undamaged material and one
represents fully damaged material. E0 is the initial (undamaged) elastic stiffness of the material, and ε̃c^pl,


ε̃t^pl, ε̃c^in and ε̃t^in are the compressive plastic strain, tensile plastic strain, compressive inelastic strain and
tensile inelastic strain, respectively [6]. The stress-strain relations under uniaxial loading are given by:
σt = (1 − dt) E0 (εt − ε̃t^pl) (1)

σc = (1 − dc) E0 (εc − ε̃c^pl) (2)

The size of the yield surface is determined by the effective uniaxial cohesion stresses:
σ̄t = σt / (1 − dt) = E0 (εt − ε̃t^pl) (3)

σ̄c = σc / (1 − dc) = E0 (εc − ε̃c^pl) (4)
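Equations (1)-(4) can be sketched numerically as follows (a minimal illustration; the strain and damage values are arbitrary, and E0 is the elastic modulus quoted in section 5.2):

```python
def damaged_stress(eps, eps_pl, E0, d):
    """Uniaxial CDP stress, equations (1)-(2): sigma = (1 - d) E0 (eps - eps_pl)."""
    return (1.0 - d) * E0 * (eps - eps_pl)

def effective_stress(sigma, d):
    """Effective (undamaged) stress of equations (3)-(4): sigma / (1 - d)."""
    return sigma / (1.0 - d)

E0 = 20690.0  # MPa, elastic modulus of the concrete used in this model
# Arbitrary example state: total strain 2e-3, plastic strain 0.5e-3, 40% damage.
s = damaged_stress(2.0e-3, 0.5e-3, E0, d=0.4)
print(s, effective_stress(s, 0.4))  # damaged vs effective stress in MPa
```

Dividing out (1 − d) recovers the undamaged response E0 (ε − ε̃^pl), which is exactly the relation between equations (1)-(2) and (3)-(4).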

Figure 9: Yield surface in plane stress [4]

5.2 Concrete Properties in Abaqus Model

Density: 2400 kg/m3

Elastic Modulus: 20690 MPa

Poisson’s ratio: 0.2


For defining the concrete damage plasticity material model, some input parameters are required. The
plasticity tab of concrete damage plasticity consists of:

Dilation Angle: ψ (in degrees, measured in the p-q plane at high confining pressure).

Eccentricity: 𝜖 a small positive number that defines the rate at which the hyperbolic flow potential
approaches its asymptote. (Default 𝜖 = 0.1)

fb0/fc0: that is σb0/σc0, the ratio of initial equi-biaxial compressive yield stress to initial uniaxial
compressive yield stress. (Default σb0/σc0 = 1.16)

K: the ratio of the second stress invariant on the tensile meridian to that on the compressive meridian at initial
yield. Kc must satisfy the yield condition, thus 0.5 < Kc ≤ 1. (Default Kc = 2/3)

Viscosity Parameter: μ is used for the viscoplastic regularization of the constitutive equations in
Abaqus/Standard analyses. (Default μ = 0.0)

After various trials, it was noted that the default values give good results, yielding the plasticity parameters shown in table 5. The model was sensitive to the dilation angle and the mesh size; a mesh size of 25 mm gave good results.

Dilation Angle | Eccentricity | fb0/fc0 | K | Viscosity Parameter
37° | 0.1 | 1.16 | 2/3 | 0

Table 5: Plasticity Parameters
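In an Abaqus input deck, the parameters of table 5 are entered on the *CONCRETE DAMAGED PLASTICITY keyword. A sketch of the corresponding data line (field order: dilation angle, eccentricity, fb0/fc0, K, viscosity parameter) would be:

```
*CONCRETE DAMAGED PLASTICITY
** psi, eccentricity, fb0/fc0, K, viscosity parameter
37., 0.1, 1.16, 0.667, 0.
```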

For the compressive behavior of concrete, the stress-strain curve was obtained from a laboratory test, as shown in figure 10:

Figure 10: Concrete stress strain curve in compression


Tensile strength was assumed to be 10 percent of the compressive strength, and the post-peak behavior was described by an empirical equation taken from [7]. Trials were conducted by changing the constant 1000 in that equation, and the best results for the model were obtained with a value of 2500.
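The exact empirical expression for the post-peak branch is given in [7]; purely as an illustration of the role of the tuned constant, the following sketch assumes a generic exponential tension-softening form (the functional form, names and numbers here are assumptions, not taken from [7]):

```python
import math

def tension_softening(strain, f_t, eps_cr, decay=2500.0):
    """Illustrative tensile stress-strain law: linear up to the cracking
    strain eps_cr, then exponential decay controlled by `decay` (the role
    played by the tuned constant 1000 -> 2500 in the text)."""
    if strain <= eps_cr:
        return f_t * strain / eps_cr                       # linear branch
    return f_t * math.exp(-decay * (strain - eps_cr))      # softening branch

f_t, eps_cr = 2.8, 1.4e-4   # assumed cracking stress (MPa) and strain
peak = tension_softening(eps_cr, f_t, eps_cr)
post = tension_softening(eps_cr + 1e-3, f_t, eps_cr)
```

A larger decay constant makes the post-peak branch drop faster, which is the behavior the trials above were tuning.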

Figure 11: Proposed effective tensile stress strain curve for concrete [7]

Figure 12: Tensile curve for concrete used in the model

5.3 Reinforcement Steel Properties in Abaqus Model

Density: 7850 kg/m3

Poisson’s Ratio: 0.3


Uniaxial Curve: The stress-strain curve used was obtained from laboratory testing, as given in figure 13:

Figure 13: Stress strain curve of reinforcement

5.4 Modeling

The four-point-loading static test was modeled in the non-linear finite element software package Abaqus. The steel reinforcement and the geometry of the laboratory specimen built in Abaqus can be seen in figures 14 and 15:

Figure 14: Abaqus steel reinforcement


Figure 15: Abaqus model geometry

Using the embedded-region constraint in Abaqus, the steel bars are embedded in the concrete and share its degrees of freedom, creating a perfect bond between concrete and steel. Concrete is modeled with the C3D8R element, an 8-node linear brick element with a reduced-integration formulation, while steel uses T3D2 elements, 2-node linear 3-D truss elements. The test is displacement-controlled in the vertical direction, and the mesh size used is 25 mm. The meshed model can be seen in figure 16:

Figure 16: Meshed Model


5.5 Results

Load versus mid-span deflection for the numerical model was obtained and compared with the laboratory static test, as shown in figure 17. It can be seen that the numerical results agree well with the experimental results. If actual experimental data from uniaxial compression, uniaxial tension, biaxial and triaxial tests were available for the concrete, the concrete damage plasticity input parameters could be determined more precisely and Abaqus could predict the behavior more accurately [8].

Figure 17: Experimental versus Numerical Load Displacement Curves

6. Conclusions and Recommendations

The numerical model predicts the behavior of the laboratory static test with a good degree of accuracy. It is also observed that finite element modeling gives better results when the material properties and geometry of the real test are specified in detail than when information is missing. The calibrated model presented here can be used further for applying a specific load cycle of interest and hence finding the stress ranges at critical locations. These stress ranges, together with S-N curves from the literature, can then be used to estimate the fatigue life of the above T-beam girder.


7. References

[1] Olsson, K., & Pettersson, J. (2010). Fatigue assessment methods for Reinforced Concrete Bridges in
Eurocode. Comparative study of design methods for railway bridges.

[2] Raymond, L., & Browell, P. E. (2006). Predicting fatigue life with ansys workbench. In International
ANSYS Conference.

[3] Habeeba, A., Sabeena, M.V., & Anjusha, R. (2015). Fatigue evaluation of Reinforced Concrete
Highway Bridge. International Journal of Innovative Research in Science, Engineering and Technology,
4(4), 2561-2569.

[4] Abaqus 6.14 Theory Guide

[5] Abaqus 6.14 Analysis User’s Guide, Volume III: Materials

[6] Sümer, Y., & Aktas, M. (2015). Defining parameters for concrete damage plasticity
model. Challenge Journal of Structural Mechanics, 1(3), 149-155.

[7] Hwang, L., & Rizkalla, S. (1983). Effective tensile stress-strain characteristics for reinforced
concrete. Proceedings of the Canadian Society of Civil Engineering, 129-147.

[8] Jankowiak, T., & Lodygowski, T. (2005). Identification of parameters of concrete damage plasticity
constitutive model. Foundations of civil and environmental engineering, 6(1), 53-69.


RISK TOLERANT K-MIN SEARCH ALGORITHM


Muhammad Kashif Nawaz, Iftikhar Ahmad
Department of Computer Science and Information Technology
University of Engineering and Technology, Peshawar

ABSTRACT

In the k-min search problem, the main idea is to buy k units of an asset at the minimum possible purchasing cost. It works as follows: at every instant of time t, a price qt is observed, and after each price is revealed the investor must decide either to accept the price and buy a single unit, or to wait for the next price; at the final offered price the investor must buy all the remaining units.
In previous studies, the developed schemes either ignore risk or try to mitigate it; those techniques therefore lack the ability to manage risk and are not well suited to the real world.
Our approach provides an algorithm with the functionality to manage risk rather than merely mitigating it: we consider the risk-reward framework of Al-Binali and map that technique onto our scheme. The performance of the algorithm is analyzed using the competitive analysis approach.

Index Terms: k-Min Search, Risk Reward Framework, Online Algorithms

1. INTRODUCTION
In recent times a lot of work has been done on the time series search problem. A popular example of time series search is the secretary problem, in which n candidates are interviewed for a particular position, one at a time, in random order [1]. After each interview, the interviewer decides whether to select the candidate or not. Once a candidate is rejected, she cannot be recalled, and the process continues until the interviewer selects a candidate or rejects n − 1 and must accept the last, nth, candidate. The interviewer needs to be very careful while making decisions, because every decision made during the process is irrevocable.

Time series search [10] is further classified into two types: the minimization problem and the maximization problem [2]. In the minimization problem the player looks for the minimum price among the n offered prices, whereas in the maximization problem the player looks for the maximum price over the interval.

Minimization (maximization) problems branch further into search for a single unit and search for multiple units [3]. In the single-unit, or 1-min (1-max), search problem the player looks for a single minimum (maximum) value over the entire input horizon, while in the multi-unit, or k-min (k-max), problem the player buys k units at the minimum possible price.

k-min (max) search is an online problem [4]: the entire input is not known in advance, and the player is unaware of the coming prices [5]. The prices unfold sequentially, and the player has to decide whether to buy a unit at the offered price qi or not, except at the last offered price, where she must buy all the remaining units. Any decision made at time t is irrevocable, except at the final time T, where she has to accept whatever price is offered.

To quantify the performance of online algorithms, the competitive ratio c is used. Competitive analysis contrasts the performance of an online algorithm, which must serve an unpredictable sequence of inputs, finishing each request without being able to see the future inputs, against the optimal offline algorithm, which has total information about the whole input sequence. The competitive ratio is the ratio of the performance measure of the online algorithm to the performance measure of the offline algorithm.


In the real world, input fluctuates and does not remain at one extreme. For example, a player who wants to buy a few units gets new prices daily: sometimes low, sometimes high. As the player is unaware of the future prices, there is risk involved. The competitive ratio is risk-averse in nature and is not feasible for real-world markets, where players (investors) want to manage risk rather than mitigate it; we need algorithms that provide the flexibility to manage the risk.

In this work we propose a risk-aware algorithm for the k-min search problem. The proposed scheme gives the capacity for risk management to the financial specialist (investor). The rest of the paper is organized as follows. In Section 2, preliminaries, the initial problem setting and some definitions are discussed. Section 3 presents the literature review. Section 4 explains the proposed algorithm. Section 5 concludes the work and gives directions for future work.

2. PRELIMINARIES AND PROBLEM SETTING


We present the initial setting and notations necessary to understand the problem. The details are as under:

Notations:
• k: number of units the player wants to buy
• m: minimum offered price
• M: maximum offered price
• T: total length of the investment span
• qt: the price offered at time t
• F: forecast value
• Δ: a small change
• r1: competitive ratio if the forecast is false
• r2: competitive ratio if the forecast is true
• t: tolerance level
• λ: the particular instant at which the forecast comes true

2.1 Online Algorithms:


Formally, an online algorithm receives requests in a sequence σ = σ(1), . . . , σ(n). These requests must be served in the order of occurrence. When serving request σ(t), an online algorithm does not know the future requests σ(t′) with t′ > t. Serving each request incurs a cost, and the objective is to minimize the overall cost.

2.2 Offline Algorithms:


An algorithm is offline if it can create a feasible output given the whole input sequence. We denote an optimal offline algorithm by OPT. By definition, for each input sequence I, the output of OPT is OPT(I) = sup_{O ∈ F(I)} U(I, O).

2.3 Competitive Ratio


The competitive ratio is the performance measure of the online algorithm ALG(F, I) for a particular problem with respect to the performance measure of the optimal offline algorithm, denoted OPT-ALG(F, I):

c = ALG(F, I) / OPT-ALG(F, I)
2.4 k-min Search Problem
k-min search is the minimization problem in which the player's objective is to minimize the total cost of k units. At the beginning of the game, the player has enough wealth to buy all the units. The player calculates k reservation prices and buys a unit whenever the reservation price qi*, where i = {1, 2, 3, . . . , k}, is met, except on the last day T, when she has to buy all the remaining units, if any. The game continues until the player has bought all k units.
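The buying rule described above can be sketched in a few lines of Python; the reservation prices are taken as given, since their computation depends on the particular scheme (names are illustrative):

```python
def k_min_search(prices, reservation, k):
    """Buy one unit whenever the current price meets the next reservation
    price; buy all remaining units at the final offered price."""
    bought, cost = 0, 0.0
    for t, q in enumerate(prices):
        if t == len(prices) - 1:              # last day: forced purchase
            cost += q * (k - bought)
            bought = k
        elif bought < k and q <= reservation[bought]:
            cost += q                         # reservation price met: buy one
            bought += 1
    return cost

# Example: 3 units, reservation prices 10, 12, 14
total = k_min_search([11, 9, 15, 12, 13], [10, 12, 14], 3)
```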


3. LITERATURE REVIEW

The secretary problem has been studied from mathematical, statistical and probabilistic perspectives. It concerns finding the best candidate among the total number of candidates called for interview, while the interviewer is unaware of the candidates' skills in advance; a limitation of this setting is that it is a very restricted way of selecting the best candidate. In 1961, Lindley appears to have been the first to solve the problem in a scientific journal [6]. This essential problem has an unusually simple solution. First, one demonstrates that consideration can be constrained to the class of rules that, for some integer r > 1, reject the first r − 1 candidates and then pick the next candidate who is best in the comparative ranking of the candidates observed so far. For such a rule, the probability φ(r) of selecting the most suitable candidate is 1/n for r = 1, and ((r − 1)/n) Σ_{j=r}^{n} 1/(j − 1) for r > 1.
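The probability φ(r) can be evaluated directly; a small Python sketch, using the closed form above (the optimal cutoff is known to approach n/e for large n):

```python
def phi(r, n):
    """Probability of selecting the best of n candidates when the first
    r-1 are rejected and the next relatively-best candidate is chosen."""
    if r == 1:
        return 1.0 / n
    return (r - 1) / n * sum(1.0 / (j - 1) for j in range(r, n + 1))

n = 100
best_r = max(range(1, n + 1), key=lambda r: phi(r, n))  # near n/e ~ 37
```

The maximizing cutoff gives a success probability close to 1/e ≈ 0.368, the well-known asymptotic value.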

A remarkable literature is available on search problems, particularly max search, owing to the seminal work of El-Yaniv et al. [3][7][8]. Max search is classified into single-unit (1-max) and multi-unit (k-max) problems. The special case of 1-max search is a profit maximization problem in which the player looks for the highest possible price at which to convert one unit into the maximum reward. Applying competitive analysis, El-Yaniv et al. [9] proposed the reservation price policy. The reservation price technique assumes prior knowledge of the lower and upper bounds m, M of the offered prices and computes a reservation price q* = √(Mm); the first accepted price must be at least q1*. El-Yaniv et al. also discussed the case where the player can invest multiple units at an instant to achieve her goal. In this multi-unit setting the player needs to increase her profit by placing more units at opportune points in time. It is assumed that the asset/wealth is divisible into parts, and that the online investor has full information about the range, that is, m and M, where m is the least conceivable value and M is the highest conceivable value. The proposed scheme is also known as the "threat-based technique," as it assumes the investor faces the threat that an adversary could drop the price to m (the minimum expected price) without any prior indication. Different variants of this problem were considered based on the available prior information.
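The reservation price rule for 1-max search is simple to state in code (a minimal sketch; names are illustrative):

```python
import math

def reservation_price(m, M):
    """Reservation price q* = sqrt(M * m) from the policy described above."""
    return math.sqrt(M * m)

def one_max_search(prices, m, M):
    """Sell at the first price >= q*; otherwise accept the last price."""
    q_star = reservation_price(m, M)
    for p in prices[:-1]:
        if p >= q_star:
            return p
    return prices[-1]

# With m = 25 and M = 100, the reservation price is q* = 50
sold_at = one_max_search([40, 55, 70], 25, 100)
```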

The 1-max search problem was widened by Lorenz et al. [5] to the multi-unit search setting, also known as the k-max (k-min) search scheme. They assumed that the units are not divisible and that the player knows m and M. The suggested k-min (k-max) search scheme computes k reservation prices. The first price that is at most (at least) q1* is accepted, and one unit is purchased (sold). Likewise, the investor waits for a new price that is at most (at least) q2*, purchases (sells) one more unit, and the process continues through q(k−1)*. On the final day T, the player purchases (sells) all the leftover units at the last offered price qT.

The approach of Lorenz et al. limits the investor to buying only one unit whenever the condition is favourable. The other drawback is that when k > T their scheme (LPS) is not applicable, which motivates further improvement. Zhang et al. [2] further explained the process and derived a generalized k-max search technique. The k-min search problem of Lorenz et al. [5] was further improved by Iqbal and Ahmad [11].

Iqbal and Ahmad [11] took k-min search to the next level and proposed a more sophisticated way to solve the issues present in the previous work. They suggested a scheme named the "hybrid k-min search algorithm." The reservation price policy and the principles of the threat-based algorithm of El-Yaniv et al. [3] are combined so that the scheme performs even better while keeping the threat from the adversary in play at all times. As in the reservation price scheme, k reservation prices are computed, and units are purchased only when the reservation prices are satisfied. The adversary may increase the price to the highest possible value (the upper bound M), as in the threat-based approach, without any earlier signal, and sustain it there for the remaining investment span. For more related work, we refer the reader to [12].

The drawback of the hybrid scheme is that it does not accommodate risk; a further enhancement of the proposed techniques is to manage risk within the approach.

We further improve on this by mapping in the risk tolerance scheme of Al-Binali [13]. That work improves the competitive ratio by introducing a risk-reward framework and, upon it, gives a new algorithm for the financial trading problem studied by El-Yaniv et al. Risk is handled through a forecast: when the forecast is false, the player trades as little as the tolerance level t allows; when the forecast, namely that a price of m + Δ will be reached, comes true, that is, the price qi ≥ m + Δ, a new competitive ratio r2* is calculated and the player trades as much as the tolerance level allows.


The table below shows the comparison of studies and the approaches studied in this paper.

Scheme | Working | Drawbacks
k-min search | Buy k units by minimizing the overall total cost; purchase a unit when the desired price is met. | Based on the assumption of the ideal case; not applicable in the real world.
Lorenz k-min search scheme (LPS) | Minimize the total cost of buying k units of an asset, but buy only one unit whenever the reservation price is met. | The player is restricted to buying only one unit even if the reservation price is satisfied; if k > T, LPS is not applicable.
Hybrid k-min search | A combination of the reservation price and threat-based schemes proposed by El-Yaniv; the idea is to buy more than one unit if the price is met. Also applicable if k > T. | The scheme lacks a way to manage the risk involved in the real world.

Table 3.1: Comparison of the different techniques

4. PROPOSED SCHEME

Proposed Algorithm

In this research, our objective is to develop an online, forecast-based, risk-aware scheme for multi-unit (k-min) search. We address this problem in an online manner. In an online k-min search scheme, an investor wants to purchase k units of an asset within a finite investment span. At every instant t = {1, 2, 3, . . . , T}, the player is offered a price qt. The player calculates k reservation prices q1*, q2*, q3*, . . . , qk*. The first acceptable price is q1*, at which the player buys a unit. If the offered price is less than or equal to the calculated reservation price, the player buys one unit; if the reservation price is not met, the player rejects that price and does not buy any unit. On the last day, however, the player must accept the offered price and buy the remaining unit(s), if any.

We assume there is a forecast that in the future we will observe a price which is at most m + Δ; nevertheless, the forecast is not necessarily true all the time, it might also be wrong. When the forecast is wrong, the player buys a single unit or none, but when it is true, the player buys more than one unit to attain the minimum cost of the assets.

The game ends when the investor has purchased all k units, or on the last day T, when the investor has to purchase all the remaining units irrespective of the price offered at qT. The investor (player) may or may not have prior information about the investment span T; we consider a finite input sequence. The algorithm is shown below in figure 4.1.


Figure 4.1: Algorithm
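Since figure 4.1 is not reproduced in this text, the following is only an illustrative sketch of the forecast-based rule described above; the function name, the fixed multi-unit allocation on a true forecast, and the input conventions are assumptions rather than the authors' specification:

```python
def risk_tolerant_k_min(prices, reservation, k, forecast_price, aggressive_units=2):
    """Buy cautiously (one unit) against reservation prices while the
    forecast is unconfirmed; once a price <= forecast_price (= m + delta)
    appears, the forecast has come true and several units are bought."""
    bought, cost = 0, 0.0
    for t, q in enumerate(prices):
        if t == len(prices) - 1:                 # forced final purchase
            cost += q * (k - bought)
            bought = k
        elif q <= forecast_price:                # forecast came true
            units = min(aggressive_units, k - bought)
            cost += q * units
            bought += units
        elif bought < k and q <= reservation[bought]:
            cost += q                            # cautious single-unit buy
            bought += 1
    return cost
```

How many units are bought on a true forecast would, in the authors' scheme, be governed by the tolerance level t; the fixed `aggressive_units` here only stands in for that rule.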

5. CONCLUSION
We presented an online k-min search scheme for the situation where a financial specialist needs to purchase k units of an asset while also accommodating risk and minimizing the aggregate purchasing cost. The proposed scheme permits the online player (investor) to purchase more than one unit when conditions are favourable and fewer units when they are not. It achieves better performance and a preferable competitive ratio compared with the conventional approach. Future work can include competitive analysis of the proposed algorithm, and experimental evaluation of the proposed scheme on real-world data.


REFERENCES

[1] Y. S. Chow, H. Robbins, and D. Siegmund. The Theory of Optimal Stopping. Dover, New York, 1971.
[2] W. Zhang, Y. Xu, F. Zheng, and M. Liu (2011) Online algorithms for the general k-search problem. Information
Process Lett 111(14):678–682. doi:10.1016/j.ipl.2011.04.008
[3] R. El-Yaniv, A. Fiat, R. Karp, and G. Turpin, "Optimal search and one-way trading online algorithms". Algorithmica 30, 101–139 (2001)
[4] Y. Xu, W. Zhang, F. Zheng. “Optimal algorithms for the online time series search problem”, Theoretical Computer
Science 412 (2011) 192–197
[5] J. Lorenz, K. Panagiotou, A. Steger, “Optimal algorithms for k-search with application in option pricing”. Algorithmica
55(2), 311–328 (2009)
[6] T S. Ferguson,” Who Solved the Secretary Problem, Statistical Science,1989”, Vol. 4, No. 3, 282-296
[7] P. Damaschke, P. H. Ha, P. Tsigas, “Online search with time-varying price bounds". Algorithmica, 55(4), 619–642.
(2009).
[8] G.-H. Chen, M.-Y. Kao, Y.-D Lyuu, H.-K Wong, "Optimal Buy-and-Hold strategies for financial markets with bounded
daily returns". SIAM Journal on Computing, 31(2), 447–459 (2001).
[9] R. El-Yaniv, “Competitive Solutions for Online Financial Problems”. ACM Computing Surveys, 30(1), 28–69. (1998)
[10] P.Schroeder, R.Dochow, G.Schmidt, Optimal solutions for the online time series search and one-way trading problem
with interrelated prices and a profit function, ScienceDirect March 2018.
[11] J. Iqbal, and I. Ahmad. "Optimal online k-min search." EURO Journal on Computational Optimization 3.2 (2015): 147-
160.
[12] E. Mohr, I. Ahmad, and G. Schmidt, “Online algorithms for conversion problems: A survey”, Surveys in Operations
Research and Management Science 19 (2), 87-104, 2014
[13] S.al-Binali, “A Risk–Reward Framework for the Competitive Analysis of Financial Games”. Page 99-155. Algorithmica
(1999).


INTRODUCTION TO CIVILMATICS

Mustafa Ayub*, I.U. Khalil


*Department of Civil engineering
City University of Science and Information Technology
Peshawar, Pakistan

ABSTRACT

The applications of mathematics in construction and infrastructure development are well known. In civil engineering there are several applications which use basic mathematics as a tool. These applications are widely used for solving engineering problems of different complexity levels, such as finite element analysis for structural design, fluid-flow equations along with the behavior of particles in liquid and air flow, and the determination of various sectional properties of heavy structures. Moreover, trigonometry, calculus, algebra and geometry are incorporated as cognitive tools for different parametric analyses in fluid mechanics and solid mechanics. In this paper, a concise and compendious outline of a course of mathematics for civil engineering is identified, with linkage to routine engineering problems of different complexity levels.
Index Terms— Civilmatics

1. INTRODUCTION

Skyscrapers, bridges, roads, dams, tunnels and canals, along with hydraulics and irrigation, are domains of civil engineering. They require many mathematical calculations based on prior knowledge and have ultimately raised living standards across the world. Infrastructure and industrial development play an important role in a country's human development index (HDI). Today civil engineering is often synchronized with environmental engineering, as countries across the world focus more on sustainable buildings and on protecting their structures from damage caused by environmental disasters such as wind, floods and earthquakes; this has also resulted in academic expansion of the curriculum, for example in earthquake technology. In fluid mechanics we study the analysis of the motion of fluid particles, types of flow, the energy equation, the continuity equation, velocity potential, stream functions, etc., all of which can only be defined through applications of mathematics such as the Eulerian and Lagrangian approaches to particle analysis, the curl of a function, and vector calculus.

Multi-storey structures, rigid road pavements, and the analysis of particles under different forces such as seismic forces and wind pressure usually employ the finite element method (FEM) for parametric analysis; software packages largely based on FEM, such as STAAD Pro, ETABS, ANSYS and SAP, are also used for these purposes. Many important theorems of engineering mechanics, such as Lami's theorem, Varignon's theorem and Euler's theorem, are also key tools for analyzing particles at rest or in motion under the action of external forces. In undergraduate civil engineering courses, mathematics is used not only as a tool but also in basic geometric calculations, such as determining properties like area, volume, centroid, moment of inertia, radius of gyration and slenderness ratio for different sections (rectangles, triangles, circles and other compound sections), which in the long term results in efficient structural design and analysis. Therefore, object-oriented mathematics is now becoming a need of every civil engineer.

2. MATHEMATICS OF CIVIL ENGINEERING

The undergraduate civil engineering coursework consists of: (a) surveying, which deals with angles, elevations and areas; (b) engineering mechanics, after which students are able to calculate the center of gravity, moment of inertia, radius of gyration, etc.; (c) building construction, which requires knowledge of ratios; (d) structural analysis, which enables students to examine parameters of structures such as trusses, beams and frames, and concepts such as virtual work, energy methods and influence lines; (e) mechanics of solids, which mostly covers internal forces and deformations in solids, stresses and deflections in beams, and column theory and its analysis; and (f) fluid mechanics, which usually involves the study of the properties of fluids, fluid dynamics and dynamic similitude, resulting in better analysis of the flow of compressible and incompressible fluids in closed conduits.

3. PROPOSED SCHEME OF STUDIES OF CIVILMATICS

A course outline of mathematics for civil engineering is proposed, based on the magnitude and percentage of usage for solving engineering problems. Geometry, calculus, algebra, probability and statistics, trigonometry and linear algebra are the ingredients of the newly proposed series of courses, which suggests only topics and techniques that are preferably followed and used in the civil engineering domain. The outlines are:

1. Angles, along with triangle calculations, are used in surveying for the calculation of elevations and for distance finding. The trigonometry portion is supposed to have a share of 20%.

2. Of all the engineering disciplines, civil engineering uses geometry the most. Geometry means "to measure the earth," and civil engineers involved in surveying are doing precisely that. More generally, geometry involves the analysis of shapes and the relationships among them. Civil engineers must know how to design and assemble shapes to construct buildings, dams, bridges, tunnels, highway systems, etc. [1]; the geometry of those shapes determines their functionality. The proposed geometry outline consists of descriptive geometry, in which structures and objects are visualized for design and analysis. Besides this, fractal geometry is used by civil engineers to analyze entities such as the friction between objects, the clumping of materials, and the porosity of soils, all of which involve geometric patterns that repeat on an ever-decreasing scale. The share of descriptive and fractal geometry is supposed to be 35%, based on market demand and trends.

3. Quantification of risk and safety is among the most important parameters for the sustainability of structures and designs, yet it is often overlooked by design engineers. A civil engineer must have the skill and knowledge to develop a sustainable civil structure with fuzzy compensation for any external disturbance caused by environmental actions [2]. Topics like the frequency interpretation of probability, probability theory, discrete probability and combinatorics are used in the quantification of risk and safety. Probability and statistics must cover these topics, along with numerical problems that help students solve engineering problems in failure analysis. The share of these topics should be at least 10%, with analytical treatment of practical case studies of design failures from around the world.

4. The rate of change of a function covers velocity, acceleration and optimization, which are mostly used in canal survey and design through the calculation of water discharge, speed and acceleration; the current meter works on the same principle. Differentiation and integration, with second- and third-iteration techniques, are used for calculating surface areas, volume integrals and partial derivatives [3]. Laplace transformation techniques, suitably demystified, help students solve difficult higher-order differential equations [4]. These topics are prerequisites for many engineering courses and should therefore be covered at the start of the degree program. The share of calculus should not be less than 25%.

5. Quadratic forms of equations, linear transformations (including function spaces), determinants and vector spaces, together with the study of systems of equations and their solutions, are mostly used in calculations for the placement of columns and beams and for the equal distribution of forces on beams. All of these are covered in linear algebra or matrix algebra; numerical analysis techniques are also used for various calculations. The share of these topics is supposed to be 10%.
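Taking the five proposed shares at their nominal values (treating "at least 10%" and "not less than 25%" as their minimums), they account for the whole course; a quick arithmetic check:

```python
# Proposed shares of the Civilmatics outline (section 3), nominal values
shares = {
    "trigonometry": 20,
    "geometry": 35,
    "probability and statistics": 10,
    "calculus": 25,
    "linear algebra": 10,
}
total_share = sum(shares.values())  # the five shares fill the course
```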

4. CONCLUSION

Mathematics is a basic tool in civil engineering for solving problems at different levels, and it also provides a pathway to a mindset of lifelong learning through solving complex engineering problems during an engineering career. Market-oriented mathematics will help students take the engineering problem-solving process to the next level of simulation. A proper share of the different streams of mathematical knowledge also enables the civil engineer to justify a solution technique with quick and efficient solutions of reasonable accuracy and precision.


5. REFERENCES

[1] Mayur Jain, "Application of mathematics in civil engineering", International Journal of Innovation in Engineering and Technology (IJIET), Vol. 8, Issue 3, June 2017. (ISSN: 2319-1058) http://dx.doi.org/10.21172/ijiet.83.011
[2] Radu Precup, "Ordinary Differential Equations", pp. 97-112, 1st edition, De Gruyter, 2018.
[3] Pengzhen Lu, Shengyong Chen, and Yujun Zheng, "Artificial intelligence in civil engineering", Mathematical Problems in Engineering (Hindawi), Vol. 2012, Article ID 145974, 22 pages. http://dx.doi.org/10.1155/2012/145974
[4] Jurgita Antucheviciene, Zdeněk Kala, Mohamed Marzouk, and Egidijus Rytas Vaidogas, "Solving civil engineering problems by means of fuzzy mathematics: current state and future research", Mathematical Problems in Engineering, Vol. 2015, Article ID 362579, 16 pages.

Optimization of Lot Size and Backorder Quantity Considering
Learning Phenomena with Random Defect Rate
Khalid khan, Misbah Ullah*
Department of Industrial Engineering
University of Engineering and Technology Peshawar
Peshawar, Khyber Pakhtunkhwa, Pakistan.
Email: misbah@uetpeshawar.edu.pk

Abstract: The determination of batch quantity in a production system has recently been a central focus of many researchers. While a large body of work explores traditional optimal inventory models, little work has considered a random defective rate, backorders, rework, and learning phenomena together. This paper extends an inventory model to allow a random defective rate, planned backorders, and learning phenomena. Three distinct inventory models are developed for three different probability density functions of the defect rate: uniform, triangular, and beta. We minimize the total cost of the imperfect production system through optimal determination of the production quantity. From the optimality condition of the proposed model, a few useful properties are derived, and an efficient algorithm is constructed to search for the optimal solution. The convergence of the iterative algorithm is also shown.
Keywords: Lot sizing, Economic production quantity, Imperfection in process, Random
defective rate, Backorders, Learning effects.

1. INTRODUCTION
Nowadays, maintaining the correct level of inventories has become a vital economic issue for many enterprises: organizations must make good decisions regarding inventories in order to survive and grow in fiercely competitive markets. To limit cost, different researchers have formulated a number of optimal lot-size problems under a variety of conditions.

The first inventory model, the economic order quantity (EOQ), was developed by Harris (1913), and the economic production quantity (EPQ) model was introduced by Taft (1918). Since then, these models have been studied and extended by several researchers. Both models are very robust, meaning that small changes in the parameters do not have a large effect on the overall solution. However, the EPQ model was built on several assumptions, the most critical of which is a perfect production system. In real settings an imperfect system produces low-quality products for many reasons, such as defects in raw material, differences in operator skill, or variation in tool capability. In view of these issues, various models have been developed.
To obtain an optimal lot size, Jamal, Sarker et al. (2004) studied an EPQ model. For the special case of a single-stage manufacturing setup, the model considers two kinds of rework: immediate rework, and rework after N production cycles. Cárdenas-Barrón (2009) presented a similar model with an additional term for planned backorders, extending the work of Jamal, Sarker et al. (2004); in his model the rework process is completed within the same production cycle, and the two classical backordering costs (fixed and linear) are included. Cárdenas-Barrón's (2009) model also assumes that the proportion of defective parts is constant and does not change with time. In any imperfect system, however, the defective rate is not constant but random, following some probability distribution. Recently, Sarkar, Cárdenas-Barrón et al. (2014) extended Cárdenas-Barrón's (2009) work by letting the defect rate follow uniform, triangular, and beta distributions.
Very little previous work on lot-sizing decisions incorporates the learning phenomenon in the unit production time of an imperfect production system. In this paper we introduce the learning effect on unit production time for the imperfect production system.

Table 1. Contribution of authors
(Column groups: Imperfect process — Reject / Rework; Backorder — Planned / Partial; Lot size; Learning phenomena — Regular / Rework)

Cheng-Kang Chen et al. (2007)              ✓ ✓ ✓
Biswajit Sarkar et al. (2012)              ✓ ✓
Hui-Ming Wee (2012)                        ✓ ✓
Tsung-Hui Chen (2014)                      ✓ ✓ ✓
Hui-Ming Wee (2014)                        ✓ ✓
Biswajit Sarkar and Mitali Sarkar (2014)   ✓ ✓
Shib Sankar Sana (2014)                    ✓ ✓
This paper                                 ✓ ✓ ✓ ✓

2. MATHEMATICAL MODELING
This section develops an economic production quantity model for an imperfect single-stage production system with a random defect rate, rework, backorders, and labour cost. The model uses the following notation and assumptions.
2.1. Notation
Q  Batch size (units)
B  Size of backorder (units)
D  Demand rate (units per unit time)
P  Production rate (units per unit time)

K  Production setup cost
C  Manufacturing cost per unit
H  Inventory carrying cost per unit per unit of time
I  Inventory carrying cost rate
W  Backorder cost per unit per unit of time
F  Fixed backorder cost
J  Average backorder level (units)
T  Time between production runs
TC(Q, B)  Total cost per unit of time
L  Learning rate
In addition, we define the following symbols:
R  Proportion of defective products in each cycle, following a probability distribution (uniform, triangular, or beta)
E(R)  Expected proportion of defective products in each cycle
2.2. Assumptions
1. Wright's (1936) learning-curve formula for unit production time is used to describe the learning phenomenon.
2. The production and demand rates are constant and known over the planning horizon. The demand rate is lower than the production rate, so that shortages are eventually cleared.
3. The proportion of defective products is a random variable in each production cycle, and it follows one of three distribution density functions: uniform, triangular, or beta.
4. After every production run, all products are fully inspected at the manufacturer's side; the screening cost is assumed negligible.
5. Every defective product is reworked during the same production cycle to make it defect-free.
6. The inventory carrying cost depends on the average inventory.
7. Two types of backordering cost are considered: linear (applied to the average backorder level) and fixed (applied to the maximum backorder level).

8. Production and rework are done in the same production system and at the same rate.
9. Both capital availability and storage space are unlimited.
10. The model is designed for a single item.
2.3. Model formulation
Since the process is imperfect, and keeping in mind the stated assumptions and notation, three inventory models are developed by considering that the defect rate follows a uniform, triangular, or beta distribution.

2.3.1 Case A: The proportion of defective products follows a uniform distribution

The learning effect on the unit production time follows Wright's (1936) power-function formula and can be expressed as

T_ij = T_i1 · j^(−L)

where T_ij is the time to produce the j-th unit in cycle i. The production cost per cycle is composed of the setup cost K, the material cost CQ, and the labour cost βt(Q). Here t(Q) is the cumulative time to produce Q units in cycle i, obtained by summing the production times within the cycle:

t(Q) = Σ_{j=1}^{Q} T_ij = Σ_{j=1}^{Q} T_i1 · j^(−L) ≈ T_i1 ∫_0^Q y^(−L) dy = T_i1 · Q^(1−L) / (1−L)
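The approximation above replaces the learning-curve sum with an integral. The following sketch compares the exact cumulative time with the closed form T_i1·Q^(1−L)/(1−L); the values used for T_i1 and L are illustrative, not the paper's data:

```python
# Sketch of the learning-curve production time: the exact cumulative time
# t(Q) = sum_{j=1..Q} T_i1 * j**(-L) versus the integral approximation
# T_i1 * Q**(1-L) / (1-L) used in the model.

def t_exact(T_i1, L, Q):
    return sum(T_i1 * j ** (-L) for j in range(1, Q + 1))

def t_approx(T_i1, L, Q):
    return T_i1 * Q ** (1 - L) / (1 - L)

if __name__ == "__main__":
    T_i1, L, Q = 1.0, 0.01, 200   # first-unit time, learning rate, batch size
    print(f"exact={t_exact(T_i1, L, Q):.2f}  approx={t_approx(T_i1, L, Q):.2f}")
```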

Labour cost: the labour cost equals β times t(Q):

Labour cost = βt(Q)

The total cost per unit of time is the sum of the setup, inventory carrying, backorder, production, and labour costs:

TC(Q, B) = KD/Q + H·Ī + βt(Q)D/Q + FBD/Q + W·J + CD(1 + E[R])    (1)

Substituting the values of Ī, J, and E[R] from Biswajit Sarkar et al. (2014) into equation (1), we obtain

TC(Q, B) = KD/Q + βD·T_i1·Q^(−L)/(1−L) + FBD/Q + W·B^2·A/(2QE) + HQM/2 + H·B^2·A/(2QE) − HB + CD(1 + E[R])    (2)
To minimize the cost, take the first derivative of TC(Q, B) with respect to B and set it equal to zero:

∂TC(Q, B)/∂B = FD/Q + WBA/(QE) + HBA/(QE) − H = 0    (3)

Rearranging Eq. (3), B can be expressed as a function of Q:

B* = (QEH − FDE) / ((W + H)A)    (4)

For fixed Q, the convexity of the total cost function can be examined by checking the second derivative:

∂^2 TC(Q, B)/∂B^2 = (W + H)A/(QE) > 0    (5)

From Eq. (5) we note that, for fixed Q, TC(Q, B) is convex with respect to B. We can therefore substitute Eq. (4) into the objective function, so that the resulting function contains a single decision variable Q:

TCPUT(Q) = KD/Q + βD·T_i1·Q^(−L)/(1−L) + (QEH − 2H^2·QE − 2FDEH)/(2(W+H)A) − F^2·D^2·E/(2(W+H)AQ) + HQM/2 + CD(2 − A)    (6)

Taking the first derivative of Eq. (6) with respect to Q and setting it equal to zero gives

−KD/Q^2 − βLD·T_i1·Q^(−L−1)/(1−L) + F^2·D^2·E/(2(W+H)AQ^2) + EH(1−2H)/(2(W+H)A) + HM/2 = 0    (7)

It is hard to find a closed-form solution for the optimal Q* from Eq. (7). Instead, a few useful properties of the problem are derived so that the optimal solution Q* can be obtained efficiently. These properties are summarized in the following propositions.
Proposition 1: If TCPUT(Q) is given by Eq. (6), then there exists a unique

Q* > 0 which minimizes TCPUT(Q).

Proof. Since Q^L > 0, multiplying both sides of Eq. (7) by Q^L gives

f(Q) = −KD·Q^(L−2) − βLD·T_i1·Q^(−1)/(1−L) + F^2·D^2·E·Q^(L−2)/(2A(W+H)) + EH(1−2H)·Q^L/(2A(W+H)) + HM·Q^L/2    (8)

We now prove the uniqueness of Q*. Taking the first derivative of Eq. (8) gives

f′(Q) = (2−L)KD·Q^(L−3) + βLD·T_i1·Q^(−2)/(1−L) + (L−2)F^2·D^2·E·Q^(L−3)/(2A(W+H)) + EHL(1−2H)·Q^(L−1)/(2A(W+H)) + LHM·Q^(L−1)/2    (9)

From Eq. (9) we note that f(Q) is a monotonically increasing function of Q; hence Q* is unique. We next show that Q* minimizes TCPUT(Q). Multiplying both sides of Eq. (7) by LQ^(−1), the following result is obtained:

−KLD/Q^3 − βL^2·D·T_i1·Q^(−L−2)/(1−L) + F^2·D^2·EL/(2(W+H)AQ^3) + EH(1−2H)L·Q^(−1)/(2(W+H)A) + HML·Q^(−1)/2    (10)

The second derivative of the total cost function is

∂^2 TCPUT(Q)/∂Q^2 = 2KD/Q^3 + (L+1)βLD·T_i1·Q^(−L−2)/(1−L) − F^2·D^2·E/((W+H)AQ^3)    (11)

Since 0 ≤ L < 1 and Q > 0, we conclude that

d^2 TCPUT(Q)/dQ^2 at Q = Q* is > 0.

Consequently, it has been demonstrated that Q* minimizes TCPUT(Q).
Proposition 2: If the learning effect on the production time is ignored (i.e. L = 0), then Q* is given by Eq. (12).
Proof: Setting L = 0 in Eq. (7) and solving for Q, we have

Q*(L=0) = √[ (KD − 3F^2·D^2·E/(2A(W+H))) / ( (EH − 2H^2·E)/(2A(W+H)) + HM/2 ) ]    (12)

The relationship between Q*(L=0) and Q* is still unknown; Proposition 3 states it explicitly.
Proposition 3: The solution Q*(L=0) is smaller than the solution Q* (i.e. Q*(L=0) < Q*) for all Q ≥ 1.
Proof. The following inequality holds:

−βLD·T_i1·Q^(−L−1)/(1−L) + EH(1−2H)/(2(W+H)A) + HM/2 < F^2·D^2·E·Q^(L−2)/(2A(W+H)) + EH(1−2H)·Q^L/(2A(W+H))    (13)

Rearranging Eq. (7), we obtain

Q^2 = (KD − F^2·D^2·E/(2(W+H)A)) / ( EH(1−2H)/(2(W+H)A) + HM/2 − βLD·T_i1·Q^(−L−1)/(1−L) )    (14)

The denominator of Eq. (14) corresponds to the left-hand side of Eq. (13). Consequently,

Q = √[ (KD − F^2·D^2·E/(2(W+H)A)) / ( EH(1−2H)/(2(W+H)A) + HM/2 − βLD·T_i1·Q^(−L−1)/(1−L) ) ]
  > √[ (KD − 3F^2·D^2·E/(2A(W+H))) / ( (EH − 2H^2·E)/(2A(W+H)) + HM/2 ) ] = Q*(L=0)    (15)

Therefore, Q*(L=0) < Q*.
Using the properties shown in Propositions 1–3 and Eq. (7), an iterative search technique based on the Newton–Raphson method can be constructed for the optimal production quantity Q*, outlined as follows.
Algorithm 1:
Step 1: Given n cycles, let i = 1.
Step 2: Let j = 0 and Q_i^(0) = Q_i*(L=0)

        = √[ (KD − 3F^2·D^2·E/(2A(W+H))) / ( (EH − 2H^2·E)/(2A(W+H)) + HM/2 ) ]

Step 3: Calculate f(Q_i^(j)), f′(Q_i^(j)) and Q_i^(j+1):

f(Q_i^(j)) = −KD·(Q_i^(j))^(L−2) − βLD·T_i1·(Q_i^(j))^(−1)/(1−L) + F^2·D^2·E·(Q_i^(j))^(L−2)/(2A(W+H)) + EH(1−2H)·(Q_i^(j))^L/(2A(W+H)) + HM·(Q_i^(j))^L/2

f′(Q_i^(j)) = (2−L)KD·(Q_i^(j))^(L−3) + βLD·T_i1·(Q_i^(j))^(−2)/(1−L) + (L−2)F^2·D^2·E·(Q_i^(j))^(L−3)/(2A(W+H)) + EHL(1−2H)·(Q_i^(j))^(L−1)/(2A(W+H)) + LHM·(Q_i^(j))^(L−1)/2

Q_i^(j+1) = Q_i^(j) − f(Q_i^(j)) / f′(Q_i^(j))

Step 4: Let j = j + 1 and repeat Step 3 until Q_i^(j) converges.

Step 5: Output Q_i* = Q_i^(j), B*, and TCPUT(Q, B) from Eqs. (4) and (6).
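To make Algorithm 1 concrete, the following sketch implements the Newton–Raphson search using f and f′ from Eqs. (8)–(9). The composite constants A, E, and M are substituted from Sarkar et al. (2014) and are not defined in this excerpt, so the numeric values below (including β and T_i1) are illustrative placeholders rather than the paper's calibrated data:

```python
# Sketch of Algorithm 1 (uniform-defect case) via Newton-Raphson, using
# f and f' from Eqs. (8)-(9). A, E, M, beta, T1 values are illustrative
# assumptions, not the paper's data.

def make_f_fprime(K, D, beta, T1, L, F, W, H, A, E, M):
    c1 = F**2 * D**2 * E / (2 * A * (W + H))
    c2 = E * H * (1 - 2 * H) / (2 * A * (W + H))

    def f(Q):   # Eq. (8)
        return (-K * D * Q**(L - 2)
                - beta * L * D * T1 * Q**(-1) / (1 - L)
                + c1 * Q**(L - 2) + c2 * Q**L + H * M * Q**L / 2)

    def fp(Q):  # Eq. (9), derivative of f
        return ((2 - L) * K * D * Q**(L - 3)
                + beta * L * D * T1 * Q**(-2) / (1 - L)
                + (L - 2) * c1 * Q**(L - 3)
                + L * c2 * Q**(L - 1) + L * H * M * Q**(L - 1) / 2)

    return f, fp

def optimal_q(K, D, beta, T1, L, F, W, H, A, E, M, tol=1e-9):
    # Step 2: start from the no-learning solution Q*(L=0), Eq. (12)
    Q = ((K * D - 3 * F**2 * D**2 * E / (2 * A * (W + H)))
         / ((E * H - 2 * H**2 * E) / (2 * A * (W + H)) + H * M / 2)) ** 0.5
    f, fp = make_f_fprime(K, D, beta, T1, L, F, W, H, A, E, M)
    for _ in range(100):            # Steps 3-4: iterate until convergence
        step = f(Q) / fp(Q)
        Q -= step
        if abs(step) < tol:
            break
    return Q

if __name__ == "__main__":
    Q_star = optimal_q(K=200, D=12, beta=0.5, T1=1.0, L=0.01,
                       F=1.0, W=0.5, H=0.2, A=1.0, E=0.5, M=0.5)
    print(f"optimal lot size Q* = {Q_star:.2f}")
```

Consistent with Proposition 3, the iterate moves upward from the no-learning starting point Q*(L=0) toward the learning-adjusted optimum.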

2.3.2 Case B: The proportion of defective products follows a triangular distribution

The total cost of the system, considering all relevant costs, is

TC(Q, B) = KD/Q + H·Ī_Tri + FBD/Q + W·J_Tri + CD(1 + E[R])    (16)

Substituting the values of Ī_Tri, J_Tri, and E[R] from Biswajit Sarkar et al. (2014) into Eq. (16), we obtain

TC(Q, B) = KD/Q + βD·T_i1·Q^(−L)/(1−L) + FBD/Q + W·B^2·A_Tri/(2Q·E_Tri) + HQ·M_Tri/2 + H·B^2·A_Tri/(2Q·E_Tri) − HB + CD(2 − A_Tri)    (17)

Algorithm 2:
Step 1: Given n cycles, let i = 1.
Step 2: Let j = 0 and Q_i^(0) = Q_i*(L=0)

        = √[ (KD − 3F^2·D^2·E_Tri/(2A_Tri(W+H))) / ( (E_Tri·H − 2H^2·E_Tri)/(2A_Tri(W+H)) + H·M_Tri/2 ) ]

Step 3: Calculate f(Q_i^(j)), f′(Q_i^(j)) and Q_i^(j+1):

f(Q_i^(j)) = −KD·(Q_i^(j))^(L−2) − βLD·T_i1·(Q_i^(j))^(−1)/(1−L) + F^2·D^2·E_Tri·(Q_i^(j))^(L−2)/(2A_Tri(W+H)) + E_Tri·H(1−2H)·(Q_i^(j))^L/(2A_Tri(W+H)) + H·M_Tri·(Q_i^(j))^L/2

f′(Q_i^(j)) = (2−L)KD·(Q_i^(j))^(L−3) + βLD·T_i1·(Q_i^(j))^(−2)/(1−L) + (L−2)F^2·D^2·E_Tri·(Q_i^(j))^(L−3)/(2A_Tri(W+H)) + E_Tri·HL(1−2H)·(Q_i^(j))^(L−1)/(2A_Tri(W+H)) + LH·M_Tri·(Q_i^(j))^(L−1)/2

Q_i^(j+1) = Q_i^(j) − f(Q_i^(j)) / f′(Q_i^(j))

Step 4: Let j = j + 1 and repeat Step 3 until Q_i^(j) converges.
Step 5: Output Q_i* = Q_i^(j), B*, and TCPUT(Q, B) from Eqs. (4) and (17).

2.3.3 Case C: The proportion of defective products follows a beta distribution

The total cost of the system, considering the setup, inventory, backorder, and production costs, is

TC(Q, B) = KD/Q + H·Ī_Beta + FBD/Q + W·J_Beta + CD(1 + E[R])    (18)

Substituting the values of Ī_Beta, J_Beta, and E[R] from Biswajit Sarkar et al. (2014) into Eq. (18), we obtain

TC(Q, B) = KD/Q + βD·T_i1·Q^(−L)/(1−L) + FBD/Q + W·B^2·A_Beta/(2Q·E_Beta) + HQ·M_Beta/2 + H·B^2·A_Beta/(2Q·E_Beta) − HB + CD(2 − A_Beta)    (19)

Algorithm 3:
Step 1: Given n cycles, let i = 1.
Step 2: Let j = 0 and Q_i^(0) = Q_i*(L=0)

        = √[ (KD − 3F^2·D^2·E_Beta/(2A_Beta(W+H))) / ( (E_Beta·H − 2H^2·E_Beta)/(2A_Beta(W+H)) + H·M_Beta/2 ) ]

Step 3: Calculate f(Q_i^(j)), f′(Q_i^(j)) and Q_i^(j+1):

f(Q_i^(j)) = −KD·(Q_i^(j))^(L−2) − βLD·T_i1·(Q_i^(j))^(−1)/(1−L) + F^2·D^2·E_Beta·(Q_i^(j))^(L−2)/(2A_Beta(W+H)) + E_Beta·H(1−2H)·(Q_i^(j))^L/(2A_Beta(W+H)) + H·M_Beta·(Q_i^(j))^L/2

f′(Q_i^(j)) = (2−L)KD·(Q_i^(j))^(L−3) + βLD·T_i1·(Q_i^(j))^(−2)/(1−L) + (L−2)F^2·D^2·E_Beta·(Q_i^(j))^(L−3)/(2A_Beta(W+H)) + E_Beta·HL(1−2H)·(Q_i^(j))^(L−1)/(2A_Beta(W+H)) + LH·M_Beta·(Q_i^(j))^(L−1)/2

Q_i^(j+1) = Q_i^(j) − f(Q_i^(j)) / f′(Q_i^(j))

Step 4: Let j = j + 1 and repeat Step 3 until Q_i^(j) converges.

Step 5: Output Q_i* = Q_i^(j), B*, and TCPUT(Q, B) from Eqs. (4) and (19).

3. Numerical Examples
The numerical examples are based on the data of Sarkar et al. (2014) and Cheng-Kang Chen et al. (2007).
Example 3.1 (uniform). The parameter values and their units are: L = 0.01, D = 12 units/batch, a = 0.03, b = 0.07, P = 22 units/batch, W = $0.5/unit/batch, H = $0.2/unit/batch, F = $1/unit short, K = $200/lot, C = $7/unit. The optimal solution is
TC* = $175.30/batch, B* = 15.23 units, Q* = 198.6 units.
Example 3.2 (triangular). The parameter values and their units are: D = 12 units, a = 0.03, b = 0.04, c = 0.07, P = 22 units, L = 0.01, W = $0.7/unit, H = $0.2/unit, Q = 178.2, F = $1/unit short, K = $200/lot, C = $7/unit. The optimal solution is
TC* = $174.35/batch, B* = 9.76 units, Q* = 148.85 units.
Example 3.3 (beta). The parameter values and their units are: D = 12 units/batch, α = 0.03, β = 0.07, P = 22 units/batch, L = 0.01, W = $0.5/unit/batch, H = $0.2/unit/batch, F = $1/unit short, K = $200/lot, C = $7/unit. The optimal solution is
TC* = $174.71/batch, B* = 6.89 units, Q* = 282.84 units.
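The three cases differ through the distribution of the defect proportion R. As a quick sketch, the expected values E[R] implied by standard uniform, triangular, and beta distributions can be computed for the example parameters; the mapping of a, b, c, α, β to each example is our reading of the data above, and the paper's full E[R] expressions come from Sarkar et al. (2014):

```python
# Expected defective proportion E[R] under the three standard distributions.
# Parameter-to-example mapping is an assumption based on the listed data.

def e_uniform(a, b):          # R ~ Uniform(a, b)
    return (a + b) / 2

def e_triangular(a, c, b):    # R ~ Triangular(min=a, mode=c, max=b)
    return (a + b + c) / 3

def e_beta(alpha, beta):      # R ~ Beta(alpha, beta)
    return alpha / (alpha + beta)

if __name__ == "__main__":
    print(e_uniform(0.03, 0.07))            # Example 3.1
    print(e_triangular(0.03, 0.04, 0.07))   # Example 3.2
    print(e_beta(0.03, 0.07))               # Example 3.3
```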

4. CONCLUSION AND FUTURE RECOMMENDATIONS

Sarkar et al. (2014) presented three different probability distribution functions for the proportion of defective items, while Cheng-Kang Chen et al. (2007) investigated the learning effect in an imperfect production system. This paper extends the work of Sarkar et al. with the learning effect studied by Cheng-Kang Chen et al. We minimized the total cost of the imperfect production system through optimal determination of the production quantity. From the optimality condition of the proposed model, a few useful properties were derived, and an efficient algorithm was constructed to search for the optimal solution. The convergence of the iterative algorithm was also shown, and numerical examples were provided to illustrate the features of the model.
The model can be extended in many ways, for example with variable demand or inflation. The inspection cost is considered negligible in this model; we will attempt to extend the model by considering the inspection cost of defective items in the near future.
References
Harris, F. W. (1913). "How many parts to make at once." Factory, The Magazine of Management 10(2): 135-136.
Taft, E. (1918). "The most economical production lot." Iron Age 101(18): 1410-1412.
Jamal, A., B. R. Sarker and S. Mondal (2004). "Optimal manufacturing batch size with rework process at a single-stage production system." Computers & Industrial Engineering 47(1): 77-89.
Cárdenas-Barrón, L. E. (2009). "Economic production quantity with rework process at a single-stage manufacturing system with planned backorders." Computers & Industrial Engineering 57(3): 1105-1113.
Sarkar, B., L. E. Cárdenas-Barrón, M. Sarkar and M. L. Singgih (2014). "An economic production quantity model with random defective rate, rework process and backorders for a single stage production system." Journal of Manufacturing Systems 33(3): 423-435.
Widyadana, G. A. and H. M. Wee (2012). "An economic production quantity model for deteriorating items with multiple production setups and rework." International Journal of Production Economics 138(1): 62-67.
Chang, H.-J., R.-H. Su, C.-T. Yang and M.-W. Weng (2012). "An economic manufacturing quantity model for a two-stage assembly system with imperfect processes and variable production rate." Computers & Industrial Engineering 63(1): 285-293.
Sarkar, B. (2012). "An EOQ model with delay in payments and stock dependent demand in the presence of imperfect production." Applied Mathematics and Computation 218(17): 8295-8308.
Sarkar, B. (2012). "An inventory model with reliability in an imperfect production process." Applied Mathematics and Computation.
Sarkar, B. (2013). "A production-inventory model with probabilistic deterioration in two-echelon supply chain management." Applied Mathematical Modelling 37(5): 3138-3151.
Keachie, E. C. and R. J. Fontana (1966). "Effects of learning on optimal lot size." Management Science 32(2): 102-108.
Chung, K. J. and K. L. Hou (2003). "An optimal production run time with imperfect production processes and allowable shortages." Computers and Operations Research 30: 483-490.
Wright, T. (1936). "Factors affecting the cost of airplanes." Journal of the Aeronautical Sciences 3: 122-128.

Analytical Modelling of Ultimate Flexural and Shear Strengths of Lightly Reinforced


Ferrocement Beams
Mohammad Adil
Assistant Professor, Department of Civil Engineering,
University of Engineering and Technology, Peshawar, Pakistan
adil@uetpeshawar.edu.pk

Adeed Khan
Researcher, Department of Civil Engineering,
Iqra National University Peshawar, Pakistan
adeedkhan@hotmail.com

Irshad Hussain
Researcher, Department of Civil Engineering,
University of Engineering and Technology, Peshawar, Pakistan
irshadhusyn@gmail.com

ABSTRACT
In the reinforced cement concrete (RCC) construction industry, ferrocement has proven durability and reparability and may therefore be used to manufacture load-bearing elements for low-cost, post-event-repairable construction. Although thin bendable panels and reinforced concrete retrofitting jackets are commonly built with ferrocement, its application in major load-bearing elements such as beams or columns is not evident in the literature. Furthermore, replacing the typical concentrated, heavy deformed rebar of RCC with light, distributed steel mesh may induce ductility at the cost of strength; this behavior needs to be investigated.
In this paper, equations for the theoretical estimation of the ultimate flexural and shear strengths of lightly reinforced ferrocement (LRF) beams are derived and presented. Equations for the combined moment capacity of reinforced concrete beams retrofitted with densely reinforced ferrocement have been developed in the past, but they cannot be used to estimate the bending and shear strengths of pure ferrocement elements with light reinforcement. The equations developed here can be used to study the effects of different parameters, including the diameter of the mesh wire and the vertical, horizontal, and lateral spacings of the wire mesh, on bending and shear strength.
Keywords
Ferrocement, light reinforcement, bending, shear, strength, estimation, beams
1. INTRODUCTION
Generally, ferrocement elements are thin concrete (cement-sand) mortar elements reinforced with closely spaced steel wire meshes distributed throughout their thickness. Compared to typical reinforced concrete, which is relatively weak and brittle in nature, the uniform distribution of steel throughout the thickness transforms ferrocement into a high-performance building material with larger ductility, tensile strength-to-weight ratio, crack resistance, and durability. Because ferrocement is chemically and physically similar to reinforced concrete, it is commonly used as a strengthening or rehabilitation material for defective reinforced concrete structural elements.

ACI Committee 549 (1997), in a state-of-the-art report on ferrocement, defined it as a variant of thin-walled reinforced concrete commonly constructed of hydraulic cement mortar reinforced with closely spaced layers of continuous and relatively small-size wire mesh. The mesh may be made of steel or other appropriate materials. The quality of the mortar matrix and its composition should be compatible with the mesh and armature systems it is meant to encapsulate; the matrix may contain discontinuous fibers.
Ductility of beams in reinforced concrete (RC) structures is a major requirement, especially when the structure is subjected to earthquake or other types of lateral loads. Among several techniques for improving the ductility of RC beams, ferrocement can be considered one of the more economical.
Physically, ferrocement is a lighter material with reinforcement scattered through its thickness in both longitudinal and transverse directions, and its matrix is made of fine mortar or paste rather than one holding larger aggregates. Mechanically, ferrocement has consistent isotropic properties in two directions; it has better impact and punching shear resistance; its tensile strength can be of the same order as its compressive strength; and its ductility increases with an increase in the volume fraction and specific surface of reinforcement. Although ferrocement provides poorer fire resistance than ordinary reinforced concrete, it has quite a high modulus of rupture and tensile strength (Naaman, 2000). Alwash (1982) provided a detailed review of the bending behavior of ferrocement plates, finding that reinforcement plays a major role in deflection and bending strength and that existing bending theories for reinforced concrete underestimate the bending strength of ferrocement plates. It has also been observed that the strength of woven wire mesh ferrocement plates is much greater than that of welded wire mesh ferrocement plates.
Tests have been performed on RC beams with a composite layer of ferrocement on the tension side only (Bong et al., 2010), while this paper highlights the mechanical behavior of beams completely built of ferrocement. The aim of this study is to provide equations for the theoretical estimation of the ultimate bending and shear strengths of ferrocement. Equations for moment capacity have been developed in the past for reinforced concrete beams retrofitted with ferrocement, but not for estimating the bending and shear strengths of pure ferrocement elements.
1.1 Ferrocement Beams
Although the use of ferrocement for beams as such has not been reported, ferrocement flexural elements in the form of panels can be found. These panels resist compression as well as tensile forces and add rigidity and ductility to the structure.
Azad and Assi (2012) conducted tests on ferrocement discs. The disc samples were tested to establish the tensile stress-strain relationship; they observed that the tensile strength depends on both the matrix strength and the wire-mesh-to-matrix ratio. Nassif et al. (1998) researched shear transfer behavior in ferrocement-concrete composite beams and the steel mesh area required in the ferrocement layer to ensure a suitable overall flexural response.
Balaguru (1977) experimentally investigated the flexural behavior of ferrocement panels and proposed a theoretical model to predict their moment-curvature and load-deflection relationships. Rosenthal and Bljuger (1991) observed the bending behavior of a concrete-ferrocement composite: they built reinforced concrete beams wrapped in L-shaped ferrocement reinforced with either rectangular mesh or expanded metal. A quite smooth bond between the reinforced concrete and the ferrocement was found in this research, but the bond was lost between the two layers at failure. They concluded that shear connectors

should be provided to make a full bond between the two layers at the failure point.
Nassif and Najm (2003) carried out an experimental and theoretical assessment of ferrocement-concrete composite beams, using the method of shear transfer to observe the composite layers. Al-Kubaisy and Jumaat (2000) investigated the flexural behavior of reinforced concrete slabs with a ferrocement tension-zone cover and found that it reduces the deflection.

2. ESTIMATION OF ULTIMATE FLEXURAL STRENGTH OF LRF BEAM

To estimate the ultimate flexural strength of an LRF beam, it is assumed that all the steel in the ferrocement beam has yielded, while the concrete in the compression block has reached its crushing strength, as shown in figure 1. It is also assumed that the steel in the compression zone either does not exist or makes no contribution.
From the stress distribution shown in figure 1(c), the ultimate flexural strength of the LRF beam is derived in Appendix A; solving the summation results in the following set of equations:

M = ns(m+1)(n+1)·A_b·f_y·{ j/(ns) − j/(2(d−c)) + (2n+1)s/(6(d−c)) − 1/2 }    Eq-1

where

j = d − 0.85c/2    Eq-2

c = (m+1)·A_b·f_y·s/(0.85^2·f'_c·b) · { n − [s/(d−c)]·[n(n+1)/2] }    Eq-3
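As a numerical sketch of Eqs. 1–3: since the neutral-axis depth c appears on both sides of Eq. 3, it can be found by fixed-point iteration, after which j (Eq. 2) and M (Eq. 1) follow directly. All section and material values below (units: kip, in, ksi) are illustrative assumptions, not test data from the paper:

```python
# Fixed-point solution of Eq. (3) for the neutral-axis depth c, then the
# lever arm j (Eq. 2) and ultimate moment M (Eq. 1). Inputs are illustrative.

def moment_capacity(n, m, s, Ab, fy, fc, b, d, iters=50):
    c = d / 4                                   # initial guess for neutral axis
    k = (m + 1) * Ab * fy * s / (0.85**2 * fc * b)
    for _ in range(iters):                      # Eq. (3), fixed-point iteration
        c = k * (n - s / (d - c) * n * (n + 1) / 2)
    j = d - 0.85 * c / 2                        # Eq. (2), internal lever arm
    M = (n * s * (m + 1) * (n + 1) * Ab * fy    # Eq. (1), ultimate moment
         * (j / (n * s) - j / (2 * (d - c))
            + (2 * n + 1) * s / (6 * (d - c)) - 0.5))
    return c, M

if __name__ == "__main__":
    # n, m: vertical/horizontal mesh spaces; s: spacing; Ab: wire area;
    # fy: wire yield; fc: mortar strength; b, d: section width, effective depth
    c, M = moment_capacity(n=4, m=2, s=2.0, Ab=0.02, fy=65.0,
                           fc=3.0, b=6.0, d=16.0)
    print(f"c = {c:.2f} in, M = {M:.1f} kip-in")
```

The iteration converges quickly here because the right-hand side of Eq. 3 varies slowly with c for lightly reinforced sections.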

3. ESTIMATION OF SHEAR CAPACITY OF LRF BEAM

Several building codes, such as that of the American Concrete Institute (ACI 318, Section 11.4.1), permit higher-strength wire and welded wire reinforcement to be used as shear reinforcement or stirrups in beams. Hence, for normal-weight concrete members subjected to flexure and shear only, a combination of ACI 318-11 equations 11-3 and 11-15 can be written as equation 4 to represent the ultimate shear strength (without the strength reduction factor) of a reinforced concrete beam:

V_n = 2√(f'_c)·b_w·d + A_v·f_y·d/s    Eq-4

The second term in equation 4 represents the strength contribution of the vertical steel stirrups. For the folded wire mesh, the area of a single stirrup can be calculated as A_v = A_b·(m+1); hence, equation 4 for a ferrocement beam becomes:

V_n,ferro = 2√(f'_c)·b_w·d + A_b·(m+1)·f_y·d/s    Eq-5

Equation 5 can now be used to estimate the shear capacity of LRF beams.
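A small sketch of Eq. 5 follows, with the ACI-style 2√(f'_c) concrete term evaluated in psi units; all input values are illustrative assumptions, not data from the paper:

```python
# Sketch of Eq. (5): ultimate shear strength of an LRF beam as the concrete
# contribution (2*sqrt(fc'), psi units) plus the folded-mesh "stirrup"
# contribution A_b*(m+1)*fy*d/s. Inputs are illustrative.
from math import sqrt

def shear_capacity_ferro(fc_psi, bw, d, Ab, m, fy_psi, s):
    Vc = 2 * sqrt(fc_psi) * bw * d          # concrete term, lb
    Vs = Ab * (m + 1) * fy_psi * d / s      # mesh acting as stirrups, lb
    return Vc + Vs

if __name__ == "__main__":
    Vn = shear_capacity_ferro(fc_psi=3000, bw=6.0, d=16.0,
                              Ab=0.02, m=2, fy_psi=65000, s=2.0)
    print(f"Vn = {Vn / 1000:.1f} kip")
```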
4. ASSUMPTIONS AND LIMITATIONS FOR ESTIMATION OF ULTIMATE
BENDING STRENGTH OF LIGHTLY REINFORCED FERROCEMENT BEAM
The following assumptions and limitations are made in deriving the above flexure and shear strength equations:
1. All welded wire meshes conform to the ASTM A1064 (2017) standard.
2. The bond between wire mesh and concrete is adequate, so that the strain in the concrete equals that in the wire at the same level.
3. The compressive strain in the concrete is linearly proportional to the distance from the neutral axis.
4. Young's modulus of the steel in the wire mesh is taken as Es = 29000 ksi (200 GPa).
5. Plane sections remain plane after bending.
6. Concrete has no tensile strength after cracking.
7. At the ultimate state, both steel and concrete behave inelastically.
8. At the ultimate state, the maximum strain at the extreme compression fiber equals 0.003, and the compressive stress distribution is approximated as rectangular, as shown in figure 1c.
9. The steel in the compression zone either does not exist or makes no contribution to resisting compression.
10. The shear behavior of an LRF beam is the same as that of a reinforced concrete beam.
5. ESTIMATED ULTIMATE BENDING STRENGTH BEHAVIOR
The effect of varying different parameters in the developed ultimate bending strength
equation (Eq. 1) has been studied. The plots in figure 2 show the relation between the number of
vertical spaces "n" of the wire mesh (shown in figure 1a) and the moment capacity "M"
estimated using equation 1. In calculating the different moments, the number of horizontal
spaces in the wire mesh "m" and the spacing "s" were kept constant. These n-M relationships for
an 18 in. deep beam show that increasing "n" up to around 4 vertical spaces gradually increases
the moment capacity, as the steel area increases while the moment arm decreases; the capacity
then remains constant for further increases, as the mesh crosses the neutral axis and enters the
compression zone. In figure 2, the four smallest bar areas are the ones that are easily
available and typically manufactured, while the rest are assumed values used to study the effect
of further variation of the wire area. Increasing the number of spaces "n", as long as the mesh
remains below the neutral axis (i.e., in the tension zone), significantly increases the bending strength.
In comparison to figure 2, figure 3 shows the effect of varying the number of horizontal spaces
"m" on the moment capacity "M", estimated using equation 1. It is clear that the moment
capacity increases linearly with "m".
Since increasing "m" increases the area of steel while keeping the moment arm constant, whereas
increasing "n" increases the area while reducing the moment arm, changing "n" is less effective
than changing "m". Furthermore, it is more practical to increase "m" than "n".
6. ESTIMATED SHEAR BEHAVIOR
The shear strengths of LRF beams with different areas of steel wire mesh have been estimated
and are provided in Table 1. The 3D contour plot in figure 4 shows the effect of varying the wire
diameter as well as the spacing "s" and the number of spaces "m" (also representing the number of
legs in a stirrup, equal to m+1) on the shear strength of the ferrocement beam. The shear


capacity of a plain cement concrete (PCC) beam of similar size, V_c = 2\sqrt{f_c'}\, b_w d, was
calculated to be 332.55 kips. It can be seen that wire meshes with close spacing (0.5 inch) may
increase the shear strength of the ferrocement beam up to 2.5 times that of PCC. It has also been
found that increasing the number of spaces "m" increases the shear strength exponentially,
while increasing the wire diameter increases the shear strength more gradually.
Figure 5 shows the effect of increasing the number of stirrup legs on the shear strength of the
ferrocement beam. From the convergence of the plots at m+1 = 7, it can be concluded that for
fewer wire stirrup legs (m+1 < 7) the steel stirrups have no effect on the shear strength of the
beam, which remains almost equal to the shear capacity of the concrete "Vc".
Also, a roughly 30% increase in the number of vertical legs increases the shear capacity by
about 30%, suggesting a linear relationship.
Figure 6 shows the effect of increasing the wire area "Ab" of the wire mesh on the shear strength
of the ferrocement beam. It is evident that almost doubling the number of vertical legs
(from 13 to 25) increases the shear strength by about 87.5%.
7. CONCLUSION AND RECOMMENDATIONS
Based on this study, the following conclusions can be made:
1. The LRF beam is a new concept, and understanding its different behaviors requires more
experimental and numerical studies.
2. Increasing the number of vertical spaces "n" in the tension zone of the beam significantly
increases the bending strength, while "n" has no significance in the compression zone.
3. The ultimate moment capacity increases linearly with "m".
4. Increasing "n" is less effective than increasing "m".
5. It is more practical to increase "m" than "n".
6. For m+1 < 7, the steel stirrups have no effect on the shear strength of the beam.
7. A roughly 30% increase in the number of vertical legs increases the shear capacity
by about 30%.
8. Almost doubling the number of vertical legs (from 13 to 25) increases the shear
strength by about 87.5%.
In the future, the ultimate flexural and shear strengths will be investigated experimentally, and the
required adjustments to the equations proposed in this paper will be established.
Furthermore, numerical investigation may expose the detailed bending and shear behavior
(both elastic and inelastic) of LRF beams, which will be useful in proposing further
limitations and design modifications. Steel can also be replaced with other meshed materials
and their behavior studied.

Table 1. Shear strengths of wire mesh reinforced ferrocement beams for a variety of wires and
mesh spacings.

Spacing   # of stirrup   Vc       Vn (kip) for different wire areas Ab (in2)
"s" (in)  legs (m+1)     (kip)    0.0017   0.009    0.0128   0.0188   0.035    0.1      0.3       0.45
0.5       25             332.55   398.85   683.55   831.75   1065.75  1697.55  4232.55  12032.55  17882.55
1         13             332.55   349.79   423.81   462.35   523.19   687.45   1346.55  3374.55   4895.55
2         7              332.55   337.19   357.12   367.50   383.88   428.10   605.55   1151.55   1561.05
All calculations are for d = 12 in, bw = 8 in, fc' = 3 ksi, fy = 65 ksi.


Figure 1. a) Lightly Reinforced Ferrocement (LRF) beam section, b) strain and c) bending stress
distribution of the ferrocement beam at the ultimate state.

[Figure: n-M chart (m = 4, s = 2 inches); M in kip-in versus n = 0 to 8, with curves for
Ab = 0.009, 0.0128, 0.017, 0.0188, 0.035, 0.065 and 0.1 in2]

Figure 2. Effect of variation of vertical spaces "n" on moment capacity "M" for an 18 in. deep
LRF beam built with different gauge wire meshes.


[Figure: m-M chart (n = 4, s = 2 inches); M in kip-in versus m = 0 to 8, with curves for
Ab = 0.009, 0.0128, 0.017, 0.0188, 0.035, 0.065 and 0.1 in2]

Figure 3. Effect of variation of horizontal spaces "m" on moment capacity "M" for an 18 in. deep
LRF beam with different gauge wire meshes.

[Figure: 3D surface of shear strength (kips) versus wire area (0.0017 to 0.45 in2) and spacing]

Figure 4. 3D plot showing the effect of variation of diameter, number of spaces and spacing
on the shear strength of the beam.


[Figure: Vn (kips) versus m+1 (number of wire stirrup legs), with curves for wire mesh bar
areas Ab = 0.0017, 0.009 and 0.0128 in2]

Figure 5. Shear capacity variation for different spacings of wire meshes.

[Figure: Vn (kips) versus wire area Ab (in2), with curves for 7, 13 and 25 wire stirrup legs]

Figure 6. Shear capacity variation for different diameters of wire meshes.


8. REFERENCES
1. ACI Committee 549-1R, 1997, ‘State of the Art Report on Ferrocement’, American
Concrete Institute, Farmington Hills, Michigan.
2. Abdul Salam A. Alwash 1982, ‘Flexural behavior of ferrocement’, University of
Sheffield.
3. Al-Kubaisy, M. A. and Jumaat, M. Z., 2000, ‘Flexural behavior of reinforced concrete
slabs with ferrocement tension zone cover, Construction and building materials’,
14:245-252.
4. ASTM A1064 / A1064M-17, 2017, ‘Standard Specification for Carbon-Steel Wire and
Welded Wire Reinforcement, Plain and Deformed, for Concrete’, ASTM International,
West Conshohocken, PA, DOI: 10.1520/A1064_A1064M-17


5. Azad A. Mohammed and Dunyazad K. Assi, 2012, ‘Tensile Stress-Strain Relationship


for Ferrocement Structures’, vol. 20, University of Sulaimani.
6. Bong et al., 2010, 'Study the structural behaviour of ferrocement beam', UNIMAS e-
Journal of Civil Engineering, Vol. 1(2).
7. Nassif H. H., Chirravuri G., Sanders M., 1998, ‘Flexural behavior of
ferrocement/concrete composite beams’. In: Naaman AE, editor. Ferrocement 6: Lambot
Symposium, Proceedings of Sixth International Symposium on Ferrocement. University
of Michigan, Ann Arbor, p. 251–8.
8. Balaguru PN., 1977, ‘Ferrocement in bending.’ PhD Thesis, University of Illinois at
Chicago Circle, the Graduate College, p. 9–24.
9. Rosenthal, I. and Bljuger, F., 1985, 'Bending behavior of ferrocement reinforced concrete
composite', Journal of Ferrocement, 15(1):15–24.
Tan, K. H. and Ong, K. C. G., 'Use of ferrocement in composite construction', in:
International Conference on Steel and Aluminum Structures, 22–24.
10. Nassif, H. H. and Najam, H., 2003, ‘Experimental and analytical investigation of
ferrocement-concrete composite beams, cement and concrete composites’, 26:787-796.
11. Naaman, A. E., 2000, ‘Ferrocement and laminated cementitious composites’, Techno
Press 3000, Ann Arbor, Michigan, USA.


Notations

b = breadth of a rectangular beam


d = effective depth of a beam, considering the lowest steel mesh wires only
𝐴𝑏 = cross-sectional area of a single wire of the steel mesh in tension
n = number of vertical spaces
m = number of horizontal spaces
s = spacing between bars
c = distance from extreme compression fiber to neutral axis (N.A.)
𝑓𝑦 = Yield strength of steel
𝑓𝑐′ = Compressive strength of concrete
𝑦𝑖 = Moment arm, the distance of center of steel in i-th row from the centroid of compression
block.


Appendix A

Derivation of the equations for estimating the ultimate flexural capacity of an LRF beam

From figure 1(c),


y_i = d - ns + is - \frac{0.85c}{2}    Eq-A1

Since the stress at the bottom steel row is considered to be at yield, as shown in figure 1(c), the
stress in the consecutive upper layers can be calculated using the following equation.

f_s = f_y \left(1 - \frac{ns}{d-c}\right)    Eq-A2

If i = 0 to n for the rows of steel starting from the bottom upwards, the stress at any row "i" can
be calculated using equation A3.

f_{si} = f_y \left[1 - \frac{(n-i)s}{d-c}\right]    Eq-A3

Taking moments of the steel-layer forces about the centroid of the compression stress block, the
ultimate moment is

M = \sum_{i=0}^{n} (m+1) A_b f_{si}\, y_i    Eq-A4

Replacing j = d - \frac{0.85c}{2} in the above equation and solving the summation, we get

M = ns(m+1)(n+1)A_b f_y \left\{ \frac{j}{ns} - \frac{j}{2(d-c)} + \frac{(2n+1)s}{6(d-c)} - \frac{1}{2} \right\}    Eq-A5

The depth of the neutral axis "c" can be calculated from the depth of the compression stress
block using a = 0.85c, where

a = \frac{(m+1)A_b f_y}{0.85 f_c' b} \sum_{i=0}^{n} \left[1 - \frac{(n-i)S}{d-c}\right]    Eq-A6

therefore,

c = \frac{(m+1)A_b f_y}{0.85^2 f_c' b} \left\{ n - \frac{S}{d-c}\left[\frac{n(n+1)}{2}\right] \right\}    Eq-A7


Performance Evaluation of Moving Object Detection Algorithms

Muhammad Shoaib1, Usman Khan Khalil2, Syed Zain Kazmi3, and Javed Iqbal4
1 Abdul Wali Khan University, Mardan, Pakistan
2,3,4 Sarhad University of Science and Information Technology, Peshawar (KPK), 25000, Pakistan
4 javed.ee@suit.edu.pk

Abstract
Current research in computer vision is increasingly focused on advanced systems for moving object detection, and visual analysis
of motion is among the most active research topics in the field. Motion detection is the process of detecting moving objects
in a captured or live video, and it is used in computer vision tasks such as human tracking, vehicle counting, traffic monitoring
and recognition. In this paper, we compare three motion detection algorithms, based on background subtraction, frame difference
and blob analysis. The frame difference algorithm compares two consecutive frames: the previous frame is subtracted from the
current frame, and pixel differences above a threshold identify motion. Background subtraction distinguishes between the current
frame and a background frame, with the background updated every few frames to reduce the effect of illumination changes. A blob
is a region of connected neighboring pixels; BLOB stands for Binary Large Object, where "large" indicates that only large objects
are of interest and "small" regions are considered noise. According to the experimental results, the evaluated methods achieve high
accuracy and performance compared to previous methods using a stationary camera. Morphological operations are applied to
suppress noise in motion detection. Comparing these algorithms, and quantifying their strengths and shortcomings, enables the user
to make a reliable choice of the best algorithm for a specific system.
Keywords: Motion detection, Digital image processing, Computer vision.

I. INTRODUCTION
In video recordings, the detection of moving objects is the initial step in various computer vision applications such as traffic
monitoring, human recognition, object detection and recognition, and video surveillance. With the increase in terrorism threats,
video investigation has become more significant than ever. The rapid growth in the number of installed cameras to be monitored
and the limited supply of well-trained observation staff have created two crucial problems in the video surveillance market [1],[2].
There are three approaches to detect moving objects: the frame difference algorithm, the background subtraction algorithm and
the blob analysis algorithm. In this paper we compare these three algorithms based on their performance. The frame difference
technique takes the difference between two consecutive frames: it compares the current frame with the previous frame and
evaluates the change in pixel values against a threshold to detect motion [3]. The background subtraction technique is a simple
method for detecting moving objects in video from a stationary camera: it compares the current frame with a background frame
pixel by pixel and counts the number of pixels that change by more than a threshold value, and motion is thus identified [3],[4].
We use a background update step to reduce the effect of illumination changes and of background motion such as tree branches,
sea waves, or a car park. A blob is described as a region of connected pixels; the procedure separates pixels by their value and
assigns them to one of two classes, foreground or background.
The frames are transformed to grayscale intensity, and a threshold value is determined for the grayscale frames. The threshold
is chosen such that pixel values on either side of it are classified as foreground or background pixels. Two consecutive grayscale
images are differenced, and the relative difference is used to find the movement between frames. The noise introduced by
differencing the frames is removed by applying the threshold to the images: pixels below the threshold are eliminated from the
difference frame, leaving only the object of interest. Morphological processing techniques are combined with these algorithms
to obtain better results.

II. RELATED WORK

Over the past few years, numerous research papers have been published addressing the assessment of motion detection
approaches. However, evaluations of current background subtraction algorithms with respect to the difficulties arising in video
surveillance are absent, outdated, or of minimal quality. There are numerous challenges that background subtraction algorithms
normally have to handle; earlier evaluations created sets of particular videos that cover several of the challenges they identified,
while others, such as shadows, noise or compression, were left out.
The authors of [5],[6] acknowledged previously developed salient moving object detection techniques by other scholars that
were related to theirs. In their technique, they combine temporal difference imaging and temporally filtered optical

flow. Their hypothesis was that an object with prominent motion moves roughly consistently in one direction within a certain time
period. The calculation used for the temporal difference is fairly analogous to the accumulative difference image calculation
presented by Ramesh, Rangachar and Brian, and the Lucas–Kanade technique was implemented for calculating the optical flows
in [5],[6]. The author in [7] presented a persuasive and versatile background modeling technique for identifying salient
objects in both static and dynamic scenes. The proposed approach evaluates the sample consensus (SACON) of the background
samples and calculates a static model for each pixel.
Ultra-Wideband Radar (UWR) technology has also been presented as one of the preferred choices for through-wall detection
due to its good penetration and high resolution. The motion resolution resulting from the wide bandwidth of UWR is helpful
for distinguishing various targets in complicated environments [5].
In contrast to the existing assessments, we account for a wide range of challenges for motion detection in the field of video
investigation. We therefore developed a moving object detection system in C#, with morphological operations also applied to
reduce noise in the video frames.

III. MOVING OBJECT ALGORITHMS

A. Frame Difference Algorithm
Videos are sequences of multiple pictures, each called a frame, exhibited at a rapid rate so that the human eye perceives
the continuity of their content. It is evident that all image processing methods can be applied to individual frames; further,
the contents of two sequential frames are typically closely interrelated [8]. As its name suggests, frame differencing consists
of taking the difference between two frames and using this change to identify motion. Our proposed methodology is a two-part
process. In the initial step, the object is sensed using frame differencing: we take two successive frames from a given order
of video frames, convert these colored frames to two consecutive grayscale pictures, and use their relative difference to detect
motion between frames. The noise produced by this operation is eliminated by applying the threshold value to the grayscale
frames and by using morphological operations. As stated in equation (1), the relative difference between the two frames should
exceed the threshold T for the object to be identified.

F_i − F_{i−1} > T    (1)

This is illustrated in the figures below.

Fig. 1. Current frame. Fig. 2. Previous frame.


Fig. 3. Flow chart of frame difference algorithm.

1) Proposed Algorithm for Frame Difference: The following flow chart shows how the frame difference algorithm works.
The steps are:
a) Start.
b) Input a pair of frames from a sequence of N video frames.
c) Convert the frames from RGB to grayscale.
d) Find the difference between the frames.
e) Convert the difference to binary using thresholding.
f) If a pixel value in the current pair of frames is greater than the threshold value, motion is detected in that frame of the
moving clip; otherwise, values below the threshold indicate that there is no motion in the pair of frames.
g) Hence the motion is detected.
h) Stop.
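The steps above can be sketched with NumPy (a hypothetical minimal implementation; the paper's system was written in C#, and the threshold value here is an assumed default):

```python
import numpy as np

def frame_difference(prev_rgb, curr_rgb, thresh=25.0):
    """Steps b)-f): grayscale conversion, differencing, thresholding."""
    prev_gray = prev_rgb.astype(float).mean(axis=2)  # RGB -> grayscale
    curr_gray = curr_rgb.astype(float).mean(axis=2)
    diff = np.abs(curr_gray - prev_gray)             # frame difference
    mask = diff > thresh                             # binary motion mask
    return mask, bool(mask.any())                    # mask and motion flag
```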
The following sections describe the components of the proposed algorithm used for tracking and finding a moving object:
2) Difference Algorithm: All preceding frames are kept in a memory buffer, and the current frame in the video is Fi. Take the ith
frame (Fi ) as input and the (i − 1)th frame (Fi−1 ) from the frame buffer; this frame buffer is a temporary store used
to keep some of the preceding frames for later use. Now, execute the frame differencing operation on the ith and (i − 1)th frames.
The resulting picture is characterized by:
Fi : a grayscale image of the current frame of the scene,
Fi−1 : a grayscale image of the previous frame of the scene, and
T hresh : the threshold that governs whether the movement counts as motion or not.
This procedure dynamically determines the background from all received video frames: each following frame is subtracted
and compared with the background threshold. If the difference is greater than the threshold, the pixels are marked as objects;
otherwise they are background. The background is updated after each frame differencing step [9].


3) Threshold: This step converts a frame to binary using a stated threshold value. All pixels with intensities equal to or
greater than the threshold are transformed to white pixels; the remaining pixels, with intensities smaller than the threshold,
are transformed to black pixels. For each pixel (X) in Image Z:
If X.getP ixel(x).Intensity > threshold
Z.setP ixel(x) = W hite
Else
Z.setP ixel(x) = Black
4) Morphological Operations: Morphology is a wide-ranging group of image processing operations that process pictures based
on shapes. A morphological operation applies a structuring element to an input picture and produces an output of the same
dimensions. Morphological processing has two basic procedures, erosion and dilation. In a morphological procedure, the value
of every pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbors.
By selecting the size and contour of the neighborhood, one can create a morphological operation that is sensitive to particular
shapes in the input image.
Erosion: Erosion "shrinks" or "thins" objects in a binary frame. As in dilation, the manner and extent of shrinking is
controlled by a structuring element; the erosion procedure is essentially the opposite of the dilation operation. Mathematically,
the erosion of a set A by a structuring element B, denoted A ⊖ B, is the set of all points z such that B, translated by z,
is contained in A. Figure 4 shows the result of eroding a binary image A with a 3×3 square structuring element; erosion is
more pronounced in the direction of the longer dimension of a rectangular structuring element.

Fig. 4. Effect of erosion using 3*3 square structural element B.
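A minimal erosion over a k×k square structuring element can be written directly in NumPy (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def erode(binary, k=3):
    """A pixel survives only if the whole k*k window around it is foreground."""
    pad = k // 2
    padded = np.pad(binary, pad, constant_values=0)  # border counts as background
    out = np.zeros_like(binary)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].all()
    return out
```

Eroding a 3×3 block of foreground pixels with a 3×3 element leaves only its center pixel, matching the shrinking behavior described above.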

B. Background Subtraction Algorithm


This is the most widespread and simple methodology for detecting motion [9], [10]. The concept of this algorithm is to subtract
the current frame from a reference background frame, which is updated over a given period. It works extremely well when static
cameras are installed. The subtraction leaves only the dynamic or new items, which contain the complete shape of an object.
This method is understandable and computable for real-time systems, but it is tremendously sensitive to dynamic scene variations
caused by illumination changes, tree branches, sea waves, an animated background, and so on, so a good background maintenance
model is necessary. The approach detects moving regions in a frame by evaluating the difference between the current picture and
the reference background frame captured from a stationary background over a given time. The difficulty with background
subtraction [9], [10] is to automatically update the background from the incoming video frames, and the model should be able
to handle the following difficulties:
Motion in the background: dynamic background regions, such as branches and leaves of trees, a flag waving in the wind,
or flowing water, should be acknowledged as part of the background.
Illumination changes: the background model should be able to adapt to continuing changes in illumination over a period of time.
Shadows: shadows cast by moving objects should be acknowledged as part of the background, not of the object.
Camouflage: motion should be identified even if pixel features are similar to those of the background.
Bootstrapping: the background model should be able to maintain the background even in the absence of foreground objects. This
removes many noisy pixels, which mostly have a neighboring value, and also eliminates some of the pixels representing the
shadows cast by the moving objects.
In this paper, a morphological operation is used to apply a morphological opening to the binary frame. The interest of this
morphological technique is that it can eliminate small objects considered as noise. The morphological opening is composed of two


basic operators in the area of mathematical morphology, applied with the same structuring element. Some morphological
post-processing techniques are implemented to overcome noise and to improve the detected regions.
Result = |F ramei − Backgroundi |
If
Result > threshold
T hen
Result = 1
Else
Result = 0
1) Algorithm for Background Subtraction: The proposed algorithm dynamically extracts the background frame from the incoming
video frames; the background is then subtracted from every subsequent frame and the difference is compared with the background
threshold. If the difference is greater than the threshold, the pixel is assumed to be foreground; otherwise it is background.
The background is updated at each and every frame. The steps of the background subtraction algorithm are as follows:
a) Start.
b) Input a pair of frames from a sequence of N video frames.
c) Convert the frames from RGB to grayscale.
d) Find the difference between the frames.
e) Convert the difference to binary using thresholding.
f) If a pixel value in the current pair of frames is greater than the threshold value, motion is detected in that frame of the
moving clip, and an opening is also applied for noise removal; otherwise, values below the threshold indicate that there is
no motion in the pair of frames.
g) Hence the motion is detected.
h) Stop.

Fig. 5. Flow Chart for Background Subtraction Algorithm.
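The update-and-subtract loop can be sketched as follows, using a running-average background model over grayscale frames (the learning rate alpha is an assumed parameter, not taken from the paper):

```python
import numpy as np

def background_subtraction(gray_frames, thresh=25.0, alpha=0.05):
    """Returns a binary foreground mask per frame; the background adapts
    slowly (rate alpha) so gradual illumination changes are absorbed."""
    background = gray_frames[0].astype(float)
    masks = []
    for frame in gray_frames[1:]:
        gray = frame.astype(float)
        masks.append(np.abs(gray - background) > thresh)      # foreground test
        background = (1 - alpha) * background + alpha * gray  # update step
    return masks
```

A small alpha makes the model slow to absorb true foreground objects into the background, while still tracking gradual lighting changes, which is the trade-off discussed above.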


2) Difference: The difference of two frames of equal size and resolution produces an image in which each pixel equals the
absolute difference between the matching pixels of the provided images. Assuming we have two images X and Y , we combine
them to obtain image Z. For each pixel (X) in Image Z:
R = |X.getP x(x).R − Y.getP x(x).R|
G = |X.getP x(x).G − Y.getP x(x).G|
B = |X.getP x(x).B − Y.getP x(x).B|
Z.setP x(x) = Color(R, G, B)
3) Threshold: This converts the image to binary using a specified threshold value. All pixels with intensities equal to or higher
than the threshold are transformed to white pixels; all other pixels, with intensities below the threshold, are transformed to black
pixels. For each pixel (X) in Image Z:
If
X.getP x(x).Intensity > threshold
Z.setP x(x) = W hite
Else
Z.setP x(x) = Black
4) Opening: The opening operation is a combination of the two fundamental operations, erosion and dilation: it is the
dilation of the erosion, and its primary feature is noise removal. This operation will separate blobs that are connected by
a thin layer of pixels. Let X be a subset of the image and let B denote the structuring element. The morphological opening
is defined by:
X ∘ B = (X ⊖ B) ⊕ B
All pixels that can be covered by the mask, with the mask lying completely within the foreground region, will be preserved.
However, all foreground pixels that cannot be reached by the mask without parts of it moving out of the foreground region
will be eroded away. The figure below shows an example of an opening operation with a 3×3 square mask.

Fig. 6. Opening of an image with a 3x3 square mask.
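As an illustration, opening (erosion then dilation) removes isolated noise pixels while restoring larger shapes. This is a hypothetical NumPy sketch, not the paper's code:

```python
import numpy as np

def _windows(binary, k):
    """All k*k neighborhoods of each pixel, zero-padded at the border."""
    pad = k // 2
    p = np.pad(binary, pad, constant_values=0)
    h, w = binary.shape
    return np.array([[p[i:i + k, j:j + k] for j in range(w)] for i in range(h)])

def opening(binary, k=3):
    """Erosion (window must be all-foreground) then dilation (any foreground)."""
    eroded = _windows(binary, k).all(axis=(2, 3)).astype(binary.dtype)
    return _windows(eroded, k).any(axis=(2, 3)).astype(binary.dtype)
```

An isolated pixel is erased by the erosion step and never comes back, while a 3×3 block is shrunk to its center and then fully restored by the dilation, which is exactly the noise-removal behavior described above.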

5) Edge Detection: Edge detection is a mathematical technique that finds the points in a digital frame at which the brightness
of a picture changes sharply or, more formally, has discontinuities. The points at which picture brightness changes quickly are
organized into a set of curved line segments called edges. Edge detection is a main tool in image processing and is also used
in machine vision, mainly in the areas of feature detection and feature extraction.

C. Blob Analysis Algorithm


A Binary Large Object, often called a BLOB, is a collection of neighboring pixels in a binary picture. Two terms are used here:
"large" denotes the objects of interest, while "small" objects are considered distortion (noise).


Fig. 7. Blob Analysis Algorithm on real time video.

For a given area of interest in image processing, a blob is an area of connected pixels. Blob procedures differentiate the
pixels by their value and assign them to the foreground or the background. In blob analysis, blob structures are usually
characterized by properties such as dimension, Feret diameter, blob contour, and location. Blob analysis tools extract these
properties for a wide range of applications such as pick-and-place. A blob can thus be defined as a neighborhood of pixels,
and the analysis tool generally considers adjacent foreground pixels as part of the same blob. Connectivity between
neighboring pixels can be defined with 4-connectivity or 8-connectivity. The 8-connectivity is more precise than
4-connectivity, but 4-connectivity is often used since it involves fewer calculations and can therefore process the image
more rapidly.

Fig. 8. 4- and 8-connected pixels.

1) Image Acquisition: An image is acquired.
2) Image Segmentation: Isolating the foreground pixels of interest from the image background, using operations such as
thresholding, is called segmentation.
3) Extract Features: Features such as area (the number of pixels), center of gravity, or the orientation of a blob or blobs are
calculated.
Blob Detection:
In the blob detection module, the pixels detected by the background subtraction module in the current frame are grouped
together using a contour detection algorithm. The contour detection algorithm groups the individual pixels into disconnected
classes and then finds the contours surrounding each class. Each class is marked as a candidate blob (CB). These CBs are then


checked by size, and small blobs are removed to reduce false detections. An example result of the blob detection module can
be seen in the following figure.

Fig. 9. Left: Original video frame. Right: The result of the blob detection module.
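A simple 4-connectivity grouping with a size filter can be sketched as follows (the `min_size` cutoff is an assumed parameter, and flood-fill labelling stands in for the contour detection used by the paper's system):

```python
import numpy as np
from collections import deque

def label_blobs(mask, min_size=5):
    """Flood-fill labelling of 4-connected foreground pixels; blobs smaller
    than min_size are discarded as noise, mirroring the CB size check."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                labels[sy, sx] = next_label
                queue, pixels = deque([(sy, sx)]), []
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    # 4-connectivity: up, down, left, right neighbors only
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                if len(pixels) < min_size:  # small blob: treat as noise
                    for y, x in pixels:
                        labels[y, x] = 0
    return labels
```

Using 4-connectivity keeps the neighbor loop at four cells per pixel, matching the speed argument made for 4-connectivity above; 8-connectivity would simply add the four diagonal offsets.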

IV. R ESULTS
This paper concerns the implementation of the frame difference, background subtraction and blob analysis algorithms, so we compared the performance of these algorithms separately. The frame difference algorithm is very sensitive to noise and to variations in brightness, and objects must be continuously moving to be detected. The algorithm does not give excellent results if the objects move rapidly while the frame rate stays fixed, and it does not yield perfect edges of the moving objects, but it is extremely easy to implement and use.
The background subtraction approach is very sensitive to even slight movement; the background image must be updated for good results, and even a fixed background must be adapted to illumination changes. If an object moves smoothly it is difficult to recover the whole moving object, and if an object is present at the beginning of the sequence and then disappears, the algorithm will keep detecting motion at the place where the object used to be. The blob analysis algorithm is more efficient when there is more than one moving object, but segmentation is a prerequisite for blob detection. A comparative analysis of these algorithms can be seen in the following table; overall, the background subtraction algorithm is the most robust and efficient in use.
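The two pixel-level approaches compared above can be sketched roughly as follows (illustrative pure-Python versions operating on intensity arrays; the threshold and the running-average update factor are assumed values, not those of the paper):

```python
def frame_difference(prev, curr, thresh=25):
    """Binary motion mask: a pixel is foreground when the absolute
    intensity change between two consecutive frames exceeds a threshold."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def background_subtract(background, curr, thresh=25, alpha=0.05):
    """Foreground mask against a reference background, plus a running-average
    update so the background model adapts to slow illumination changes."""
    mask = [[1 if abs(c - b) > thresh else 0 for b, c in zip(br, cr)]
            for br, cr in zip(background, curr)]
    new_bg = [[(1 - alpha) * b + alpha * c for b, c in zip(br, cr)]
              for br, cr in zip(background, curr)]
    return mask, new_bg
```

The sketch makes the trade-off visible: frame differencing only compares adjacent frames, so a stationary object vanishes from the mask, while background subtraction keeps flagging any pixel that differs from the (slowly updated) reference.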


Fig. 10. Table of comparison of the algorithms.

V. CONCLUSIONS
The fast evolution of computing and digital image processing has made moving object detection and tracking an attractive research topic nowadays.
In this paper, we compared three algorithms, namely the frame difference, background subtraction and blob analysis algorithms. Frame differencing consists of taking the difference between two successive frames and using this change to identify the object. Background subtraction takes the absolute difference between the background and the current frame, while also dynamically updating the background frame by frame. Blob analysis operates on regions of connected neighboring pixels. In addition to these algorithms, we used morphological operations to remove noise from the video frames. We used a static background in this paper; in the future the method can be extended to dynamic backgrounds. Future work will be directed towards achieving human detection, vehicle counting, and face recognition.
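The morphological noise removal mentioned above might look like the following sketch (a plain 3x3 opening on a binary mask; the paper does not specify which structuring element it used, so the 3x3 square is an assumption):

```python
def erode(mask):
    """3x3 erosion: a pixel survives only if its whole 3x3 neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(mask[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def dilate(mask):
    """3x3 dilation: a pixel is set if any pixel in its 3x3 neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def opening(mask):
    """Opening = erosion followed by dilation; isolated noise pixels are
    removed while large foreground regions keep their original size."""
    return dilate(erode(mask))
```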

REFERENCES
[1] Y. Yoo and T.-S. Park, “A moving object detection algorithm for smart cameras,” in Computer Vision and Pattern Recognition Workshops, 2008. CVPRW’08.
IEEE Computer Society Conference on. IEEE, 2008, pp. 1–8.
[2] A. I. Singh and G. Kaur, “Motion detection method to compensate camera flicker using an algorithm,” International Journal Of Computational Engineering
Research, vol. 2, no. 3, pp. 919–926, 2012.
[3] S. Joudaki, M. S. B. Sunar, and H. Kolivand, “Background subtraction methods in video streams: A review,” in 2015 4th International Conference on
Interactive Digital Media (ICIDM), Dec 2015, pp. 1–6.
[4] Y. Benezeth, P.-M. Jodoin, B. Emile, H. Laurent, and C. Rosenberger, “Comparative study of background subtraction algorithms,” Journal of Electronic
Imaging, vol. 19, Jul. 2010. [Online]. Available: https://hal.inria.fr/inria-00545478
[5] J. Li, Z. Zeng, J. Sun, and F. Liu, “Through-wall detection of human being’s movement by uwb radar,” Geoscience and Remote Sensing Letters, IEEE,
vol. 9, no. 6, pp. 1079–1083, 2012.
[6] A. K. Sahu and A. Choubey, “A motion detection algorithm for tracking of real time video surveillance,” in International Journal of Computer Architecture
and Mobility, 2013.
[7] H. Wang and D. Suter, “Background subtraction based on a robust consensus method,” in Pattern Recognition, 2006. ICPR 2006. 18th International
Conference on, vol. 1. IEEE, 2006, pp. 223–226.
[8] D. J. Shah, D. Estrin, and A. Azari, “Motion based bird sensing using frame differencing and gaussian mixture,” Undergraduate Research Journal, vol. 47,
2008.
[9] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers, “Wallflower: Principles and practice of background maintenance,” in Computer Vision, 1999. The
Proceedings of the Seventh IEEE International Conference on, vol. 1. IEEE, 1999, pp. 255–261.
[10] L. Maddalena and A. Petrosino, “A self-organizing approach to background subtraction for visual surveillance applications,” Image Processing, IEEE
Transactions on, vol. 17, no. 7, pp. 1168–1177, 2008.


Real Time Eye Detection and Tracking Method for Driver Assistance System

Umar Mahboob, Syed Zain Kazmi, Usman Khalil, Javed Iqbal
Electrical Engineering Department
Sarhad University of Science & IT, Peshawar, Pakistan
Corresponding Email: javed.ee@suit.edu.pk

Abstract—Drowsiness is one of the crucial issues that cause accidents in the working environment, particularly when it comes to critical events. The main motive of this work is to improve safety by detecting drowsiness in acute situations. The method for detecting drowsiness in a subject is established using image processing analysis. A camera is pointed at the subject's face and captures images; once the images are captured, the proposed algorithm is applied to them to check for drowsiness. This technique activates a failsafe system when implemented in hardware. In addition, it does not require any physical attachment to the body of the subject, as in electroencephalography (EEG) based systems, which makes it considerably more practical. It is a simple technique based on energy maps of the eyes, and this simplicity also makes it fast.

Keywords—Computer Vision, Algorithms, Image Processing, Drowsiness Detection.

I. INTRODUCTION
Road safety is one of the major concerns of the entire world. Nowadays the number of deaths and injuries is increasing due to drowsiness; according to various studies, drowsiness accounts for about 20% of harms or accidents [1]. Safety is therefore necessary, which is why we design a system that detects drowsiness. In this regard many systems have been designed, some of which are still in use for safety, involving pulse rate and heart rate monitoring; eyelid, gaze and head movement detection; and physiological measures such as EEG. Besides these, image processing techniques are considered more reliable and efficient. This work aims to develop an image processing technique to detect drowsiness that is simple, cheap, fast, quiet and more practical than EEG. It does not require a high resolution camera if the light intensity is maintained properly. The proposed system will help address the main cause of accidents and provide safety by studying the facial expressions of the driver. Different kinds of research have been done in this area, but image processing is one of the most efficient and precise approaches. This work aims to design an image processing system that detects drowsiness and thereby ensures safety: saving human lives, protecting people from accidents, and providing them the means for a better life.
The rest of the paper is organized as follows. In Section II, the current work in the area is reviewed. In Section III, we present our proposed algorithm and methodology for detecting driver drowsiness; we first compare our proposed algorithm with state-of-the-art algorithms in MATLAB. In Section IV, we present the implementation of the driver assistance system, and in Section V we give the results and conclusions of the work.

II. RELATED WORK


There are many systems for drowsiness detection, which fall mainly into three types: physiological measurements of the person, systems based on visual cues, and decisions based on the performance of the driving task. Road users are well known to fall asleep while driving: during long journeys fatigue occurs, causing lack of concentration and occasionally road accidents. One line of work is based on eye detection and a simple distributed force sensor to monitor the driver's fatigue [2], [10]. In physiological measurements, electrodes are connected to the subject to monitor characteristics such as brain waves (electroencephalogram, EEG), eye movements (electrooculography, EOG) and heart rate (electrocardiogram, ECG); drowsiness can then be detected from these physiological signals using wavelet-based nonlinear features and machine learning [13].
Various EEG-based systems have been proposed in [3], [4], [5] and [6]. They are efficient and are able to detect the drowsy state with roughly 10% inaccuracy. Systems based on visual cues use different techniques to detect drowsiness, such as decisions based on the orientation of the face, facial expressions (yawning), and eyelid movement [7], [12]. One way of detecting the driver's drowsiness level uses facial expressions and is executed in the following steps: capturing the driver's facial image, tracing the facial features by image processing, and classifying the driver's drowsiness level by pattern classification.
Another class of methods for drowsiness detection monitors how the subject is performing his task. If the subject is a car driver, drowsiness can be noticed through several indicators:
i. Variations in the steering wheel angle
ii. Lateral position


iii. Speed of vehicle.


These methods are developed on the supposition that an alert driver will make fewer variations during driving than a drowsy driver [1], [9], [11].

III. PROPOSED METHODOLOGY


Drowsiness is the condition of desiring sleep. Using image processing algorithms, this proposed work focuses on locating the iris of the eyes within the entire captured image of the face. Once the iris is located, the system checks whether the eyes are open or closed, and this is how the system is designed to determine drowsiness. For developing the algorithms and logic, MATLAB is the tool used. In this work we detect the iris on the basis of energy mapping of the eyes: since the eyes carry the maximum light energy, if light is delivered continuously, then by considering the average and mean energy and looking at the high energy areas we can locate the eye region.

Figure 1: Eye Flow Diagram

When the iris detection process is complete, we proceed to develop a check of whether the eyes are open or closed. If we detect fatigue, a warning signal is sent to trigger an alarm (a hardware device) to wake the subject. A failsafe system can also be activated: in addition to the alarm, the procedure being carried out by the subject can be stopped by sending a signal to a stop button. In this work we warn the subject accordingly by measuring the standard deviation and mean area of the eye region, while using a digital logic based algorithm to determine drowsiness. The proposed architecture of this work is shown in Fig. 2.

The design can be described by the following steps:


• The camera is in operating condition and captures video continuously.
• The camera is linked with the MATLAB software.
• An image is taken from the ongoing video using a MATLAB algorithm (energy map concept).
• Different types of filters are applied to the taken image to find the edges.
• After edge detection, pixels are grouped into pairs of 8 pixels matching the size of the eyes.
• Once the eyes are detected, a threshold value is set to decide whether the eyes are closed or open.
• If the number of closed eyes is greater than the number of open eyes, drowsiness is detected and a signal is generated to activate the alarm.


• If the number of closed eyes is less than the number of open eyes, drowsiness is not detected and the system again takes an image from the ongoing video.
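The decision rule in the final steps above can be sketched as follows (an illustrative pure-Python function; the paper's actual per-frame bookkeeping in MATLAB may differ):

```python
def drowsiness_state(eye_states):
    """Apply the rule above over a window of per-frame eye states
    ('open' or 'closed'): report drowsiness when closed-eye frames
    outnumber open-eye frames, otherwise report the active state."""
    closed = sum(1 for s in eye_states if s == "closed")
    opened = len(eye_states) - closed
    return "drowsy" if closed > opened else "active"
```

A "drowsy" result would then trigger the alarm signal, while "active" sends the system back to grab the next frame.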

IV. IMPLEMENTATION

A. Development Stages of Image Acquisition in Image Processing


Our implementation is based on the energy map algorithm, which can be realized in MATLAB. The different stages are shown in the following flow chart. First we capture an image with the help of a webcam. Then that image is converted into hue, saturation, value (HSV) form (HSV is the scale onto which the image color is changed from RGB), the HSV based image is filtered, and the gradient magnitude is calculated, from which we obtain the information needed to achieve our target. Finally a mask application is applied, which helps us declare the regions of all the edges.

Figure 2: Block diagram for MATLAB implementation

The mask application mainly consists of two masks, one horizontal and one vertical, which are necessary for edge detection and for evaluating the differences of image pixel intensity around an edge. The next step is convolution, which helps us get the edges from the original image by convolving the horizontal as well as the vertical mask with the processed image. Once the convolution is complete, the algorithm generates an energy map from the HSV based image, separates the global and local maxima and minima, and finally selects the desired maxima of the eye area. This whole process is implemented in MATLAB; the block diagram of the whole process is shown in Fig. 2.
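The horizontal and vertical masks and the resulting edge map can be illustrated with a simple central-difference sketch (the exact masks used in the paper are not specified, so these are assumed for illustration):

```python
def gradient_magnitude(img):
    """Convolve a horizontal and a vertical difference mask
    ([-1, 0, 1] along each axis) with a grayscale image and combine
    the two responses into a gradient-magnitude edge map."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal mask response
            gy = img[y + 1][x] - img[y - 1][x]  # vertical mask response
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

High values in the output mark pixels where intensity changes sharply, which is exactly the information the edge-detection stage feeds into the energy map.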

B. Hardware Implementation
As the simulation is completed, we now have to generate a signal in MATLAB and send it to the controller so that the desired action is carried out and the subject's drowsy state is reported. For this we load the desired output result into a variable 'm': '1' if the drowsy state is certainly detected, and '2' to represent the active state. These output signals are sent to the controller through serial communication to trigger the failsafe alarm.
Our hardware has two main parts: one for display, and one for indication and the failsafe system. When the Arduino UNO is connected to a laptop through the USB port, the orange LED begins glowing, indicating that the power is ON and that Arduino pin 13 is high. After that we run the code that interfaces the Arduino with MATLAB; this code also includes the interfacing of the camera with MATLAB. When the camera turns on, it captures an image of the subject and the detection operation is performed. After all processing is done, the number of objects detected is measured and stored in the variable 'm'. If the number of objects detected is equal to or less than 2, the eyes are detected and 1 is stored in the variable; otherwise, if the number of objects detected is greater than 2, the eyes are not detected, so 2 is stored in the variable and passed to the Arduino. When 1 is sent to the Arduino, it turns on the red LED; when 2 is received from MATLAB, it turns on the buzzer, which is used as the failsafe system.
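The mapping from the detector's object count to the serial code described above can be sketched as a minimal function (illustrative only; the actual MATLAB serial write and the Arduino sketch are not reproduced here, and the commented serial call and port name are hypothetical):

```python
def state_code(num_objects):
    """Follow the rule stated above: two or fewer detected objects
    count as a valid eye detection (code 1, red LED on the Arduino);
    more than two count as a failed detection (code 2, buzzer)."""
    # The resulting code byte would then be written to the controller over
    # a serial link, e.g. with pyserial (hypothetical port and baud rate):
    #   serial.Serial("COM3", 9600).write(bytes([code]))
    return 1 if num_objects <= 2 else 2
```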


V. RESULTS

In this section, we discuss three cases of this work:


Case I: We consider that the driver is awake; we send a signal from MATLAB to the Arduino through the USB A-B cable, and as a result the hardware shows that the driver is active, as shown in Fig. 3.

Figure 3: Experimental Setup case I


Case II: We assume that the driver is drowsy, so we send a signal from MATLAB to the Arduino through the USB A-B cable; as a result we get that the driver is drowsy, and the buzzer is activated at the same time, as shown in Fig. 4.

Figure 4: Experimental Setup case II


Case III: If the system detects drowsiness five times continuously, it generates a signal to operate a brake-fail system, as shown in Fig. 5.


Figure 5: Experimental Setup case III

A. Conclusions
This work provides a prototype computer vision system that helps detect drowsiness and fatigue using the proposed algorithm. We decide whether a person is in a drowsy state or not on the basis of image processing in MATLAB: first we recognize the face, then select the eye portion, and then use different algorithms to decide the condition of the eyes, i.e., either open or closed.
It is to be noted that the image captured from the live video depends on the threshold value: any change in the threshold value changes the whole process. Changing the light intensity affects the threshold value, and thus the whole subsequent process changes directly, so the light intensity must be kept as constant as possible. Anything that affects the intensity of light should be removed from the background. To resolve this matter, a high intensity camera is required; otherwise, the light intensity must be kept constant.

REFERENCES
[1] Ani Syazana Bt Jasni, “Drowsiness Detection For Car Assisted Driver System Using Image Processing Analysis”, Bachelor Degree of Electrical
Engineering (Electronics) Thesis, University Malaysia Pahang (UMP), pp.1-24, November, 2010.
[2] J. Teeuw, “Comparison of Error-Related EEG Potentials”, University of Twente, The Netherlands, 21 June, 2010.
[3] Saroj K. L. Lal, Ashley Craig, Peter Boord, Les Kirkup, Hung Nguyen, "Development of an algorithm for an EEG-based driver fatigue countermeasure," Journal of Safety Research, vol. 34, no. 3, pp. 321-328, 2003.
[4] Vuckovic, Aleksandra, Radivojevic, Vlada, Chen, Andrew C. N., Popovic, "Automatic recognition of alertness and drowsiness from EEG by an artificial neural network," Medical Engineering and Physics, vol. 24, pp. 349-360, March 2002.
[5] Shen, Kai-Quan, Li, Xiao-Ping, Ong, Chong-Jin, Shao, Shi-Yun, Wilder-Smith, Einar P. V., "EEG-based mental fatigue measurement using multi-class support vector machines with confidence estimate," Clinical Neurophysiology, vol. 119, pp. 1524-1533, May 2008.
[6] Budi Thomas Jap, Sara Lal, Peter Fischer, Evangelos Bekiaris, "Using EEG spectral components to assess algorithms for detecting fatigue," Expert Systems with Applications, vol. 36, no. 2, pp. 2352-2359, March 2009.
[7] S. Abtahi, B. Hariri and S. Shirmohammadi, "Driver drowsiness monitoring based on yawning detection," 2011 IEEE International
Instrumentation and Measurement Technology Conference, Binjiang, 2011, pp. 1-4.
[8] R. Sayed and A. Eskandarian, "Unobtrusive drowsiness detection by neural network learning of driver steering," Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, vol. 215, no. 9, pp. 969-975, June 2001.
[9] Ankit S.Jayswal, Rachana V.Modi, “Face and Eye Detection Techniques for Driver Drowsiness Detection,” International Research Journal of
Engineering and Technology, vol. 04, April 2017.
[10] Susanta Podder, Sunita Roy, "Driver's drowsiness detection using eye status to improve the road safety," International Journal of Innovative Research in Computer and Communication Engineering, vol. 1, issue 7, September 2013.
[11] Triyanti and H. Iridiastadi, "Challenges in detecting drowsiness based on driver’s behavior", IOP Conference Series: Materials Science and
Engineering, vol. 277, p. 012042, 2017.
[12] N. Alioua, A. Amine and M. Rziza, "Driver’s Fatigue Detection Based on Yawning Extraction," International Journal of Vehicular Technology,
vol. 2014, pp. 1-7, 2014.
[13] Pankti P. Bhatt, "Various Methods for Driver Drowsiness Detection: An Overview," International Journal on Computer Science and Engineering, vol. 9, March 2017.

Contact
Dr. Noor Badshah (Focal Person)
Department of Basic Sciences & Islamiat
University of Engineering and Technology, Peshawar

Phone: +92-91-9222220
Email: ncmsea@gmail.com, noor2knoor@gmail.com
Web: http://121.52.147.78/modules/ncmsea/
