
Editor-in-Chief

Dr. Sergey Victorovich Ulyanov


State University “Dubna”, Russian Federation

Editorial Board Members

Ebtehal Turki Alotaibi, Saudi Arabia Reza Javanmard Alitappeh, Iran


José Miguel Rubio, Chile Luiz Carlos Sandoval Góes, Brazil
Luis Pérez Domínguez, Mexico Abderraouf Maoudj, Algeria
Brahim Brahmi, Canada Ratchatin Chancharoen, Thailand
Behzad Moradi, Iran Shih-Wen Hsiao, Taiwan
Hesham Mohamed Shehata, Egypt Nguyen-Truc-Dao Nguyen, United States
Mahmoud Shafik, United Kingdom Lihong Zheng, Australia
Siti Azfanizam Ahmad, Malaysia Hassan Alhelou, Syrian Arab Republic
Hafiz Alabi Alaka, United Kingdom Fazlollah Abbasi, Iran
Abdelhakim Deboucha, Algeria Chi-Yi Tsai, Taiwan
Karthick Srinivasan, Canada Shuo Feng, Canada
Ozoemena Anthony Ani, Nigeria Mohsen Kaboli, Germany
Rong-Tsu Wang, Taiwan Dragan Milan Randjelovic, Serbia
Yu Zhao, China Milan Kubina, Slovakia
Aslam Muhammad, Pakistan Yang Sun, China
Yong Zhong, China Yongmin Zhang, Canada
Xin Zhang, China Mouna Afif, Tunisia
Anish Pandey, India Yousef Awwad Daraghmi, Palestine
Hojat Moayedirad, Iran Ahmad Fakharian, Iran
Mohammed Abdo Hashem Ali, Malaysia Kamel Guesmi, Algeria
Paolo Rocchi, Italy Yuwen Shou, Taiwan
Falah Hassan Ali Al-akashi, Iraq Sung-Ja Choi, Korea
Chien-Ho Ko, Taiwan Yahia ElFahem Said, Saudi Arabia
Baki Koyuncu, Turkey Michał Pająk, Poland
Wai Kit Wong, Malaysia Qinwei Fan, China
Viktor Manahov, United Kingdom Andrey Ivanovich Kostogryzov, Russian Federation
Riadh Ayachi, Tunisia Ridha Ben Salah, Tunisia
Terje Solsvik Kristensen, Norway Andrey G. Reshetnikov, Russian Federation
Hussein Chible Chible, Lebanon Mustafa Faisal Abdelwahed, Egypt
Tianxing Cai, United States Ali Khosravi, Finland
Mahmoud Elsisi, Egypt Chen-Wu Wu, China
Jacky Y. K. NG, Hong Kong Mariam Shah Musavi, France
Li Liu, China Shing Tenqchen, Taiwan
Fushun Liu, China Konstantinos Ilias Kotis, Greece
Volume 1 Issue 1 · April 2019 · ISSN 2661-3220 (Online)

Artificial Intelligence
Advances

Editor-in-Chief
Dr. Sergey Victorovich Ulyanov
Volume 1 | Issue 1 | April 2019 | Page 1-58
Artificial Intelligence Advances

Contents
Article
1 To Perform Road Signs Recognition for Autonomous Vehicles Using Cascaded Deep Learning Pipeline
Riadh Ayachi, Yahia ElFahem Said, Mohamed Atri
11 GFLIB: an Open Source Library for Genetic Folding Solving Optimization Problems
Mohammad A. Mezher
18 Quantum Fast Algorithm Computational Intelligence PT I: SW / HW Smart Toolkit
Ulyanov S.V.
52 A Novel Dataset For Intelligent Indoor Object Detection Systems
Mouna Afif, Riadh Ayachi, Yahia Said, Edwige Pissaloux, Mohamed Atri

Review
44 Architecture of a Commercialized Search Engine Using Mobile Agents
Falah Al-akashi

Copyright
Artificial Intelligence Advances is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Readers have the right to copy and distribute articles in this journal in any form and in any medium, and may also modify, convert or create derivative works on the basis of articles. In sharing and using articles in this journal, the user must indicate the author and source, and mark the changes made to articles. Copyright © BILINGUAL PUBLISHING CO. All Rights Reserved.

Artificial Intelligence Advances


https://ojs.bilpublishing.com/index.php/aia

ARTICLE
To Perform Road Signs Recognition for Autonomous Vehicles Using
Cascaded Deep Learning Pipeline
Riadh Ayachi1, Yahia ElFahem Said1,2*, Mohamed Atri1
1. Laboratory of Electronics and Microelectronics (EμE), Faculty of Sciences of Monastir, University of Monastir, Tunisia
2. Electrical Engineering Department, College of Engineering, Northern Border University, Arar, Saudi Arabia

ARTICLE INFO

Article history
Received: 26 February 2019
Accepted: 6 April 2019
Published Online: 30 April 2019

Keywords:
Traffic signs classification
Autonomous vehicles
Artificial intelligence
Deep learning
Convolutional Neural Networks (CNN)
Image understanding

ABSTRACT

An autonomous vehicle is a vehicle that can guide itself without human intervention. It is capable of sensing its environment and moving with little or no human input. This kind of vehicle has become a concrete reality and may pave the way for future systems where computers take over the art of driving. Advanced artificial intelligence control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant road signs. In this paper, we introduce an intelligent road signs classifier to help autonomous vehicles recognize and understand road signs. The road signs classifier is based on an artificial intelligence technique. In particular, a deep learning model is used: Convolutional Neural Networks (CNN). The CNN is a widely used deep learning model for solving pattern recognition problems such as image classification and object detection. CNNs have been successfully used to solve computer vision problems because they process images in a way that resembles human visual decision making. The proposed pipeline was trained and tested using two different datasets. The proposed CNNs achieved high performance in road sign classification, with a validation accuracy of 99.8% and a testing accuracy of 99.6%. The proposed method can be easily implemented for real-time applications.

 
1. Introduction

In recent years, the number of road accidents has increased dramatically. According to the American National Safety Council [13], more than 40,000 people die in car accidents each year. The main causes of accidents are non-respect of road rules and speed limits. Automated technologies have been developed and have reached significant results. Autonomous vehicles are proposed as a solution to make roads safer by taking over control. An autonomous vehicle based on artificial intelligence will not make errors in judging a situation the way a human does. A traffic signs classifier is a key feature for developing autonomous vehicles. It provides a global overview of the road rules, which is used to control the vehicle and the way it reacts in a given situation.

Generally, an autonomous vehicle is composed of a large number of sensors and cameras. The visual information provided by the cameras can be used to recognize road signs. To process this visual information, a well-known deep learning model, the Convolutional Neural Network (CNN) [1], is proposed.
*Corresponding Author:
Yahia ElFahem Said,
Electrical Engineering Department, College of Engineering, Northern Border University, Arar, Saudi Arabia;
Email: said.yahia1@gmail.com

Distributed under Creative Commons license 4.0    DOI: https://doi.org/10.30564/aia.v1i1.569



CNNs are widely used in image processing tasks such as object recognition, image classification[2] and object localization[3]. CNNs are successfully used to solve computer vision tasks[4] because of their power in visual context processing, which mimics the biological system, where every neuron in the network is applied to a restricted region of the receptive field[5]. All the neurons of the network then overlap to cover the entire receptive field, so features from the whole receptive field are shared everywhere in the network with less effort. The major advantage of Convolutional Neural Networks is the ability to learn directly from the image[6], unlike other classification algorithms that need hand-crafted features to learn from.

For a human, recognizing and classifying a traffic sign is an easy task and the classification will be essentially correct, but for an artificial system it is a hard task that needs a lot of computational effort. In many countries the shape and the color of the same road sign are different. Figure 1 illustrates an example of the stop sign in different countries. In addition, a road sign can look different because of environmental factors like rain, sun and dust. Thus, the mentioned challenges need to be handled successfully to build a robust road sign classifier with a minimum of error.

Figure 1. Stop Sign in Different Countries

In this paper, we propose a pipeline based on a data preprocessing algorithm and a deep learning model to recognize and classify traffic signs. The data preprocessing pipeline is composed of five stages. First, data loading and augmentation are performed. Then, all the images are resized and shuffled. All the images are then transformed to the gray scale channel. After that, we apply a local histogram equalization[8,9,10]. Finally, we normalize the images to feed them to the proposed convolutional neural network.

As CNN models, we propose two different networks. The first one is a 14-layer subset of the VGGNet model[12], which was invented by VGG (the Visual Geometry Group) from the University of Oxford, and was the first runner-up of the classification task in the ILSVRC2014 challenge[32] and the winner of the localization task. The second one is the Deep Residual Network, ResNet[11]. It was arguably the most groundbreaking work in the computer vision and deep learning community in the last few years. ResNet makes it possible to train up to hundreds or even thousands of layers and still achieve compelling performance.

By testing the proposed networks, we achieve high performance in both validation and testing. The best performance was achieved using the 34-layer ResNet architecture, with a validation accuracy of 99.8% and a testing accuracy of 99.6%. Also, achieving an inference speed of more than 40 frames per second, the pipeline can be implemented for real-time applications.

The remainder of the paper is organized as follows. Related works on traffic signs classification are presented in Section 2. Section 3 describes the proposed pipeline to recognize and classify road signs. In Section 4, experiments and results are detailed. Finally, Section 5 concludes the paper.

2. Related Works

The need for a robust traffic sign classifier has become an important benchmark that must be addressed. Many research works have been presented in the literature[14,15,36]. Ohgushi et al.[16] introduced a traffic signs classifier based on color information and Bags of Features (BoF) as a feature extractor and a support vector machine (SVM) as a classifier. The proposed method struggles in recognizing traffic signs in real conditions, especially when the sign is intensely illuminated or partially occluded.

Some research investigated the detection of traffic signs without performing the classification process[17,18]. Wu et al.[17] proposed a method to detect only round traffic signs on Chinese roads. On the other side, researchers focused on detecting and recognizing the traffic sign[19]. The proposed method only detects round signs and cannot detect other sign shapes.

A three-step method to detect and recognize traffic signs was proposed by Wali et al.[20]. The first step was data preprocessing, the second was detecting the existence of the sign, and the third was classifying it. For the detection process, they apply color segmentation with shape matching, and for the classification process they use an SVM as a classifier. The proposed method achieves 95.71% accuracy. Lai et al.[21] introduced a traffic signs recognition method using smart phones. They used color detection to perform color space segmentation and a shape recognition method using template matching by calculating the similarity.


Also, an optical character recognition (OCR) step was implemented inside the shape border to decide on the sign class. The proposed method was limited to red traffic signs only. Gecer et al.[38] propose to use color-blob-based COSFIRE filters to recognize traffic signs. The proposed method was based on a Combination of Shifted Filter Responses, which computes the responses of different filters in different regions of each channel of the color space (i.e. RGB). The proposed method achieves 98.94% accuracy on the GTSRB dataset.

Virupakshappa et al.[22] used a machine learning method combining the bag-of-visual-words technique with Speeded Up Robust Features (SURF) for feature extraction, then fed the features to an SVM classifier to recognize the traffic signs. The proposed method achieves an accuracy of 95.2%. A system based on a BoW descriptor enhanced using a spatial histogram was used by Shams et al.[23] to improve the classification process based on an SVM classifier.

Lin et al.[24] introduced a two-stage fuzzy inference model to detect traffic signs in video frames, then applied a two-stage fuzzy inference model to classify the signs. The method provides high performance only on prohibitory and warning signs. In [25], Yin et al. presented a technique for real-time processing based on the Hough transform to localize the sign in the image, then used the rotation invariant binary pattern (RIBP) descriptor to extract features. As a classification method they use artificial neural networks.

A cascaded Convolutional Neural Network model was introduced by Rachmadi et al.[26] to perform traffic signs classification of Japanese road signs. The proposed method achieves a performance of 97.94% and can be implemented for real-time processing with a speed of less than 20 ms per image. The method of Sermanet et al.[39] was based on a multi-scale convolutional neural network. This method introduces a new connection scheme that skips layers, using pooling layers with different down-sampling ratios for the skip connections than for the connections that do not skip layers. The proposed method improves efficiency, reaching 99.1% accuracy. Cireşan et al.[37] used a combination of CNNs and trained them in parallel using differently preprocessed data. It uses an arbitrary number of CNNs, each composed of seven layers: an input layer, two convolution layers, two max pooling layers and two fully connected layers. The prediction is provided by averaging the outputs of all the CNNs. The proposed technique further boosts the classification accuracy to 99.4%. The use of convolutional neural networks has led to enhanced classification accuracy compared with classical machine learning techniques.

In recent years, several vehicle manufacturers have developed new techniques to perform traffic signs classification. As an example, BMW announced the integration of a traffic sign classifier in the BMW 5 series. Moreover, other vehicle manufacturers were trying to implement those technologies [27]. Volkswagen implemented a traffic sign classifier in the Audi A8 [28]. All the existing research on traffic signs classification proves the importance of this technology for autonomous cars.

3. Proposed Method

As mentioned above, many traffic signs classification techniques have been proposed. Our method focuses on the data preprocessing technique to enhance the image quality and to reduce the number of features learned by the Convolutional Neural Network, so that we ensure a real-time implementation. As shown in Figure 2, the preprocessing technique contains five phases: data loading and augmentation, image resizing and shuffling[29], gray scaling, local histogram equalization[30] and data normalization.

In the first phase, we load the data and we generate new examples using a data augmentation technique. The data augmentation process is applied to maximize the amount of training data. Data augmentation was also used at test time, by generating more points of view of the tested image to ensure a better prediction.

In the second phase, we resize all the images to height*width*3, where 3 denotes the 3 channels of the color space. Then the images are shuffled to avoid obtaining minibatches of highly correlated examples, so the training algorithm will choose a different minibatch each time it iterates. In the third phase, we perform gray scaling to reduce the number of channels of the image, so the images are scaled to height*width*1. As a result of the gray scaling, the number of filters learned by the convolutional neural network is reduced, and the training and inference time can also be reduced. In the fourth phase, we apply local histogram equalization[31] to enhance the image contrast by spreading out the most frequent intensity values. Usually, this increases the global contrast of the images and allows areas of lower local contrast to gain a higher contrast. The fifth phase consists of data normalization, which is a simple process applied to bring all the examples to the same data scale, ensuring an equal representation of all the features. The preprocessing pipeline is an important stage to enhance the data injected into the network in both the training and testing process.
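To make the five phases concrete, the following is a minimal Python sketch of the preprocessing stage, assuming OpenCV and NumPy are available; the function names, the target size and the CLAHE tile settings are illustrative choices, not values fixed by the paper.

```python
import cv2
import numpy as np

def preprocess_image(image_bgr, size=(96, 96)):
    """Resize, gray-scale, locally equalize and normalize one image."""
    # Phase 2: resize to height*width*3
    resized = cv2.resize(image_bgr, size)
    # Phase 3: gray scaling -> height*width*1
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    # Phase 4: local (adaptive) histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(4, 4))
    equalized = clahe.apply(gray)
    # Phase 5: normalization to the [0, 1] range
    normalized = equalized.astype(np.float32) / 255.0
    return normalized[..., np.newaxis]  # keep a single channel axis

def preprocess_batch(images, labels, rng=np.random.default_rng(0)):
    """Phase 1 (augmentation) is dataset-specific; here only shuffling is shown."""
    order = rng.permutation(len(images))  # shuffle to decorrelate minibatches
    batch = np.stack([preprocess_image(images[i]) for i in order])
    return batch, np.asarray(labels)[order]
```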


Figure 2. Data Preprocessing

The second part of our method is the Convolutional Neural Network (CNN). Generally, a convolutional neural network is a feedforward neural network used to solve computer vision tasks. Usually, a CNN contains six types of layers: an input layer, convolution layers, nonlinear layers, pooling layers, fully connected layers and an output layer. Figure 3 illustrates a CNN architecture.

Figure 3. Convolutional Neural Network Architecture

The complete proposed pipeline is composed of a data preprocessing stage and a convolutional neural network for traffic signs classification. The proposed pipeline can be summarized by the pseudo code presented in Algorithm 1.

Algorithm 1: proposed pipeline for traffic signs classification
Train input: images, labels
Test input: images
Output: image classes
Mode: choose the mode (training or testing)
Batch size: choose a batch size (number of images per batch)
Image size: choose the image size
Number of batches: choose a number of batches
If mode = training
    For batch in range (number of batches):
        Load the data (images and labels)
        Apply data augmentation
        Resize the images
        Shuffle the images
        Apply local histogram equalization
        Normalize the images
        Fit the images into the convolutional neural network
        Initialize the CNN parameters (load weights from a pretrained model)
        Compute the mapping function
        Generate the output
        Repeat
            Compute the loss function (difference between output class and input label)
            Optimize the CNN parameters (apply the backpropagation algorithm)
        Until output class = input label
        Choose the next batch
Else (mode = testing)
    Load the data (images)
    Apply data augmentation
    Resize the images
    Apply local histogram equalization
    Normalize the images
    Fit the images into the convolutional neural network
    Load parameters from the trained model
    Compute the mapping function
    Generate the output

The first CNN used is VGGNet[12]. VGGNet has two main architectures: VGG16, which is a 16-layer CNN, and VGG19, which is a 19-layer CNN. The VGGNet architectures are presented in Figure 4. VGGNet achieved a top-5 error of 7.32% in the ILSVRC2014 classification challenge[32]. In our work we use just 14 layers from VGGNet, keeping the first 10 layers and the last 4 layers. Also, in the third block we use just 2 convolutional layers and a pooling layer.

Figure 4. VGGNet Architecture

The second CNN that we explore is ResNet[11], which presents a revolutionary architecture that accelerates the convergence of very deep neural networks (more than 20 layers) by implementing residual blocks instead of the classic plain blocks used in VGGNet. An illustration of the residual block is shown in Figure 5. ResNet won the ILSVRC2015 classification contest[32], achieving a top-5 validation error of 3.57%[11]. To perform traffic signs classification, we choose the ResNet 34 architecture. Figure 5 presents the structure of ResNet 34, which is a 34-layer CNN with residual blocks. A residual block is an accumulation of the input and the output of the block.

VGGNet and ResNet are trained to classify natural images according to ImageNet[32], with 1000 classes. To adapt them to the traffic signs classifier, the transfer learning technique was applied by replacing the output layers of those architectures with another layer containing the classes of the traffic signs. Transfer learning is a well-known technique in deep learning which helps to reuse an existing architecture to solve new tasks by freezing some layers and fine-tuning the other layers, or by retraining them from scratch.
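As a hedged illustration of the transfer-learning step (not the authors' exact code), the sketch below loads an ImageNet-pretrained backbone in Keras, freezes its early layers and replaces the 1000-class head with a 43-class softmax for GTSRB. Keras ships ResNet50 rather than the paper's ResNet 34, so ResNet50 stands in here; the number of frozen layers and the variable names are assumptions.

```python
import tensorflow as tf

NUM_CLASSES = 43  # GTSRB classes; 62 for BTSC

# Pretrained backbone without its 1000-class ImageNet head.
# Pretrained ImageNet weights expect 3 channels; grayscale inputs can be stacked to 3.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(96, 96, 3), pooling="avg")

# Freeze the early layers and fine-tune the rest (an illustrative split).
for layer in backbone.layers[:100]:
    layer.trainable = False

# New classification head for the traffic-sign classes.
outputs = tf.keras.layers.Dropout(0.5)(backbone.output)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(outputs)
model = tf.keras.Model(backbone.input, outputs)
```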


Transfer learning is used to speed up the training process and to improve the performance of the deep learning architecture used. Using the transfer learning technique allows the pre-trained weights to be used as a starting point to optimize the existing architecture for the new task.

Figure 5. ResNet34 Structure

Another advantage of transfer learning is the possibility of using a small amount of data to train the deep learning model and still achieve high performance.

4. Experiments and Results

In this work, two datasets were used to train and evaluate the networks. The first dataset is the German traffic signs dataset GTSRB[34], which is a large multi-class dataset for the traffic signs classification benchmark. In this dataset there is a training directory and a testing directory, each containing the 43 traffic sign classes and together providing more than 50000 images of traffic signs in real conditions. Figure 6 represents the classes of the German traffic signs dataset. The second dataset is the Belgium traffic signs dataset BTSC[35]. This dataset provides training and testing data separately. The training and the testing data contain 62 traffic sign classes and more than 4000 images of real traffic signs on Belgian roads.

Figure 6. The German Traffic Signs Dataset Classes

In all our experiments, the networks are developed using the TensorFlow deep neural network framework. The training is performed using a desktop with an Intel i7 processor and an Nvidia GTX960 GPGPU.

To achieve good performance, we use a variety of configurations by manipulating the image size, the batch size, the dropout probability and the choice of learning algorithm (optimizer). We start by resizing the images to 32*32, using a large batch size (1024), a dropout probability of 0.25 and stochastic gradient descent as the learning algorithm, and we train the network.

The final image size was determined after testing many different values, such as 32*32, 64*64, 96*96 and 128*128; after several tests, we ended up with the best configuration, which is resizing the images to 96*96, using a minibatch of 256, a dropout probability of 0.5 and the Adam optimizer. The Adam optimizer is an extension of the stochastic gradient descent optimizer which provides better and faster convergence. In addition, it does not require hand-tuning a separate learning rate for each parameter: it adapts the per-parameter learning rates during training.
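A minimal sketch of this final training configuration (96*96 inputs, minibatch of 256, dropout of 0.5, Adam), assuming the Keras model built as in the previous sketch and in-memory arrays x_train, y_train, x_val, y_val; the augmentation settings and the epoch count are assumptions.

```python
import tensorflow as tf

# Illustrative augmentation: small shifts, rotations and zooms of the signs.
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1)

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",  # integer-encoded labels (0..42)
              metrics=["accuracy"])

model.fit(augment.flow(x_train, y_train, batch_size=256),
          epochs=20,                       # epoch count is an assumption
          validation_data=(x_val, y_val))
```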

Figure 7. The Belgium Dataset Classes

In the data pre-processing pipeline, the data was prepared for training and testing the model. First, the data is loaded and the data augmentation technique is applied. Figure 8 shows an example of the data generated using the proposed data augmentation technique. Second, the data is resized and shuffled to generate mixed minibatches. Then, the images are transformed to the gray scale color space. Figure 9 illustrates an example of the gray scaled images.


Figure 8. Data Augmentation

Figure 9. Gray Scaling

The local histogram equalization was then applied to equalize the image contrasts. Figure 10 presents images after applying the local histogram equalization. Finally, the data is normalized and fed to the convolutional neural network. An example of the normalized data is presented in Figure 11.

Figure 10. Local Histogram Equalization

Figure 11. Normalized Gray Images and the Original Color Images

In the training process, the data was injected into the CNN architectures and the parameters were optimized. In ResNet 34, the first convolution layer is used to perform feature extraction and down-sampling at the same time, by using 7*7 kernels with a stride of 2 to incorporate features with a larger receptive field. Figure 12 presents the output feature maps of the first ResNet 34 convolution layer. The residual blocks are used for feature extraction using 2 convolutional layers with 3*3 kernels, and zero padding is applied. The input and the output of each residual block are accumulated to control the explosion of the number of parameters. Figure 13 presents the output feature maps of the first ResNet 34 residual block.

Figure 12. Feature Maps of the First ResNet34 Convolution Layer
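The stem-plus-residual-block structure described above can be sketched in Keras as follows; this is an illustrative reconstruction (the filter counts and names are assumptions), not the authors' implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Two 3x3 convolutions with zero padding; input and output are accumulated."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])   # accumulate block input and output
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(96, 96, 1))
# Stem: 7x7 kernels with stride 2 for joint feature extraction and down-sampling.
x = layers.Conv2D(64, 7, strides=2, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
x = residual_block(x, 64)
stem_and_block = tf.keras.Model(inputs, x)
stem_and_block.summary()
```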


Figure 13. Feature Maps of the First ResNet34 Residual Block

A way to visualize the CNN performance is to represent the corresponding confusion matrix. The confusion matrix shows the ways in which the classification CNN model is confused when it makes predictions. Figure 14 shows the confusion matrix of the ResNet on the GTSRB dataset.

Figure 14. Confusion Matrix of ResNet34
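A minimal sketch of producing such a confusion matrix with scikit-learn, assuming the trained model's softmax outputs and the true test labels are available; the array names are placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt

# y_prob: softmax outputs of the trained network on the test set, shape (N, 43)
y_pred = np.argmax(y_prob, axis=1)      # predicted class per test image
cm = confusion_matrix(y_true, y_pred)   # rows: true classes, columns: predictions

ConfusionMatrixDisplay(cm).plot(include_values=False)
plt.title("Confusion matrix on the GTSRB test set")
plt.show()
```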

Table 1. Performance of the Proposed Architectures in Terms of Accuracy on Both Datasets

Architecture      GTSRB Accuracy (%)   BTSC Accuracy (%)
VGG (12 layers)   99.3                 98.3
ResNet 34         99.6                 98.8

Table 1 summarizes the accuracy obtained on the testing data by the models trained on the GTSRB and BTSC datasets. As shown in Table 1, the best performance is obtained on the GTSRB dataset using the ResNet 34 architecture, and this proves the importance of the residual block in enhancing the network performance without any explosion in complexity when using a very deep convolutional neural network. The results obtained on the BTSC dataset are lower because of the lack of data: the dataset contains only 4965 images, divided into training data and testing data. The results reported on the GTSRB dataset prove that the proposed traffic sign classifier outperforms human accuracy, which is 98.32%. Most of the false negatives are caused by images that are totally or partially damaged after performing the data pre-processing. Figure 15 illustrates an example of the damaged images.

Figure 15. Damaged Images after Preprocessing

Table 2. Inference Speed of Each Architecture

Architecture      Frames/second
VGG (12 layers)   57
ResNet 34         43

Table 2 summarizes the number of images processed per second by each architecture. For a real-time implementation, we need a balance between accuracy and speed. Our best proposed CNN achieves an accuracy of 99.621%, which is an acceptable value in comparison with human accuracy, and outperforms the state-of-the-art models on the traffic signs classification task. Table 3 presents a comparison between our architectures and other proposed architectures and methods tested on the GTSRB dataset.


Table 3. Accuracy Comparison

Architecture                              Accuracy (%)
Committee of CNN [37]                     99.4
Color-blob-based COSFIRE filters [38]     98.9
Sermanet [39]                             99.1
Proposed VGG (12 layers)                  99.3
Proposed ResNet 34                        99.6

As reported in Table 3, our proposed ResNet 34 architecture outperforms state-of-the-art methods in traffic signs classification. Also, our architectures can be easily implemented for real-time applications. A real-time application needs at least 25 frames per second and, as reported in Table 2, the slowest architecture processes 43 frames per second. On the other hand, all the proposed architectures outperform human accuracy on the traffic signs classification benchmark.

To make it useful for real-world applications and human-interpretable, we implemented the ResNet 34 architecture in a traffic signs classification application where we label the images with human-understandable labels. In both training and testing, labels were encoded as integers. As an example, the labels were encoded in the range 0 to 42 for the GTSRB dataset. The testing images were collected from the web and do not belong to the datasets. The top 5 probabilities of the softmax layer were visualized. Figure 16 presents an example of the top 5 probabilities of the softmax layer and their corresponding input images. The classifier achieves a good performance when applied to the new images and proves its generalization power.

Figure 16. ResNet34 Softmax Probabilities
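A small sketch of extracting the top-5 softmax probabilities for one web-collected image, assuming the trained Keras model and a class-name list are available; the names are placeholders.

```python
import numpy as np

# class_names: list of 43 human-understandable labels, indexed 0..42 as in GTSRB
probs = model.predict(image[np.newaxis, ...])[0]   # softmax output for one image
top5 = np.argsort(probs)[::-1][:5]                 # indices of the 5 largest probabilities

for rank, idx in enumerate(top5, start=1):
    print(f"{rank}. {class_names[idx]}: {probs[idx]:.3f}")
```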
5. Conclusion

Traffic signs classification was and still is an important application for autonomous cars. Cars need real-time and embedded solutions, which is why we need to provide a balance between speed and accuracy. In this paper, we propose an artificial intelligence technique based on a deep learning model, the Convolutional Neural Network, to perform the traffic signs classification benchmark. The reported results prove that the proposed solutions can be effectively implemented for real-time applications and provide an acceptable accuracy, outperforming human performance. The proposed architectures can be further optimized for embedded implementation.

Conflicts of Interest:

The authors declare no conflict of interest.

References

[1] O'Shea, K., & Nash, R. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458, 2015.
[2] Ciresan, D. C., Meier, U., Masci, J., Maria Gambardella, L., & Schmidhuber, J. Flexible, high performance convolutional neural networks for image classification. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, 2011, 22(1): 1237.
[3] Tompson, J., Goroshin, R., Jain, A., et al. Efficient object localization using convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 648-656.
[4] Simonyan, K., & Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[5] Simard, P. Y., Steinkraus, D., & Platt, J. C. Best practices for convolutional neural networks applied to visual document analysis. IEEE, 2003: 958.
[6] LeCun, Y., Jackel, L. D., Bottou, L., Cortes, C., Denker, J. S., Drucker, H., ... & Vapnik, V. Learning algorithms for classification: A comparison on handwritten digit recognition. Neural networks: the statistical mechanics perspective, 1995, 261: 276.
[7] Zhu, H., Chan, F. H., & Lam, F. K. Image contrast enhancement by constrained local histogram equalization. Computer Vision and Image Understanding, 1999, 73(2): 281-290.
[8] Kim, J. Y., Kim, L. S., & Hwang, S. H. An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Transactions on Circuits and Systems for Video Technology, 2001, 11(4): 475-484.
[9] Stark, J. A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Transactions on Image Processing, 2000, 9(5): 889-896.
[10] He, K., Zhang, X., Ren, S., & Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[11] Sercu, T., Puhrsch, C., Kingsbury, B., & LeCun, Y. Very deep multilingual convolutional neural networks for LVCSR. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, IEEE, 2016: 4955-4959.
[12] U.S. vehicle deaths topped 40,000 in 2017, National Safety Council estimates. https://www.usatoday.com/story/money/cars/2018/02/15/national-safety-council-traffic-deaths/340012002
[13] Vitabile, S.; Pollaccia, G.; Pilato, G.; Sorbello, F. Road signs recognition using a dynamic pixel aggregation technique in the HSV color space. In Proceedings of the 11th International Conference on Image Analysis and Processing, Palermo, Italy, 2001: 572-577.
[14] Zeng, Y.; Lan, J.; Ran, B.; Wang, Q.; Gao, J. Restoration of motion-blurred image based on border deformation detection: A traffic sign restoration model. PLoS ONE, 2015, 10: e0120885.
[15] Ohgushi, K.; Hamada, N. Traffic sign recognition by bags of features. In Proceedings of the TENCON 2009 IEEE Region 10 Conference, Singapore, 2009: 1-6.
[16] Wu, J.; Si, M.; Tan, F.; Gu, C. Real-time automatic road sign detection. In Proceedings of the Fifth International Conference on Image and Graphics (ICIG '09), Xi'an, China, 2009: 540-544.
[17] Belaroussi, R.; Foucher, P.; Tarel, J.P.; Soheilian, B.; Charbonnier, P.; Paparoditis, N. Road sign detection in images: A case study. In Proceedings of the 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 2010: 484-488.
[18] Shoba, E.; Suruliandi, A. Performance analysis on road sign detection, extraction and recognition techniques. In Proceedings of the 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT), Nagercoil, India, 2013: 1167-1173.
[19] Wali, S.B.; Hannan, M.A.; Hussain, A.; Samad, S.A. An automatic traffic sign detection and recognition system based on colour segmentation, shape matching, and SVM. Math. Probl. Eng., 2015.
[20] Lai, C.H.; Yu, C.C. An efficient real-time traffic sign recognition system for intelligent vehicles with smart phones. In Proceedings of the 2010 International Conference on Technologies and Applications of Artificial Intelligence, Hsinchu, Taiwan, 2010: 195-202.
[21] Virupakshappa, K.; Han, Y.; Oruklu, E. Traffic sign recognition based on prevailing bag of visual words representation on feature descriptors. In Proceedings of the 2015 IEEE International Conference on Electro/Information Technology (EIT), DeKalb, IL, USA, 2015: 489-493.
[22] Shams, M.M.; Kaveh, H.; Safabakhsh, R. Traffic sign recognition using an extended bag-of-features model with spatial histogram. In Proceedings of the 2015 Signal Processing and Intelligent Systems Conference (SPIS), Tehran, Iran, 2015: 189-193.
[23] Lin, C.-C.; Wang, M.-S. Road sign recognition with fuzzy adaptive pre-processing models. Sensors, 2012: 6415.
[24] Yin, S.; Ouyang, P.; Liu, L.; Guo, Y.; Wei, S. Fast traffic sign recognition with a rotation invariant binary pattern-based feature. Sensors, 2015: 2161-2180.
[25] Rachmadi, R. F., Komokata, Y., Ichimura, K., & Koutaki, G. Road sign classification system using cascade convolutional neural network, 2017.
[26] Continental. Traffic Sign Recognition. Available online: 2017. http://www.contionline.com/generator/www/de/en/continental/automotive/general/chassis/safety/hidden/verkehrszeichenerkennung_en.html
[27] Choi, Y.; Han, S.I.; Kong, S.-H.; Ko, H. Driver status monitoring systems for smart vehicles using physiological sensors: A safety enhancement system from automobile manufacturers. IEEE Signal Process., 2016: 22-34.
[28] Dean, J., & Ghemawat, S. MapReduce: simplified data processing on large clusters. Communications of the ACM, 2008, 51(1): 107-113.
[29] Kim, J. Y., Kim, L. S., & Hwang, S. H. An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Transactions on Circuits and Systems for Video Technology, 2001, 11(4): 475-484.
[30] Abdullah-Al-Wadud, M., Kabir, M. H., Dewan, M. A. A., & Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Transactions on Consumer Electronics, 2007, 53(2).
[31] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[32] Hinton, Geoffrey E., Srivastava, Nitish, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[33] Stallkamp, J., Schlipsing, M., Salmen, J., Igel, C. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 2012. ISSN 0893-6080. http://www.sciencedirect.com/science/article/pii/S0893608012000457
[34] Timofte, R., Zimmermann, K., van Gool, L. Multi-view traffic sign detection, recognition, and 3D localisation. IEEE Workshop on Applications of Computer Vision, WACV, 2009.
[35] Fredrik, L. and Michael, F. Using Fourier Descriptors and Spatial Models for Traffic Sign Recognition. In Proceedings of the 17th Scandinavian Conference on Image Analysis, SCIA, LNCS 6688, 2011: 238-24.
[36] Cireşan, Dan, et al. Multi-column deep neural network for traffic sign classification. Neural Networks, 2012, 32: 333-338.
[37] Gecer, B., Azzopardi, G., & Petkov, N. Color-blob-based COSFIRE filters for object recognition. Image and Vision Computing, 2017, 57: 165-174.
[38] Sermanet, P., & LeCun, Y. Traffic sign recognition with multi-scale convolutional networks. In Neural Networks (IJCNN), the 2011 International Joint Conference on, IEEE, 2011: 2809-2813.


Artificial Intelligence Advances


https://ojs.bilpublishing.com/index.php/aia

ARTICLE
GFLIB: An Open Source Library for Genetic Folding Solving Optimization Problems
Mohammad A. Mezher*
Dept. of Computer Science, Fahd Bin Sultan University, Tabuk, KSA

ARTICLE INFO

Article history
Received: 8 March 2019
Accepted: 16 April 2019
Published Online: 30 April 2019

Keywords:
GF toolbox
GF Algorithm
Evolutionary algorithms
Classification
Regression
Optimization
LIBSVM

ABSTRACT

This paper aims at presenting GFLIB, a Genetic Folding MATLAB toolbox for supervised learning problems. In essence, the goal of GFLIB is to build a concise model of supervised learning as a free, open source MATLAB toolbox for performing classification and regression. GFLIB is specifically designed for most of the traditionally used features, to evolve mathematical models in applications. The toolbox suits all kinds of users: from users who use GFLIB as a "black box", to advanced researchers who want to generate and test new functionalities and parameters of the GF algorithm. The toolbox and its documentation are freely available for download at: https://github.com/mohabedalgani/gflib.git

1. Introduction

All evolutionary algorithms [1] are biologically inspired, using the "survival of the fittest" concept from Darwinian evolution. The GF algorithm is one of the EA family members and is used to solve complicated problems by randomly producing populations of computer programs. Every computer program (chromosome) undergoes a number of natural adjustments, called crossover and mutation, to create a brand-new population. These operators can then be iterated to generate the fittest chromosomes, which are evaluated using one of the performance measurements. The GF algorithm is a member of the evolutionary algorithms family, but it uses a simple floating-point system for genes in formulating the GF chromosomes.

Certainly, there are quite a number of open source evolutionary algorithm toolboxes for MATLAB[2, 3], but none specific to the genetic folding algorithm. GFLIB looks forward to providing such a free open source toolbox that can be used and developed by others.

*Corresponding Author:
Mohammad A. Mezher,
Dept. of Computer Science, Fahd Bin Sultan University, Tabuk, KSA;
Email: mmezher@fbsu.edu.sa

Distributed under Creative Commons license 4.0    DOI: https://doi.org/10.30564/aia.v1i1.608



Accordingly, the GFLIB toolbox was designed from scratch and adapted to ensure code reusability and clarity. The end result is able to deal with a wide variety of machine learning problems and provides a fast and easy way to try different datasets over a distinct range of parameters. GFLIB was examined on different MATLAB versions and computer systems, namely version R2017b for Windows and R2017a for Mac.

This standalone toolbox offers alternatives for users and researchers to help them decide on both the training and testing data sets, with a number of k-fold cross-validations, mathematical operators, crossover and mutation operators, crossover and mutation rates, kernel types, and various GF parameters. Furthermore, the toolbox offers output options to present results in different formats and figures, such as the ROC curve, structural complexity, fitness values, and mean square errors.

In other words, the aim of building a standalone supervised learning toolbox is to spread the GF algorithm to all datasets that fall within classification and regression problems.

2. GFLIB Structure

2.1 Previous Version of the GF Toolbox Structure

The old version of GFLIB was first introduced with the toolbox relying completely[4] on the GP Toolbox[2]. The toolbox contained GF algorithms for supervised classification and regression problems, but it was aligned with the structure designated for GP. At that time, the GF toolbox lacked unique encoding and decoding mechanisms fully functioning as intended, since it was integrated with the GP Toolbox. The development of the GF toolbox, therefore, was oriented to the optimization and integration of the existing GP toolbox. The implementation of the GF toolbox was done using MATLAB and the GP package. The idea was to encode and decode using the GF tree, wherein GFLIB was built using the GF mechanism shown in [5].

2.2 Current Version of the GFLIB Toolbox Structure

Although the main GF structure was demonstrated in detail in [4, 5], this paper is mainly designed to highlight the structure of the GFLIB toolbox only. GFLIB is a research MATLAB project that is essentially intended to offer the user a complete toolbox without the need to know how the GF algorithm works on a specific dataset. The recently developed GFLIB toolbox will additionally grant researchers full control for comparison with different well-known evolutionary algorithms. The variety of options which GFLIB presents can be a very important tool for researchers, students, and experts who are interested in testing their personal datasets.

2.2.1 Data Structures

GFLIB provides an easy way to add a dataset in a text format. The text files may be found in both the UCI dataset[6] and the LIBSVM dataset [7]. GFLIB mainly supports the .txt data type, which is found in the same style as the UCI dataset only.

The main data structures in the GFLIB Toolbox are genotypes and phenotypes, which represent the GF chromosomes. The chromosomes present in GF are considered to be the main structure in the algorithm. The GF chromosome consists of three parts: an index number of the gene in a chromosome, which represents the father, and the two points inside the gene, which represent the children.

The GF chromosome structure then encodes an entire population in a single float-number format ls.rs, where ls is the left child number and rs is the right child number. Phenotypes are stored in a structure of a determined number of populations. The ith population pop(i) consists of chromstr and chromnum; chromstr is formulated for the operator names and chromnum is formulated for the GF encoding numbers, and both represent the left and right children. The root operator and GF number must be scalar. In all of these GF structures, each GF number corresponds to a particular gene, either for the right child chromosome or the left child chromosome respectively.

In general, the main purpose of the encoding and decoding process of a GF chromosome is to have an arithmetic understanding. GF encodes any arithmetic operation by dividing it into left and right sides. Each side is divided into other valid genes to formulate a GF chromosome. The encoding process depends on the number of operands of the arithmetic operations used. At first, a two-operand operator (e.g. the minus operator) is placed at the first gene, referring to other operators repeatedly, to end up with terminals. The operator types called by a father gene are: two children (two operands), one child (one operand) and no child (terminal).

In the meantime, to decode a chromosome, take the first gene, which has two divisions (children) with respective operands: the ls child and the rs child. Repeatedly, for each father, a number of children are called every time until a kernel function is represented. The decoding/encoding process [4, 5, 8, 9] executes the folding father operator (e.g. plus) over the ls child (minus) and the rs child (multiply). The folding mechanism develops a new algorithm known as the Genetic Folding algorithm.
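As a hedged illustration of the ls.rs convention just described, the following is a Python sketch (not the toolbox's MATLAB API) that decodes a float-encoded chromosome into a nested expression. Gene values are read as strings so that children such as .10 and .11 keep their index meaning, and one-operand handling is simplified to follow whatever children the gene lists.

```python
def decode_gf(operators, genes):
    """Decode a GF chromosome given parallel lists of operator names and 'ls.rs' genes."""
    def build(index):                        # gene indices are 1-based, as in the paper
        ls, rs = genes[index - 1].split(".")
        if int(ls) == 0:                     # 0.k encodes a terminal (leaf) gene
            return operators[index - 1]
        left, right = build(int(ls)), build(int(rs))
        return f"{operators[index - 1]}({left}, {right})"
    return build(1)                          # gene 1 is the root (folding father)

# Best Iris chromosome reported in Section 4.1
ops   = ["Plus_s", "Sine", "Plus_v", "x", "y", "Sine", "Sine", "y", "y", "x", "x"]
genes = ["2.3", "4.5", "6.7", "0.4", "0.5", "8.9", "10.11", "0.8", "0.9", "0.10", "0.11"]
print(decode_gf(ops, genes))  # Plus_s(Sine(x, y), Plus_v(Sine(y, y), Sine(x, x)))
```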


The three datasets used here for comparative analysis include the Iris dataset (a multi-classification problem), the Heart dataset (a binary classification problem), and the Housing dataset (a regression problem). The Iris dataset was made by the biologist Ronald Fisher and used in 1936 as an example of linear discriminant analysis. There are 50 samples from each of 3 species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from every sample: the length and the width of the sepals and petals, in centimetres.[10]

The second dataset is the Heart dataset (the part obtained from the Cleveland Clinic Foundation), using a subset of 14 attributes. The purpose is to detect heart disease in a patient. Its integer value goes from 0 (no presence) to 4.[6]

The last dataset, for testing the regression problems, is the Housing dataset. The Housing dataset has the median value of the house price along with 13 other parameters that could potentially be related to housing prices. The aim of the dataset is to predict a linear regression model by estimating the median price of owner-occupied homes in Boston.

2.2.2 GFLIB Toolbox Structure

The toolbox provides algorithms like SVC, SVR, and the Genetic Folding Algorithm. It provides easy-to-use MATLAB files, which take as input the basic parameters for each algorithm based on the selected file. For example, for regression problems there is a file (regress.m) to enter the kernel type, the number of k-folds, the population size and the maximum number of generations to be considered. The list of parameters users can input, uniform for classification and regression problems, is shown in Table 1 below:

Table 1. List of Parameters in GFLIB

Name        Definition               Values
Mutprob     mutation probability     A float value
Crossprob   crossover probability    A float value
Maxgen      max generation           An integer value
Popsize     population size          An integer value
Type        problem type             multi, binary, regress
Data        Dataset                  *.txt
Kernel      Kernel Type              rbf, linear, polynomial, gf
Crossval    Cross-validation         An integer value
Oplist      operators and operands   'Plus_s', 'Minus_s', 'Plus_v', 'Minus_v', 'Sine', 'Cosine', 'Tanh', 'Log', 'x', 'y'
Oplimit     length of chromosome     An integer value

The main directory of GFLIB contains a set of main-purpose GFLIB .m files, described in detail in this section, and the following subdirectories:
• @data, which contains the folder @binary for binary datasets, @multi for multi-classification datasets and @regress for regression datasets. Manipulate this folder to add more datasets to these subdirectories.
• @libsvm, whose functions, discussed in [7], are integrated into the toolbox to play the SVM role.
• Binary, multi and regress files, which form the basic use of each problem type respectively.

The list of figures shown in Table 2 is designed and integrated into the GFLIB toolbox for the sake of comparison with other algorithms and toolboxes.

Table 2. List of GFLIB Figures Shown in the Toolbox

Name                    Type
Population Diversity    Fitness distribution vs. Generation
Accuracy                Accuracy value vs. Generation
Structure Complexity    Tree Depth/size vs. Generation
Tree Structure          GP tree structure
GF Chromosome           GF Chromosome structure

In the developed GFLIB toolbox, the focus was on applying supervised learning with the GFLIB toolbox to the real-world problems shown in Table 3, using LIBSVM as described in Figure 1. However, the choice of which particular dataset type is to be used will be determined by the user referring to it in the path; the GF algorithm will run accordingly. Also, once the user decides on the GF parameters to run with, the right GF algorithm (classifier or regression) will run consequently.

The toolbox was built using GF structs (chromosomes) for the purpose of implementing the core of the GF encoding and decoding mechanisms. Here, the major functions of the GFLIB Toolbox are outlined:

(1) Population representation and initialisation: genpop, initpop
The GFLIB Toolbox supports floating-point chromosome representation. The floating point is initialized by the Toolbox function initpop, which creates a floating-point GF chromosome. A genpop function is provided to build a vector describing the population and figure statistics.

(2) Fitness assignment: calcfitnes, kernel, kernelvalue
The fitness function transforms the raw objective function equations found using the GF algorithm into non-negative values. kernelvalue is repeatedly used for all individuals in the population, kernel. The Toolbox supports both the libsvm [7] package and the fitrsvm [11] function in MATLAB. Using both, GFLIB could successfully generate models that are capable of fitting the abalone data set. The result of libsvm (using the svmtrain function) was used along with svmpredict to successfully predict the different input parameters. The GF algorithm included eight arithmetic operators in the toolbox.
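The fitness evaluation couples an evolved kernel with an SVM. The snippet below is a hedged Python analogue of that step (GFLIB itself calls LIBSVM's svmtrain/svmpredict from MATLAB): it scores a candidate kernel expression by cross-validated SVM accuracy; the kernel expression and the dataset are placeholders.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def candidate_kernel(X, Y):
    """Gram matrix of an example evolved expression: sin(<x, y>) + <x, y>."""
    dots = X @ Y.T
    return np.sin(dots) + dots

X, y = load_iris(return_X_y=True)
clf = SVC(kernel=candidate_kernel, C=1.0)           # SVC accepts a callable kernel
fitness = cross_val_score(clf, X, y, cv=10).mean()  # 10-fold CV, as in the GFLIB defaults
print(f"fitness (CV accuracy): {fitness:.3f}")
```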


The arithmetic operators shown in Table 1 are either one-operand operators (sine, cosine, tanh, and log) or two-operand operators (plus, minus).

(3) Genetic Folding operators: crossover, mutation
GFLIB supports the two types of operators by dividing the population into two equal halves. Each half undergoes one type of operator. The GFLIB operators are one-point crossover, two-point crossover, and swap mutation operators.

(4) Selection operators: selection
This function selects a given number of individuals from the current population according to their fitness and returns a row of structs with their indices. Currently, the roulette wheel selection method is used in the GFLIB toolbox. The selection methods, in particular, are required to balance the quality of solutions against genetic diversity.

(5) Performance figures: genpop
The list of figures included to demonstrate the performance of the GF algorithm is: the ROC curve (only for binary problems), the expression tree, fitness values, population diversity, accuracy versus complexity, and structure complexity. GFLIB also includes well-known kernel functions in order to make comparisons easy. The file also prints the best GF chromosome in two different formats: gene numbers and operator string.

Figure 1. GFLIB Life Cycle

3. GF Algorithm Using Generative Models

Genetic Folding (GF) [4, 5, 8, 9] is a novel algorithm stimulated by means of a folding mechanism, inspired by the RNA sequence. GF can represent an NP problem by a simple array of floating numbers instead of a complex tree structure. First, GF randomly generates an initial population composed of basic mathematical operations. Then, valid chromosomes (expressions) can be evaluated. GF assigns a fitness value to every chromosome depending on the fitness function being developed. The chromosome is then selected by the roulette wheel, after which the fittest chromosome is subjected to the genetic operators in order to generate a new population in an independent way. In every population, the chromosomes are also subjected to a filter to test the validity of the chromosome. The genetic operators are used to generate a new population for the next generation. The entire procedure is repeated until the optimum chromosome (kernel) is achieved.

4. Experiments on GFLIB

This paper first shows how the GFLIB methods work on binary and multi-classification problems, then carries out a regression problem using the GFLIB methods. Three datasets are chosen as testing data for the two types of experiments. Part of their properties is included in Table 3 and Table 5 for classification and regression respectively. Amongst them, the same parameters, from k-folding to the operator list, are used for the experiments conducted. Other well-known kernels are included for the sake of comparison with GFLIB. The list of datasets used in the binary, multi-classification, and regression problems was brought from the UCI dataset[6].

4.1 GFLIB for Classification Problems

The classification datasets included in GFLIB are shown in Table 3 with their respective details.

Table 3. Classification Datasets Used in the GFLIB

Name                    Type     Size
Credit approval         Binary   690*15
Statlog German Credit   Binary   1000*20
Heart Scale             Binary   270*13
Ionosphere              Binary   351*34
Sonar Scale             Binary   208*60
Spam                    Binary   4601*57
Iris Scale              Multi    150*4
Zoo                     Multi    101*18


for both binary and multi-classification problems is shown in Table 4.

Table 4. List of Classification Parameter Values

Name  Definition
mutprob  0.1
crossprob  0.5
maxgen  20
popsize  50
type  Binary, multi
data  In Table 3
kernel  GF, rbf, linear, polynomial
crossval  10-fold
oplist  'Plus_s', 'Minus_s', 'Multi_s', 'Plus_v', 'Minus_v', 'x', 'y'
oplimit  20 % length of chromosome

Figure 2. Classification Fitness Values (fitness/accuracy % over 20 generations; maximum: 97.297297, average: 82.378378, median: 97.297297, best-so-far: 100.000000)

The best chromosome string found using GFLIB for the iris dataset is:

Plus_s Sine Plus_v X Y Sine Sine Y Y X X

and the best chromosome GF number formed for the above-mentioned string was:

2.3 4.5 6.7 0.4 0.5 8.9 10.11 0.8 0.9 0.10 0.11

The maximum fitness (accuracy) found using GFLIB over all generations for the iris dataset was 100.00 %.

4.2 GFLIB for Regression Problems

For all figure types except the ROC curve, the experiment was run for 20 generations with 10-fold cross-validation. The best performance is the smallest mean square error of the objective function obtained over all function evaluations. A population of 50 was evolved at each combination of a half mutation size, a half crossover size, a 0.1 mutation rate, and a 0.5 crossover rate. Thus, for each generation, 20 combinations of operators are tried to form a valid GF chromosome. The GF operator rates are shown in Table 6.

Table 5 lists the regression datasets included in GFLIB together with a brief description of their dimensionality.

Table 5. Regression Datasets Used in the GFLIB

Name  Type  Size
Abalone  Regression  4177*8
Housing  Regression  506*13
MPG  Regression  392*6

Table 6 shows the list of parameters and values used to run a regression test on the Housing dataset:

Table 6. List of Regression Parameter Values

Name  Definition
mutprob  0.1
crossprob  0.5
maxgen  20
popsize  50
type  regression
data  In Table 5
kernel  GF, rbf, linear, polynomial
crossval  10-fold
oplist  'Plus_s', 'Minus_s', 'Multi_s', 'Plus_v', 'Minus_v', 'x', 'y'
oplimit  20 % length of chromosome

The best performance found was an MSE value of 0.000121 over all generations, as shown in Figure 3. Figures 4 and 5 present results produced with GFLIB to demonstrate the variety of outputs of the toolbox. The population diversity figure plots, as dots, the highest and lowest fitness values found in a population. The structure complexity figure plots the folding depth of the best GF chromosome found in each generation; the size of each folding is counted from the number of calls triggered by the first number formulated in a GF chromosome.

A GF chromosome structure has been defined to represent a structural folding of a GF chromosome. The GF chromosome is then extracted and arranged as a tree structure of real numbers. The GF encoding part of the toolbox is used to evolve the tree structure of a program, whereas the GF decoding part of the toolbox is applied to determine the string of the structural chromosome. Experimental results have shown the promise of the developed approach.
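To make the parameter tables above concrete, the following MATLAB sketch shows how such an experiment could be assembled. The structure fields mirror Tables 4 and 6; the entry point gflib_run and the field names are hypothetical placeholders, since the exact GFLIB API is not reproduced in this paper.

```matlab
% Hypothetical sketch of a GFLIB classification run (function and field names
% are assumptions, not the documented toolbox interface).
params.mutprob   = 0.1;                      % mutation probability (Table 4)
params.crossprob = 0.5;                      % crossover probability
params.maxgen    = 20;                       % number of generations
params.popsize   = 50;                       % population size
params.type      = 'multi';                  % 'binary', 'multi' or 'regression'
params.kernel    = 'GF';                     % compare against 'rbf', 'linear', 'polynomial'
params.crossval  = 10;                       % 10-fold cross-validation
params.oplist    = {'Plus_s','Minus_s','Multi_s','Plus_v','Minus_v','x','y'};
params.oplimit   = 0.2;                      % operator limit: 20 % of chromosome length

load fisheriris;                             % iris data from the Statistics Toolbox [11]
X = meas;  y = grp2idx(species);

% [bestChromosome, bestFitness] = gflib_run(X, y, params);   % hypothetical entry point
```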

Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.608 15


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

Figure 3. Regression Fitness Value (MSE, scale 10^-5, vs. generation; maximum: 0.000086, average: 0.000082, median: 0.000086, best-so-far: 0.000121)

Figure 4. GFLIB Toolbox Run for the Iris Multiclassification Dataset: (a) Population Diversity (fitness distribution over 20 generations); (b) GF Tree Structure (root Plus_s with subtrees Sine and Plus_v; then x, y, Sine, Sine; leaves y, y, x, x); (c) GF Structure Complexity (tree depth/size per generation; maximum depth: 4, best-so-far depth: 4, best-so-far size: 11)


Figure 5. GFLIB Toolbox Run for the Housing Regression Dataset: (a) Population Diversity; (b) GF Tree Structure (root Minus_s with subtrees Minus_s and Log; then y, Minus_v, Minus_s, Sine; leaves x, x, x, Cosine); (c) Structure Complexity (tree depth/size over 10 generations; maximum depth: 4, best-so-far depth: 4, best-so-far size: 11)

The best GF string found using GFLIB for the Housing dataset is:

Minus_s Minus_s Log Y Minus_v Minus_s Sine X X X Cosine

and the best GF number formed for the above-mentioned string was:

2.3 4.5 6.7 0.4 8.9 10.11 0.7 0.8 0.9 0.10 0.11

5. Conclusion

The GFLIB toolbox, built using MATLAB, is presented for users and researchers who are interested in solving real NP problems. The key features of this toolbox are the structure of the GF chromosome and the encoding and decoding processes included in the toolbox. In this GFLIB toolbox, eleven well-known UCI datasets are studied and implemented with their relative performance analysis: ROC curve, fitness values, structural analysis, tree structure, and population diversity. These datasets can be categorised into two categories: classification and regression. All figures are comparable with another set of three well-known kernel functions. The GFLIB toolbox allows users of either category to select their own parameter choices. Balanced parameters of the GF chromosome must be considered in order to maintain genetic diversity within the population of candidate solutions throughout the generations. On the other hand, the MATLAB GFLIB files tend to shorten the development time of the toolbox.

In this paper, GFLIB is compared with three well-known kernels. In future research, I intend to compare GFLIB with a GA and a GP alone as well. I also intend to compare the toolbox with other kinds of hybrid methods, such as hybrid decision tree/instance approaches.

References

[1] Seyedali Mirjalili. Evolutionary Algorithms and Neural Networks: Theory and Applications. Springer International Publishing; June 2018.
[2] Sara Silva and Jonas Almeida, "GPLAB - a genetic programming toolbox for MATLAB," In Proc. of the Nordic MATLAB Conference, pp. 273-278, 2005.
[3] A.J. Chipperfield and P.J. Fleming, "The MATLAB genetic algorithm toolbox", IEE Colloquium on Applied Control Techniques Using MATLAB, UK, 1995.
[4] Mezher, Mohammad and Abbod, Maysam. (2010). Genetic Folding: A New Class of Evolutionary Algorithms. 279-284.
[5] Mohd Mezher, Maysam Abbod. Genetic Folding: An Algorithm for Solving Multiclass SVM Problems. Applied Soft Computing, Elsevier Journal. 41(2): 464-472. 2014.
[6] C. L. Blake, C. J. Merz. UCI Repository of Machine Learning Databases. University of California, Irvine, Department of Information and Computer Sciences. 1998.
[7] Chang, Chih-Chung and Lin, Chih-Jen. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology. 2(3): 1-27. 2011.
[8] Mohd Mezher, Maysam Abbod. Genetic Folding: A New Class of Evolutionary Algorithms. October 2010.
[9] Mohd Mezher, Maysam Abbod. A New Genetic Folding Algorithm for Regression Problems. Proceedings - 2012 14th International Conference on Modelling and Simulation, UKSim. 46-51. 2012.
[10] R. A. Fisher (1936). "The use of multiple measurements in taxonomic problems". Annals of Eugenics. 7(2): 179-188.
[11] Statistics and Machine Learning Toolbox User's Guide. 2018b, The MathWorks, Inc., Natick, Massachusetts, United States.


Artificial Intelligence Advances


https://ojs.bilpublishing.com/index.php/aia

ARTICLE
Quantum Fast Algorithm Computational Intelligence PT I: SW / HW
Smart Toolkit
Ulyanov S.V.*
State University "Dubna", Universitetskaya Str.19, Dubna, Moscow Region, 141980, Russia

ARTICLE INFO

Article history
Received: 12 March 2019
Accepted: 18 April 2019
Published Online: 30 April 2019

Keywords:
Quantum algorithm gate
Superposition
Entanglement
Interference
Quantum simulator
Intelligent autonomous robot

ABSTRACT

A new approach to a circuit implementation design of quantum algorithm gates for quantum massive parallel fast computing is presented. The main attention is focused on the development of a design method for fast quantum algorithm operators, such as superposition, entanglement and interference, which are in general time-consuming operations due to the number of products that have to be performed. A SW & HW supported sophisticated smart toolkit of a supercomputing accelerator of quantum algorithm simulation is described. A method for performing Grover's interference without product operations is introduced as a Benchmark. The background of the developed information technology is the "Quantum / Soft Computing Optimizer" (QSCOptKB™) software based on the soft and quantum computational intelligence toolkit. Quantum genetic and quantum fuzzy inference algorithm gate design is considered. The quantum information technology of imperfect knowledge base self-organization design of fuzzy robust controllers for the guaranteed achievement of the control goal in unpredicted control situations is described.

 
1. Introduction: Role of Quantum Synergetic Effects in AI and Intelligent Control Models

R. Feynman and Yu. Manin independently suggested, and correctly showed, that quantum computing can be effectively applied to the simulation and search for solutions of classically intractable quantum system problems using a quantum programmable computer (as a physical device). Recent research shows successful engineering applications of end-to-end quantum computing information technologies (quantum sophisticated algorithms and quantum programming) in the search for solutions of algorithmically unsolved problems in classical dynamic intelligent control systems, artificial intelligence, intelligent cognitive robotics, etc.

Concrete developments are the cognitive "man-robot" interactions in collective multi-agent systems, the "brain-computer-device" interface supporting autism children with robots for service use, and so on. These applications are examples of successful applications of efficient classical simulation of quantum control algorithms to the algorithmically unsolved problems of classical control system robustness in unpredicted control situations.

*Corresponding Author:
Ulyanov S.V.,
State University "Dubna", Universitetskaya Str.19, Dubna, Moscow Region, 141980, Russia;
Email: ulyanovsv@mail.ru


Related works. Many interesting results are published as fundamentals and applications of the quantum / classical hybrid approach to the design of different smart classical or quantum dynamic systems. For example, an error mitigation technique and classical post-processing can be conveniently applied, thus offering a hybrid quantum-classical algorithm for currently available noisy quantum processors [1]; the Quantum Triple Annealing Minimization (QTAM) algorithm utilizes the framework of simulated annealing, which is a stochastic point-to-point search method, where the quantum gates that act on the quantum states formulate a quantum circuit with a given circuit height and depth [2]. A new local fixed-point iteration plus global sequence acceleration optimization algorithm for general variational quantum circuit algorithms is described in [3]. The basic requirements for universal quantum computing have all been demonstrated with ions, and quantum algorithms using few-ion-qubit systems have been implemented [4]. Quantum computing is finding a vital application in providing speed-ups for machine learning problems, critical in the "big data" world. Machine learning already permeates many cutting-edge technologies and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical machine learning optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents [5]. In [6] the system PennyLane is introduced as a Python 3 software framework for optimization and machine learning of quantum and hybrid quantum / classical computations. A plugin system makes the framework compatible with any gate-based quantum simulator or hardware, and plugins are provided for Strawberry Fields, Rigetti Forest, Qiskit, and ProjectQ, allowing PennyLane optimizations to be run on publicly accessible quantum devices provided by Rigetti and IBM Q. On the classical front, PennyLane interfaces with accelerated machine learning libraries such as TensorFlow, PyTorch, and autograd. PennyLane can be used for the optimization of variational quantum eigensolvers, quantum approximate optimization, quantum machine learning models, and many other applications. The first industry-based and societally relevant applications will be as a quantum accelerator. This is based on the idea that any end-application contains multiple parts whose properties are better executed by a particular accelerator, which can be an FPGA, a GPU or a TPU; the quantum accelerator is added as an additional coprocessor. The formal definition of an accelerator is indeed a co-processor linked to the central processor that executes certain parts of the overall application much faster [7]. Limited quantum memory is one of the most important constraints for near-term quantum devices. Understanding whether a small quantum computer can simulate a larger quantum system, or execute an algorithm requiring more qubits than available, is of both theoretical and practical importance and is discussed in [8]. One prominent platform for constructing a multi-qubit quantum processor involves superconducting qubits, in which information is stored in the quantum degrees of freedom of nanofabricated, anharmonic oscillators constructed from superconducting circuit elements. The requirements imposed by larger quantum processors have shifted the mindset within the community, from solely scientific discovery to the development of new, foundational engineering abstractions associated with the design, control, and readout of multi-qubit quantum systems. The result is the emergence of a new discipline termed quantum engineering, which serves to bridge the basic sciences, mathematics, and computer science with fields generally associated with traditional engineering [9, 10].

Moreover, new synergetic effects, defined and extracted from the measurement of quantum information (hidden in the classical control states of traditional controllers with a time-dependent coefficient gain schedule), are the information resource for increasing the control system robustness and guaranteeing the achievement of the control goal in hazard situations. The background of this synergetic effect is the creation of new knowledge from experimental response signals of imperfect knowledge bases in unpredicted situations, using a quantum algorithm of knowledge self-organization such as quantum fuzzy inference. The background of the developed information technology is the "Quantum / Soft Computing Optimizer" (QSCOptKB™) software based on the soft and quantum computational intelligence toolkit.

Algorithmic constraints on mathematical models of data processing in the classical form of computing (based on the Church-Turing thesis and on the background of classical physics laws) differ dramatically from the physical constraints on resource limitations in data information processing models based on quantum mechanical models, such as information transmission, information bounds on the extraction of knowledge, the amount of quantum accessible experimental information, quantum Kolmogorov complexity, the speed-up quantum limit of data processing, quantum channel capacity, etc. Exploring the meaning of Landauer's thesis "Information is physical" has prepared the background for changing, clarifying and expanding the Church-Turing thesis, and has introduced the R&D idea of exploring quantum computing and developing quantum computers for successfully solving many classically algorithmically unsolved (classically intractable) problems.

The classification of quantum algorithms is demonstrated in Fig. 1.


Figure 1. Classification of Quantum Algorithms and Interrelations with Quantum Fuzzy Control (quantum fuzzy modelling and control design system; main problem: global optimization with minimum entropy production; solution methods: classical GA structures and quantum search algorithms; quantum decision making and searching; algorithm library: Deutsch's, Deutsch-Jozsa's, Grover's, Shor's; quantum genetic search algorithm and robust knowledge base design for fuzzy controllers on QFI)

Quantum algorithms are in general random: the decision-making quantum algorithm of Deutsch-Jozsa and the quantum search algorithms (QSA) of Shor and Grover are examples of successful applications of quantum effects and of the constraints introduced by the new classes of computational-basis quantum operators, superposition, entanglement and interference, which are absent in classical computational models. These effects give the possibility to introduce new types of computation: quantum massive parallel computing using the superposition operator; the operator of entanglement (super-correlation, or quantum oracle), which creates the possibility of searching for a "good" (in general unknown) solution; and the operator of quantum interference, which helps extract the searched "good" solutions with maximal amplitude probability. All of these operators are reversible; the classical irreversible operator of measurement (for example, a coin) extracts the result of the quantum algorithm computation. Note that the quantum effects described above are absent in classical models of computation and demonstrate the effectiveness of quantum constraints in classical models of computation.

Figure 2 demonstrates the computing analogy between soft and quantum algorithms and the operators that are used in quantum soft computing information technology.

Figure 2. Interrelations between Soft and Quantum Operators in Genetic and Quantum Algorithms (GA operators such as selection (reproduction), crossover and mutation set against the quantum operators: generation of superposition states, generation of entanglement states via the quantum oracle / Controlled-NOT two-qubit gate, interference via the quantum Fourier transform, and measurement)

From the point of view of quantum programming of a quantum computer, there currently exists no general methodology of quantum computing and simulation of dynamic systems, but many proposals of quantum simulators have been developed (see, for example, the large list of quantum simulators available at [https://quantiki.org/wiki/list-qc-simulators]).

Remark. The purpose of this article is concerned with the problem of discovering new QAs. As with D-Wave, supercomputing processes in a quantum computer processor can be described as a synergetic union of hybrid quantum / classical HW and quantum SW with quantum soft support of quantum programming.

Remark. To understand more clearly the fundamental capabilities and limitations of quantum computation, we are to discover efficient QAs for interesting engineering problems such as intelligent cognitive control systems.

One of the most important open problems in computer science is to estimate the possibility of quantum speed-up for the search of computational problem solutions.

Oracular, or black-box, problems are the first examples of problems that can be solved faster with a quantum computer than with a classical computer. The computer in the black-box model is given access to an oracle (or a black box) that can be queried to acquire information about the problem. To find the solution to the problem using as few queries to the oracle as possible is the computation goal [11-13].

1.1 Goal and Problem Solving

This article considers the design possibility of a family of quantum decision-making and search algorithms (QAs) (see Fig. 1), which is the background of quantum computational intelligence for solving the problems of Big & Mining data, deep quantum machine learning (based on quantum neural networks), global optimization in intelligent quantum control (using quantum genetic algorithms), etc. (see details in Pt II).

1.2 Method of Solution and Smart Toolkit

The presented method and the related hardware implement matrix and algorithmic forms of the quantum operators that are used in a QA (the entanglement, or oracle, operator and the interference operator, as in the second and third steps of the QA implementation), increasing the computational speed-up with respect to the corresponding SW realization of a traditional and a new QSA. A high-level structure of a generic entanglement block that uses logic gates as analogy elements is described. A method for performing Grover interference without products is introduced [14, 15].

QUANTUM ALGORITHM ACCELERATOR COMPUTING: SW / HW SUPPORT

A. General Structure of Quantum Algorithm

The problem solved by a QA can be stated in the symbolic form:

Input: A function f: {0,1}^n → {0,1}^m
Problem: Find a certain property of the function f

A given function f is the map of one logical state into another, and the QA estimates qualitative properties of the function f.

A general description of a QA is demonstrated in Fig. 3 (physically, the type of operator U_F describes the qualitative properties of the function f). Figure 4 shows the steps of a QA, which include almost all of the described qualitative peculiarities of the function f and the physical interpretation of the applied quantum operators. In the scheme diagram of Fig. 5 the structure of a QA is outlined.

Figure 3. General Description of QAG (input, superposition of the n + m qubits by H and S gates, entanglement U_F and interference INT repeated k times, measurement, output)

Figure 4. General Structure of QA (quantum massive parallel computing with the quantum KB optimizer SCO: |ψ_fin⟩ = (Interference)(Quantum oracle)(Superposition)|ψ_initial⟩; the qualitative properties of the function are coded by the Hadamard transformation, a problem-oriented operator and the quantum Fourier transformation; the quantum oracle acts as a black box)

Figure 5. Scheme Diagram of the QA Structure (input → encoder f→F, F→U_F → quantum block → decoder of basis vectors → answer; binary string level, map table and interpretation spaces, complex Hilbert space)

As mentioned above, a QA estimates (without numerical computing) the qualitative properties of the function f. Thus with QAs we can study the qualitative properties of a function f without quantitative estimation of its values. For example, Fig. 6 represents the general approach to Grover's QAG design.

Figure 6. Circuit and Quantum Gate Representation of Grover's QSA (superposition of n + 1 qubits by H, entanglement U_F and interference repeated h times, followed by measurement)
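As an illustration of this input-problem framing, the short sketch below (plain MATLAB, not taken from the paper's toolkit; the marked element is chosen arbitrarily) defines a binary oracle function f: {0,1}^n → {0,1} whose single "truth point" is the property Grover's QSA is asked to find.

```matlab
% Minimal oracle example for the symbolic form "Input: f, Problem: find a property of f".
% Here the property is the index x0 with f(x0) = 1 (a single marked element).
n  = 3;                          % number of input qubits
x0 = 5;                          % arbitrarily chosen marked element, 0 <= x0 <= 2^n - 1
f  = @(x) double(x == x0);       % f: {0,...,2^n-1} -> {0,1}

% Truth table of f, i.e. the only information a QA is allowed to query:
disp([ (0:2^n-1)'  f((0:2^n-1)') ]);
```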


As a termination condition criterion, a minimum-entropy based method is adopted [13].

The structure of a QAG as in Fig. 3 is in general form defined as follows:

QAG = [(Int ⊗ ⁿI) · U_F]^{h+1} · [ⁿH ⊗ ᵐS],   (1)

where I is the identity operator and S is equal to I or H, depending on the problem description.

The design of fast algorithms to simulate most of the known QAs on classical computers [15-17] and the computational intelligence toolkit are as follows: 1) matrix based approach; 2) model representations of quantum operators in fast QAs; 3) algorithmic based approach, when matrix elements are calculated on "demand"; 4) problem-oriented approach, where we succeeded in running Grover's algorithm with up to 64 and more qubits with Shannon entropy calculation (up to 1024 without termination condition); 5) quantum algorithms with a reduced number of operators (entanglement-free QA, and so on).

Remark. In this article we describe briefly the main blocks in Fig. 6: i) unified operators; ii) problem-oriented operators; iii) Benchmarks of QA simulation on classical computers; and iv) quantum control algorithms based on quantum fuzzy inference (QFI) and quantum genetic algorithm (QGA) as new types of QSA (see, in more detail, Part II of this article).

Let us consider the matrix based and problem-oriented approaches to simulate most of the known QAs on classical computers and on a small quantum computer.

I. Quantum operator description: SW&HW smart toolkit support

From the simulation viewpoint we consider the structure of the quantum operators superposition, entanglement and interference [14,16,18,19,23-26] in the matrix based approach.

Superposition operators of QAs. The superposition operator consists, in general form, of the combination of tensor products of the Hadamard operator H with the identity operator I:

H = (1/√2) [1 1; 1 −1],   I = [1 0; 0 1].

The superposition operator of most QAs can be expressed (see Fig. 3 and Eq. (1)) as

Sp = (⊗_{i=1}^{n} H) ⊗ (⊗_{i=1}^{m} S),

where n and m are the numbers of inputs and of outputs respectively. The number of outputs m, as well as the structures of the corresponding superposition and interference operators, are presented in [12, 13] for different QAs.

Elements of the Walsh-Hadamard operator can be obtained as follows:

[ⁿH]_{i,j} = (−1)^{i·j} / 2^{n/2} = (1/2^{n/2}) { 1, if i·j is even;  −1, if i·j is odd },   (2)

where i = 0, 1, ..., 2^n and j = 0, 1, ..., 2^n. Its elements can be obtained by simple replication according to the rule presented in Eq. (2).

Interference operators of main QAs. The interference operator for Grover's algorithm [18, 19] is written as a block matrix:

[Int_{Grover}]_{i,j} = [D_n ⊗ I]_{i,j} = [(1/2^{n/2} − ⁿI) ⊗ I]_{i,j} = { (−1 + 1/2^{n/2}) ⊗ I, i = j;  (1/2^{n/2}) ⊗ I, i ≠ j },   (3)

where i = 0, ..., 2^n − 1, j = 0, ..., 2^n − 1, and D_n refers to the diffusion operator [D_n]_{i,j} = (−1)^{1 AND (i=j)} / 2^{n/2} [4,8]. Note that with a bigger number of qubits, the gain coefficient becomes smaller.

Entanglement operators of main QAs. The operators of entanglement are, in general form, the part of the QA in which the information about the function being analyzed is coded as an "input-output" relation. In the general approach for coding binary functions into the corresponding entanglement gates, an arbitrary binary function is considered as f: {0,1}^n → {0,1}^m, such that f(x_0,...,x_{n−1}) = (y_0,...,y_{m−1}). Firstly the irreversible function f is transferred into a reversible function F, as follows: F: {0,1}^{m+n} → {0,1}^{m+n}, with

F(x_0,...,x_{n−1}, y_0,...,y_{m−1}) = (x_0,...,x_{n−1}, f(x_0,...,x_{n−1}) ⊕ (y_0,...,y_{m−1})),

where ⊕ denotes addition modulo 2. This transformation creates a unitary quantum operator that performs the same transformation. With the reversible function F it is possible to design an entanglement operator matrix according to the following rule:

[U_F]_{i,j} = 1 iff F(j_B) = i_B,  i, j ∈ {0...0, ..., 1...1} (n + m bits),

where B denotes binary coding. The resulting entanglement operator is a diagonal block matrix of the form U_F = diag(M_0, ..., M_{2^n−1}). Each block M_i, i = 0, ..., 2^n − 1, can be obtained as follows:

M_i = ⊗_{k=0}^{m−1} { I, iff F(i,k) = 0;  C, iff F(i,k) = 1 }   (4)
 (4)


The block in Eq. (4) thus consists of m tensor products of I or of C operators, where C stands for the NOT operator. Note that the entanglement operator (4) is a sparse matrix and, owing to this property, the simulation of the entanglement operation is accelerated.

II. QA computing accelerator: SW&HW support

Figure 7 shows the structure of the intelligent quantum computing accelerator.

Figure 7. Intelligent Quantum Soft Computing Accelerator Structure (user's control system; PC with the software package: S.C. Optimizer, Q.S.C. Optimizer, Q.G.S. Algorithm, Q.G. Design, Grover's gate, Shor's gate, general purpose; Q.C. accelerator HW with a quantum gate controller; G.A. accelerator HW with a G.A. controller; quantum operators: superposition, entanglement, interference; GA operators: selection, crossover, mutation)

The HW of the quantum computing accelerator is based on a standard silicon element background. The QA structure implementation for HW and MatLab is demonstrated in Fig. 8 (see Fig. 23).

Figure 8. QA Structure Presentation for HW (a) and MatLab (b) Implementations (input, superposition, entanglement, interference, output; pre-computation of the intelligent digital computation operators; background of the HW implementation; stop criterion based on the minimum of Shannon entropy)

Different structures of QA can be realized as shown in Table 1 below.

Table 1. Quantum Gate Types for QA's Structure Design
(symbolic form of the QAG: [(Int ⊗ ᵐI) · U_F]^{h+1} · [ⁿH ⊗ ᵐS]: interference, entanglement, superposition)

Title (Type of Algorithm)  |  Parameters  |  Gate
Deutsch-Jozsa (D.-J.)  |  m = 1, S = H (x = 1), Int = ⁿH, k = 1, h = 0  |  (ⁿH ⊗ I) · U_F^{D.-J.} · (ⁿ⁺¹H)
Simon (Sim)  |  m = n, S = I (x = 0), Int = ⁿH, k = O(n), h = 0  |  (ⁿH ⊗ ⁿI) · U_F^{Sim} · (ⁿH ⊗ ⁿI)
Shor (Shr)  |  m = n, S = I (x = 0), Int = QFT_n, k = O(Poly(n)), h = 0  |  (QFT_n ⊗ ⁿI) · U_F^{Shr} · (ⁿH ⊗ ⁿI)
Grover (Gr)  |  m = 1, S = H (x = 1), Int = D_n, k = 1, h = O(2^{n/2})  |  (D_n ⊗ I) · U_F^{Gr} · (ⁿ⁺¹H)

1.3 Information Analysis of QA and Criterion for Solution of the QSA-termination Problem

The communication capacity gives an index of the efficiency of a quantum computation [19]. The measure of Shannon information entropy is used for the optimization of the termination problem of Grover's QSA. Information analysis of Grover's QSA, based on Eq. (5), gives a lower bound on the necessary amount of entanglement for the search of a successful result and on the computational time: any QSA that uses the quantum oracle calls {O_s} as I − 2|s⟩⟨s| must call the oracle at least

T ≥ [ (1 − P_e) / (2π) + 1 / (π log N) ] √N

times to achieve a probability of error P_e [20].

The information intelligence measure ℑ_T(|ψ⟩) of the state |ψ⟩ is [12, 21]:

ℑ_T(|ψ⟩) = 1 − [ S_T^{Sh}(|ψ⟩) − S_T^{VN}(|ψ⟩) ] / T   (6)

with respect to the qubits in T and to the basis B = { |i_1⟩ ⊗ ... ⊗ |i_n⟩ }.

The measure (6) is minimal (i.e., 0) when S_T^{Sh}(|ψ⟩) = T and S_T^{VN}(|ψ⟩) = 0; it is maximal (i.e., 1) when S_T^{Sh}(|ψ⟩) = S_T^{VN}(|ψ⟩). Thus the intelligence of the QA state is maximal if the gap between the Shannon and the von Neumann entropy for the chosen result qubit is minimal.

The information QA-intelligence measure (6) and the interrelation between the information measures, S_T^{Sh}(|ψ⟩) ≥ S_T^{VN}(|ψ⟩), are used together with the entropic relations of the step-by-step natural majorization principle for the solution of the QA-termination problem [12].

From Eq. (6) we can see that (for pure states)

max_T ℑ_T(|ψ⟩) = 1 − min_T [ ( S_T^{Sh}(|ψ⟩) − S_T^{VN}(|ψ⟩) ) / T ]  ⟺  min( S_T^{Sh}(|ψ⟩), S_T^{VN}(|ψ⟩) ) = 0,   (7)

i.e. from Eq. (6) the principle of Shannon entropy minimum follows.

Figure 9 shows the digital block of the Shannon entropy minimum calculation and the main idea of the termination criterion based on this minimum of entropy [13, 14].

Figure 9. Digital Block of Shannon Entropy Minimum Calculation (a) and MatLab (b) Implementations (superposition, entanglement and measurement of the result with an information stopping criterion; intelligent computation of the search space of solutions)

The number of iterations of the QA is defined during the calculation process of the minimum entropy search.

The structure of the HW implementation of the main quantum operators. Figure 10 shows the structure of the superposition and interference operator simulation.

Figure 10. Computation of Superposition and Interference Operators (common parts of the superposition operators: Deutsch-Jozsa ⁿ⁺¹H = ⁿH ⊗ ¹H, Grover ⁿ⁺¹H = ⁿH ⊗ ¹H, Shor ⁿH ⊗ ⁿI; common parts of the interference operators: Deutsch-Jozsa ⁿH ⊗ I, Grover D_n ⊗ I, Shor QFT_n ⊗ ⁿI with phase = 0; note: no multipliers are introduced)

The superposition state is created by application of the Hadamard matrix to a column vector, as [1 −1]^T = [1 0]^T + [0 −1]^T = (|0⟩ − |1⟩). According to this rule of quantum computing, the superposition modeling circuit is developed [16].

Figure 11 shows the superposition modeling circuit.

Figure 11. Superposition (Qubit) Modeling Circuit (the first operations needed are H|0⟩, H|0⟩ and H|1⟩; neglecting the factor 1/√2 this is the tensor product [1 1; 1 −1][1 0]^T ⊗ [1 1; 1 −1][1 0]^T ⊗ [1 1; 1 −1][0 1]^T; the direct product can be performed via AND gates, since 1·1 = 1∧1 = 1, −1·1 = −(1∧1) = −1, 1·0 = 1∧0 = 0)

Qubit simulation circuits with the tensor product are shown in Fig. 12.

Figure 12. Qubits Simulation Circuits with Tensor Product (scheme background for the SW implementation: [1 1]^T ⊗ A = [A A]^T and [1 0]^T ⊗ A = [A 0]^T, applied to build 2-qubit and 3-qubit superpositions)


Figure 13 shows the computation of the entanglement operators.

Figure 13. The Computation of Entanglement Operators (PC with hardware accelerator; quantum gate operators for superposition, entanglement and interference; entanglement operators of quantum algorithms: a - Deutsch-Jozsa's; b - Grover's; c - Shor's; built from I = [1 0; 0 1], C = [0 1; 1 0] and the tensor products I⊗I, I⊗C, C⊗I, C⊗C)

Figure 14 shows the entanglement creation circuit.

Figure 14. The Entanglement Creation Circuit (idea: to avoid encoding steps by acting directly on the entanglement output vector via the function f; for n = 2 the output of entanglement can be realized by using couples of XOR gates applied to the superposition output y_1,...,y_8 to obtain the entanglement output g_1,...,g_8)

Thus it is possible to obtain the output of entanglement G = U_F × Y without calculating the matrix product, having only knowledge of the corresponding row of the diagonal U_F matrix (see Fig. 13).

Finally, the output vector G can be written as follows (Fig. 15):

g_i = 1/2^{n/2}, if i = f(x_j) + 1 + 2^n (j − 1);  g_i = 0, elsewhere.

Figure 15. Equivalent Form of the Output Vector G (example of U_F for n = 2 with f(x) = (0, 1, 0, 0) over x = 00, 01, 10, 11; the blocks M_i follow the rule 00 → I⊗I, 01 → I⊗C, 10 → C⊗I, 11 → C⊗C)

Figure 16 shows the entanglement circuit realization.
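The shortcut just described (obtaining G = U_F · Y from f alone, without the matrix product) can be sketched in a few MATLAB lines; this is an illustrative reading of the scheme, assuming one output qubit and the usual action of the oracle on the ancilla pair.

```matlab
% Sketch: entanglement output G = UF * Y computed directly from f (no matrix product).
% Assumes m = 1 output qubit, so Y has 2^(n+1) components grouped in pairs per input j.
n = 3;  x0 = 5;  f = @(x) double(x == x0);
Y = repmat([1; -1], 2^n, 1) / 2^((n+1)/2);    % superposed input vector

G = Y;
for j = 0:2^n - 1
    if f(j) == 1                              % oracle exchanges the |0>/|1> ancilla amplitudes
        G(2*j + [1 2]) = Y(2*j + [2 1]);
    end
end
% G now equals UF * Y, obtained with only 2^n comparisons and no products.
```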


Figure 16. Entanglement Circuit Realization (MAX333 Maxim analogue switches driven by the binary function over the inputs 000 ... 111, switching between 5 V and 0 V)

Figure 17 shows the circuit realization of the interference operator according to the scheme in Fig. 10.

Figure 17. Interference Circuit Realization (I-b, pre-interference with TL081 OPAMPs: given the output V = [v_1 v_2 ... v_i ... v_{2^{n+1}}] of the entanglement block, the elements y_i of the interference output vector are y_i = (1/2^{n−1}) Σ_{j=1}^{2^n} v_{2j−1} − v_i for i odd and y_i = (1/2^{n−1}) Σ_{j=1}^{2^n} v_{2j} − v_i for i even (the even case is not implemented, being even = −odd); I-c, interference with TL084 OPAMPs: element-wise processing units Int_i with d_{ij} = 1/2^{n−1} − 1 for i = j and 1/2^{n−1} for i ≠ j, acting on |φ_i⟩ = c_0|0⟩ + c_1|1⟩ in the basis |0⟩ = [1 0]^T, |1⟩ = [0 1]^T)

Let us consider briefly applications of the QAG design approach in highly structured QSAs, and in AI, informatics, computer sciences and intelligent control problems (see Part II).

SIMULATION OF QA-COMPUTING ON A CLASSICAL COMPUTER

We discuss the general outline of Grover's QA using the quantum gate (QAG)

QAG_Grover = [(D_n ⊗ I) · U_F]^h · (⊗^{n+1} H)   (7)

The general method of QAG design is developed in [13, 14] and is briefly described here.

Figure 18a represents the QAG of Grover's algorithm (7) as a control system, and Fig. 18b describes a general structure scheme of Grover's QSA (see Fig. 1 and Table 1) [13].

Figure 18. General Structure Scheme of Grover QSA: (a) a decision-making feed-forward control loop with superposition, entanglement (quantum oracle) and interference (D_n ⊗ I) blocks, the information measure min(S_Sh − S_VN) as termination condition, POV (positive operator-valued) measurement, local control feedback and global information feedback; (b) the Grover quantum gate with input, Step 1 (ⁿ⁺¹H), Step 2 (U_F) and Step 3 (D_n ⊗ I) repeated h times, and output Φ = [(D_n ⊗ I) · U_F]^h · (ⁿ⁺¹H)

The Hadamard gates (Step 1) are the basic components of the superposition operation, the operator U_F (Step 2) performs the entanglement operation, and D_n (Step 3) is the diffusion matrix related to the interference operation.


Our purpose is to realize some classical circuits (i.e. circuits composed of classical gates AND, NAND, XOR, etc.) that simulate the quantum operations of Grover's QSA. To this aim all quantum operators must be expressed in terms of functions easily and efficiently described by classical components. When we try to build the HW components that perform these basic operations according to the classical scheme, we encounter two main difficulties.

High-level gate design of Grover's QSA (model based approach)

In this section we present a new model based HW implementation of the functional steps of Grover's QSA from a high-level gate design point of view. According to the high-level scheme in Eq. (7) introduced in Fig. 4, the proposed circuit can be divided into two main parts.

Part I: (Analogue) Step-by-step calculation of the output values. This part is divided into the following subparts: I-a: Superposition; I-b: Entanglement; I-c: Pre-Interference (for the vector approach); I-d: Interference.

Part II: (Digital) Entropy evaluation, vector storing for iterations and output visualization. This part also provides the initial superposition of the basis vectors |0⟩ and |1⟩.

Figure 19 shows a general structure scheme of the HW realization of the Grover QSA circuits, which itself can be considered as a classical prototype of an intelligent control quantum system.

Figure 19. A General HW-scheme of the Grover's QSA

Example. The most interesting novelty involves the structure of interference: in fact the generic element v_i (interference output) can be written as a function of g_i (entanglement output) as follows:

v_i = (1/2^{n−1}) Σ_{j=1}^{2^n} g_{2j−1} − g_i, for i odd;   v_i = (1/2^{n−1}) Σ_{j=1}^{2^n} g_{2j} − g_i, for i even.   (8)

Figures 20a and 20b show the Simulink schematic design and the circuit realization of the superposition, entanglement and interference operator blocks of the Grover QAG.

Figure 20a. Simulink Scheme of the 3-qubit Grover Search System (analogue part on the main board: I-a entanglement, I-b pre-interference, I-c interference; digital part on the CPLD board: stop criterion based on the minimum of Shannon entropy)

Figure 20b. Pre-Prototype Scheme Circuit of Grover's QAG (superposition, entanglement, pre-interference and interference stages)

Referring to Fig. 19, the pre-interference operation evaluates a weighted sum of the odd (even) output elements of entanglement, while the interference itself uses this contribution in order to provide (by means of the difference with g_i) the respective v_i. This simple (but powerful) result in Eq. (8) has several consequences.

Figure 21 shows the experimental HW evolution of Grover's quantum search algorithm for three qubits.

Figure 21. HW Realization of Grover QSA (main board, CPLD board and the entire board)
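A direct way to check Eq. (8) is to compare it against the explicit product D_n ⊗ I applied to the entanglement output; the MATLAB sketch below (a simulation aid, not the analogue HW described above, with the diffusion matrix taken in its standard form) implements the sums of Eq. (8) with no products apart from one common scaling.

```matlab
% Interference output via Eq. (8): only additions plus one scaling, compared
% with the full matrix product (Dn (x) I) * g for a small case.
n = 3;
g = randn(2^(n+1), 1);                        % any entanglement output vector

sOdd  = sum(g(1:2:end));                      % sum of odd-indexed elements
sEven = sum(g(2:2:end));                      % sum of even-indexed elements
v = zeros(size(g));
v(1:2:end) = sOdd  / 2^(n-1) - g(1:2:end);    % Eq. (8), i odd
v(2:2:end) = sEven / 2^(n-1) - g(2:2:end);    % Eq. (8), i even

Dn   = 2/2^n * ones(2^n) - eye(2^n);          % diffusion matrix (assumed standard form)
vRef = kron(Dn, eye(2)) * g;                  % reference: full matrix product
fprintf('max deviation from (Dn (x) I)*g: %.2e\n', max(abs(v - vRef)));
```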


Remark. Regarding the speed-up of computation, a great improvement has been obtained due to the smaller number of products (only one for each element of the output vector), more precisely 2^{n+1} against 4^{n+1} of the classical approach. The additions are also fewer: less than 2(2^n + 1) instead of 4^{n+1}. But the most important fact is that all these operations can be easily implemented in HW with few operational amplifiers (2^n + 2).

Example. Figures 22a - d show the experimental probability evolution of finding each of the database's elements (from Iteration #1 to Iteration #4). At this step (Iteration #2) the probabilities of finding one of the 8 elements of the database are comparable.

Figure 22. Experimental Results of the 3-qubit HW-implementation of Grover's QSA

Figure 23 shows the result of the entropy analysis for Grover's QSA according to Eq. (6), for the case n = 7, f(x_0) = 1.

Figure 23. Shannon Entropy Simulation of the QSA with 7 Qubits

In the following (Fig. 22c) the probability of finding the second element of the database begins to increase with respect to the probabilities of finding the other elements. After some further iterations of the algorithm, the difference between the probability of finding the second element and the probabilities of finding the others increases. Finally the probability of extracting the second element of the database is greater than the probabilities of finding any other element. Figures 22b, 22c and 22d show the evolution of quantum searching using Grover's QAG. It is a clear demonstration of how we can perform Grover's algorithm on a classical computer. A similar approach can be used for the realization of quantum fuzzy computing [27].

The application of Grover's QAG is a classically efficient simulation process for the realization of quantum search computation on a classical computer (see details in [17]).

QUANTUM ALGORITHM ACCELERATOR COMPUTING: SW SUPPORT: EXAMPLES (matrix approach)

The software system is divided into two general sections (see Fig. 24).

Figure 24. Structure of QFMS and SW Toolkit (common functions: superposition building blocks, interference operators such as diffusion and QFT, bra-ket functions, measurement operators, entropy calculations and visualization functions for states and operators; algorithm-specific functions: entanglement encoders, problem transformers, result interpreters and algorithm execution scripts for Deutsch, Deutsch-Jozsa, Grover and Shor, with MATLAB slide-show and console interfaces, quantum control algorithms, applications and Benchmarks of QA)

The first section involves common functions. The second section involves algorithm-specific functions for realizing the concrete algorithms.

Figure 25 shows the quantum mechanical representation in SW of (bra-ket) vectors and the calculation of quantum states as density matrices.

Figure 25. SW Representation of Density Matrix and Fidelity Calculation

Example: Quantum Shor's Algorithm (quantum factorization promise). Figure 26 shows the factorization problem. Figure 27 shows the quantum Shor algorithm and its describing circuit (see Table 1). We can observe the U_F block, which is a diagonal matrix of dimension 2^{2n} × 2^{2n}.


Finally, the output of entanglement is processed by the interference block composed of the Quantum Fourier Transform (QFT) and the identity matrix I. The output of the entire algorithm is therefore the vector obtained after application of the operator QFT_n ⊗ ⁿI.

Figure 26. Fast Factorization Problem and Its Solutions (classical factorization: 1024 bits in 10^5 years, 2048 bits in 5×10^15 years, 4096 bits in 3×10^29 years; quantum factorization: 1024 bits in 4.5 min, 2048 bits in 36 min, 4096 bits in 4.8 hours)

Figure 27. Quantum Shor's Algorithm Circuit and Main Quantum Operators (Hadamard H = (1/√2)[1 1; 1 −1]; entanglement and interference operators U_F and QFT for the Shor algorithm)

Factorization times using the matrix and the vector approaches are reported in Fig. 28.

Figure 28. SW Simulation of Shor's Quantum Factorization Algorithm (matrix approach: 4 to 6 bits ≈ 7 min, 6 to 7 bits ≈ 16 min, more than 8 bits overflow; vectorial approach: 4 to 8 bits ≈ 8 sec, 8 to 11 bits ≈ 11 sec, 11 to 13 bits ≈ 5 min)

Example: Command line simulation of the Grover quantum search algorithm. An example of the Grover algorithm script is presented in Figs 29 and 30.

Figure 29. Example of Grover Algorithm Simulation Script (Visualization of the Quantum Operators Sp, Ent, Int and G = (Int)(Ent)(Sp))

Figure 30. Example of Grover Algorithm Simulation Script (Visualization of the Input and of the Output Quantum States)

In Fig. 29, the algorithm-related script is presented. It prepares the superposition (SP), entanglement (ENT) and interference (INT) operators of the Grover algorithm with 3 q-bits (including the measurement q-bit). Then it assembles the operators into the quantum gate G. Then the script creates an input state |in⟩ = |001⟩ and calculates the output state |out⟩ = G × |in⟩. The result of this algorithm in Matlab is an allocation of the operator matrices and of the state vectors in memory. The code displays the operator matrices of Fig. 29 in 3D visualization; in this case the vertical axis corresponds to the amplitudes of the corresponding matrix elements, and the indexes of the elements are marked with the ket notation. The input |in⟩ and the output |out⟩ states are shown in Fig. 30. In this case, the vertical axis corresponds to the probability amplitudes of the state vector components, and the horizontal axis corresponds to the index of the state vector component, marked using the ket notation.

The title of Fig. 30 contains the values of the Shannon and of the von Neumann entropies of the corresponding visualized states.
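Since the script itself is only shown as a figure, a minimal MATLAB equivalent is sketched below; it assembles Sp, Ent and Int into G = (Int)(Ent)(Sp) for 2 input q-bits plus the measurement q-bit and applies it to |001⟩, using generic matrix constructions rather than the toolkit's own function names.

```matlab
% Minimal command-line sketch of the Grover gate G = (Int)(Ent)(Sp)
% (generic constructions, not the toolkit API).
n  = 2;  x0 = 2;                            % marked element index, 0 <= x0 <= 2^n - 1
H  = [1 1; 1 -1] / sqrt(2);  I2 = eye(2);  C = [0 1; 1 0];

Sp = 1;  for k = 1:n+1, Sp = kron(Sp, H); end          % superposition (n+1)H
Ent = [];                                              % block-diagonal UF, Eq. (4)
for j = 0:2^n - 1
    if j == x0, Ent = blkdiag(Ent, C); else, Ent = blkdiag(Ent, I2); end
end
Dn  = 2/2^n * ones(2^n) - eye(2^n);                    % diffusion operator
Int = kron(Dn, I2);

G   = Int * Ent * Sp;
in  = zeros(2^(n+1), 1);  in(2) = 1;                   % input state |001>
out = G * in;
disp(abs(out).^2);                                     % probabilities after one iteration
```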


Other known QAs can be formulated and executed using similar scripts, by using the corresponding equations taken from the previous section.

Simulation of QAs as dynamic control systems

In order to simulate the behavior of dynamic systems with quantum effects, it is possible to represent the QA as a dynamic system in the form of a block diagram and then simulate its behavior in time. Figure 31 is an example of a Simulink diagram of the quantum circuit for the calculation of the fidelity ⟨a|a⟩ of the quantum state and for the calculation of the density matrix |a⟩⟨a| of the quantum state. The bra and ket functions are taken from the common library. This example demonstrates the usage of the common functions for the simulation of the QA dynamics.

In Fig. 31, the input is provided to the ket function. The output of the ket function is provided to the first input of one matrix multiplier and to the second input of the other matrix multiplier. The input is also provided to the bra function. The output of the bra function is provided to the second input of the first matrix multiplier and to the first input of the second multiplier. The output of the first multiplier is the density matrix of the input state; the output of the second multiplier is the fidelity of the input state.

Figure 31. Simulink Diagram for the Simulation of an Arbitrary Quantum Algorithm

Figure 32 shows the Simulink structure of an arbitrary QA. Such a structure can be used to simulate a number of quantum algorithms in the Matlab / Simulink environment.

Figure 32. Simulink Diagram for the Simulation of an Arbitrary QA

The simulation result of Grover's QSA is shown in Fig. 33.

Figure 33. Evolution of Grover's Quantum Search Algorithm: Quantum Simulator on a Classical Computer (Matrix Approach)

The dynamic evolution of the successful results of the algorithm execution for the first iteration of Grover's QAG, for the initial qubit state |0001⟩ and different answer searches, is shown in Fig. 34.

Figure 34. Grover's QSA: Algorithm Execution [First Iteration]

Figure 35 shows algorithm execution results for Grover's QSA with different numbers of iterations for successful results with different searched answer numbers.

Figure 35. Grover's QA: Step 2. Algorithm Execution Results
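The bra/ket bookkeeping just described reduces to two matrix products; a few MATLAB lines (generic code, not the toolkit's common-library blocks) make the distinction explicit.

```matlab
% Density matrix and fidelity of a state |a>, as in the Fig. 31 block diagram.
ket = @(v) v(:) / norm(v);        % column vector |a>
bra = @(v) ket(v)';               % conjugate-transpose row vector <a|

a     = [1; 1i; 0; 1];            % arbitrary example amplitudes
rho   = ket(a) * bra(a);          % |a><a| : density matrix of the pure state
fidel = bra(a) * ket(a);          % <a|a>  : equals 1 for a normalized state
```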


Figure 36 is a 3D dynamic representation of Grover's QAG probabilities evolution (step 2 of Fig. 33) for different cases of answer search.

Figure 36. Grover's QA: Step 2 [Algorithm Execution 3D Dynamics: Probabilities]

Algorithm execution results of Grover's QAG (step 2 of Fig. 33) with different stopping iterations for the searched answers are shown in Fig. 35.

Example: Interpretation of measurement results in the simulation of Grover's QSA-QAG. In the case of Grover's QSA this task is achieved (according to the results of this section) by preparing the ancillary qubit of the oracle of the transformation U_f: |x, a⟩ → |x, f(x) ⊕ a⟩ in the state |a_0⟩ = (1/√2)(|0⟩ − |1⟩). The operator I_{x_0} is computationally equivalent to U_f:

U_f [ |x⟩ ⊗ (1/√2)(|0⟩ − |1⟩) ] = ( I_{x_0}|x⟩ ) ⊗ (1/√2)(|0⟩ − |1⟩) = (1/√2) I_{x_0}(|x⟩) ⊗ |0⟩ − (1/√2) I_{x_0}(|x⟩) ⊗ |1⟩,

and the operator U_f is constructed from a controlled I_{x_0} and two one-qubit Hadamard transformations. Figure 37 shows the interpretation of the results of the Grover QAG.

Figure 37. Grover's QA: Step 2 [Result Interpretation]

The measured basis vector is computed from the tensor product between the computation qubit results and the ancillary measurement qubit. In Grover's searching process the ancillary qubit does not change during the quantum computing.

As described above, the operator U_f is constructed from two Hadamard transformations, and the Hadamard transformation H (modeling the constructive interference) applied to the states of the standard computational basis can be seen as implementing a fair coin tossing. Thus, if the matrix H = (1/√2)[1 1; 1 −1] is applied to the states of the standard basis, then H²|0⟩ = −|1⟩, H²|1⟩ = |0⟩, and therefore H² acts in the measurement process of the computational result as a NOT-operation up to the phase sign. In this case the measurement basis is separated from the computational basis (according to the tensor product).

Figures 38 and 39 show the measurement result and the final results of the entropy dynamic evolution interpretation of Grover's QSA for the search of successful results with different numbers of marked states (in the computational basis {|0⟩, |1⟩}). These results represent the possibility of the classically efficient simulation of Grover's QSA.

Figure 38. Interpretation of Measurement Results of QSA (computation results and measurement results)

Figure 39. Shannon Entropy Dynamics after 31 Steps of Grover's QSA

Remark. Figure 38 (b) shows the results of computation on a classical computer and shows two possibilities:


|0110⟩ = |011⟩ ⊗ |0⟩ (result qubits ⊗ measurement qubit) and |0111⟩ = |011⟩ ⊗ |1⟩ (result qubits ⊗ measurement qubit).

Figure 38 (b) also demonstrates two searched marked states:

|0110⟩ = |011⟩ ⊗ |0⟩ or |1010⟩ = |101⟩ ⊗ |0⟩ (measurement qubit |0⟩)

and

|0111⟩ = |011⟩ ⊗ |1⟩ or |1011⟩ = |101⟩ ⊗ |1⟩ (measurement qubit |1⟩).

A similar situation is shown for two and three searched marked states in Fig. 37 (b).

Using a random measurement strategy based on a fair coin tossing in the measurement basis {|0⟩, |1⟩}, one can independently receive with certainty the searched marked states from the measurement basis result. The measurement results based on a fair coin tossing measurement are shown in Fig. 38 (c) and show accurate results of the search for the corresponding marked states. Figure 38 (c) also shows that for both possibilities of implementing a fair coin tossing type of measurement process the search for the answer is successful. The final results of the interpretation for Grover's algorithm are shown in Fig. 38.

Let us describe briefly the main blocks in Fig. 2: i) unified operators; ii) problem-oriented operators; iii) Benchmarks of QA simulation on classical computers; and iv) quantum control algorithms based on quantum fuzzy inference (QFI) and quantum genetic algorithm (QGA) as new types of QSA (see Part II of this article). Let us consider the problem-oriented operators' description.

Problem-oriented approach based on the structural pattern of the QA state vector with compressed vector allocation. Let n be the input number of qubits. In the Grover algorithm (as mentioned above) half of all 2^{n+1} elements of the vector, namely its even components, always take values symmetrical to the appropriate odd components and, therefore, need not be computed. The odd 2^n elements can be classified into two categories:
• the set of m elements corresponding to the truth points of the input function (or oracle); and
• the remaining 2^n − m elements.
The values of elements of the same category are always equal.

As discussed above, the Grover QA only requires two variables for storing the values of the elements. Its limitation in this sense depends only on the computer representation of the floating-point numbers used for the state vector probability amplitudes. For a double-precision software realization of the state vector representation algorithm, the upper reachable limit of the q-bit number is approximately 1024 [13].

Figure 40 shows a state vector representation algorithm for the Grover QA.

Figure 40. State Vector Representation Algorithm for Grover's Quantum Search

Remark. In Fig. 40, i is an element index, f is an input function, vx and va correspond to the element categories, and v is a temporary variable. The number of variables used for representing the state is constant. A constant number of variables for the state vector representation allows reconsideration of the traditional schema of quantum search simulation.

Classical gates are used not for the simulation of the appropriate quantum operators with strict one-to-one correspondence but for the simulation of a quantum step that changes the system state. Matrix product operations are replaced by arithmetic operations with a fixed number of parameters irrespective of the qubit number.

Figure 41 shows a generalized schema for efficient simulation of the Grover QA built upon three blocks: a superposition block H, a quantum step block UD and a termination block T.

Figure 41. Generalized Schema of Simulation for Grover's QSA

Figure 41 also shows an input block and an output block.


Remark. The UD block includes a U block and a D block. The input state from the input block is provided to the superposition block. A superposition of states from the superposition block is provided to the U block. An output from the U block is provided to the D block. An output from the D block is provided to the termination block. If the termination block terminates the iterations, then the state is passed to the output block; otherwise, the state vector is returned to the U block for iteration.

As shown in Fig. 42, the superposition block H for the Grover QSA simulation changes the system state to the state obtained traditionally by using n + 1 times the tensor product of Walsh-Hadamard transformations. In the process shown in Fig. 41, vx := hc, va := hc, and vi := 0, where hc = 2^{-(n+1)/2} is a table value.

Figure 42. Superposition Block for Grover's QSA

The quantum step block UD that emulates the entanglement and interference operators is shown in Figs 43 (a) - (c).

Figure 43 (a). Emulation of the Entanglement Operator Application of Grover's QSA

Figure 44 (b). Emulation of the Interference Operator Application of Grover's QSA

Figure 44 (c). Quantum Step Block for Grover's Quantum Search

The UD block reduces the temporal complexity of the quantum algorithm simulation to a linear dependence on the number of executed iterations. The UD block uses precalculated table values dc1 = 2^n − m and dc2 = 2^{n-1}.

Remark. In the U block shown in Fig. 44 (a), vx := −vx and vi := vi + 1. In the D block shown in Fig. 44 (b), v := m*vx + dc1*va, v := v/dc2, vx := v − vx, and va := v − va. In the UD block shown in Fig. 44 (c), v := dc1*va − m*vx, v := v/dc2, vx := v + vx, va := v − va, and vi := vi + 1.

The termination block T is general for all QAs, independently of the operator matrix realization. Block T provides an intelligent termination condition for the search process. Thus, the block T controls the number of iterations through the block UD by providing enough iterations to achieve a high probability of arriving at a correct answer to the search problem. The block T uses a rule based on observing the change of the vector element values according to the two classification categories. The T block, during a number of iterations, watches whether the values of elements of one category monotonically increase or decrease while the values of elements of the other category change monotonically in the reverse direction. If after some number of iterations the direction changes, it means that an extremum point corresponding to a state with maximum or minimum uncertainty has been passed. The process can use direct values of the amplitudes instead of considering the Shannon entropy value, thus significantly reducing the required number of calculations for determining the minimum uncertainty state that guarantees a high probability of a correct answer.

The termination algorithm realized in the block T can use one or more of five different termination models:
o Model 1: Stop after a predefined number of iterations;
o Model 2: Stop on the first local entropy minimum;
o Model 3: Stop on the lowest entropy within a predefined number of iterations;
o Model 4: Stop on a predefined level of acceptable entropy; and/or
o Model 5: Stop on the acceptable level or the lowest reachable entropy within the predefined number of iterations.

Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619 33


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

reachable entropy within the predefined number of itera-


tions.
Note that models 1 - 3 do not require the calculation of
an entropy value.
Figures 45 – 47 show the structure of the termination
condition blocks T.

Figure 47 (b). Component POP for the Termination Block


Since time efficiency is one of the major demands on
such termination condition algorithm, each part of the ter-
mination algorithm is represented by a separate module,
and before the termination algorithm starts, links are built
Figure 45. Termination Block for Method 1 between the modules in correspondence to the selected
termination model by initializing the appropriate func-
tions’ calls.
Table 2 shows components for the termination condi-
tion block T for the various models. Flow charts of the
termination condition building blocks are provided in Figs
45 – 50

Table 2. Termination Block Construction

Model T B’ C’
1 A -- --
2 B PUSH --
Figure 46. Component B for the Termination Block
3 C A B
4 D -- --
5 C A E

The entries A, B, PUSH, C, D, E, and PUSH in Table 2


correspond to the flowcharts in Figs 40 – 45 respectively.

Figure 47 (a). Component PUSH for the Termination


Block

Figure 48. Component C for the Termination Block

34 Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

then vx: = mvx, va:= mva, and vi:= mvi.


The model 3 termination block checks to see that a pre-
defined number of iterations do not exceed (using block A
in Fig. 43):
• If the check is successful, then the termination block
compares the current value of vx with mvx. If mvx is less
than, it sets the value of mvx equal to vx and the value of
mvi equal to vi. If mvx is less using the PUSH block, then
perform the next quantum step;
Figure 49. Component D for the Termination Block • If the check operation fails, then (if needed) the final
value of vx equal to mvx, vi equal to mvi (using the POP
block) and the iterations are stopped.
The model 4, the termination block uses a single com-
ponent block D, shown in Fig. 48. The D block compares
the current Shannon entropy value with a predefined ac-
ceptable level. If the current Shannon entropy is less than
the acceptable level, then the iteration process is stopped;
otherwise, the iterations continue.
The model 5 termination block uses the A block to
check that a predefined number of iterations do not ex-
Figure 50. Component E for the Termination Block ceeded (see, Fig. 45). If the maximum number is exceed-
ed, then the iterations are stopped. Otherwise, the D block
Remark: Peculiarities of QA termination models in
is then used to compare the current value of the Shannon
model 1, only one test after each application of quantum
entropy with the predefined acceptable level. If acceptable
step block UD is needed. This test is performed by block
level is not attained, then the PUSH block is called and the
A. So, the initialization includes assuming A to be T, i.e.,
iterations continue. If the last iteration was performed, the
function calls to T are addressed to block A. Block A is
POP block is called to restore the vx category maximum
shown in Fig. 45 and checks to see if the maximum num-
and appropriate vi number and the iterations are ended.
ber of iterations has been reached, if so, then the simula-
Figure 51 shows measurement of the final amplitudes
tion is terminated, otherwise, the simulation continues.
in the output state to determine the success or failure of
In model 2, the simulation is stopped when the direc-
the search.
tion of modification of categories’ values are changed.
Model 2 uses the comparison of the current value of vx
category with value mvx that represents this category val-
ue obtained in previous iteration:
(i) If vx is greater than mvx, its value is stored in mvx,
the vi value is stored in mvi, and the termination block
proceeding to the next quantum step;
(ii) If vx is less than mvx, it means that the vx maximum
is passed and the process needs to set the current (final)
value of vx: = mvx, vi := mvi, and stop the iteration pro-
cess. So, the process stores the maximum of vx in mvx and Figure 51. Final Measurement Emulation
the appropriate iteration number vi in mvi. Here block B, If |vx| > |va|, then the search was successful; otherwise,
shown in Fig. 46 is used as the main block of the termina- the search was not successful.
tion process. Table 3 lists results of testing the optimized version of
The block PUSH, shown in the Fig. 47 (a) is used for Grover QSA simulator on personal computer with Penti-
performing the comparison and for storing the vx value in um 4 processor at 2GHz.
mvx (case a). A POP block, shown in Fig. 47 (b) is used
for restoring the mvx value (case b). In the PUSH block Table 3 High Probability Answers for Grover QSA
of Fig. 47 (a), if |vx| > |mvx|, then mvx: = vx, mva: = va,
Qbits Iterations Time
mvi: = vi, and the block returns true; otherwise, the block
returns . In the POP block of Fig. 47 (b), if |vx| <= |mvx|, 32 51471 0.007

Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619 35


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

36 205887 0.018 ness without the cost of a time resource - in on line.


40 823549 0.077 Figure 53 show an ICS with the integration of several
44 3294198 0.367 fuzzy controllers and QFI which allows create a new qual-
ity in management – online self-organization of knowl-
48 13176794 1.385
edge base (KB) [45].
52 52707178 2.267
56 210828712 20.308 Design of robust KB

Fuzzy Controller
QFC : Quantum
60 843314834 81.529 off-line tuning
QFC SCO + QCO
64 3373259064 328.274
F.C. 1
Simulation results QFI Noise
F.C. 2 in model

The theoretical boundary of this approach is not the Control on-line tuning
parameters

number of qubits, but the representation of the float- error


Ref. Control
ing-point numbers. The practical bound is limited by the + -
PID + + object
front side bus frequency of the personal computer. Using Real physical object
m(t)
the above algorithm, a simulation of a 1000 qubit Grover Delay time
in sensor Noise with time-
or simulation model

QSA requires only 96 seconds for 108 iterations. Z-1


dependent statistics Noise
in

Figure 52 shows the simulation result of Grover’s algo-


sensor
Control Situation New more realistic control situations
rithm [problem-oriented approach with compressed vector
allocation] [14]. Figure 53. Structure of robust ICS based on QFI
In general, the structure of a quantum algorithmic gate
(QAG) based on a quantum genetic algorithm (QGA) de-
scribed in (9) in the form:
1000 qubit Grover’s algorithm simulation
QAG = ( Int ⊗ n I ) ⋅ U F 
h +1
( 21000 elements in DB) ⋅ [QGA]  n H ⊗ m S  . (9)

Structure of corresponding QAG on Fig. 54 is shown.

100 000 000


Iterations

In less than 2 minutes

Figure 54. QAG Structure of QFI


Figure 52. Simulation Results of Problem Oriented
Grover’s QSA According to Approach 4 with 1000 Qubit The first part in designing Eq. (9) is the choice of the
(Simulator Window Snapshot) type of the entangled state of operator U F . The basic unit
of such an ICS is the quantum genetic search algorithm
The described method is differed from [23-44].
(QGPA) (see, Fig. 55).
Let us discuss briefly the applications of QAG ap-
proach in design of new types of quantum search algo-
rithm as quantum genetic algorithm.
Quantum fuzzy inference and quantum genetic algo-
rithm: Quantum simulator
Intelligent control systems (ICS) based on the use of
soft computing, fuzzy logic, evolutionary algorithms and
neural networks. Basis of management systems - propor-
tional–integral–derivative (PID) controller, which is used
in 70% of the industrial automation, but often can`t cope
with the task of managing and does not work well in un-
predicted control situations. The use of quantum comput-
ing and quantum search algorithms, as a special example,
Figure 55. Intelligent Self-organizing Quantum Search
quantum fuzzy inference (QFI), allows increasing robust- Algorithm for Intelligent Control Systems

36 Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

Results of simulation show computing effectiveness of types of quantum correlation are considered: spatial, tem-
robust stability and controllability of (QFI + QGA)-con- poral and spatial-temporal. Each of them contains valu-
troller and new information synergetic effect: from two able quantum information hidden in a KB.
fuzzy controllers with imperfect knowledge bases can Quantum correlation considered as a physical compu-
be created robust intelligent controller (extracted hidden tational resource, which allows increasing the successful
quantum information from classical states is the source of search for solutions of algorithmically unsolvable prob-
value work for controller [45]). Intelligent control systems lems. In our case, the solution of the problem of ensuring
with embedding intelligent QFI-controller can be realized global robustness of functioning of the control object
either on classical or on quantum processors (as an exam- under conditions of unexpected control situations by de-
ple, on D-Wave processor type). signing the optimal structure and laws of changing the
Two classes of quantum evolution (9) are described: PID controller gain factors by classical control methods
quantum genetic algorithm (QGA) and hybrid genetic is an algorithmically unsolvable problem. The solution of
algorithm (HGA). The QFI algorithm for determining this problem is possible based on quantum soft computing
new PID coefficient gain schedule factors K (see Fig.56) technologies [17]. The output parameters of the PID-reg-
consists of such steps as normalization, the formation of a ulators are considered as active information-interacting
quantum bit, after which the optimal structure of a QAC is agents, from which the resulting controlling force of the
selected, the state with the maximum amplitude is select- control object is formed. In a multi-agent system, there
ed, decoding is performed and the output is a new param- is a new synergistic effect arising from the exchange of
eter K. information and knowledge between active agents (swarm
synergetic information effect) [17].
Types and operators of quantum genetic algorithms.
There are several different types of quantum genetic algo-
rithms. All of them are built on a combination of quantum
and classical calculations. Quantum computing includes
quantum genetic operators performing genetic operations
on quantum chromosomes. These operators are called in-
terference gates.
There are several update operators, but Q-gate inter-
ference (rotation) is the most popular [42-44]. The quantum
Figure 56. QFI Algorithm Structure on Line interference operator is denoted as gate U (t ) :
At the input, the QFI obtains coefficients from the  cos(δθ j ) − sin(δθ j ) 
fuzzy controller knowledge bases formed in advance U (t ) =  
based on the KB optimizer on soft calculations.  sin(δθ j ) cos(δθ j )  .
The next step is carried out normalization of the re- Using this operator, the evolution of a population is the
ceived signals [0, 1] by dividing the current values of con- result of a process of unitary transformations. In particular,
trol signals at their maximum values (max k), which are rotations, which approximate the state of chromosomes to
known in advance. the state of the optimal chromosome in the population. The
Formation of quantum bits. The probability density gate enhances or reduces the amplitude of qubits or genes
functions are determined. They are integrated and they in accordance with the chromosome with the maximum
make the probability distribution function. They allow de- fitness function: f ( x1 , x2 , x3 ,..., x j ) (maximum). The best
i
fining the virtual state of the control signals for generating individuals determine the evolution of the quantum state.
a superposition via Hadamard transform of the current We considered the quantum genetic algorithm (QGA) [35, 36],
state of the entered control signals. the gate of rotations and a quantum gate of mutation and a
The law of probability is used: (| 0〉 ) + (|1〉 ) =1 , where crossover operator is added to the HGA between them.
p (| 0〉 ) is the probability of the current real state and Remark. In the classical genetic algorithm (GA), the
p (|1〉 ) is the probability of the current virtual state. The choice operator mimics Darwinian natural selection, im-
superposition of the quantum system "real state - virtual proving populations, promoting individuals with better
=
state" has the form:
ψ
1
2
( p ( 0 ) 0 + 1− p ( 0 ) 1
. ) fitness and “punishing” those who have the worst perfor-
mance. In the QGA, the choice is replaced by changing all
The next step is selection of the type of quantum cor-
individuals to the best. Therefore, when the rotation oper-
relation - constructing operation of entanglement. Three
ator updates the population, the population converges, but

Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619 37


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

usually the CGA falls into local optima that undergo pre- It can be seen that after about 30 populations, the value
mature convergence. To avoid this, the QGA often include of the fitness function ceases to change. HGA shows the
either a tape measure or an elite selection. For example, following results (see, Fig. 58).
QGA with a selection step is used in an improved K-means
600
clustering algorithm. There are even more extreme ap- 500

Fitness function
proaches, for example, when the QGA includes the selec- 400
300
tion and simulation algorithm for annealing, precluding 200

premature convergence. In other cases, the selection step 100


0
is enabled without resorting to the operators commonly 0 100 200 300 400 500 600

used in GA. Such a case of a semiclassical GA (where the Population

selection (choice) operator tends to maximize its suitabil-


Figure 58. Result of Hybrid Genetic Algorithm
ity through a quantum approach) for example using the
Grover algorithm [14, 18]. Remark. One of the interesting ideas was proposed in
Quantum mutation operator (inversion). In the GA 2004, taking the first steps in implementing the genetic al-
simulation, there is also a quantum version of the classical gorithm on a quantum computer [45]. The author proposed
mutation operator. The gate performs the inter qubit muta- this quantum evolutionary algorithm, which can be called
tion of the j-th qubit, replacing the amplitudes with Pauli's the reduced quantum genetic algorithm (RQGA).
quantum gate. The algorithm consists of the following steps: 1) Initial-
Quantum mutation operator (insertion). This gate re- ization of the superposition of all possible chromosomes;
sembles the biological mechanism for introducing chromo- 2) Evaluation of the fitness function by the operator F; 3)
somes. Chromosome insertion means that a chromosome Using Grover's algorithm; 4) Quantum oracle; 5) Using
segment has been inserted into an unusual position on the of the diffusion operator Grover G; 6) Make an evaluation
same or a different chromosome. The quantum version of of the decision. The search for solutions in RQGA is per-
this genetic mechanism involves a permutation or exchange formed in one operation.
between two randomly selected qubits (left, right). For ex- In this case the matrix form is the result of RQGA ac-
ample, suppose that, given the following chromosome, the tion as following (see, Fig. 59)
first and third qubits are chosen randomly.
The quantum transition operator (classical). A quan-
tum crossover is modeled similar to the classic recombi-
nation algorithm used in GA, but it works with probability
amplitudes. Although the quantum version of the mutation
can be implemented on a quantum computer, there are
theoretical reasons that prevent crossover.
Quantum crossover operator (interference). This quan-
tum operator performs crossover by recombination in ac-
cordance with a criterion based on drawing diagonals. As
a result, all individuals mix with each other, resulting in
progeny. Both (QGA and HQGA) quantum algorithms are
tested on example of the roots searching task of equation Figure 59. The Result of the RQGA Algorithm
5 After action of GQA more than 1000 generation we can
as: f ( x) = | x − 2 + sin( x) | .
QGA resulting performance indicates the following see on Fig. 60 that around 70% spatio-temporal correla-
(see, Fig. 57). tion have best probability choice.
80
600
Choice probability

500 60
Fitness function

400 40
300
200 20

100 0
0 0 1000 2000 3000 4000 5000 6000
0 20 40 60 80 100 120 140 160
Generation
Population
Q-S-T Q-T Q-S

Figure 57. Result of Quantum Genetic Algorithm


Figure 60. The Result of the QGA

38 Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

Temporal and spatial correlations have similar quality. universal for simply creating simulations of other tasks
After 5000 generations probability value is not changing. based on the prepared project.
QGA after 200 generations the probability choice of spa- Selection of development toolkit. Simulator access is as
tio-temporal correlation decreases to 60% (see, Fig. 61). simple as possible and it is implemented as a non-typical
web application. The diagram of the sequence of the user's
Q-S, 16
work with the system and the interaction of the model, the
presentation and the template are presented on Fig. 62.

Q-T, 24
Q-S-T, 60

Figure 61. The Result of the Quantum Genetic Algorithm


200 Times
The overall strategy for improving the quality of QGA
is to use small improvements in the algorithm. For exam-
ple, including new operators: “quantum disaster”, distur-
bance, or other customized algorithms [17]. But in many
cases, these operators are only useful in highly specific
applications [47-53]. Figure 62. Sequence Diagram of System
Simulator structure and examples of applications Most of the server side work is math. It is necessary
The use of simulators has long been used in various in- to calculate the position of the carriage, the angle of in-
dustries: motor racing, aviation, surgery and many others. clination of the pendulum in space. For this reason, Py-
The development of virtual reality technology and aug- thon and the Django framework, which implements the
mented reality adds the ability to create simulators with model-view-controller (MVC) approach, were chosen
full immersion. as the programming language (or in Django, this is the
Remark. In the development of quantum genetic algo- model-view-template (MVT)). MySQL is used to store all
rithm in this article on the model of the inverted pendu- data, and the architecture has been developed for adding
lum (autonomous robot) was discovered a few problems. Redis to be faster, if the MySQL operation speed is insuf-
Firstly, testing a written algorithm on a robot takes a lot of ficient.
time. Secondly, you may encounter an incorrectly working Figure 63 show Benchmark results of quantum intelli-
HW, and it is rather difficult to identify the malfunction gent control simulation of “cart - pole” system with QGA
itself. Thirdly, the GA is the selection of parameters that (box for the type choice of “Quantum correlation” on Fig.
work best in a particular situation, but it’s quite common 54).
that these parameters were very bad, which makes it diffi- Partial rendering performance of the simulator is shown
cult to set up a dynamically unstable object. in Fig. 63.
Description of the problem. The main goal of the
simulator development is SW testing, educational goals,
and the ability to observe the pendulum's behavior when
using various intelligent control algorithms with different
parameters: using only the PID controller, adding a fuzzy
controller to the ICS, using the GA and neural network,
using QGA. The simulator is interesting because it covers
many areas required for its implementation. There are also
many different ways of development: improvement of the
2D model or even implementation in 3D, control of the
pendulum in on line (changing the parameters of the pen-
dulum, adding various noises), making the simulator more (a) Visualization of Inverted Pendulum Behavior

Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619 39


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

80
cart motion,sм

30

-20 0 2 4 6 8 10 12 14

-70
Time, sec

PID FC1 FC2 Q-S-T Q-T Q-S

(b) Cart Behavior (Q-S-T – Quantum Spatio-temporal


Correlation; Q-T – Quantum Temporal Correlation;
Q-S – Quantum Spatial Correlation; FC – Fuzzy Con-
troller)

Figure 63. Simulation results of “cart - pole” system with


PID - controller, fuzzy controllers and QFI-QGA-control-
(b)
ler with different quantum correlation
The described method is differed from [47-53].
Figure 64. Autonomous Robot with Inverted Pendulum (a)
Example: Application of quantum computing opti- and Simulation & Experimental Results Comparison for
mizer of knowledge base (QCOPTKBTM) for the case of Unpredicted Control Situation in Cases of PID-controller,
experimental teaching signal from control object Control Fuzzy Controller and QFI-controller (b)
object shown on Fig. 64 a. Structure of robust ICS based
Results of controllers behavior comparison confirm
on QFI is shown on Fig. 53 and on Fig. 54 is shown QAG
the existence of synergetic self-organization effect in the
structure of QFI that used in the simulation and experi-
design process of robust KB on the base of imperfect (non
ment. On Fig. 64 b are demonstrated results of simulation
robust) KB of fuzzy controllers on Fig. 53. In unpredicted
and experimental results comparison. Mathematical mod-
control situation control error is dramatically changing
eling and experimental results are received for the case
and KB responses of fuzzy controllers (FC 1 and FC 2)
of unpredicted control situation and knowledge base of
that designed in learning situations with soft computing
fuzzy controller was designing with SW of QCOPTKBTM
are imperfect and do not can achieve the control goal.
for teaching signal measured directly from control object
Using responses of imperfect KB (as control signals for
(autonomous robot on Fig. 64 a). As model of unpredicted
design the schedule of time dependent coefficient gain in
control situation on Fig. 53 (Box Z -1 ) was the situation
PID-controller on Fig. 53) in Box QFI the robust control
of feedback sensor signal delay on three times.
is formed in on line. This effect is based on the existence
of additional information resource that extracted by QFI
as quantum information hidden in classical states of con-
trol signal as response output of imperfect KB’s on new
control error (QFI algorithm structure on line in Fig. 56).
QGA in Fig. 56 for this case recommended the spatial
quantum correlation as was early received in [54, 55].

2. Discussion
The described design method of ICS based on QAG-ap-
proach let to achieve global robustness in the case of
unpredicted control situations in online using new types
of computational intelligence toolkit as quantum and soft
computing and based on computational resource of classi-
(a) cal computers. The introduced model of QFI is a new type
of quantum search algorithm based on sophisticated struc-
ture of quantum genetic algorithm embedded in his struc-
ture. Such on an approach to the solution of robust control
design problems of classical nonlinear control objects

40 Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

(in general globally unstable and essentially nonlinear) is iv:1901.09988v1 [quant-ph], 2019.
considered as Benchmark for effective application of de- [2] L. Gyongyosi, Quantum circuit designs for
veloped design information technology of ICS [14, 54-58]. The gate-model quantum computer architectures [J]. arX-
results of simulation and experiment show unconventional iv:1803.02460v1 [quant-ph], 2018.
(for classical Boolean logic) conclusion: from response [3] R. M. Parrish, J. T. Iosue, A. Ozaeta, and P. L. Mc-
of two non-robust imperfect KB of FCs in the structure of Mahon, A Jacobi diagonalization and Anderson accel-
ICS on Fig. 53 with new quantum search algorithm QFI eration algorithm for variational quantum algorithm
possible to design in online robust quantum FC. parameter optimization [J]. arXiv: 1904.03206v1
With RQGA based on reduced Grover’s QSA used [quant-ph],2019.
spatial quantum correlation between two coefficient gain [4] V. Dunjko and H. J. Briegel, Machine learning & ar-
schedules of FCs in Fig. 53 and quantum self-organization tificial intelligence in the quantum domain: a review
of imperfect KBs in online effectively on classical stan- of recent progress [J]. Rep. Prog. Phys. 2018, 81(7):
dard chip realized and described on concrete example. 074001 (67pp)
This synergetic information effect has pure quantum DOI: 10.1088/1361-6633/aab406
nature, used hidden in classical states quantum informa- [5] C.D. Bruzewicz, J. Chiaverini, R. McConnell, and J.
tion as additional information resource and does not have M. Sage, Trapped-ion quantum computing: Progress
classical analogy. and challenges [J]. arXiv: 1904.04178v1 [quant-ph],
2019.
3. Conclusions [6] V. Bergholm, J. Izaac, M. Schuld et all, PennyLane:
Automatic differentiation of hybrid quantum-classi-
o New circuit implementation design method of quan- cal computations [J]. arXiv: 1811.04968v2 [quant-
tum gates for fast classical efficient simulation of QAs is ph], 2019.
developed. Benchmarks of design application as Grover’s [7] K. Bertels, I. Ashraf, R. Nane et all, Quantum com-
QSA and QFI based on QGA demonstrated. Applications puter architecture: Towards full-stack quantum accel-
of QAG approach in intelligent control systems with erators [J]. arXiv: 1903.09575v1 [quant-ph], 2019.
quantum self-organization of imperfect knowledge bases [8] T. Peng, A. W. Harrow, M. Ozols and X. Wu, Sim-
are described on concrete examples. ulating large quantum circuits on a small quantum
o The results demonstrate the effective application pos- computer [J]. arXiv: 1904.00102v1 [quant-ph], 2019.
sibility of end-to-end quantum technologies and quantum [9] P. Krantz, M. Kjaergaard, F. Yan et all, A Quantum
computational intelligence toolkit based on quantum soft engineer’s guide to superconducting qubits [J]. arX-
computing for the solution of intractable classical [59] and iv:1904.06560v1 [quant-ph], 2019.
algorithmically unsolved problems as design of global [10] National Academies of Sciences, Engineering, and
robustness of ICS in unpredicted control situations and Medicine. 2018. Quantum Computing: Progress and
intelligent robotics. Prospects (E. Grumbling and M. Horowitz, Eds) [B].
o Efficient simulation on classical computer quantum The National Academies Press, Washington, DC.
soft computing algorithms, robust fuzzy control based DOI: https://doi.org/10.17226/25196.
on quantum genetic (evolutionary) algorithms and quan- [11] M.A Nielsen and I.L Chuang, Quantum computation
tum fuzzy neural networks (that can realized as modified and quantum information [M]. Cambridge University
Grover’s QSA), AI-problems as quantum gate simulation Press, Cambridge, England, 2000.
approaches and quantum deep learning, quantum optimi- [12] S. Ulyanov, V. Albu and I. Barchatova, Quantum
zation in Part II are considered. algorithmic gates: Information analysis & design
o Thus, positive application results of mutual tech- system in MatLab [M]. LAP Lambert Academic Pub-
nologies based on soft and quantum computing give the lishing, Saarbrücken, 2014.
possibility of application Feymann - Manin thesis to study [13] S. Ulyanov, V. Albu and I. Barchatova, Design IT of
classical physical system as inverse problem “quantum Quantum Algorithmic Gates: Quantum search algo-
control system – classical control object” solve effectively rithm simulation in MatLab [M]. LAP Lambert Aca-
classical intractable and algorithmic unsolved problems. demic Publishing, Saarbrücken, 2014.
[14] S.V. Ulyanov, System and method for control using
References quantum soft computing [P]. US Patent No 7,383,235
B1, 2003; EP PCT 1 083 520 A2, 2001; Efficient
[1] O. Kyriienko, Quantum inverse iteration algo- simulation system of quantum algorithm gates on
rithm for near-term quantum devices [J]. arX- classical computer based on fast algorithm [P]. US

Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619 41


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

Patent No 2006/0224547 A1, 2006. 168.


[15] S. Ulyanov, Method and hardware architecture for DOI:10.4172/jcsb.1000151
controlling a process or for processing data based on [28] T.M. Forcer, et all Superposition, entanglement and
quantum soft computing (Inventors: Ulyanov S., Riz- quantum computation [J]. Quantum Information and
zotto G.G., Kurawaki I., Amato P. and Porto D.) [P]. Computation, 2002, 2(2): 97-116.
PCT Patent WO 01/67186 A1, 2000. [29] J. Pilch, J. Długopolski, An FPGA-based real quan-
[16] D. M. Porto and S.V. Ulyanov, Hardware implemen- tum computer emulatorю [J]. J. of Computational
tation of fast quantum searching algorithms and its Electronics, 2018. available:
application in quantum soft сomputing and intelligent https://doi.org/10.1007/s10825-018-1287-5
control [C]. In: Proc. World Automation Congress [30] A.J. McCaskey, E.F. Dumitrescu, D. Liakh, M. Chen,
(5th Intern. Symp. on Soft Computing for Industry). W. Feng, T.S. Humble, A language and hardware in-
Seville, Spain, 2004 (paper ISSC I31). dependent approach to quantum–classical computing
[17] S.V.Ulyanov, et al. Quantum information and quan- [J]. Software X 7, 2018, 2: 245-254.
tum computational intelligence: Classically efficient [31] A.R. Colm, R.J. Blake R., B.D. Diego and T. A.
simulation of fast quantum algorithms (SW / HW Ohki, Hardware for dynamic quantum computing [R].
Implementations). [M] Note del Polo, Milan Univ, arXiv:1704.08314v1 [quant-ph], 2017.
2005, 79. [31] Zeng-Bing Chen, Quantum Neural network and
[18] L.K. Grover, A fast quantum mechanical algorithm soft quantum computing [R]. arXiv:1810.05025v1
for database search [P]. US Patent US 6,317,766 B1, [quant-ph], 2018.
2001. [32] J. B. Vega, D. Hangleiter, M. Schwarz, R. Rauss-
[19] S. Bose, L. Rallan, V. Vedral, Communication capac- endorf, and J. Eisert, Architectures for quantum
ity of quantum computation [J]. Phys Rev Lett. 2000, simulation showing a quantum speedup [J]. Physical
(85): 5448-5451. Review, 2018, X 8: 021010.
[20] E. Arikan, An information-theoretic analysis of [33] A.M. Childs, D. Maslov, Y. Nam, N. J. Ross and Y.
Grover’s algorithm [J]. arXiv:quant-ph/0210068v2, Su, Toward the first quantum simulation with quan-
2002. tum speedup [J]. PNAS, 2018. 115(38): 9456-9461
[21] F. Ghisi and S. Ulyanov, The information role of https://doi.org/10.1073/pnas.1801723115
entanglement and interference operators in Shor [34] K. Michielsen, M. Nocon, D. Willsch, F. Jin, T.Lip-
quantum algorithm gate dynamics [J]. J. of Modern pert, H. De Raedt, Benchmarking gate-based quan-
Optics, 2000. 47(12): 2079-2090. tum computers [J.] . Computer Physics Communica-
[22] M. Branciforte, A. Calabrò, D. M. Porto, and S.V. tion, 2017, 220: 44-55.
Ulyanov, Hardware design of main quantum algo- http://dx.doi.org/10.1016/j.cpc.2017.06.011
rithm operators and application in quantum search [35] Patrick J. Coles et all, Quantum algorithm implemen-
algorithm of unstructured large data bases [C]. In: tations for beginners [R]. arXiv:1804.03719v1 [cs.
Proc. of the 7th World Multi-Conference on Systems, ET], 2018
Cybernetics and Informatics (SCI ’2003), Florida, [36] K. A. Britt, F. A. Mohiyaddin, and T. S. Humble,
Orlando, USA, 2003. Quantum accelerators for high-performance com-
[23] J. Niwa, K. Matsumoto, H. Imai, General-purpose puting systems [R]. arXiv:1712.01423v1 [quant-ph],
parallel simulator for quantum computing [J]. Physi- 2017.
cal Review A, 2002. 66(6). [37] Zhao-Yun Chen and Guo-Ping Guo [R], QRunes:
[24] L. Valiant, Quantum computers that can be simulated High-level language for quantum-classical hybrid
classically in polynomial time [C]. In: ACM Proc. programming [R]. arXiv:1901.08340v1 [quant-ph],
STOC’01, Greece, 2001: 114-123. 2019.
[25] L. Valiant, Quantum circuits that can be simulated [38] Y. H. Lee, M. Khalil-Hani, and M. N.Marsono, An
classically in polynomial time [J]. SIAM J. Comput. FPGA-based quantum computing emulation frame-
2002, 31(4):1229-1254. work based on serial-parallel architecture [J]. Intern.
[26] C. Huang, M. Newman, and M. Szegedy, Explicit J. of Reconfigurable Computing, 2016. Vol. 2016,
lower bounds on strong quantum simulation [R]. Article, ID: 5718124.
arXiv:1804.10368v2 [quant-ph], 2018. http://dx.doi.org/10.1155/2016/5718124
[27] A. Rybalov, E. Kagan, A. Rapoport and I. Ben-Gal, [39] F. K. Wilhelm et all, Entwicklungsstand Quanten-
Fuzzy implementation of qubits operators [J]. Com- computer [M]. Federal Office for Information Securi-
puter Science and Systems Biology. 2014. 7(5): 163- ty. Bonn, 2017.

42 Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

https://www.bsi.bund.de [50] A. Arjmandzadeh, M. Yarahmadi, Quantum genetic


[40] G. D. Paparo, V. Dunjko, A Makmal, M. A. Mar- learning control of quantum ensembles with Hamil-
tin-Delgad, and H. J. Briegel [J], Quantum speedup tonian uncertainties [R], Switzerland, 2017.
for active learning agents. Pysical Review, 2014, X 4: [51] H. Wang, J. Liu, J. Zhi. The improvement of quan-
031002. tum genetic algorithm and its application on func-
[41] Yi-Lin Ju, I-Ming Tsai, and Sy-Yen Kuo, Quantum tion optimization [R]. College of Field Engineering,
circuit design and analysis for database search ap- PLA University of Science and Technology, Nanjing
plications [J]. IEEE Trans. On Circuits and Systems. 210007, China, 2013.
2007, 54(11): 2552-2563. [52] A. Malossini, E. Blanzieri, T. Calarco, QGA: A quan-
[42] M. Suchara, Y. Alexeev, F. Chong, H. Finkel, H. tum genetic algorithm [R]. Technical Report # DIT-
Hoffmann, J. Larson, J. Osborn, and G. Smith, Hy- 04-105. – University of Toronto, 2004.
brid quantum-classical computing architectures [C]. [53] S.V. Ulyanov, K. Takahashi, L.V. Litvintseva and T.
In: Proc. 3rd INTERN. WORKSHOP ON POST- Hagiwara, Design of self-organized intelligent con-
MOORE’S ERA SUPERCOMPUTING (PMES), trol systems based on quantum fuzzy inference: In-
PMES Workshop, Dallas, 2018. telligent system of systems engineering approach [C].
http://j.mp/pmes18 Proc. of IEEE Intern. Conf. SMC’ , Hawaii, USA,
[43] D. Koch, L. Wessing, P. M. Alsing, Introduction to 2005, 4: 3835- 3840.
coding quantum algorithms: A tutorial series using [54] L.V. Litvintseva and S.V. Ulyanov, Quantum fuzzy
Qiskit [R]. arXiv:1903.04359v1 [quant-ph]. 2019 inference for knowledge base design in robust intel-
[44] S.V. Ulyanov, Quantum soft computing in control ligent controllers [J]. J. of Computer and Systems
processes design: Quantum genetic algorithms and Sciences Intern. 2007, 46(6): 908-961.
quantum neural network approaches [C]. In: Proc. [55] S.V. Ulyanov, K. Takahashi, G.G. Rizzotto and I.
WAC (ISSCI’) 2004 (5th Intern. Symp. on Soft Com- Kurawaki, Quantum soft computing: Quantum glob-
puting for Industry), Seville Spain, 2004, 17: 99-104. al optimization and quantum learning processes –
[45] S.V. Ulyanov, Self-organizing quantum robust con- Application in AI, informatics and intelligent control
trol methods and systems for situations with uncer- processes [C]. In: Proc. of the 7th World Multi-Con-
tainty and risk [P]. Patent US 8788450 B2, 2014. ference on Systems, Cybernetics and Informatics,
[46] P. Chandra Shill, F. Amin, K. Murase, Parameter op- (SCI ’2003), Florida, Orlando, USA, 2003.
timization based on quantum genetic algorithms for [56] S. Ulyanov, F. Ghisi, V. Ulyanov, I. Kurawaki and L.
fuzzy logic controller [R]. Department of System De- Litvintseva [M] Simulation of Quantum Algorithms
sign Engineering University of Fukui, 3-9-1 Bunkyo, on Classical Computers, Universita degli Studi di
Fukui-910-8507, 2011. Milano, Polo Didattico e di Ricerca di Crema, Note
[47] P. Chandra Shill, B. Sarker, M. Chowdhury Urmi, K. del Polo, 2000, 32.
Murase. Quantum fuzzy controller for inverted pen- [57] D.M. Porto, S.V. Ulyanov, K. Takahashi, and I.S.
dulum system based on quantum genetic optimiza- Ulyanov, Hardware implementation of fast quantum
tion [J] Intern. J. of Advanced Research in Computer searching algorithms and its applications in quantum
Science, 2012. soft computing and intelligent control [C]. Proc.
[48] R. Lahoz-Beltra, Quantum genetic algorithms for Word Automation Congress (WAC’2004), Seville,
computer scientists [J]. Computers. – 2016, 5: 24. Spain, 2004.
[49] Cheng-Wen Lee, Bing-Yi Lin. Applications of the [58] Ch. H. Papadimitriou and J. Tsitsiklis, Intractable
chaotic quantum genetic algorithm with support vec- problems in control theory [J]. SIAM J. Control and
tor regression in load forecasting [M], Switzerland, Optimization, 1986. 24 (4): 639-654.
2017.

Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.619 43


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

Artificial Intelligence Advances


https://ojs.bilpublishing.com/index.php/aia

REVIEW
Architecture of a Commercialized Search Engine Using Mobile Agents
Falah Al-akashi*
Faculty of Engineering, University of Kufa, IRAQ

ARTICLE INFO ABSTRACT

Article history Shopping Search Engine (SSE) implies a unique challenge for validating
Received: 26 March 2019 distinct items available online in market place. For sellers, having a user
finding relevant search results on top is very difficult. Buyers tend to click
Accepted: 25 April 2019 on and buy from the listings which appear first. Search engine optimiza-
Published Online: 30 April 2019 tion devotes that goal to influence such challenges. In current shopping
search platforms, lots of irrelevant items retrieved from their indices; e.g.
Keywords: retrieving accessories of exact items rather than retrieving the items itself,
Product Search regardless the price of item were considered or not. Also, users tend to
move from shoppers to another searching for appropriate items where the
Industrial Information Retrieval time is crucial for consumers. In our proposal, we exploit the drawbacks
Ecommerce of current shopping search engines, and the main goal of this research is
Market Space to combine and merge multiple search results retrieved from some highly
professional shopping sellers in the commercial market. Experimental
results showed that our approach is more efficient and robust for retriev-
ing a complete list of desired and relevant items with respect to all query
space.
CCS CONCEPTS
Information systems - Commercial-specific retrieval

 
1. Introduction propagated to the users immediately [10]. This superiority

T
comes from the advantage of the structure of online prod-
raditional information retrievals provide services uct catalogues with clearly identified characteristics, such
to help users locate content on the World Wide
as price, description, and features. Similar to traditional
Web (WWW). Most information retrievals assist
information retrieval algorithms, shopping search algo-
most users to find generally accessible data, but others
rithm is kept secret for dealing with idea of the algorithm.
focus on particular data that are available privately. Users
Recently, shopping searchers provide some valuable
turn to search algorithms to find useful and high-relevant
information. Shopper’s information retrievals are different results that are simply beyond the algorithms of general
from general-purpose Web information retrievals if the searchers. However, it is fair to validate and assess quality
user tends to search for commercial items. Unlike a tradi- of information since there are no quality standards or test-
tional web archive, a marketplace such as eBay sees rapid ing algorithms [13]. Retrieval algorithms vary in different
change to that document collection, with approximately models and there are few models to evaluate its quality.
20% of the collection changing every day. Also unlike a One of the drawbacks of current shopping engines is that
web archive, changes in the document collection must be they do not support users in finding the specific items

*Corresponding Author:
Falah Al-akashi,
Faculty of Engineering, University of Kufa, IRAQ;
Email: falahh.alakaishi@uokufa.edu.iq

44 Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.688


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

respecting the desired properties in one-click. Alterna- gine. For instance, when a user queries a system by “Cam-
tively, they allow buyers to use one keyword to describe era Sony blue 400$” or “laptop hp core duo Red $900”,
the category that contains all items in the same properties. the commercial searchers allow short query string; e.g. “2
Therefore, the buyer tends to jump from category to cate- keywords” or even though they do not deal with the price.
gory and from page to page in order to find the appropri- More concretely, recent shopping search results include
ate items or products. If buyers cannot find the relevant several irrelevant items, such as the accessories that are
product in short period, they will go to another shopper not related to the specific item.
to enhance their search. However, shopping searchers, as Our model uses some of major shopping services:
general web searchers, look for optimization in the com- “Walmart”, “Amazon”, “UsedOttawa”, “eBay”, “buy”,
mercialized websites using search models. Information “FUTURESHOP”, “BESTBUY”, “Zellers”, “Shopping”,
in some economics literature assumed that search cost “Overstock”, and “Karmaloop” to manipulate with dif-
in some products has extremely reduced to zero since ferent kind of items. The proposed system uses a ranking
consumers are able to utilize professional search tools schema showed by sorting and filtering for reordering
which are free of charge and easily to find and compare the relevant items depending on the extracted parameters
products on the Internet [14, 15]. Existing literature on buy- (adding taxes and shipping, and then ranking the final
ing search behaviour finds that using search tools to look costs). Generally, shopping search engine, as general
for items versus its price dominates other search models search engines, composites from four essential parts:
[16]
. Often, sellers focus on maximizing the traffic that crawling, indexing, searching, and presenting the results
[24]
comes via search engines to their searcher models. On the . Figure 1 below shows the architecture of our system.
other hand, e-commerce is another business channel that
developed and improved very fast; and consequently, the
strategy has been improved numerous strong commercial
organizations. Individual customer level has been rarely
examined this is due to the use of the real-time persistent
successful commerce [18]. However, due these difficulties,
the obstacles are not merely affected the commercial
searchers; but also the well-known classical or traditional
searchers. The current general search engines, e.g. Goo-
gle, MSN,.., etc. retrieve results from stores based on the
Figure 1. The Anatomy of Our Search Approach
indexed keywords or metadata meanwhile there is no sub-
stantial parameters, features, or arguments to rank them.
2.1 Crawling the Collection
In the commercial searchers, for example, each search
agent ranks its results locally based on the available items Generally, crawler or scraper is an algorithm that able to
stored in its index and there is no way to retrieve and download web content by following hyper-links within
compare the results available in other stores. these web pages to download the remote contents. The
crawled contents can be in the same or in another domain.
2. Our Overall Approach The scrapping continues and expires until reaching a par-
As a solution to the mentioned constraints or obstacles, we ticular depth e.g. no external links or the number of levels
aim in this approach to address it by merging results from inside the link structure. Current search engines fail to
different stores and index them in a real-time index using index the content of shopping websites correctly due to
a herein proposed ranking algorithm. The index algorithm the content mobility that change frequently every second.
uses comparable strategy for filtering and ranking items. The Web is changing more and more and the content is
Our approach is programmed to give a model to commer- dynamic by nature and include a lot of client-side and cli-
cialized sellers which aim to improve their products with ent-server interactivity [19]. We use our scratch algorithm
high-level customer service. Making the matched items on for scrapping the specified shoppers as mentioned, in
the top of the list is the main goal of most product search which we used “REST HTTP” to crawl our shopping sell-
models. We aimed to design a model that compromises ers. The retrieved data was aggregated in an xml file for
and addresses the drawbacks of approaches and retrieves parsing.
the desired items from industrial sellers by manipulating
2.2 Parsing the Data
them as a stream of features. Manipulating the precise
query string is the main challenge for industrial search en- Our system is designed to convenient to the user through

Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.688 45


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

using some features; such as selecting a specific seller, se- are stored in vector space in an XML file to merge them
lect all sellers, and there is no restriction in query length. with the other results from other sellers. Sorting is used
As we mentioned, the proposal approach implies the eight to reorder all items by users preferences e.g. “price”; in
largest Canadian shopping sellers. The users are able to which, the list of items is ordered from the highest price to
select the all sellers in one step, or cancel any of them. the desired price up to the cheapest item. The final result
The query field is corporate to receive the description of is presented to the user includes the full matching items
items from the user by defining them as a set of keywords. that satisfy to the user’s query.
The query might involve full description to reduce the
effect of ambiguity or the interference with other items 2.3 Filtering the Results
that shared similar descriptions for different senses of the Reducing the relevant results in the searching list is the
keywords. The description might start with the name of second challenge in all product sellers. Developing re-
item, and then more precise description, e.g. the price, liable filtering services for concerning some issues is
color, internal components, and so forth (e.g. “Laptop Dell a serious and challenging problem [21]. Often, agents in
Core Duo blue $700”). One of the basic reasons why the commercial shoppers use criteria that are relevant to users
first term is referred to the name of item is that it uses for by viewing only the product that exploited the goal of
crawling and for connecting all shopping sellers. Means, that algorithm. For instance, in price metric, users may
the purpose of search each seller is to collect all pages pick products exclusively under $100 by excluding all
and documents related to that item. The following are products over that price. Due to the historically poor of
REST HTTP requests and regular expressions are used to search results, users often browse many sellers to find a
scrap the corresponding sellers and to parsing the retrieval desired product (In some cases, users turn some search
items: functions only if they cannot locate what they seeking).
Thereafter, attributes are extracted from each document Often, filtering is very important for users who have little
by splitting the document into their items. Unfortunately, knowledge about the product. Filtering is more useful
each seller has its own strategy that differs from others when there are many different arguments involved to a
for storing the attributes in the documents; that means the product. Shoppers often use a tool to persuade and in-
documents in all sellers are unstructured. To overcome fluence a purchaser using a global filtering tool for their
this difficulty, our algorithm used the prominent tags for range of watches. Commercial product listings augment
extracting and parsing items. Filtering, sometimes called common filtering features, e.g. 'price', to enforce many us-
sifting, is used to filter out each document separately by ers by their familiar tools. Moreover, they use other more
discarding all items that did not match the user’s que- concise filters, e.g. ‘Color’; ‘Class’, ‘Sex’, or ‘Age’. Some
ry keywords. Unfortunately, most search engine sellers shopping searchers use filtering to classify items into
merge all items related to specific keywords in the same classes or categories, e.g. “Electronics”, “Books”, “Arts”,
category, e.g. “hp keyboard”, which means they merge etc. However, we compromised our search approach to
“hp computers” with “hp accessories”. This also means, use filtering attribute.
search engines were not able to distinguish between ‘hp’
as a singular word or as part of another word; all the parts 2.4 Sorting the Results
in the same category were combined in one step. Hence,
we used filtering algorithm to discard all irrelevant acces- Sorting the results according to some properties is also
sories and items from the final list. Price attribute is used important for all product search engines.
to filtering out all items that imply price more than that Sorting algorithm takes Web page content and creates
queried by a user, e.g., “Toyota white Camry $10000” keywords searching that enable online users to find pages
means discard all items that are not satisfy that properties. they're looking for [20, 21]. Changing the relevancy of any
Some filters were applied depending on the items and the item listing where the users can impose which strate-
query string, such as: color, type, size, sex, model, and gy they want the items to be involved. For example, in
so forth. The system recommended users to querying the “price”, users prefer to list the items based on price from
system using a price rather than without a price. Often, low to high. Moving items with a certain feature on the
our algorithm uses a function that successfully applies top of page will help users who are not sure what they
to all items in order to filter the accessories related to the looking for. Reducing number of items in the product
specified price. If the user does not specify the price, the listing and moving some items from place to place is the
result probably returns some related accessories (2%) if main strategy for most information retrieval algorithms.
they existed in that category. Thereafter, all resulted items The “eBay” and “Amazon” shoppers, for instance, pro-

46 Distributed under creative commons license 4.0 DOI: https://doi.org/10.30564/aia.v1i1.688


Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019

The "eBay" and "Amazon" shoppers, for instance, provide many criteria to sort search items, such as "sort by price" to sort within a price range (e.g., from $ to $) or by size (e.g., from inch to inch). Consumers somehow find it helpful to use sorting parameters, e.g. 'Bestselling', 'Publication date' or 'Average Customer Review'. Some shopping search engines do not scale well to support users in sorting the resulting items; hence, they mix the items without paying attention to the users' needs. This means extra effort is needed to find specific items.

3. Query Processing

A more complex mining task is that of determining user intent rather than simply disambiguating the query string [10]. Most information retrieval algorithms use queries made up of a few keywords or short phrases. Other, non-textual searchers allow users to pose queries in more exotic forms, e.g. hummed tunes or pictures. In whichever form, the user tends to provide the search algorithm with some feature to reduce the number of possible items. Approximating the user's intentions in typical queries is another problem that needs to be addressed. Because some words in a query have many synonyms, a query may lead to different possibilities, even within one category. Furthermore, users may not have a clear idea of what they are looking for. With these realistic challenges, we adopt a scalable product technique with highly efficient resources, which uses a support vector machine to compute the similarity between the seller's result vector and the user's vector. First, the items are classified into the correct level to significantly normalize the search volume. Secondly, a vector similarity is computed with the weighting technique proposed by [12]; finally, the process ends with a ranking based on some criteria (brand, color, price, etc.) to ensure that the ranked similar items are located not just within a particular class or category but also across similar aspects. The term weight in the vector space W is defined as follows: if item k does not exist in document d_i then w_ik = 0; if item k exists in document d_i then w_ik > 0 (w_ik denotes the weight of a term k in document d_i). The coefficient of similarity between d_i and q_j is defined as follows:

S_{similarity}(d_i, q_j) = \frac{\sum_{k=1}^{n} w_{ik}\, w_{jk}}{|d_i|\,|q_j|} \wedge P_{q_j}    (1)

where d_i and q_j are the weighted vectors, |d_i| is the length of the document vector d_i, and P is the mean value of the price (greater than "0" and less than or equal to "p").
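As a minimal sketch of how the similarity of Equation (1) can be computed, the Python snippet below evaluates the cosine-style ratio between a seller item vector d_i and a user query vector q_j represented as term-weight maps; the optional price check stands in for the P condition. The data structures and parameter names are illustrative assumptions, not the exact SAMA implementation.

```python
import math

# Sketch of Equation (1): cosine-style similarity between a seller item vector
# d_i and a user query vector q_j, both given as {term: weight} dictionaries.
def similarity(d_i, q_j, item_price=None, max_price=None):
    # Numerator: sum of w_ik * w_jk over the terms shared by item and query.
    dot = sum(w * q_j[k] for k, w in d_i.items() if k in q_j)
    # Denominator: |d_i| * |q_j|, the lengths of the two vectors.
    norm = math.sqrt(sum(w * w for w in d_i.values())) * \
           math.sqrt(sum(w * w for w in q_j.values()))
    score = dot / norm if norm else 0.0
    # Price condition standing in for P: price greater than 0 and at most the
    # queried price; items that violate it are scored as non-matching.
    if max_price is not None and not (item_price and 0 < item_price <= max_price):
        return 0.0
    return score

item = {"laptop": 0.8, "dell": 0.6, "core": 0.2}
query = {"laptop": 1.0, "dell": 1.0}
print(round(similarity(item, query, item_price=850, max_price=900), 3))
```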
4. Experimental Evaluation

In terms of evaluation, we aim to meet the user's requirements, and the results must fit the commercial conditions for comparing the different prices available for similar items in different stores. Often, public sellers put their relevancy in a global domain and rely on independent users to perform the judgments. Human relevance judgment is popular for evaluating the sellers in a market [23]. In this section, we perform our judgment with crowdsourcing and discuss the experimental results obtained from the sellers. Once we have retrieved tens of results for each query and have completed judging hundreds of queries, we are able to compute the metrics and make comparisons between sellers' algorithms. Information retrieval algorithms usually use F-measure, precision, recall, and NDCG (Normalized Discounted Cumulative Gain) for computing the accuracy of results [24]. They are a mouthful, but they are realistic measures. By contrast, assuming four-point scores, we assign "zero" for irrelevant items, "one" for partially relevant, "two" for relevant, and "three" for highly relevant. Consider a query judged in this way, where the first four results that the searcher returns are evaluated as relevant, irrelevant, highly relevant, and relevant. The cumulative gain sums to a score of "7"; that is, the results are evaluated gradually based on the proposed ranked relevancy. More concretely, researchers have shown that the goal of search engines is to return highly relevant results at the top of the first page. The Discounted Gain takes this into consideration, which means that if the 3rd result is "highly relevant", the rank is first; but when the 1st result is "relevant", the rank is "third". However, the discounted score after four positions is 3.5 = 2/1 + 0/2 + 3/3 + 2/4. The DCG at a particular rank position p is defined as:

DCG_p = \sum_{i=1}^{p} \frac{rel_i}{\log_2(i+1)}    (2)

The average performance of a ranking algorithm can be obtained from the Normalized DCG (nDCG) values over all queries; that is, a perfect ranking produces 1.0, and other values fall in the interval between 0.0 and 1.0, making them comparable across queries.
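To make the metric concrete, the sketch below reproduces the worked example above in plain Python: the four-point gains 2, 0, 3, 2 (relevant, irrelevant, highly relevant, relevant) give a cumulative gain of 7 and a position-discounted score of 3.5, while the log2(i+1) discount of Equation (2) and the corresponding nDCG are also shown. This is only an illustration of the formulas, not code from the paper.

```python
import math

# Gains for the example ranking: relevant, irrelevant, highly relevant, relevant.
rels = [2, 0, 3, 2]

cumulative_gain = sum(rels)                                    # 7
dcg_by_position = sum(r / i for i, r in enumerate(rels, 1))    # 2/1 + 0/2 + 3/3 + 2/4 = 3.5
dcg = sum(r / math.log2(i + 1) for i, r in enumerate(rels, 1)) # Equation (2)

# nDCG: divide by the DCG of the ideal (best possible) ordering of the same gains.
ideal = sorted(rels, reverse=True)
idcg = sum(r / math.log2(i + 1) for i, r in enumerate(ideal, 1))
ndcg = dcg / idcg                                              # 1.0 only for a perfect ranking

print(cumulative_gain, dcg_by_position, round(dcg, 3), round(ndcg, 3))
```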
Table 1 shows the ranking of 20 queries run in all shoppers concurrently. The results show the Cumulative Gain values to represent the accuracy of relevant items. The result of each seller is very important for comparison and for influencing the users to create their preference. All sellers discarded the attribute "price" in the query if it was implied, and their results are both higher and lower than the queried price; whereas the results in our approach respect all query attributes, e.g. price, color, type, etc. The left value denotes the number of irrelevant items retrieved incorrectly, whilst the value on the right side represents the total number of retrieved items (relevant plus irrelevant). Some fields were left empty because no results were returned by a seller for those queries, or because the seller was not able to process a stream of keywords in the query field.


Table 1. The Accuracy of Twenty Queries Run in 11 Shoppers

Query | Amazon | Wal-Mart | eBay | Used Ottawa | Buy | Best Buy
Laptop dell $900 | 16/159 | 33/44 | 7/15 | 48/50 | 0/4 | 0/15
Camera Sony blue LCD $300 | 2/12 | 0/3 | 3/10 | 4/5 | -- | 13/13
Chair blue $200 | 21/46 | 8/48 | 27/51 | 28/50 | -- | 1/1
Toyota Camry 10000$ | -- | -- | 87/300 | 17/17 | 11/50 | --
Shoes women $160 | 10/47 | 14/48 | 24/78 | 6/46 | -- | 4/4
Printer HP laser $250 | 1/2 | 14/15 | 7/7 | 32/50 | -- | 0/4
Book Ecommerce new $80 | 17/36 | 48/48 | 11/33 | 1/1 | -- | 11/11
Transport medical chair | 35/120 | 27/27 | 0/59 | 2/2 | -- | 16/16
Shirt short girl | 46/48 | 14/37 | 4/50 | 2/10 | -- | 5/5
Laptop HP core duo 900$ | 3/3 | 17/52 | 17/52 | 39/41 | -- | 0/2
Table wood half-moon $200 | 29/51 | 0/1 | 11/29 | 12/14 | 2/2 | 10/10
Hat newsboy black wool | 2/2 | -- | 8/8 | 12/23 | -- | 27/27
Tricycle red $50 | 10/11 | 0/4 | 9/80 | 1/9 | 4/17 | 20/20
Toyota tire $500 | 15/16 | 48/48 | 17/21 | 1/1 | 19/25 | 3/3
Coat gray 150$ | 29/31 | 33/33 | 14/50 | 28/28 | -- | 12/12
Drill hammer 50$~75$ | 7/16 | 46/48 | 15/41 | 43/50 | -- | 26/26
Washer dryer $700 | 8/16 | 10/11 | 1/17 | 6/17 | 3/6 | 14/14
GPS $200 | 13/72 | 22/48 | 12/50 | 28/50 | 14/25 | 8/11
Coat women wool gray 100$ | 8/15 | -- | 9/50 | -- | -- | 13/13
Stroller safari double jogging 200$ | 1/5 | 0/5 | 2/50 | 0/6 | -- | 14/14
Average Error Rate | 273/708 (38.5%) | 334/520 (64.2%) | 285/1051 (27.1%) | 310/470 (65.9%) | 53/129 (41%) | 197/206 (95.6%)

Query | Overstore | Karmaloop | Zellers | Shopping | Future Shop | SAMA
Laptop dell $900 | 16/159 | 14/47 | 48/48 | -- | 0/2 | 0/43
Camera Sony blue LCD $300 | 2/12 | 2/3 | 48/48 | -- | 3/3 | 0/6
Chair blue $200 | 21/46 | 37/96 | 48/48 | -- | 0/1 | 4/76
Toyota Camry 10000$ | 2/2 | 40/40 | 48/48 | -- | 1/1 | 0/32
Shoes women $160 | 10/47 | 11/48 | 24/78 | -- | 15/15 | 0/97
Printer HP laser $250 | 1/2 | 48/48 | 7/7 | -- | 0/2 | 1/107
Book Ecommerce new $80 | 17/36 | 48/48 | 11/33 | -- | 15/15 | 0/2
Transport medical chair | 35/120 | 27/27 | 4/4 | -- | 15/15 | 0/27
Shirt short girl | 46/48 | 14/37 | -- | 7/19 | 15/15 | 1/54
Laptop HP core duo 900$ | 3/3 | 17/52 | 40/40 | -- | 0/1 | 1/23
Table wood half-moon $200 | 29/51 | 2/40 | 40/40 | -- | 15/15 | 1/17
Hat newsboy black wool | 2/2 | 0/40 | 0/1 | -- | 15/15 | 0/64
Tricycle red $50 | 10/11 | 21/40 | 40/40 | -- | 15/15 | 7/22
Toyota tire $500 | 10/10 | 28/40 | 2/2 | -- | 15/15 | 0/11
Coat gray 150$ | 29/31 | 17/40 | 2/18 | -- | 15/15 | 7/34
Drill hammer 50$~75$ | 3/5 | 7/40 | 6/6 | -- | 15/15 | 0/59
Washer dryer $700 | 40/41 | 7/40 | -- | -- | 15/15 | 3/33
GPS $200 | 13/72 | 11/48 | -- | -- | 2/15 | 10/140
Coat women wool gray 100$ | 8/15 | 1/40 | 5/6 | -- | 15/15 | 0/13
Stroller safari double jogging 200$ | 1/5 | 0/5 | 40/40 | -- | 15/15 | 0/22
Average Error Rate | 298/718 (41.5%) | 352/779 (45.1%) | 413/507 (81.4%) | 7/19 (36.8%) | 201/220 (91.3%) | 35/882 (3.9%)

As stated in the table, the attribute "price" was not processed correctly and was discarded by all sellers. According to our evaluation based on the average error rate given by the Cumulative Gain metric, the "ebay.com" seller was ranked first because it has a lower error rate, "27.1%". Moreover, globally, it is categorized as a major seller [1]. Likewise, "futureshop.ca" has the highest error rate, showing that the seller was not able to influence its users in their query search, which is probably the reason why the seller shut down later. Finally, the error rate for our system was "3.9%", which means that the precision value of our search model is "96.1%".
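For clarity, the short sketch below shows how the "Average Error Rate" row of Table 1 is obtained: the irrelevant counts and the retrieved totals are pooled (summed) across the twenty queries before dividing, and the precision is the complement of that error rate. The (irrelevant, retrieved) pairs below are the SAMA column of Table 1; the paper reports the resulting values rounded to 3.9% and 96.1%.

```python
# Pooled error rate as in the "Average Error Rate" row of Table 1:
# sum the irrelevant counts and the retrieved totals over all queries, then divide.
# The (irrelevant, retrieved) pairs below are the SAMA column of Table 1.
sama = [(0, 43), (0, 6), (4, 76), (0, 32), (0, 97), (1, 107), (0, 2), (0, 27),
        (1, 54), (1, 23), (1, 17), (0, 64), (7, 22), (0, 11), (7, 34), (0, 59),
        (3, 33), (10, 140), (0, 13), (0, 22)]

irrelevant = sum(bad for bad, total in sama)   # 35
retrieved = sum(total for bad, total in sama)  # 882
error_rate = irrelevant / retrieved            # 35/882, reported as 3.9%
precision = 1.0 - error_rate                   # reported as 96.1%

print(irrelevant, retrieved, f"{error_rate:.2%}", f"{precision:.2%}")
```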


Although merging results is a more complicated process than handling individual results, our model improved the results and currently functions in this way for engineering reasons. Figures 2 and 3 show our experimental results over different periods of time using two metrics.

Figure 2. Discounted Cumulative Gain in a Different Period of Time

Figure 3. Average Precision in Different Periods of Time

5. Experimental Running

Several shopping search engines are available publicly [4]. Our shopping approach was designed to generalize the information of items, including a screenshot of the item, the title, the price in the desired range, and a full description of the item (snippets). If the user clicks on the desired item, he/she is transferred to the actual item available at that seller. Many features are involved in our approach; that is, a user is able to navigate in the frontend using forward and backward. The following Figures 4, 5, and 6 present our visual search system running three query strings and the corresponding results:

Figure 4. Our Searching Results for the Query "Chair Blue $200"

Figure 5. Our Searching Results for the Query "Laptop Dell Core $800"

Figure 6. Our Searching Results for the Query "TOYOTA Camry LE $8000"

Figure 7 shows the output of running the query "Laptop Dell Core $800" at "ebay.com", representing a top seller. The resulting list includes different items with all their non-relevant accessories, e.g. memories, chargers, bags, etc., and prices lower and higher than the queried price.

Figure 7. The Visual Search for the Query "Laptop Dell Core $800" Run at the Ebay.com Website

6. Related Works

Online commercialized product recommendations have been explored by several traditional models. For instance, the "Yoda" approach combined parametric filtering with


content-based query for satisfying product recommendations [12]. Genetic algorithms have been used for fast online recommendation models by combining data from user navigation patterns. For instance, the Online Purchase Environment system (HOPE) is a helpful model that used data mining to generate suggestions predicting both the user's query and the content of items [9]. Tagged fields with products were used to organize them into a hierarchical structure, and then a nearest neighbour algorithm was used to find related products using customers' purchase history. Many ontologies were used to build smarter seller models. The most successful ontology models were able to map a query into a more realistic form using domain-specific terminology. This transforms an unstructured query into a more productive, structured one that returns more relevant results [8]. In content-based information retrieval models, ontologies have been used to increase the relevancy and the performance of the retrieved results. A relatively novel application of ontologies helped the interaction between the product space and the query space [7]. Other researchers proposed that using linguistic ontologies in productive models is more effective in Web-based retrieval [6]; initially, the components were encoded into a word-sense lexical semantic graph. Another study [5] showed that the significant influence factors on store satisfaction have little in common with those that impel shoppers to remain loyal to one store. An image-based search engine is another platform proposed to deal with images from a large database for online shopping, especially fashion shopping. The authors of [17] showed that their model helps users find the object/material available on online shopping sites; they showed things/objects/materials that were highly related to a non-textual query by reducing the search space.

7. Conclusion

In this contribution, we outline the architecture of our shopping searcher model, which is a part of the SAMA search engine [1, 2]. Our approach was built from scratch to overcome the problems of traditional seller search engines. Based on the information collected from a small sample of other studies, the best elements of ecommerce do not guarantee that consumers will visit a particular seller or remain loyal. The well-established sellers, e.g. "Amazon" and "eBay", have already invested significant resources to understand what consumers need and desire. It might be useful to emulate these established sellers, since they have been and continue to be highly successful and obtain high marks for customer satisfaction.
According to the Net Smart survey, the main reason why users tend to use Internet shoppers on the Web is convenience, saving money, and saving time. For these reasons, online shopping must employ more efficient tools for helping users to get what they need. In our study, we can also conclude that all selected sellers process their items but with common drawbacks, summarized as follows. First, the same items are available from different sellers, which makes users navigate from one seller to another looking for suitable attributes of items, e.g. "price", "color", "type", etc. Second, sellers struggle to isolate the accessories from the actual items. Third, filtering and sorting are weak for most sellers. However, our approach does not merely merge items retrieved from several stores; rather, it merges, filters, and reranks items using the features mentioned previously in this article.

References

[1] Liyi Zhang, Mingzhu Zhu, and Wei Huang. "A Framework for an Ontology-Based E-Commerce Product Information Retrieval System". School of Information Management, Wuhan University, China. Journal of Computers, 2009, 4(6).
[2] Rainer Olbrich, Carsten D. Schultz. "Search Engine Marketing and Click Fraud". Department of Business Administration and Economics, Research Paper, the Chair of Marketing, 2004.
[3] James Christopher. "E-commerce: Comparison of Online Shopping Trends and Preferences against a Selected Survey of Women". Research Paper, the Chair of Marketing, 2004.
[4] Sandip Sen, Partha Sarathi Dutta, and Sandip Debnath. "CHAYANI: a shopper's assistant". Journal of Intelligent Systems, 2005. https://doi.org/10.1515/JISYS.2005.14.1.3
[5] Mario J. Miranda, László Kónya, Inka Havrila. "Shoppers' satisfaction levels are not the only key to store loyalty". Marketing Intelligence & Planning, 2005, 23(2): 220-232. https://doi.org/10.1108/02634500510589958
[6] Borgo, S., Guarino, N., Masolo, C. and Vetere, G. "Using a large linguistic ontology for internet-based retrieval of object-oriented components". In Proceedings of the International Conference on Software Engineering and Knowledge Engineering, 1997.
[7] Bryan, D. and Gershman, A. "Opportunistic exploration of large consumer product spaces". In Proceedings of the First ACM Conference on Electronic Commerce, ACM Press, New York, NY, 1999: 41-47.
[8] Guarino, N., Masolo, C., Vetere, G. "OntoSeek: Content-Based Access to the Web". IEEE Intelligent Systems, 1999: 70-80.


[9] Tong Sun & André Trudel. "An implemented e-commerce shopping system which makes personal recommendations". Jodrey School of Computer Science, Acadia University, 2010.
[10] Andrew Trotman, Jon Degenhardt, and Surya Kallumadi. "The Architecture of eBay Search". In Proceedings of the ACM SIGIR Workshop on eCommerce, SIGIR, 2017.
[11] Fan Yang, Ajinkya Kale, Yury Bubnov, Leon Stein, Qiaosong Wang, Hadi Kiapour, and Robinson Piramuthu. "Visual Search at eBay". In Proceedings of the KDD Conference, ACM, 2017. ISBN 978-1-4503-4887.
[12] Kevin Lin, Huei-fang Yang, Jen-hao Hsiao, and Chu-song Chen. "Deep Learning of Binary Hash Codes for Fast Image Retrieval". In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2015: 27-35.
[13] Chen Hao, Tao Chuanqi, and Jerry Gao. "A Quality Evaluation Approach to Search Engines of Shopping Platforms". IEEE Third International Conference on Big Data Computing Service and Applications (BigDataService), 2017.
[14] Nanda Kumar, Karl R. Lang and Qian Peng. "Consumer Search Behavior in Online Shopping Environments". In Proceedings of the 38th Hawaii International Conference on System Sciences, 2005.
[15] Yuan Weijing. "End-User Searching Behavior in Information Retrieval: A Longitudinal Study". Journal of the American Society for Information Science, 1997, 48(3): 218-234.
[16] Ravi Sen, Subhajyoti Bandyopadhyay, James D. Hess, and Jeevan Jaisingh. "Pricing Paid Placements on Search Engines". Journal of Electronic Commerce Research, 2008, 9(1).
[17] Sumit Mene, Arshey Dhangekar, Abhijeet Gurav, Harshal R. Randad, Rupali Tornekar and Vijay Gaikwad. International Journal on Recent and Innovation Trends in Computing and Communication, ISSN: 2321-8169, 2017, 5(12): 62-66.
[18] Bernard J. Jansen and Paulo R. Molina. "The effectiveness of Web search engines for retrieving relevant ecommerce links". Information Processing and Management, 2006, 42: 1075-1098.
[19] Cristian Duda, Gianni Frey, Donald Kossmann, and Chong Zhou. "AJAXSearch: Crawling, Indexing and Searching Web 2.0 Applications", 2008. ACM 978-1-60558-306-8/08/08.
[20] Franklin, Curt. "How Internet Search Engines Work", 2002. https://www.howstuffworks.com
[21] Trilok Gupta and Archana Sharma. "Anatomy of Web Search Engines". International Journal of Advanced Research in Computer and Communication Engineering, 2014, 3(4).
[22] Animesh Animesh, Vandana Ramachandran, and Siva Viswanathan. "Quality Uncertainty and Adverse Selection in Sponsored Search Markets". Decision and Information Technologies, University of Maryland, 2010.
[23] Ellen M. Voorhees. "Variations in relevance judgments and the measurement of retrieval effectiveness". Information Processing and Management Journal, 2000: 697-716.
[24] Al-akashi, F. "Using Wikipedia Knowledge and Query Types in a New Indexing Approach for Web Search Engines". PhD Thesis, University of Ottawa, 2014. http://dx.doi.org/10.20381/ruor-6304


Artificial Intelligence Advances


https://ojs.bilpublishing.com/index.php/aia

ARTICLE
A Novel Dataset For Intelligent Indoor Object Detection Systems
Mouna Afif1* Riadh Ayachi1 Yahia Said2 Edwige Pissaloux3 Mohamed Atri1
1. Laboratory of Electronics and Microelectronics (EμE), Faculty of Sciences of Monastir, University of Monastir, Tunisia
2. Electrical Engineering Department, College of Engineering, Northern Border University, Arar, Saudi Arabia
3. LITIS, EA 4108 & CNRS FR 3638, University of Rouen Normandy, Rouen, France

ARTICLE INFO

Article history
Received: 26 February 2019
Accepted: 18 March 2019
Published Online: 30 April 2019

Keywords:
Indoor object detection and recognition
Indoor image dataset
Visually Impaired People (VIP)
Indoor navigation

ABSTRACT

Indoor scene understanding and indoor object detection is a complex high-level task for automated systems applied to natural environments. Indeed, such a task requires huge annotated indoor images to train and test intelligent computer vision applications. One of the challenging questions is to adopt and to enhance technologies to assist indoor navigation for visually impaired people (VIP) and thus improve their daily life quality. This paper presents a new labeled indoor object dataset elaborated with a goal of indoor object detection (useful for indoor localization and navigation tasks). This dataset consists of 8000 indoor images containing 16 different indoor landmark objects and classes. The originality of the annotations comes from two new facts taken into account: (1) the spatial relationships between objects present in the scene and (2) the actions possible to apply to those objects (relationships between a VIP and an object). This collected dataset presents many specifications and strengths, as it presents various data under various lighting conditions and complex image backgrounds to ensure more robustness when training and testing object detectors. The proposed dataset, ready for use, provides 16 vital indoor object classes in order to contribute to indoor assistance navigation for VIP.
 
1. Introduction

Indoor object detection and recognition is an important and challenging task used in several autonomous and intelligent systems (e.g. autonomous robots, humanoid robots, and mobility assistive devices for people with visual impairments, VIP). However, VIPs are not able to see the landmarks or indoor objects. Therefore, an assistive device must indicate the presence of such information to the VIP during indoor navigation.
A possible approach to indoor navigation passes through the integration of target object recognition, using the appropriate dataset and ad-hoc learning technology, into the assistive device intelligence.
The proposed annotated dataset was built using raw images of NAVIIS [4]. The selected images were manually annotated using the graphical image annotation tool LabelImg [1]. An object is identified by its bounding box (bbox), and the associated annotation is its class name and its coordinates in the image.
Autonomous indoor navigation, based on visual cues, is still a very challenging and open question. Indoor scene exploration cannot be assisted using satellite navigation (GPS, Galileo, etc.). Indoor object detection and

*Corresponding Author:
Mouna Afif,
Laboratory of Electronics and Microelectronics (EμE), Faculty of Sciences of Monastir, University of Monastir, Tunisia;
Email: *mouna.afif@outlook.fr


recognition for wearable real-time systems has specific characteristics which should be taken into account, such as lighting conditions, the similarity between objects within the same category (having the same or similar shapes), image blur, etc. It must minimize the error, the recognition time (time complexity) and the volume of calculation (spatial complexity). Therefore, the training dataset is of paramount importance, as it can be used to train object detectors to detect multiple indoor objects and thus efficiently assist the VIP's indoor navigation.
The carefully designed and labeled indoor image dataset can be used for training and testing of Deep Convolutional Neural Networks (DCNN) and thus for coming up with new applications to help people with visual impairments navigate freely in indoor scenes.
An automated system will be more reliable and efficient if (high-level) knowledge about the different classes presented in the image scene is provided; the annotated dataset is a means for such a knowledge source.
However, the existing datasets do not contribute to "indoor object" detection since they present either a total indoor scene or the same indoor objects in order to perform their classification (e.g. the TUW Object Instance Recognition Dataset [2] or MIT [3]).
This work therefore focuses on collecting different indoor images, with annotations that give the indoor object class and its position in the images. Its goal is to present a new labeled indoor object dataset (or indoor landmarks).
The traditional approach of scene understanding and object recognition aims to provide a simple annotation of the object (object class and its position in the image), while the proposed annotation pays attention to the relationships between objects presented in the scene and includes actions possible to apply to objects, actions which can be performed by the VIP on these objects (object affordances).
This approach requires identifying/recognizing (via a physical parse) the relationships between objects present in an indoor scene, e.g. the relative (spatial) position of objects (table and chair) in a living room, in order to be able to move around there or to apply an action (e.g. to sit down on the chair, move the chair's spatial position, or remove it from the scene, etc.).
Therefore, as far as VIP autonomous mobility is concerned, it is necessary to propose a specific annotation of the indoor scene's objects in order to distinguish those objects in the surrounding environments.
The object classes presented in this paper take into account the scene's global (spatial) coherence; moreover, the indoor landmarks specific to the independent mobility of VIP are also considered (e.g. the confirmation of the current progress on a straight line).
The rest of the paper is organized as follows:
Section 2 outlines the current state of the art on existing indoor datasets used for object detection and classification.
Section 3 addresses the inter-class and intra-class relationships between objects of an indoor scene.
Section 4 overviews the indoor object detection and recognition (IODR) dataset.
Section 5 provides the principle of image annotation using the software LabelImg.
Section 6 presents the dataset description and its possible uses, while Section 7 concludes the paper.

2. Related work

There are several types of graphical tools for image annotation. Labelbox [16] is a software platform dedicated to enterprises to train machine learning applications. This software tool can be used with on-premise or hosted data. It is a paid service for more than 5000 images. It is especially used for image segmentation and classification and for text, video and audio annotation. LabelMe [17] is an online software graphic tool used for image segmentation. RectLabel [18] presents another platform for image annotation with polygons and bounding boxes but is available only for macOS.
Collecting and annotating large-scale datasets is a challenging key contribution in order to train and test detectors and to robustify object detection tasks, as they directly influence the quality and the performance of object recognition.
In [5] the authors present a new large-scale synthetic dataset with 500K images physically reconstructed from realistic 3D indoor scenes. Different illuminations of scenes are considered. This dataset can be recommended for three computer vision tasks: object boundary detection, semantic segmentation, and surface prediction.
In [6] McCormac et al. introduce an indoor dataset named SceneNet RGB-D which expands the previous dataset SceneNet [7], providing a different photo-realistic rendering of indoor scenes. This dataset offers perfect per-pixel labeling which helps in indoor scene understanding. It can be used in many computer vision tasks like depth estimation, optical flow calculation, 3D reconstruction, and image segmentation.
Indoor scene understanding presents a central interest to many computer vision applications including assistive human comparison, monitoring systems, and robotics. However, real-world data is lacking for the majority of these tasks. In [8] Silberman et al. present an image segmentation approach to interpret object surfaces and to build support relations between objects present in the indoor RGBD scene. In their approach the authors incorporate geometric shapes of objects to better define the indoor


scene. Based on their results, the proposed 3D indoor scene approach leads to better object segmentation.
A semantic scene completion is presented in [9]. The authors present a model that predicts object volume occupancy and the object category (scene labeling) from a single depth image of a 3D scene. It should be stressed that knowledge of the identities of the objects present in the scene helps to better identify the scene.
Song et al. [10] present a new deep CNN for semantic scene completion named "SSCNet". This deep convolutional model uses a CNN for producing a 3D voxel representation of scene objects and their volumetric semantic labels.
In [11] the authors propose a model used for repairing 3D shapes constructed from a multi-view RGB dataset. These categories of techniques aim to obtain a semantic label for the object present in the scene [12, 13].
In [14] Zhang et al. show that training a model with a synthetic dataset improves the results obtained in computer vision tasks, as it better distinguishes the object boundaries and the object surface.
Qi et al. [15] present a human-centric method to synthesize 3D scene layouts (in particular, they take the case of rooms). They propose an algorithm which generates a 2D map of indoor images. The proposed algorithm can be included in many tasks such as 3D reconstruction, robot mapping, and 3D labeling.
Public benchmarks greatly support scientists by providing datasets that can be used for their algorithms.
A new indoor object dataset was introduced in [19]. The authors present a fully labeled indoor dataset to train and test deep learning models. All images presented in this dataset contain one indoor object extracted from its surrounding environment, which makes it recommended for classification tasks and not for the indoor object detection problem.
The MCIndoor20000 dataset is used for indoor object classification and recognition. It includes more than 20000 indoor images containing 3 indoor landmark objects (door, sign, stairs). The MCIndoor20000 dataset presents many challenging situations such as image rotation, intra-class variation and image variation.
Xiao et al. [24] proposed an extensive database named "Scene UNderstanding" (SUN) containing 899 scene categories with over 130519 images. Their dataset presents various categories such as indoor, urban and nature categories.
Several RGB-D datasets have been introduced over the last few years, facilitating the implementation of computer vision applications. However, those RGB-D datasets present a lack of comprehensive labels.
In [20] Hua et al. introduce a new RGB-D dataset containing 100 scenes named SceneNN. It is an RGB-D indoor dataset containing 100 indoor scenes. All images introduced present a reconstruction into triangle meshes and have per-vertex and per-pixel annotations. When collecting and annotating the presented dataset, our aim is to build an indoor detection system based on a deep CNN model, while the SceneNN dataset treats the semantic segmentation problem.
This paper introduces a new indoor object detection and recognition approach: relationships between objects of a scene are also considered as possible labeling. Our aim in this work is to provide a ready annotated dataset that will be used for training a deep CNN model to build a system used for indoor object detection; for this we choose to use the graphical tool LabelImg [1] as an annotation tool.

3. Object Affordances as New Elements for CNN Efficient Classification and Detection

Indoor space presents an important difference compared to other spaces, as it is composed of several objects (e.g. doors, corridors, stairs, elevators, signs, etc.) that the human being can directly interact with. Therefore, the possibility of interaction may be an important property for their recognition, and this property should be conveniently annotated.
The affordances related to an object are spatially and temporally invariant, so they are very useful for object localization and classification by the CNN.
Figure 1 presents the wide intra-class variation between doors in the same dataset. Doors present many shapes, many poses, and many colors. Some doors are made of wood, some of glass and others of iron. Annotations were done on different door poses, some opened and others closed. All these cases make the presented dataset suitable and robust for building and training new indoor object applications based on deep learning techniques.
The biggest strength of this dataset is that it provides many challenging conditions in order to perform robust model training to deal with different indoor environments belonging to various establishments.

Figure 1. Intra-class variation


4. Indoor Object Detection and Recognition (IODR) Dataset Overview

4.1 Dataset Collection

The proposed dataset is composed of many categories and indoor object landmarks. The indoor object detection and recognition dataset is composed of 8000 indoor images captured under different light conditions (day, night, blurred images). Some examples of the collected images are presented in Figure 2.
It should be stressed that the collected images come from the dataset of the NAVIIS project [4]. We selected images presenting different lighting conditions to obtain a very robust dataset covering many situations. The image resolutions of the dataset vary, such as 1616 x 1232 and 4592 x 3448.

Figure 2. Dataset Images subset

4.2 Dataset Statistics

The dataset is composed of 8000 indoor images with indoor scenes taken under different conditions. This dataset presents 16 indoor landmark objects. The classes presented in the defined "Indoor object detection and recognition" (IODR) dataset are: door, light, light switch, smoke detector, chair, fire extinguisher, sign, window, heating, electricity box, stairs, table, security button, trash can, elevator and notice table.
The present contribution provides a new labeled indoor object dataset freely available for the research community. This proposed annotated dataset can be highly recommended for training powerful deep learning models to perform accurate object detection and object recognition.
The proposed dataset is presented and labeled in order to help persons with visual impairments in their mobility in unfamiliar indoor environments like clinics, schools, hospitals, universities and so on. Our collected dataset provides many challenging cases, such as different lighting conditions, blurred images, and variable points of view of the same object class. Figures 3, 4 and 5 illustrate all these situations.

Figure 3. Different lighting conditions for the same object

Figure 4. Different intra-class lighting conditions, poses, and points of view

Figure 5. Blurred images presented in the dataset

5. Labeling and Annotation via the LabelImg Tool

LabelImg [1] is a software tool used for annotating images. LabelImg is a graphical software annotation tool. This software is written with a Python background and uses Qt as a graphical interface. It is widely used amongst other tools. LabelImg saves annotations in the .xml PASCAL VOC [23] format. It is a software platform for developers to easily annotate images in order to train and test deep learning models. It labels objects by putting them in bounding boxes and by providing their x and y coordinates. During the detection and recognition part, any deep learning model requires knowing the actual objects present in the indoor image. All the required information is present in the annotation .xml file, which provides the indoor object class ID, the indoor object class name, and the bounding box presented by its x and y coordinates, height, and width.
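To make the annotation format concrete, the snippet below parses one LabelImg-style PASCAL VOC .xml file and prints each annotated object's class name and bounding-box coordinates. The file name is hypothetical; the element layout follows the standard PASCAL VOC schema that LabelImg writes.

```python
import xml.etree.ElementTree as ET

# "door_0001.xml" is a hypothetical annotation file name used only for illustration.
tree = ET.parse("door_0001.xml")
root = tree.getroot()

# Image size stored by LabelImg in the <size> element.
width = int(root.find("size/width").text)
height = int(root.find("size/height").text)

# One <object> element per annotated indoor object.
for obj in root.findall("object"):
    name = obj.find("name").text          # class name, e.g. "door"
    box = obj.find("bndbox")
    xmin = int(box.find("xmin").text)
    ymin = int(box.find("ymin").text)
    xmax = int(box.find("xmax").text)
    ymax = int(box.find("ymax").text)
    print(name, (xmin, ymin, xmax, ymax), "in image", (width, height))
```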


The indoor image labels present rich pixel and visual information in addition to the bounding box containing the indoor object. Image annotation is a labor-intensive and error-prone technique. Examples of the most popular annotated datasets are ImageNet [21], MS COCO [22] and PASCAL VOC [23].
Figure 6 presents an example of an original image and its corresponding annotation.
The original images included in our dataset are selected with respect to two main issues related to VIP mobility: (1) providing the most relevant indoor objects and landmarks, and (2) providing a good annotation to better understand the indoor scene. The second aspect is important, as the relationships between the objects in a given scene are paramount for establishing the journey path.

Figure 6. Annotated Image Example

In Table 1 we present all the indoor object classes with their class names and IDs to ensure a better scene understanding of the indoor images present in the dataset. We present 16 main objects, remarkable landmarks for indoor navigation, that can be found in any indoor scene. We note that in this work we are introducing a new indoor multi-class dataset that has not been previously studied.

Table 1. Indoor objects names and ID

Class Name: table | chair | smoke detector | fire extinguisher | trash can | door | electricity box | security button
Class Name: heating | stairs | notice table | sign | window | light switch | light | elevator
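A minimal sketch of how the 16 class names could be mapped to integer IDs when configuring an object detector on this dataset. The numeric IDs below are illustrative assumptions only, since Table 1 lists the class names; the dataset's own annotation files define the authoritative mapping.

```python
# The 16 indoor landmark classes of the IODR dataset (names as listed in the paper).
# The integer IDs assigned here are illustrative, not the dataset's official IDs.
CLASS_NAMES = [
    "door", "light", "light switch", "smoke detector", "chair",
    "fire extinguisher", "sign", "window", "heating", "electricity box",
    "stairs", "table", "security button", "trash can", "elevator", "notice table",
]
CLASS_TO_ID = {name: idx for idx, name in enumerate(CLASS_NAMES)}

print(len(CLASS_NAMES))                    # 16
print(CLASS_TO_ID["fire extinguisher"])    # 5 under this illustrative numbering
```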
6. Dataset Value

6.1 Dataset Description

The paper describes a new indoor object dataset that will be used to evaluate the performance of human-assistance navigation systems in indoor scenes. The data collected present various lighting conditions that include multiple levels of object brightness. The objects presented in this dataset are landmarks and are vital for visually impaired people's indoor navigation. All the images of the dataset are in .jpg format.
This indoor dataset is original as:
(1) It enhances the training and testing of deep learning models as it provides 16 indoor landmark objects.
(2) It includes objects specific to VIP navigation.
(3) It provides various data under different lighting conditions and multiple image backgrounds to ensure robustness in indoor object detection when training.
(4) It provides fully labeled data ready for use by the scientific community to develop their indoor navigation systems.
(5) This dataset can be combined with other indoor datasets in order to ensure the robustness and efficiency of the developed deep learning models for indoor robotic navigation systems.

6.2 Recommendations

The indoor object dataset presented in this paper is ready data that can be directly used by researchers in the computer vision field to develop new deep convolutional neural networks (DCNN) that can be included in many indoor robotic navigation systems.
This database opens the way to new applications towards helping a large category of people who are partially sighted or blind.

7. Conclusion

This paper presents a new fully labeled dataset for indoor

object detection and recognition. The proposed dataset is original and can be adopted for the design of autonomous robotic navigation systems and for visually impaired people (VIP) assistance. The originality of the proposed dataset comes from the inclusion of new characteristics of a 3D scene not considered so far, namely object affordances. Such new training data will improve object recognition and may be used for autonomous navigation systems.
The IODR dataset presents 8000 images of different resolutions with 16 indoor landmark objects.
Future work on VIP mobility assistance will use the proposed dataset for training and testing of deep convolutional neural networks (DCNN) which will be integrated into an embedded platform.

References

[1] https://github.com/tzutalin/labelImg, accessed: 23-08-2018.
[2] https://repo.acin.tuwien.ac.at/tmp/permanent/dataset_index.php
[3] Quattoni, A., & Torralba, A. Recognizing indoor scenes. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2009: 413-420.
[4] http://www.navvis.lmt.ei.tum.de/dataset/, accessed: 21-07-2018.
[5] Yinda Zhang, Shuran Song, Ersin Yumer, Manolis Savva, Joon-Young Lee, Hailin Jin, and Thomas Funkhouser. Physically-based rendering for indoor scene understanding using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 5057-5065.
[6] John McCormac, Ankur Handa, Stefan Leutenegger, and Andrew J. Davison. SceneNet RGB-D: 5M photorealistic images of synthetic indoor trajectories with ground truth. In International Conference on Computer Vision (ICCV): 2697-2706.
[7] A. Handa, V. Patraucean, V. Badrinarayanan, S. Stent, and R. Cipolla. SceneNet: Understanding Real World Indoor Scenes With Synthetic Data. arXiv preprint arXiv:1511.07041, 2015.
[8] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In European Conference on Computer Vision, Springer, 2012: 746-760.
[9] Shuran Song, Fisher Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser. Semantic Scene Completion from a Single Depth Image. arXiv, 2016.
[10] S. Song and J. Xiao. Deep sliding shapes for amodal 3D object detection in RGB-D images. In CVPR, 2016.
[11] D. Thanh Nguyen, B.-S. Hua, K. Tran, Q.-H. Pham, and S.-K. Yeung. A field model for repairing 3D shapes. In CVPR, 2016.
[12] S. Gupta, P. Arbelaez, and J. Malik. Perceptual organization and recognition of indoor scenes from RGB-D images. In CVPR, 2013.
[13] K. Lai, L. Bo, and D. Fox. Unsupervised feature learning for 3D scene labeling. In ICRA, IEEE, 2014.
[14] Yi Zhang, Weichao Qiu, Qi Chen, Xiaolin Hu, and Alan Yuille. UnrealStereo: A synthetic dataset for analyzing stereo vision. arXiv preprint arXiv:1612.04647, 2016.
[15] Siyuan Qi, Yixin Zhu, Siyuan Huang, Chenfanfu Jiang, and Song-Chun Zhu. Human-centric indoor scene synthesis using stochastic grammar. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 5899-5908.
[16] https://github.com/Labelbox/Labelbox
[17] https://github.com/wkentaro/labelme
[18] https://rectlabel.com/
[19] Bashiri, F. S., LaRose, E., Peissig, P., & Tafti, A. P. MCIndoor20000: A fully-labeled image dataset to advance indoor objects detection. Data in Brief, 2018, 17: 71-75.
[20] Hua, B. S., Pham, Q. H., Nguyen, D. T., Tran, M. K., Yu, L. F., & Yeung, S. K. SceneNN: A scene meshes dataset with annotations. In 3D Vision (3DV), IEEE, Fourth International Conference, 2016: 92-101.
[21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. Li. "Imagenet large scale visual recognition challenge". CoRR, vol. abs/1409.0575, 2014. [Online]. Available: http://arxiv.org/abs/1409.0575
[22] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. "Microsoft COCO: common objects in context". Computing Research Repository, vol. abs/1405.0312, 2014. [Online]. Available: http://arxiv.org/abs/1405.0312
[23] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. "The pascal visual object classes challenge: A retrospective". International Journal of Computer Vision, 2015, 111: 98-136. [Online]. Available: https://doi.org/10.1007/s11263-014-0733-5


[24] Xiao, Jianxiong, Hays, James, Ehinger, Krista A., et al. Sun database: Large-scale scene recognition from abbey to zoo. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE, 2010: 3485-3492.
[25] "WHO: Vision impairment and blindness," http://www.who.int/mediacentre/factsheets/fs282/en/, accessed: 2017-12-08.



Author Guidelines
This document provides some guidelines to authors for submission in order to work towards a seamless submission
process. While complete adherence to the following guidelines is not enforced, authors should note that following
through with the guidelines will be helpful in expediting the copyediting and proofreading processes, and allow for
improved readability during the review process.

Ⅰ. Format

● Program: Microsoft Word (preferred)


● Font: Times New Roman
● Size: 12
● Style: Normal
● Paragraph: Justified
● Required Documents

Ⅱ. Cover Letter

All articles should include a cover letter as a separate document.


The cover letter should include:
● Names and affiliation of author(s)
The corresponding author should be identified.
Eg. Department, University, Province/City/State, Postal Code, Country
● A brief description of the novelty and importance of the findings detailed in the paper
Declaration
● Conflict of Interest
Examples of conflicts of interest include (but are not limited to):
● Research grants
● Honoraria
● Employment or consultation
● Project sponsors
● Author’s position on advisory boards or board of directors/management relationships
● Multiple affiliation
● Other financial relationships/support
● Informed Consent
This section confirms that written consent was obtained from all participants prior to the study.
● Ethical Approval
Eg. The paper received the ethical approval of XXX Ethics Committee.
● Trial Registration
Eg. Name of Trial Registry: Trial Registration Number
● Contributorship
The role(s) that each author undertook should be reflected in this section. This section affirms that each credited author
has had a significant contribution to the article.
1. Main Manuscript
2. Reference List
3. Supplementary Data/Information
Supplementary figures, small tables, text etc.
As supplementary data/information is not copyedited/proofread, kindly ensure that the section is free from errors, and is
presented clearly.

Ⅲ. Abstract

A general introduction to the research topic of the paper should be provided, along with a brief summary of its main
results and implications. Kindly ensure the abstract is self-contained and remains readable to a wider audience. The
abstract should also be kept to a maximum of 200 words.
Authors should also include 5-8 keywords after the abstract, separated by a semi-colon, avoiding the words already used
in the title of the article.
Abstract and keywords should be reflected as font size 14.

Ⅳ. Title

The title should not exceed 50 words. Authors are encouraged to keep their titles succinct and relevant.
Titles should be reflected as font size 26, and in bold type.

Ⅴ. Section Headings

Section headings, sub-headings, and sub-subheadings should be differentiated by font size.


Section Headings: Font size 22, bold type
Sub-Headings: Font size 16, bold type
Sub-Subheadings: Font size 14, bold type
Main Manuscript Outline

Ⅵ. Introduction

The introduction should highlight the significance of the research conducted, in particular, in relation to current state of
research in the field. A clear research objective should be conveyed within a single sentence.

Ⅶ. Methodology/Methods

In this section, the methods used to obtain the results in the paper should be clearly elucidated. This allows readers to be
able to replicate the study in the future. Authors should ensure that any references made to other research or experiments
should be clearly cited.

Ⅷ. Results

In this section, the results of experiments conducted should be detailed. The results should not be discussed at length in
this section. Alternatively, Results and Discussion can also be combined to a single section.

Ⅸ. Discussion

In this section, the results of the experiments conducted can be discussed in detail. Authors should discuss the direct and
indirect implications of their findings, and also discuss if the results obtained reflect the current state of research in the field.
Applications for the research should be discussed in this section. Suggestions for future research can also be discussed in
this section.

Ⅹ. Conclusion

This section offers closure for the paper. An effective conclusion will need to sum up the principal findings of the paper,
and its implications for further research.

Ⅺ. References

References should be included as a separate page from the main manuscript. For parts of the manuscript that have
referenced a particular source, a superscript (ie. [x]) should be included next to the referenced text.
[x] refers to the allocated number of the source under the Reference List (eg. [1], [2], [3])
In the References section, the corresponding source should be referenced as:
[x] Author(s). Article Title [Publication Type]. Journal Name, Vol. No., Issue No.: Page numbers. (DOI number)

Ⅻ. Glossary of Publication Type

J = Journal/Magazine
M = Monograph/Book
C = (Article) Collection
D = Dissertation/Thesis
P = Patent
S = Standards
N = Newspapers
R = Reports
Kindly note that the order of appearance of the referenced source should follow its order of appearance in the main manu-
script.
Graphs, Figures, Tables, and Equations
Graphs, figures and tables should be labelled closely below them and aligned to the center. Each data presentation type
should be labelled as Graph, Figure, or Table, and its sequence should be in running order, separate from each other.
Equations should be aligned to the left and numbered in running order, with the number in parentheses (aligned right).

ⅩⅢ. Others

Conflicts of interest, acknowledgements, and publication ethics should also be declared in the final version of the manu-
script. Instructions have been provided as its counterpart under Cover Letter.
Artificial Intelligence Advances
Aims and Scope

Artificial Intelligence Advances publishes original research papers that offers professional review and publica-
tion to freely disseminate research findings in all areas of Basic and Applied Computational Intelligence includ-
ing Cognitive Aspects of Artificial Intelligence (AI), Constraint Processing, High–Level Computer Vision,
Common Sense Reasoning and more. The Journal focuses on innovations of research methods at all stages and is
committed to providing theoretical and practical experience for all those who are involved in these fields.

Artificial Intelligence Advances aims to discover innovative methods, theories and studies in its field by publish-
ing original articles, case studies and comprehensive reviews.

The scope of the papers in this journal includes, but is not limited to:

● Planning and Theories of Action ● Intelligent Interfaces


● Heuristic Search ● Cognitive Aspects of AI
● High-Level Computer Vision ● Common Sense Reasoning
● Multiagent Systems ● AI and Philosophy
● Machine Learning ● Automated Reasoning and Interface
● Intelligent Robotics ● Reasoning Under Uncertainty
● Knowledge Representation

Bilingual Publishing Co. (BPC)


Tel:+65 65881289

E-mail:contact@bilpublishing.com

Website:www.bilpublishing.com
About the Publisher
Bilingual Publishing Co. (BPC) is an international publisher of online, open access and scholarly peer-reviewed journals covering a wide range of academic disciplines including science, technology, medicine, engineering, education and social science. Reflecting the latest research from a broad sweep of subjects, our content is accessible worldwide – both in print and online.

BPC aims to provide analytics as well as a platform for information exchange and discussion that helps organizations and professionals in advancing society for the betterment of mankind. BPC hopes to be indexed by well-known databases in order to expand its reach to the science community, and eventually grow to be a reputable publisher recognized by scholars and researchers around the world.

BPC adopts the Open Journal Systems; see http://ojs.s-p.sg

Database Inclusion

National Library, Singapore
Asia & Pacific area Science Citation Index
China National Knowledge Infrastructure
Creative Commons
Google Scholar
Crossref
J-Gate
My Science Work

National Library of Singapore

NLB manages the National Library, 26 Public Libraries and the National Archives.

NLB promotes reading, learning and information literacy by providing a trusted, accessible and globally-connected
library and information service through the National Library and a comprehensive network of Public Libraries. By
forging strategic partnerships to cultivate knowledge sharing, the libraries also encourage appreciation and awareness
of Singapore’s history through their wide range of programmes and collection on Singapore and regional content. The
National Archives of Singapore oversees the collection, preservation and management of public and private archival
records, including government files, private memoirs, maps, photographs, oral history interviews and audio-visual
materials.

Established on 1 September 1995 as a statutory board, NLB is an agency under the Ministry of Communications and
Information (MCI).
