ABSTRACT
There are various reasons why people need an artificial means of locomotion or input such as a virtual keyboard; many people must rely on some artificial means because of an illness. Implementing a controlling system that enables them to act without the help of another person is therefore very helpful. The idea of eye control is of great use not only to the future of natural input but, more importantly, to the handicapped and disabled. A camera captures the image of the eye movement. First, the pupil center position of the eye is detected. Then, different variations in the pupil position yield different commands for the virtual keyboard. The signals pass through the motor driver to interface with the virtual keyboard itself. The motor driver controls both speed and direction to enable the virtual keyboard to move forward, left, right and stop.
INTRODUCTION
Nowadays personal computer systems play a huge part in our everyday lives, as they are used in areas such as work, education and entertainment. What all these applications have in common is that the use of personal computers is mostly based on input via keyboard and mouse. While this is not a problem for a healthy individual, it may be an insurmountable barrier for people with limited freedom of movement of their limbs. In these cases it would be preferable to use input methods based on abilities the user retains, such as eye movements. To enable such substitute input methods, a system was built which follows a low-price approach to controlling a mouse cursor on a computer system. The eye tracker is based on images recorded by a modified webcam to acquire the eye movements. These eye movements are then mapped to a computer screen to position a mouse cursor accordingly. The mouse is thus moved by automatically tracking the position of the user's eyesight. A camera is used to capture the image of the eye movement. In general, any digital image processing algorithm consists of three stages: input, processing and output. In the input stage the image is captured by a camera; it is then sent to a particular system that operates on the image's pixels and gives a processed image as its output.
An embedded system is a combination of hardware and software. An embedded system can be an independent system or it can be a part of a large system. An embedded system is a microcontroller- or microprocessor-based system which is designed to perform a specific task; for example, a fire alarm is an embedded system that senses only smoke. Python is a high-level language. This means that Python code is written in largely recognizable English, providing the Pi with commands in a manner that is quick to learn and easy to follow. This is in marked contrast to low-level languages, like assembler, which are closer to how the computer "thinks" but almost impossible for a human to follow without experience.
As computer technologies grow rapidly, the importance of human-computer interaction becomes highly notable. Some persons with disabilities are not able to use computers. Eye-ball movement control is mainly intended for such disabled people: incorporating this eye-controlling system into computers enables them to work without the help of another individual. Human-Computer Interface (HCI) is focused on the use of computer technology to provide an interface between the computer and the human. There is a need to find a suitable technology that makes communication between human and computer effective. Human-computer interaction plays an important role. Thus there is a need to find a method that provides an alternative way of communication between human and computer to individuals who have impairments, and gives them an equivalent space to be an element of the Information Society [1-5].
In recent years, human-computer interfaces have been attracting the attention of various researchers across the globe. The human-computer interface here is an implementation of a vision-based system for eye movement detection for disabled people. In the proposed system, we have included face detection, face tracking, eye detection and interpretation of a sequence of eye blinks in real time for controlling a non-intrusive human-computer interface. The conventional method of interacting with the computer via the mouse is replaced with human eye movements. This technique will help paralyzed and physically challenged people, especially persons without hands, to compute efficiently and with ease of use. Firstly, the camera captures the image and focuses on the eye in the image using OpenCV code for pupil detection. This yields the center position of the human eye (the pupil). Then the center position of the pupil is taken as a reference, and based on it the user controls the cursor by moving left and right [6-9].
Existing System
• MATLAB detects the iris and controls the cursor. An eye-movement-controlled wheelchair is an existing system that controls the wheelchair by monitoring eye movement. In MATLAB it is difficult to predict the centroid of the eye, so we go for OpenCV.
• We instruct the mouse cursor to change its location based on eye-ball movement. In this application, using OpenCV, we connect to the webcam, extract each frame from the webcam and pass it to OpenCV to detect the eye-ball location. Once the eye-ball location is detected, we extract the x and y coordinates of the eye balls from OpenCV and then, using the Python pyautogui API, instruct the mouse to change its current location to the given eye-ball X and Y coordinates. Below is an example of moving the mouse in Python.
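A minimal sketch of that step follows. The mapping function and the coordinates used are illustrative assumptions; pyautogui.size and pyautogui.moveTo are the real API calls, and they require a desktop session.

```python
def eye_to_screen(ex, ey, frame_w, frame_h, screen_w, screen_h):
    # Scale eye-ball coordinates in the webcam frame to screen coordinates.
    return int(ex * screen_w / frame_w), int(ey * screen_h / frame_h)

if __name__ == "__main__":
    import pyautogui  # imported here because it needs a running display
    sw, sh = pyautogui.size()
    # A detected eye-ball position at the centre of a 640x480 frame...
    x, y = eye_to_screen(320, 240, 640, 480, sw, sh)
    pyautogui.moveTo(x, y)  # ...moves the cursor to the centre of the screen
```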
Proposed System
Advantages
• High accuracy
• Physically handicapped people can operate computers
REQUIREMENT ANALYSIS
The project involved analyzing the design of a few applications so as to make the application more user-friendly. To do so, it was important to keep the navigation from one screen to another well ordered and at the same time to reduce the amount of typing the user needs to do. To make the application more accessible, the browser version had to be chosen so that it is compatible with most browsers.
REQUIREMENT SPECIFICATION
Functional Requirements
Software Requirements
For developing the application the following are the Software Requirements:
1. Python
2. Windows
Hardware Requirements
For developing the application the following are the Hardware Requirements:
Machine learning (ML)
Machine learning is the scientific study of algorithms and statistical models that computer
systems use to perform a specific task without using explicit instructions, relying on patterns
and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms
build a mathematical model based on sample data, known as "training data", in order to make
predictions or decisions without being explicitly programmed to perform the task. Machine
learning algorithms are used in a wide variety of applications, such as email
filtering and computer vision, where it is difficult or infeasible to develop a conventional
algorithm for effectively performing the task.
Machine learning is closely related to computational statistics, which focuses on making
predictions using computers. The study of mathematical optimization delivers methods, theory
and application domains to the field of machine learning. Data mining is a field of study within
machine learning, and focuses on exploratory data analysis through unsupervised learning. In its
application across business problems, machine learning is also referred to as predictive analytics.
The name machine learning was coined in 1959 by Arthur Samuel. Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks with which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?" In Turing's proposal the various characteristics that could be possessed by a thinking machine and the various implications in constructing one are exposed.
Supervised learning
Supervised learning algorithms build a mathematical model of a set of data that contains both the
inputs and the desired outputs. The data is known as training data, and consists of a set of
training examples. Each training example has one or more inputs and the desired output, also
known as a supervisory signal. In the mathematical model, each training example is represented
by an array or vector, sometimes called a feature vector, and the training data is represented by
a matrix. Through iterative optimization of an objective function, supervised learning algorithms
learn a function that can be used to predict the output associated with new inputs. An optimal
function will allow the algorithm to correctly determine the output for inputs that were not a part
of the training data. An algorithm that improves the accuracy of its outputs or predictions over
time is said to have learned to perform that task.
Supervised learning algorithms include classification and regression. Classification algorithms
are used when the outputs are restricted to a limited set of values, and regression algorithms are
used when the outputs may have any numerical value within a range. Similarity learning is an
area of supervised machine learning closely related to regression and classification, but the goal
is to learn from examples using a similarity function that measures how similar or related two
objects are. It has applications in ranking, recommendation systems, visual identity tracking, face
verification, and speaker verification.
Supervised learning can be grouped further into two categories of algorithms:
1. Classification
2. Regression
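The two categories can be illustrated with scikit-learn on a tiny, made-up dataset:

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

X = [[0], [1], [2], [3]]

# Classification: outputs restricted to a limited set of values (here 0 or 1).
clf = LogisticRegression().fit(X, [0, 0, 1, 1])
print(clf.predict([[0], [3]]))  # -> [0 1]

# Regression: outputs may take any numerical value within a range.
reg = LinearRegression().fit(X, [0.0, 1.1, 1.9, 3.2])
print(float(reg.predict([[4]])[0]))  # a continuous value, roughly 4
```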
Unsupervised learning
Unsupervised learning algorithms take a set of data that contains only inputs, and find structure
in the data, like grouping or clustering of data points. The algorithms, therefore, learn from test
data that has not been labeled, classified or categorized. Instead of responding to feedback,
unsupervised learning algorithms identify commonalities in the data and react based on the
presence or absence of such commonalities in each new piece of data. A central application of
unsupervised learning is in the field of density estimation in statistics, though unsupervised
learning encompasses other domains involving summarizing and explaining data features.
Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that
observations within the same cluster are similar according to one or more predesignated criteria,
while observations drawn from different clusters are dissimilar. Different clustering techniques
make different assumptions on the structure of the data, often defined by some similarity
metric and evaluated, for example, by internal compactness, or the similarity between members
of the same cluster, and separation, the difference between clusters. Other methods are based
on estimated density and graph connectivity.
Unsupervised learning can be further classified into two categories of algorithms:
1. Clustering
2. Association
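Clustering can be illustrated with k-means in scikit-learn on made-up, unlabeled points:

```python
import numpy as np
from sklearn.cluster import KMeans

# Four unlabeled points forming two obvious groups.
X = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [8.3, 7.9]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# The algorithm recovers the grouping without ever seeing labels.
print(labels[0] == labels[1], labels[2] == labels[3])  # -> True True
```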
Reinforcement learning
Reinforcement learning is an area of machine learning concerned with how software
agents ought to take actions in an environment so as to maximize some notion of cumulative
reward. Due to its generality, the field is studied in many other disciplines, such as game
theory, control theory, operations research, information theory, simulation-based
optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In
machine learning, the environment is typically represented as a Markov Decision
Process (MDP). Many reinforcement learning algorithms use dynamic
programming techniques. Reinforcement learning algorithms do not assume knowledge of an
exact mathematical model of the MDP, and are used when exact models are infeasible.
Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game
against a human opponent.
Prerequisites
Before learning machine learning, you should have basic knowledge of the following so that you can easily understand its concepts:
Linear Models
Logistic Regression
Support Vector Machines
Non-linear Models
K-Nearest Neighbours
Kernel SVM
Naïve Bayes
Decision Tree Classification
Random Forest Classification
Logistic Regression in Machine Learning
Logistic regression is one of the most popular Machine Learning algorithms, which comes under
the Supervised Learning technique. It is used for predicting the categorical dependent variable
using a given set of independent variables.
Logistic regression predicts the output of a categorical dependent variable. Therefore the outcome must be a categorical or discrete value: Yes or No, 0 or 1, True or False, etc. However, instead of giving the exact values 0 and 1, it gives probabilistic values which lie between 0 and 1.
Logistic Regression is very similar to Linear Regression except in how the two are used. Linear Regression is used for solving regression problems, whereas Logistic Regression is used for solving classification problems.
In Logistic Regression, instead of fitting a straight regression line, we fit an "S"-shaped logistic function, which predicts two maximum values (0 or 1).
The curve from the logistic function indicates the likelihood of something, such as whether cells are cancerous or not, or whether a mouse is obese or not based on its weight.
Logistic Regression is a significant machine learning algorithm because it has the ability to provide probabilities and to classify new data using continuous and discrete datasets.
Logistic Regression can be used to classify observations using different types of data and can easily determine the most effective variables for the classification. The logistic function is given by f(x) = 1 / (1 + e^(-x)).
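The logistic function's behaviour is easy to verify numerically:

```python
import math

def sigmoid(x):
    # Squashes any real input into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0))                             # -> 0.5
print(sigmoid(6) > 0.99, sigmoid(-6) < 0.01)  # -> True True
```

Large positive inputs map close to 1 and large negative inputs close to 0, which is exactly the "S"-shaped curve described above.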
Binomial: In binomial Logistic regression, there can be only two possible types of the dependent
variables, such as 0 or 1, Pass or Fail, etc.
Multinomial: In multinomial Logistic regression, there can be 3 or more possible unordered types of the dependent variable, such as "cat", "dog", or "sheep".
Ordinal: In ordinal Logistic regression, there can be 3 or more possible ordered types of dependent variables, such as "Low", "Medium", or "High".
K-Nearest Neighbor (KNN) Algorithm for Machine Learning
K-Nearest Neighbor is one of the simplest Machine Learning algorithms, based on the Supervised Learning technique. The K-NN algorithm assumes similarity between the new case/data and the available cases, and puts the new case into the category that is most similar to the available categories. The K-NN algorithm stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can easily be classified into a well-suited category using the K-NN algorithm.
The K-NN algorithm can be used for regression as well as classification, but mostly it is used for classification problems.
K-NN is a non-parametric algorithm, which means it does not make any assumption about the underlying data. It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead it stores the dataset and, at the time of classification, performs an action on the dataset. At the training phase the KNN algorithm just stores the dataset, and when it gets new data it classifies that data into the category most similar to the new data.
Example: Suppose we have an image of a creature that looks similar to both a cat and a dog, but we want to know whether it is a cat or a dog. For this identification we can use the KNN algorithm, as it works on a similarity measure. Our KNN model will find the features of the new data set that are similar to the cat and dog images, and based on the most similar features it will put the image in either the cat or the dog category.
# Importing the libraries
import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Importing the dataset (user_data.csv is the tutorial's dataset of
# Age, Estimated Salary and a purchase label)
data_set = pd.read_csv('user_data.csv')
x = data_set.iloc[:, [2, 3]].values
y = data_set.iloc[:, 4].values

# Splitting the dataset into training and test sets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)

# Feature scaling
st_x = StandardScaler()
x_train = st_x.fit_transform(x_train)
x_test = st_x.transform(x_test)

# Fitting the K-NN classifier to the training set
classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(x_train, y_train)

# Predicting the test set result and building the confusion matrix
y_pred = classifier.predict(x_test)
cm = confusion_matrix(y_test, y_pred)

Visualizing the training set result
x_set, y_set = x_train, y_train
x1, x2 = nm.meshgrid(nm.arange(x_set[:, 0].min() - 1, x_set[:, 0].max() + 1, 0.01),
                     nm.arange(x_set[:, 1].min() - 1, x_set[:, 1].max() + 1, 0.01))
mtp.contourf(x1, x2,
             classifier.predict(nm.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
             alpha=0.75, cmap=ListedColormap(('purple', 'green')))
mtp.xlim(x1.min(), x1.max())
mtp.ylim(x2.min(), x2.max())
for i, j in enumerate(nm.unique(y_set)):
    mtp.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                c=ListedColormap(('purple', 'green'))(i), label=j)
mtp.xlabel('Age')
mtp.ylabel('Estimated Salary')
mtp.legend()
mtp.show()
The mtp (matplotlib.pyplot) package is used to plot the data analysis in prediction using the required model and preview the data.
Applications:
Agriculture
Anatomy
Adaptive websites
Affective computing
Banking
Bioinformatics
Brain–machine interfaces
Cheminformatics
Citizen science
Computer networks
Computer vision
Credit-card fraud detection
Data quality
DNA sequence classification
Economics
Financial market analysis [59]
General game playing
Handwriting recognition
Information retrieval
Insurance
Internet fraud detection
Linguistics
Machine learning control
Machine perception
Machine translation
Marketing
Medical diagnosis
Natural language processing
Natural language understanding
Online advertising
Optimization
Recommender systems
Robot locomotion
Search engines
Sentiment analysis
Sequence mining
Software engineering
Speech recognition
Structural health monitoring
Syntactic pattern recognition
Telecommunication
Theorem proving
Time series forecasting
User behavior analytics
INTRODUCTION TO DEEP LEARNING
Deep learning
Deep learning architectures such as deep neural networks, deep belief networks, recurrent neural
networks and convolutional neural networks have been applied to fields including computer
vision, speech recognition, natural language processing, audio recognition, social network
filtering, machine translation, bioinformatics, drug design, medical image analysis, material
inspection and board game programs, where they have produced results comparable to and in
some cases surpassing human expert performance.
Deep learning is a subset of machine learning in artificial intelligence (AI) that has networks capable of learning unsupervised from data that is unstructured or unlabeled. It is also known as deep neural learning or a deep neural network.
A CNN is a feed-forward neural network that is generally used for image recognition and object classification. In an RNN, by contrast, the previous state is fed as input to the current state of the network. RNNs can be used in NLP, time series prediction, machine translation, etc.
Convolutional Neural Network is one of the main categories to do image classification and
image recognition in neural networks. Scene labeling, objects detections, and face recognition,
etc., are some of the areas where convolutional neural networks are widely used.
A CNN takes an image as input, which it classifies and processes under a certain category such as dog, cat, lion or tiger. The computer sees an image as an array of pixels, depending on the resolution of the image. Based on the image resolution, it sees h * w * d, where h = height, w = width and d = depth. For example, an RGB image is a 6 * 6 * 3 array of the matrix, and a grayscale image is a 4 * 4 * 1 array of the matrix.
In a CNN, each input image passes through a sequence of convolution layers along with pooling, fully connected layers and filters (also known as kernels). After that, we apply the soft-max function to classify the object with probabilistic values between 0 and 1.
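The soft-max step mentioned above can be sketched in NumPy; the scores below are hypothetical raw network outputs for three classes:

```python
import numpy as np

def softmax(z):
    # Subtract the maximum for numerical stability, then normalise
    # the exponentials so the outputs sum to 1.
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # hypothetical raw scores for 3 classes
probs = softmax(scores)
print(probs.round(3))                # probabilities that sum to 1
print(int(probs.argmax()))           # -> 0 (the class with the highest score)
```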
Convolution Layer
The convolution layer is the first layer; it extracts features from an input image. By learning image features using small squares of input data, the convolution layer preserves the relationship between pixels. Convolution is a mathematical operation which takes two inputs, an image matrix and a kernel (or filter).
Strides
Stride is the number of pixels by which the filter shifts over the input matrix. When the stride equals 1, we move the filter 1 pixel at a time; similarly, when the stride equals 2, we move the filter 2 pixels at a time. The following figure shows how convolution works with a stride of 2.
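The effect of the stride on the output size can be sketched with a naive NumPy convolution. The toy image and kernel here are illustrative; real CNN frameworks use optimised routines:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    # Slide the kernel over the image, stepping `stride` pixels each time,
    # and take the element-wise product-sum at every position.
    kh, kw = kernel.shape
    h = (image.shape[0] - kh) // stride + 1
    w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = (patch * kernel).sum()
    return out

img = np.arange(16).reshape(4, 4)      # a toy 4x4 "image"
k = np.array([[1, 0], [0, 1]])         # a toy 2x2 kernel
print(conv2d(img, k, stride=1).shape)  # -> (3, 3)
print(conv2d(img, k, stride=2).shape)  # -> (2, 2): a larger stride shrinks the output
```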
Padding
Padding plays a crucial role in building a convolutional neural network. Without padding, the image shrinks at every convolution, so a neural network with hundreds of layers would give us a very small image at the end. Padding adds extra pixels around the border of the input so that the output keeps its size after filtering.
Pooling Layer
The pooling layer plays an important role in the pre-processing of an image. It reduces the number of parameters when the images are too large. Pooling is a "downscaling" of the image obtained from the previous layers; it can be compared to shrinking an image to reduce its pixel density. Spatial pooling, also called downsampling or subsampling, reduces the dimensionality of each map but retains the important information. It can be of the following types:
max pooling
average pooling
sum pooling
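Max pooling, for example, can be sketched in NumPy on a toy feature map:

```python
import numpy as np

def max_pool(image, size=2):
    # Downscale by taking the maximum of each non-overlapping size x size window.
    h, w = image.shape[0] // size, image.shape[1] // size
    return image[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.array([[1, 3, 2, 4],
                [5, 7, 6, 8],
                [9, 2, 1, 0],
                [3, 4, 5, 6]])
print(max_pool(img))  # -> [[7 8]
                      #     [9 6]]
```

Each 2x2 block of the 4x4 input collapses to its largest value, halving each dimension while keeping the strongest activations.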
The fully connected layer is a layer in which the input from the other layers is flattened into a vector and sent onward. It transforms the output into the number of classes desired by the network.
A recurrent neural network (RNN) is a kind of artificial neural network mainly used in speech
recognition and natural language processing (NLP). RNN is used in deep learning and in the
development of models that imitate the activity of neurons in the human brain.
Recurrent Networks are designed to recognize patterns in sequences of data, such as text,
genomes, handwriting, the spoken word, and numerical time series data emanating from sensors,
stock markets, and government agencies.
A recurrent neural network looks similar to a traditional neural network except that a memory state is added to the neurons. The computation thus includes a simple memory.
The recurrent neural network is a type of deep-learning-oriented algorithm which follows a sequential approach. Whereas in ordinary neural networks each input and output is assumed to be independent of the others, these networks are called recurrent because they perform their mathematical computations sequentially, each step depending on the previous one.
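A single recurrent step with a memory state can be sketched in NumPy. The weights here are random, purely for illustration of how the state is carried forward:

```python
import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    # The memory state h carries information from all previous time steps.
    return np.tanh(x_t @ Wx + h_prev @ Wh + b)

rng = np.random.default_rng(0)
Wx = rng.normal(size=(3, 4))   # input-to-hidden weights
Wh = rng.normal(size=(4, 4))   # hidden-to-hidden (memory) weights
b = np.zeros(4)

h = np.zeros(4)                          # initial memory state
for x_t in rng.normal(size=(5, 3)):      # a sequence of 5 input vectors
    h = rnn_step(x_t, h, Wx, Wh, b)      # each step reuses the previous state
print(h.shape)  # -> (4,)
```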
Python
Python was conceived in the late 1980s as a successor to the ABC language. Python 2.0, released
in 2000, introduced features like list comprehensions and a garbage collection system capable of
collecting reference cycles. Python 3.0, released in 2008, was a major revision of the language
that is not completely backward-compatible, and much Python 2 code does not run unmodified
on Python 3.
The Python 2 language, i.e. Python 2.7.x, was officially discontinued on 1 January 2020 (first planned for 2015), after which security patches and other improvements will no longer be released for it. With Python 2's end-of-life, only Python 3.5.x and later are supported.
Why Python?:
Python works on different platforms (Windows, Mac, Linux, Raspberry Pi, etc).
Python has a simple syntax similar to the English language.
Python has syntax that allows developers to write programs with fewer lines than some
other programming languages.
Python runs on an interpreter system, meaning that code can be executed as soon as it is
written. This means that prototyping can be very quick.
Python can be treated in a procedural way, an object-oriented way or a functional way.
Python compared to other programming languages
Python was designed for readability, and has some similarities to the English language
with influence from mathematics.
Python uses new lines to complete a command, as opposed to other programming
languages which often use semicolons or parentheses.
Python relies on indentation, using whitespace, to define scope; such as the scope of
loops, functions and classes. Other programming languages often use curly-brackets for
this purpose.
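The newline and indentation rules above can be seen in a tiny example:

```python
# A newline completes each statement; indentation defines the scope of
# the function and of the if-block (no braces or semicolons needed).
def classify(score):
    if score > 0.5:
        return "positive"
    return "negative"

print(classify(0.9))  # -> positive
```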
Windows Based
It is highly unlikely that your Windows system shipped with Python already installed. Windows
systems typically do not. Fortunately, installing does not involve much more than downloading
the Python installer from the python.org website and running it. Let’s take a look at how to
install Python 3 on Windows:
If your system has a 32-bit processor, then you should choose the 32-bit installer.
On a 64-bit system, either installer will actually work for most purposes. The 32-bit
version will generally use less memory, but the 64-bit version performs better for
applications with intensive computation.
If you’re unsure which version to pick, go with the 64-bit version.
Note: Remember that if you get this choice “wrong” and would like to switch to another version
of Python, you can just uninstall Python and then re-install it by downloading another installer
from python.org.
Important: You want to be sure to check the box that says Add Python 3.x to PATH as shown
to ensure that the interpreter will be placed in your execution path.
Then just click Install Now. That should be all there is to it. A few minutes later you should
have a working Python 3 installation on your system.
Mac OS based
While current versions of macOS (previously known as “Mac OS X”) include a version of
Python 2, it is likely out of date by a few months. Also, this tutorial series uses Python 3, so let’s
get you upgraded to that.
The best way we found to install Python 3 on macOS is through the Homebrew package
manager. This approach is also recommended by community guides like The Hitchhiker’s Guide
to Python.
At this point, you’re likely waiting for the command line developer tools to finish installing, and
that’s going to take a few minutes. Time to grab a coffee or tea!
1. Confirm the “The software was installed” dialog from the developer tools installer.
2. Back in the terminal, hit Enter to continue with the Homebrew installation.
3. Homebrew asks you to enter your password so it can finalize the installation. Enter your
user account password and hit Enter to continue.
4. Depending on your internet connection, Homebrew will take a few minutes to download
its required files. Once the installation is complete, you’ll end up back at the command
prompt in your terminal window.
Whew! Now that the Homebrew package manager is set up, let’s continue on with installing
Python 3 on your system.
You can make sure everything went correctly by testing if Python can be accessed from the
terminal:
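Assuming python3 is on your PATH, that check typically looks like this:

```shell
python3 --version         # prints the interpreter version, e.g. Python 3.x.y
python3 -m pip --version  # confirms that Pip is available
```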
Assuming everything went well and you saw the output from Pip in your command prompt
window…congratulations! You just installed Python on your system, and you’re all set to
continue with the next section in this tutorial.
Numpy
NumPy is a Python package whose name stands for 'Numerical Python'. It is the core library for scientific computing; it contains a powerful n-dimensional array object and provides tools for integrating C, C++, etc. It is also useful for linear algebra, random number generation, etc.
Pandas
Pandas is a high-level data manipulation tool developed by Wes McKinney. It is built on the
Numpy package and its key data structure is called the DataFrame. DataFrames allow you to
store and manipulate tabular data in rows of observations and columns of variables.
Keras
Keras is a high-level neural networks API, written in Python and capable of running on top
of TensorFlow, CNTK, or Theano. Use Keras if you need a deep learning library that:
Allows for easy and fast prototyping (through user friendliness, modularity, and
extensibility).
Sklearn
Scikit-learn is a free machine learning library for Python. It features various algorithms like
support vector machine, random forests, and k-neighbours, and it also supports Python
numerical and scientific libraries like NumPy and SciPy.
Scipy
SciPy is an open-source Python library which is used to solve scientific and mathematical
problems. It is built on the NumPy extension and allows the user to manipulate and visualize
data with a wide range of high-level commands.
Tensorflow
TensorFlow is a Python library for fast numerical computing created and released by Google.
It is a foundation library that can be used to create Deep Learning models directly or by using
wrapper libraries that simplify the process built on top of TensorFlow.
Django
Django is a high-level Python Web framework that encourages rapid development and clean,
pragmatic design. Built by experienced developers, it takes care of much of the hassle of
Web development, so you can focus on writing your app without needing to reinvent the
wheel. It's free and open source.
Pyodbc
pyodbc is an open source Python module that makes accessing ODBC databases simple. It
implements the DB API 2.0 specification but is packed with even more Pythonic
convenience. Precompiled binary wheels are provided for most Python versions on Windows
and macOS. On other operating systems this will build from source.
Matplotlib
Matplotlib is an amazing visualization library in Python for 2D plots of arrays. Matplotlib is
a multi-platform data visualization library built on NumPy arrays and designed to work with
the broader SciPy stack. It was introduced by John Hunter in the year 2002.
Opencv
OpenCV-Python is a library of Python bindings designed to solve computer vision
problems. Python is a general purpose programming language started by Guido van Rossum
that became very popular very quickly, mainly because of its simplicity and code readability.
Nltk
Natural Language Processing with Python NLTK is one of the leading platforms for working
with human language data and Python, the module NLTK is used for natural language
processing. NLTK is literally an acronym for Natural Language Toolkit. In this article you
will learn how to tokenize data (by words and sentences).
SQLAlchemy
SQLAlchemy is a library that facilitates communication between Python programs and databases. Most of the time, this library is used as an Object Relational Mapper (ORM) tool
that translates Python classes to tables on relational databases and automatically converts
function calls to SQL statements.
Urllib
urllib is a Python module that can be used for opening URLs. It defines functions and classes
to help in URL actions. With Python you can also access and retrieve data from the internet
like XML, HTML, JSON, etc. You can also use Python to work with this data directly.
Installation of packages:
Packages are installed from the cmd terminal using pip, Python's basic package installer. If the installation is OK, check the list of packages already installed, and then install any packages still required with the corresponding pip commands.
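With pip, those commands typically look like this (opencv-python is just an example package name):

```shell
# Install a required package from PyPI
pip install opencv-python

# Check the list of packages already installed
pip list
```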
OpenCV:
OpenCV was started at Intel in 1999 by Gary Bradsky, and the first release came out in 2000. Vadim Pisarevsky joined Gary Bradsky to manage Intel's Russian software OpenCV team. In 2005, OpenCV was used on Stanley, the vehicle that won the 2005 DARPA Grand Challenge. Later its active development continued under the support of Willow Garage, with Gary Bradsky and Vadim Pisarevsky leading the project. Right now, OpenCV supports a lot of algorithms related to Computer Vision and Machine Learning, and it is expanding day by day. Currently OpenCV supports a wide variety of programming languages like C++, Python and Java, and is available on different platforms including Windows, Linux, OS X, Android and iOS. Interfaces based on CUDA and OpenCL are also under active development for high-speed GPU operations.
OpenCV-Python is the Python API of OpenCV. It combines the best qualities of the OpenCV C++ API and the Python language.
Since OpenCV is an open-source initiative, all are welcome to make contributions to this library, and it is the same for this tutorial. So, if you find any mistake in this tutorial (whether it be a small spelling mistake or a big error in code or concepts, whatever), feel free to correct it. That will be a good task for freshers who are beginning to contribute to open-source projects: just fork OpenCV on GitHub, make the necessary corrections and send a pull request to OpenCV. OpenCV developers will check your pull request, give you important feedback and, once it passes the approval of the reviewer, merge it into OpenCV. Then you become an open-source contributor. The case is similar for other tutorials, documentation, etc. As new modules are added to OpenCV-Python, this tutorial will have to be expanded, so those who know a particular algorithm can write up a tutorial which includes the basic theory of the algorithm and code showing its basic usage, and submit it to OpenCV. Remember, together we can make this project a great success!
Additional Resources
1. OpenCV Documentation
2. OpenCV Forum
Install OpenCV-Python in Windows
We will learn to set up OpenCV-Python on a Windows system. The steps below were tested on a
64-bit Windows 7 machine with Visual Studio 2010 and Visual Studio 2012. The screenshots
show VS2012.
1. The following Python packages are to be downloaded and installed to their default locations.
1.1. Python-2.7.x.
1.2. Numpy.
1.3. Matplotlib (Matplotlib is optional, but recommended since we use it a lot in our tutorials).
2. Install all packages into their default locations. Python will be installed to C:/Python27/.
3. After installation, open Python IDLE. Enter import numpy and make sure Numpy is working
fine.
4. Download the latest OpenCV release from the SourceForge site and double-click to extract it.
To build OpenCV from source instead, the following are required:
1. Python 3.6.8.x
2. Numpy
3. Matplotlib (Matplotlib is optional, but recommended since we use it a lot in our tutorials.)
4. Download the OpenCV source. It can be from SourceForge (for the official release version) or from
GitHub (for the latest source).
7.2. Click on Browse Build... and locate the build folder we created.
7.4. It will open a new window to select the compiler. Choose appropriate compiler
(here, Visual Studio 11) and click Finish.
8. You will see all the fields are marked in red. Click on the WITH field to expand it. It decides
what extra features you need. So mark appropriate fields. See the below image:
9. Now click on BUILD field to expand it. First few fields configure the build method. See the
below image:
10. The remaining fields specify what modules are to be built. Since GPU modules are not yet
supported by OpenCV-Python, you can skip them completely to save time (but if you work with
them, keep them there). See the image below:
11. Now click on ENABLE field to expand it. Make sure ENABLE_SOLUTION_FOLDERS is
unchecked (Solution folders are not supported by Visual Studio Express edition). See the image
below:
12. Also make sure that in the PYTHON field, everything is filled. (Ignore
PYTHON_DEBUG_LIBRARY). See image below:
14. Now go to our opencv/build folder. There you will find OpenCV.sln file. Open it with Visual
Studio.
16. In the solution explorer, right-click on the Solution (or ALL_BUILD) and build it. It will
take some time to finish.
17. Again, right-click on INSTALL and build it. Now OpenCV-Python will be installed.
18. Open Python IDLE and enter import cv2. If there is no error, it is installed correctly.
Use the function cv2.imread() to read an image. The image should be in the working directory,
or a full path to the image should be given. The second argument is a flag which specifies the way the image
should be read.
cv2.IMREAD_COLOR : Loads a color image. Any transparency of the image will be neglected. It is
the default flag. Alternatively, cv2.IMREAD_GRAYSCALE (which equals 0) loads the image in grayscale mode.
import numpy as np
import cv2
img = cv2.imread('messi5.jpg', 0)   # 0 = cv2.IMREAD_GRAYSCALE
Warning: Even if the image path is wrong, it won’t throw any error, but print(img) will give you
None.
Display an image
Use the function cv2.imshow() to display an image in a window. The window
automatically fits to the image size. The first argument is a window name, which is a string; the second
argument is our image. You can create as many windows as you wish, but with different window
names.
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Write an image
Use the function cv2.imwrite() to save an image. The first argument is the file name; the second
argument is the image you want to save.
cv2.imwrite('messigray.png', img)
This will save the image in PNG format in the working directory.
The program below loads an image in grayscale, displays it, saves the image and exits if you press ‘s’,
or simply exits without saving if you press the ESC key.
import numpy as np
import cv2
img = cv2.imread('messi5.jpg', 0)
cv2.imshow('image', img)
k = cv2.waitKey(0)
if k == 27:                      # ESC key: exit without saving
    cv2.destroyAllWindows()
elif k == ord('s'):              # 's' key: save and exit
    cv2.imwrite('messigray.png', img)
    cv2.destroyAllWindows()
import cv2
cv2.__version__
RELATED WORK
There are two components to the human visual line of sight: the pose of the head and the
orientation of the eyes within their sockets. Both aspects have been investigated, but this concept
concentrates on eye gaze estimation. A novel approach called the "one-circle"
algorithm is presented for measuring eye gaze from a monocular image that zooms in on only one eye of
a person. Observing that the iris contour is a circle, the normal direction of this iris
circle, considered as the eye gaze, is estimated from its elliptical image. From basic projective geometry, an
ellipse can be back-projected into space onto two circles of different orientations. However, by
using a geometric constraint, namely that the distances between the eyeball’s center and the two
eye corners should be equal to each other, the correct solution can be disambiguated. This allows
a higher-resolution image of the iris to be obtained with a zoom-in camera, thereby achieving
higher accuracy in the estimation. A general approach that combines head pose determination
with eye gaze estimation is also proposed; the search for the eye gaze is guided by the head
pose information. The robustness of the gaze determination approach was verified statistically
by extensive experiments on synthetic and real image data. The two key contributions of this
concept are showing the possibility of finding the unique eye gaze direction from a single image
of one eye, and that better accuracy can be obtained as a consequence.
The first technique proposed estimates the 3-D eye gaze directly. In this technique, the
cornea of the eyeball is modelled as a convex mirror. Via the properties of a convex mirror, a
simple method is proposed to estimate the 3-D optic axis of the eye. The visual axis, which is the
true 3-D gaze direction of the user, can be determined subsequently after finding the angular
deviation between the visual axis and the optic axis through a simple calibration procedure. Therefore, the
gaze point on an object in the scene can be obtained by simply intersecting the estimated 3-D
gaze direction with the object. In addition, a dynamic computational head compensation model is
developed to automatically update the gaze mapping function whenever the head moves. Hence,
the eye gaze can be estimated under natural head movement. Furthermore, it minimizes the
calibration procedure to only one time for a new individual. The advantage of the proposed
techniques over current state-of-the-art eye gaze trackers is that they can estimate the eye gaze of
the user accurately under natural head movement, without needing to perform gaze calibration
every time before use. These methods improve the usability of eye gaze
tracking technology, and represent an important step toward the eye tracker being
accepted as a natural computer input device.
In general, the visible image-based eye-gaze tracking system is heavily dependent on the
accuracy of the iris center (IC) localization. In this paper, we propose a novel IC localization
method based on the fact that the elliptical shape (ES) of the iris varies according to the rotation
of the eyeball. We use the spherical model of the human eyeball and estimate the radius of the
iris from the frontal and upright-view image of the eye. By projecting the eyeball rotated in pitch
and yaw onto the 2-D plane, a certain number of the ESs of the iris and their corresponding IC
locations are generated and registered as a database (DB). Finally, the location of IC is detected
by matching the ES of the iris of the input eye image with the ES candidates in the DB.
Moreover, combined with facial landmark points-based image rectification, the proposed IC
localization method can successfully operate under natural head movement. Experimental results
in terms of the IC localization and gaze tracking show that the proposed method achieves
superior performance compared with conventional ones.
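The geometric fact underlying the eyeball model above, that the elliptical shape of the iris encodes the rotation of the eyeball, can be illustrated with a minimal NumPy sketch. This is an illustration under simplified assumptions (orthographic projection, unit iris radius, pitch-only rotation), not the paper's implementation:

```python
import numpy as np

def iris_ellipse_axes(pitch_rad, r=1.0, n=360):
    """Project a circular iris of radius r, rotated by pitch_rad about the
    x-axis, orthographically onto the image plane, and return the
    (major, minor) semi-axes of the resulting ellipse."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pts = np.stack([r * np.cos(t), r * np.sin(t), np.zeros(n)])  # circle in the z=0 plane
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    Rx = np.array([[1, 0, 0],
                   [0, c, -s],
                   [0, s,  c]])                                   # rotation about the x-axis (pitch)
    xy = (Rx @ pts)[:2]                                           # drop z: orthographic projection
    # Singular values of the centred 2-D point cloud give the semi-axes
    sv = np.linalg.svd(xy, compute_uv=False) * np.sqrt(2.0 / n)
    return sv[0], sv[1]

major, minor = iris_ellipse_axes(np.deg2rad(30))
print(major, minor)
```

For a 30 degree rotation the minor-to-major axis ratio comes out as cos(30°), which is the one-to-one mapping between ellipse shape and eyeball rotation that lets the ES database index iris-center locations by gaze direction.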
Students' eye movements during debugging were recorded by an eye tracker to investigate
whether and how high and low performance students act differently during debugging. Thirty-
eight computer science undergraduates were asked to debug two C programs. The path of
students' gaze while following program codes was subjected to sequential analysis to reveal
significant sequences of areas examined. These significant gaze path sequences were then
compared to those of students with different debugging performances. The results show that,
when debugging, high-performance students traced programs in a more logical manner, whereas
low-performance students tended to stick to a line-by-line sequence and were unable to quickly
derive the program's higher-level logic. Low-performance students also often jumped directly to
certain suspected statements to find bugs, without following the program's logic. They also often
needed to trace back to prior statements to recall information, and spent more time on manual
computation. Based on the research results, adaptive instructional strategies and materials can be
developed for students of different performance levels, to improve associated cognitive activities
during debugging, which can foster learning during debugging and programming.
Real-time driver distraction detection is the core to many distraction countermeasures and
fundamental for constructing a driver-centered driver assistance system. While data driven
methods demonstrate promising detection performance, a particular challenge is how to reduce
the considerable cost for collecting labeled data. This paper explored semi-supervised methods
for driver distraction detection in real driving conditions to alleviate the cost of labelling training
data. Laplacian support vector machine and semi-supervised extreme learning machine were
evaluated using eye and head movements to classify two driver states: attentive and cognitively
distracted. With the additional unlabeled data, the semi-supervised learning methods improved
the detection performance (G-mean) by 0.0245, on average, over all subjects, as compared with
the traditional supervised methods. As unlabeled training data can be collected from drivers’
naturalistic driving records with little extra resource, semi-supervised methods, which utilize
both labeled and unlabeled data, can enhance the efficiency of model development in terms of
time and cost.
CONCLUSION
• The pupil center position of the eye is detected first. Different variations in pupil
position then yield different commands for the virtual keyboard. The signals pass
through the motor driver to interface with the virtual keyboard itself. The motor
driver controls both speed and direction to enable the virtual keyboard to move
forward, left, right and stop.
REFERENCES
[1]. Brooks, R. E. (1977) "Towards a theory of the cognitive processes in computer
programming," Int. J. Man-Mach. Studies, vol. 9, pp. 737–751.
[2]. Cheng-Chih Wu and Ting-Yun Hou (2015) "Tracking Students' Cognitive Processes During
Program Debugging: An Eye-Movement Approach," IEEE.
[3]. Ehrlich, K. and Soloway, E. (1983) "Cognitive strategies and looping constructs: An
empirical study," Commun. ACM, vol. 26, no. 11, pp. 853–860.
[4]. Eric Sung and Jian-Gang Wang (2002) "Study on Eye Gaze Estimation," IEEE, vol. 32,
no. 3, June.
[5]. Murphy, L. (2008) "Debugging: The good, the bad, and the quirky: A qualitative analysis of
novices' strategies," SIGCSE Bull., vol. 40, no. 1, pp. 163–167.
[6]. Qiang Ji and Zhiwei Zhu (2007) "Novel Eye Gaze Tracking Techniques Under Natural Head
Movement," IEEE, vol. 54, no. 12, December.
[7]. Rajlich, V. and Xu, S. (2004) "Cognitive process during program debugging," in Proc. 3rd
IEEE ICCI, pp. 176–182.
[8]. Renumol, V. (2009) "Classification of cognitive difficulties of students to learn computer
programming," Indian Inst. Technol., India.
[9]. Seung-Jin Baek and Young-Hyun Kim (2013) "Eyeball Model-based Iris Center Localization
for Visible Image-based Eye-Gaze Tracking Systems," IEEE.