Faculty of Engineering
Department of Electronics and
Communication Engineering
Chapter 1  ALS ..................................................................... 005
Chapter 2
Chapter 3
Chapter 4
Chapter 5
5.2 Building OpenCV from Source Using CMake, Using the Command Line ................ 091
5.4 Image Watch: viewing in-memory images in the Visual Studio debugger ............ 097
Chapter 6
Chapter 7
Source ............................................................................ 194
Introduction
Image processing is a field where studying the very basics is enough to start
making creative projects; as you go deeper, you'll find that the results and
applications become more and more complex, but also more and more fun and useful.
In this book we're going to show how to help people with ALS (amyotrophic
lateral sclerosis) who have lost the ability to move or speak. We'll equip them
with special glasses fitted with a camera that reads the motion of the eye, and
we'll use our own software to process the stream of images taken from that eye
motion and create a new communication window, so that the patient can speak
with his surroundings, use his computer or mobile phone, explore the internet,
and do almost everything a healthy person can do in the computing and
communication field.
To get through this course you need to build a background in image-processing
basics, especially in an important programming language like C++, so that you
know how to write a program, define variables, perform arithmetic and logic
operations, use conditional statements, create classes and functions, and divide
your program into separate files (header files, etc.).
You also need to study OpenCV, which is an open-source computer vision and
machine learning software library. OpenCV was built to provide a common
infrastructure for computer vision applications and to accelerate the use of
machine perception in commercial products. Being a BSD-licensed product,
OpenCV makes it easy for businesses to utilize and modify the code.
The OpenCV course shows you how to modify the nature of an image, and how to
load videos saved on your computer, taken from a live capture device, or streamed
live from a website, and feed them into your own program so that you can do live
processing. You'll learn how to take the DFT (Discrete Fourier Transform) of an
image so that you can process it more easily, how to separate a colored image into
three gray-scale images representing the red, blue, and green components of the
original image, and how to merge them again into one single image (or merge two
of them into one three-channel image). The course also covers how to set up
OpenCV in your Visual Studio project and how to add libraries and separate classes
that have special uses in the image-processing field. You'll learn how to calibrate
a camera; camera calibration is vital for any real-life image-processing
application. Finally, you'll learn the principles of detecting specific objects
and determining the three main dimensions (x, y, z) of the ArUco markers you add
to your program.
The third branch you need to study before you can start developing the
Eye-Writer software is openFrameworks, which is an open-source C++ toolkit
designed to assist the creative process by providing a simple and intuitive
framework for experimentation.
First of all, you need either Code::Blocks or Visual Studio as an IDE
(integrated development environment). Both help you create your own C++
projects and, of course, include special libraries like OpenCV. Don't worry
about paying money: Code::Blocks is totally free to download and run, and it's
available as an open-source IDE.
Chapter 1
ALS
or more years.
Age. Although the disease can strike at any age, symptoms most commonly develop
between the ages of 55 and 75.
Gender. Men are slightly more likely than women to develop ALS. However, as we age the
difference between men and women disappears.
Race and ethnicity. Caucasians and non-Hispanics are most likely to develop the disease.
Military service. Some studies suggest that military veterans are about 1.5 to 2 times more likely to
develop ALS. Although the reason for this is unclear, possible risk factors for veterans
include exposure to lead, pesticides, and other environmental toxins. ALS is recognized
as a service-connected disease by the U.S. Department of Veterans Affairs.
Sporadic ALS
The majority of ALS cases (90 percent or more) are considered sporadic. This means
the disease seems to occur at random with no clearly associated risk factors and no
family history of the disease. Although family members of people with sporadic ALS are
at an increased risk for the disease, the overall risk is very low and most will not develop
ALS.
Familial (Genetic) ALS
About 5 to 10 percent of all ALS cases are familial, which means that an individual
inherits the disease from his or her parents. The familial form of ALS usually only
requires one parent to carry the gene responsible for the disease. Mutations in more
than a dozen genes have been found to cause familial ALS. About 25 to 40 percent of
all familial cases (and a small percentage of sporadic cases) are caused by a defect in a
gene known as “chromosome 9 open reading frame 72,” or C9ORF72. Interestingly, the
same mutation can be associated with atrophy of frontal-temporal lobes of the brain
causing frontal-temporal lobe dementia. Some individuals carrying this mutation may
show signs of both motor neuron and dementia symptoms (ALS-FTD). Another 12 to 20
percent of familial cases result from mutations in the gene that provides instructions for
the production of the enzyme copper-zinc superoxide dismutase 1 (SOD1).
Here is a list of other well-known people who were diagnosed with the same disease:
Ben Byer – American playwright and subject of the film Indestructible, documenting his life post-
diagnosis
Jeff Capel II – American collegiate and professional basketball coach[1]
Paul Cellucci – politician and diplomat; 69th Governor of Massachusetts and U.S. Ambassador to
Canada
Ezzard Charles – boxer; former world heavyweight champion
Leonard Cheshire – notable RAF pilot and charity worker
Marián Čišovský – Slovak football player[2]
Dwight Clark – American football player[3]
Preston Cloud – eminent American earth scientist
Sid Collins – radio personality; radio voice of the Indianapolis 500
Luca Coscioni – Italian researcher, political activist and advocate for euthanasia
Ronnie Corbett – British comedian and actor
Neale Daniher – former AFL player (Essendon) & coach (Melbourne)
Dennis Day – singer, comedian, actor
Dieter Dengler – Vietnam era Air Force pilot who escaped from Laotian POW camp
Michael Donnelly – Gulf War veteran
Peter Doohan – Australian tennis player
Ann Downer – Author of books for children and teenagers
Constantinos Apostolou Doxiadis – Greek architect, urban planner and visionary
John Drury – longtime ABC7 Chicago news anchor
Bruce Edwards – PGA Tour caddie for golfer Tom Watson
Jenifer Estess – theatre producer; star of HBO documentary Three Sisters, subject of HBO
film Jennifer; founding member of Project ALS
Hal Finney – computer scientist
Jay S. Fishman – Chairman of the Board and former CEO of The Travelers Companies
Roberto Fontanarrosa – Argentine cartoonist
Pete Frates – former Boston College baseball star, founder and inspiration behind the viral ALS
Ice Bucket Challenge (Summer 2014)
Steven Gey – law professor and expert on the separation of church and state and freedom of
speech; former on-air analyst for ABC during the 2000 presidential recount
Lou Gehrig – baseball player, after whom the disease is commonly named
Richard Glatzer – writer and director; director of Still Alice
Steve Gleason – American football player for the New Orleans Saints 2000-2007
Jérôme Golmard – French tennis player
Stanislav Gross – former Prime Minister of the Czech Republic
Marc Harrison – designer
Pro Hart – Australian painter
Michael Schwartz – key conservative political strategist in the U.S. Congress; American “right to
life” advocate; chief of staff to U.S. Senator Tom Coburn, M.D. (R-Okla.)
Morrie Schwartz – educator
Raúl Sendic – Uruguayan Marxist and leader of the Tupamaros
Sam Shepard – American actor and playwright[5]
Gianluca Signorini – Italian football player
Lane Smith – actor
Konrad Spindler – archaeologist, involved in the analysis of the Ötzi glacier mummy
Jon Stone – creator of Sesame Street
Maxwell D. Taylor – former chairman of the Joint Chiefs of Staff
Orlando Thomas – NFL safety for the Minnesota Vikings
Kevin Turner – NFL fullback for the New England Patriots and Philadelphia Eagles
Roy Walford – gerontologist and life extensionist
Henry A. Wallace – 33rd Vice President of the United States to Franklin D. Roosevelt
Charlie Wedemeyer – former athlete and coach; motivational speaker
Doddie Weir – former Scottish rugby union player[6]
Joost van der Westhuizen – former South African Rugby Union player;
former Supersport commentator[7]
Michael Zaslow – soap actor
Mao Zedong – Chairman of the Chinese Communist Party
Per Villand – Norwegian Biologist, Doctor of Science
Catherine G. Wolf – American psychologist and expert in human-computer interaction
Aditya Sarvankar – Author[8]
Chapter 2
Overview of the Eye-Writer
The Eye-Writer project is a simple, inexpensive eye-tracking system. Obviously, there are
numerous ways to make an eye-tracking system. Many of these designs, especially those produced for
academic research projects (openEyes), have already been published openly on the internet. There are
also commercial products available, costing in the range of $20,000 US or more, that are specifically
designed to enable people with ALS to communicate using their eyes. We are not in the business of
re-inventing these systems. This project is an attempt to address a gap in the development of low-end
eye-tracking systems, in other words to make a super-cheap eye-tracker that could be made by almost
anyone, almost anywhere. The Eye-Writer system has several specific design limitations that were meant
to emphasize low cost and ease of construction over other aspects of performance, robustness and
appearance. The specific parts and tools you use to build your own Eye-Writer will depend on your
ability, location, financial resources and creativity.
Chapter 3
Hardware and interface of the Eye-Writer
2. The fabrication and assembly of the system should require only common hand tools
3. Whenever possible components and parts should be available for purchase locally versus online
6. The camera should not auto-iris (or auto-iris should be disabled in the camera’s driver).
Beyond that it’s up to you. This instruction set details a solderless variation of the Eye-Writer that uses a
hacked PS3 Eye and a pair of glasses and suggests other possible Eye-Writer configurations.
3.2.1 Parts:
The following parts and materials list details the components and tools we used to make a solderless
Eye-Writer:
1x IR-sensitive camera (without auto-iris); we will use the PS3 Eye ($39.95 US)
(Using this camera system removes the need for an additional video capture card)
1x camera-lens mount; you can use the lens mount that comes with the PS3 Eye, but it is glued together
and difficult to separate
(This is the cheap one, but it requires some modification to match PS3 through-hole footprint),
2x IR LEDs ($1.99)
Tape
1x 8mm camera lens, Fixed IRIS Lens Set for Webcams and Security/CCTV Cameras (6-Lens Pack)
($14.91)
Cheaper DIY versions of the IR filter include cutting a piece of film out of a floppy disk or using exposed,
developed photographic film
3.2.2 Tools
Scissors
Screws
Drill
Shrink tube
Dremel
A video capture card (not needed in our case, since we are using the PS3 Eye)
A pair of glasses
The camera arm needs to hold the camera rigidly in front of one eye, but also be flexible, movable and
easy to manufacture.
The best material we have found in terms of rigidity, flexibility, machinability, cost and weight is 9-gauge
aluminum wire. This type of wire is often used as a support structure inside clay and plaster sculptures
and can be found at art supply, hardware, and craft stores. It often has a plastic coating over the
aluminum, which is helpful in our case in order to electrically isolate the camera arm from the camera
circuit board. If the wire you use is not insulated, you can wrap it with a few strips of electrical tape.
To machine aluminum wire you can use a pair of tin snips or simply bend the wire back and forth
repeatedly until it breaks.
The easiest way to attach the camera arm to the glasses frame is to simply use wire ties to secure the
wire along the arm of the glasses. We use around 6-8 small wire ties.
A more elaborate method involves using aluminum electrical connectors. These can be found at
hardware stores. This method requires a drill, a tap, a tap handle and takes about 20 minutes.
1) Drill two appropriately-sized holes for tapping into the aluminum connector. The size of the hole
will depend on the size of the screw you intend to use. We used a 4-40 screw. To create a hole
ready for a 4-40 tap you would use a #43 drill bit (3/32nd). You can use the tap and die chart
linked below as a reference if you intend to use a different-sized screw.
2) Use the holes in the aluminum connector as a reference to mark where the holes need to be
drilled in your glasses frame and then drill two holes that would accommodate a 4-40 screw (a
#38 drill bit).
3) Using the tap handle and a 4-40 tap, you should tap the aluminum connector. Remember to
move slowly, use machine oil (or olive oil if you don’t have professional machine lubricants) and
clean the hole before trying to insert the screw.
4) You can now assemble the pieces. Use washers, lock washers and Loctite in order to securely
attach the connector to the frame. The connector we used has a flathead set screw that allows
us to screw down the aluminum camera arm and secure it tightly to the glasses.
Step 4: Hack the PS3 Eye
There are a number of videos online that explain how to take apart the PS3 Eye and remove the
IR-blocking filter, and how to install a visible-light filter using a floppy disk.
These videos document the process of hacking the PS3 Eye pretty thoroughly. But in our case we
need to use a lens with a shorter focal length than the one provided with the PS3 Eye, so some extra
hacking is in order. To recap and expand on how to mod the PS3 Eye for use with the Eye-Writer
software:
2. Crack open the case using a small flat head screw driver
3. Unscrew the screws that mount the camera circuit board to the plastic housing
5. Either
a) Throw away the PS3 Eye lens mount and lens and use one of the lens mounts linked to in
the parts list (and the 8mm lens from our lens pack) or,
b) If you want to repurpose the PS3 lens mount, you need to dig out the IR light filter (as shown
in the video).
6. If you want to use the original PS3-native lens mount, you will need to separate the PS3-native
lens from the mount, which is attached with some industrial glue. To do this you need to scratch
away the glue around the outside lip of the mount. This is hard to do and requires some patience
and some luck. You will need to scratch and try to turn the lens to unscrew it. Keep repeating this
process until the lens separates and can be unscrewed. If you destroy the lens (which happened to
us about half the time) you will be forced to use one of the lens mounts linked to in the parts list.
8. If you have successfully separated the PS3 lens from the PS3 lens mount then just screw the PS3
lens mount back onto the camera circuit board. If you have not succeeded in separating the lens
from the mount, then screw the new lens mount on the camera circuit board.
Watch the video and look at the included photos for more tricks and information on how to
successfully hack the PS3 Eye.
To attach your newly modified PS3 Eye camera to the wire armature you will need wire ties and
some type of small, insulated substrate that will provide more rigidity to the camera/armature
assembly. For our prototype we used a small (3 in x 1 in) piece of hard rubber. You can also use a
piece of wood, half of a popsicle stick, or any other sturdy, insulated material.
1. Put the rigid substrate in between the camera and the wire armature. The camera should be
pointed toward the eye of the glasses.
You can put a small length of double-sided tape between the rigid substrate and the wire armature
to make putting the three pieces together easier and to ensure a more secure assembly. The camera
should still be adjustable in terms of pitch and may require occasional adjustments (or even
re-assembly) between uses.
When you illuminate the eye with IR light and observe it through an IR sensitive camera with a
visible light filter, the iris of the eye turns completely white and the pupil stands out as a high-
contrast black dot. This makes tracking the eye much easier. In order to provide some IR
illumination, we have made a quick and dirty IR LED circuit using alligator clips, IR LEDs and a 2x AAA
battery holder.
The circuit is a simple 3 volt series circuit with two IR LEDs and a power supply (See the napkin
circuit drawing below for more details). Connect an alligator clip, preferably a red one, to the power
lead from the battery holder. Connect the other end of the alligator clip to the positive leg of one of
the IR LEDs. Connect another alligator clip, preferably white or yellow, to the negative leg of the
same IR LED and the positive leg of the second IR LED. Finally, connect an alligator clip, preferably
black, to the negative leg of the second IR LED. The other end of the black alligator clip should be
connected to the negative lead from the battery holder.
You can test whether the IR LEDs are working by looking at them through a typical point-and-shoot
camera. Most such cameras are sensitive to IR light, so you should see a soft glow coming from both LEDs.
Wrap up the excess cable, wire tie the alligator clips to the arm of the glasses and the camera
armature. You can use wire ties to attach the alligator clips to the front of the camera. Bend the IR
LEDs so they are pointing in the same direction as the camera, bent in toward the eye. Make sure
the LED legs are not touching each other or any part of the camera circuit board. You can use
electrical tape to help keep all metal components electrically isolated from one another. You will
likely have to adjust the IR LEDs once you are looking at the eye in the Eye-Writer software in order
to get a strong illumination that removes shadows created by the eyelid, lashes and camera frame.
The Eye-Writer software has two parts: eye-tracking software designed for use with our low-cost
glasses, and eye-drawing software that interprets the eye movements.
The software for both parts has been developed using openFrameworks, a cross platform C++ library
for creative development. In order to compile and develop the Eye-Writer source code, you will
need to download openFrameworks (pre-release v0.06). Documentation, setup guides and more
information can be found at http://openframeworks.cc .
In order to use the PS3 Eye you will need to download a driver/component and install it.
For a Mac you will need to download the QuickTime component here and put it in
/Library/QuickTime on your hard drive.
Alternatively, if you plan to use another type of NTSC camera, you will need a video capture card.
We have successfully used the Pinnacle Dazzle USB DVD recorder.
To use this device you will need to install a PC driver or use VideoGlide. This software does require a
user license, which costs roughly $25.00 US.
The eye-tracking software detects and tracks the position of a pupil from an incoming camera or
video image, and uses a calibration sequence to map the tracked eye/pupil coordinates to positions
on a computer screen or projection. Note that we use the GSL (GNU Scientific Library) for calibration,
which is GPL-licensed; thus the eye-tracking source code is GPL as well.
The pupil tracking relies upon a clear and dark image of the pupil. The glasses we designed use near-
infrared LEDs to illuminate the eye and create a dark pupil effect. This makes the pupil much more
distinguishable and, thus, easier to track. The camera-settings part of the software lets you adjust the
image's brightness and contrast to get an optimal image of the eye.
The calibration part of the software displays a sequence of points on the screen and records the
position of the pupil at each point. It is designed so that a person wearing the glasses should focus
on each point as it is displayed. When the sequence is finished, the two sets of data are used to
interpolate where subsequent eye positions are located in relation to the screen.
The eye-drawing software is designed to work with the Eye-Writer tracking software as well as
commercial eye-trackers such as the Tobii eye tracker. It is currently a separate application from the
eye-tracker.
The tool allows you to draw, manipulate and style a tag using a time-based interface so that
triggering buttons or creating points for drawing is achieved by focusing on the position for a given
amount of time. Tags and tag data can also be uploaded via FTP and HTTP Post.
The Eye-Writer interface can be used to create drawings on screen, or using a small projector, you
can create drawings on the wall in a hospital room. We have also used the Eye-Writer software in
conjunction with a special version of the Laser Tag software to project Eye-Tags at large scale in
public space.
In order to create on-screen drawings you will simply need to follow the steps featured in the
previous section. This will work with both the Tobii system software and the Eye-Writer hardware
and software suite.
In order to create drawing using the Eye-Writer hardware/software or the Eye-Writer software/Tobii
system, you will need a digital projector and a projection screen or surface. You will need to
calibrate the Eye-Writer user to the projection surface. We have successfully experimented with
using a regular bed sheet as a projection screen.
To do remote projection you will need two computers, both connected to the internet, a mobile
broadcast system connected to one computer and the Laser Tag OSC receive software (super-beta).
We'll also use Sprint wireless broadband cards to create a wireless remote connection between the
two computers.
We are not currently supporting the Laser Tag OSC receive software, but you can download it and
hack around with it. In order to project the GML (Graffiti Markup Language) tags you will need to
drag the GML data into /bin/data and rename the file tempt.gml. The full Eye-Writer send/receive
software is expected to be released soon with instructions.
Chapter 4
Theoretical background about C++
Step 5: Open the Code::Blocks IDE and select Create a new project
Step 7: On the next screen, select the language you want to program in: C or C++
Step 8: Select the project location, click Next, and select the compiler to use. In our case we are going to
use the GNU GCC Compiler
Step 9: Write the program, then Build and Run.

Structure of a program
Probably the best way to start learning a programming language is by writing a
program. Therefore, here is our first program:
// my first program in C++
#include <iostream>
#include <string>
using namespace std;

int main () {
  cout << "Hello World!";
  return 0;
}
This is a comment line. All lines beginning with two slash signs (//) are considered comments and do not
have any effect on the behavior of the program. The programmer can use them to include short
explanations or observations within the source code itself. In this case, the line is a brief description of
what our program is.
#include
Lines beginning with a hash sign (#) are directives for the preprocessor. They are not regular code lines
with expressions but indications for the compiler’s preprocessor. In this case the directive #include tells
the preprocessor to include the iostream standard file. This specific file (iostream) includes the
declarations of the basic standard input-output library in C++, and it is included because its functionality
is going to be used later in the program.
All the elements of the standard C++ library are declared within what is called a namespace, the
namespace with the name std. So in order to access its functionality we declare with this expression that
we will be using these entities. This line is very frequent in C++ programs that use the standard library,
and in fact it will be included in most of the source codes included in these tutorials.
int main ()
This line corresponds to the beginning of the definition of the main function. The main function is the
point by where all C++ programs start their execution, independently of its location within the source
code. It does not matter whether there are other functions with other names defined before or after it –
the instructions contained within this function’s definition will always be the first ones to be executed in
any C++ program. For that same reason, it is essential that all C++ programs have a main function. The
word main is followed in the code by a pair of parentheses (()). That is because it is a function
declaration: In C++, what differentiates a function declaration from other types of expressions are these
parentheses that follow its name. Optionally, these parentheses may enclose a list of parameters within
them. Right after these parentheses we can find the body of the main function enclosed in braces ({}).
What is contained within these braces is what the function does when it is executed.
cout << "Hello World!";
This line is a C++ statement. A statement is a simple or compound expression that can actually produce
some effect. In fact, this statement performs the only action that generates a visible effect in our first
program. cout represents the standard output stream in C++, and the meaning of the entire statement
is to insert a sequence of characters (in this case the Hello World sequence of characters) into the
standard output stream (which usually is the screen). cout is declared in the iostream standard file
within the std namespace, so that's why we needed to include that specific file and to declare that we
were going to use this specific namespace earlier in our code. Notice that the statement ends with a
semicolon character (;). This character is used to mark the end of the statement and in fact it must be
included at the end of all expression statements in all C++ programs (one of the most common syntax
errors is indeed to forget to include some semicolon after a statement).
return 0;
The return statement causes the main function to finish. return may be followed by a return code (in
our example it is followed by the return code 0). A return code of 0 for the main function is generally
interpreted as meaning that the program worked as expected, without any errors during its execution.
This is the most usual way to end a C++ console program.
In order to use a variable in C++, we must first declare it specifying which data type we want it to be. The
syntax to declare a new variable is to write the specifier of the desired data type (like int, bool, float…)
followed by a valid variable identifier. For example:
int a;
float mynumber;
To see what variable declarations look like in action within a program, we are going to see the C++ code
of the example about your mental memory proposed at the beginning of this section:
// operating with variables
#include <iostream>
#include <string>
using namespace std;

int main ()
{
  // declaring variables:
  int a, b;
  int result;

  // process:
  a = 5;
  b = 2;
  a = a + 1;
  result = a - b;

  // print out the result:
  cout << result;

  return 0;
}

Executed program: 4
For example, if we want to declare an int variable called a initialized with a value of 0 at the moment in
which it is declared, we could write:
int a = 0;
// my first string
#include <iostream>
#include <string>
using namespace std;

int main ()
{
  string mystring = "This is a string";
  cout << mystring;
  return 0;
}
4.7 Operators
a = b;
This statement assigns to variable a (the lvalue) the value contained in variable b (the rvalue). The value
that was stored until this moment in a is not considered at all in this operation, and in fact that value is
lost.
For example, let us have a look at the following code – I have included the evolution of the content
stored in the variables as comments:
// assignment operator
#include <iostream>
#include <string>
using namespace std;

int main ()
{
  int a, b;   // a:?,  b:?
  a = 10;     // a:10, b:?
  b = 4;      // a:10, b:4
  a = b;      // a:4,  b:4
  b = 7;      // a:4,  b:7

  cout << a;
  cout << b;

  return 0;
}
The five arithmetical operations supported by C++ are:

+ addition
- subtraction
* multiplication
/ division
% modulus
4.7.3 Compound assignment (+=, -=, *=, /=, %=, >>=, <<=, &=, ^=, |=)
When we want to modify the value of a variable by performing an operation on the value currently
stored in that variable we can use compound assignment operators:
For example:
// compound assignment operators
#include <iostream>
#include <string>
using namespace std;

int main ()
{
  int a, b = 3;
  a = b;
  a += 2;   // equivalent to a = a + 2
  cout << a;
  return 0;
}

Executed program: 5
Shortening some expressions even more, the increase operator (++) and the decrease operator (--)
increase or reduce by one the value stored in a variable.
c++;
c+=1;
c=c+1;
are all equivalent in their functionality: the three of them increase by one the value of c.
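As a side note, the ++ operator can be written either before or after the variable, and when the result is used inside a larger expression the two forms behave differently. A minimal sketch (the function names are illustrative, not from the original text):

```cpp
// Postfix: the expression yields the value *before* the increment.
int postfix_result() {
    int c = 5;
    int b = c++;   // b takes 5, then c becomes 6
    return b;
}

// Prefix: the expression yields the value *after* the increment.
int prefix_result() {
    int c = 5;
    int b = ++c;   // c becomes 6 first, then b takes 6
    return b;
}
```

In a standalone statement such as c++; the distinction does not matter, which is why the three forms above are interchangeable there.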
In order to evaluate a comparison between two expressions we can use the relational and equality
operators. The result of a relational operation is a Boolean value that can only be true or false. The
relational and equality operators in C++ are:

== Equal to
!= Not equal to
>  Greater than
<  Less than
>= Greater than or equal to
<= Less than or equal to
The logical operators && and || are used when evaluating two expressions to obtain a single relational
result. The operator && corresponds to the Boolean logical operation AND. This operation results in true
if both its operands are true, and false otherwise. The following table shows the result of evaluating the
expression a && b:

a      b      a && b
true   true   true
true   false  false
false  true   false
false  false  false
The conditional operator evaluates an expression, returning one value if that expression evaluates to
true and a different one if it evaluates to false. Its format is:

condition ? result1 : result2

If condition is true the expression will return result1; if it is not, it will return result2.

7==5 ? 4 : 3     // returns 3, since 7 is not equal to 5
7==5+2 ? 4 : 3   // returns 4, since 7 is equal to 5+2
5>3 ? a : b      // returns the value of a, since 5 is greater than 3
a>b ? a : b      // returns whichever is greater, a or b
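The last pattern above, a>b ? a : b, is a compact way to pick the larger of two values; it can be wrapped in a small function (maxval is an illustrative name, not from the original text):

```cpp
// Returns the greater of a and b using the conditional operator.
int maxval(int a, int b) {
    return a > b ? a : b;   // if a>b yield a, otherwise yield b
}
```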
The if keyword is used to execute a statement or block only if a condition is fulfilled. Its form is:

if (condition) statement

For example:

if (x == 100)
  cout << "x is 100";

If we also want something to happen when the condition is not fulfilled, we can add an else part. For
example:

if (x == 100)
  cout << "x is 100";
else
  cout << "x is not 100";

The if + else structures can be concatenated to verify a range of values. The following example tells
whether the value currently stored in x is positive, negative, or neither (i.e., zero):

if (x > 0)
  cout << "x is positive";
else if (x < 0)
  cout << "x is negative";
else
  cout << "x is 0";
The purpose of loops is to repeat a statement a certain number of times or while a condition is fulfilled.
The format of the while loop is:

while (expression) statement

and its functionality is simply to repeat statement while the condition set in expression is true. For
example, we are going to make a program to count down using a while loop:

// custom countdown using while
#include <iostream>
#include <string>
using namespace std;

int main ()
{
  int n;
  cout << "Enter the starting number > ";
  cin >> n;

  while (n > 0) {
    cout << n << ", ";
    --n;
  }

  cout << "FIRE!";
  return 0;
}

Executed program (for an input of 8): 8, 7, 6, 5, 4, 3, 2, 1, FIRE!
The do-while loop's functionality is exactly the same as the while loop's, except that its condition is
evaluated after the execution of statement instead of before, guaranteeing at least one execution of
statement even if condition is never fulfilled. For example, the following program echoes any
number you enter until you enter 0:

// number echoer
#include <iostream>
using namespace std;

int main ()
{
  unsigned long n;
  do {
    cout << "Enter number (0 to end): ";
    cin >> n;
    cout << "You entered: " << n << "\n";
  } while (n != 0);
  return 0;
}

Executed program (final line): You entered: 0
The format of the for loop is:

for (initialization; condition; increase) statement;

Its main function is to repeat statement while condition remains true, like the while loop. But in
addition, the for loop provides specific locations for an initialization statement and an increase
statement, so this loop is specially designed to perform a repetitive action with a counter that is
initialized before the first iteration and updated on each iteration. For example, the countdown can be
written with a for loop:

// countdown using a for loop
#include <iostream>
using namespace std;

int main ()
{
  for (int n = 10; n > 0; n--) {
    cout << n << ", ";
  }
  cout << "FIRE!";
  return 0;
}
Using break we can leave a loop even if the condition for its end is not fulfilled. It can be used to end an
infinite loop, or to force it to end before its natural end. For example, we are going to stop the count
down before its natural end (maybe because of an engine check failure?):
// break loop example
#include <iostream>
using namespace std;

int main ()
{
  int n;
  for (n = 10; n > 0; n--) {
    cout << n << ", ";
    if (n == 3) {
      cout << "countdown aborted!";
      break;
    }
  }
  return 0;
}
The continue statement causes the program to skip the rest of the loop in the current iteration as if the
end of the statement block had been reached, causing it to jump to the start of the following iteration.
For example, we are going to skip the number 5 in our countdown:
// continue loop example
#include <iostream>
using namespace std;

int main ()
{
  for (int n = 10; n > 0; n--) {
    if (n == 5)
      continue;
    cout << n << ", ";
  }
  cout << "FIRE!";
  return 0;
}
goto allows making an absolute jump to another point in the program. You should use this feature with
caution, since its execution causes an unconditional jump that ignores any type of nesting limitation. The
destination point is identified by a label, which is then used as an argument for the goto statement. A
label is made of a valid identifier followed by a colon (:).

// goto loop example
#include <iostream>
using namespace std;

int main ()
{
  int n = 10;
loop:
  cout << n << ", ";
  n--;
  if (n > 0)
    goto loop;
  cout << "FIRE!";
  return 0;
}
The purpose of exit is to terminate the current program with a specific exit code. Its prototype (declared
in <cstdlib>) is:

void exit (int exitcode);
The switch statement evaluates an expression and executes the group of statements labelled with the
matching constant. Its form is:

switch (expression)
{
  case constant1:
    group of statements 1;
    break;
  case constant2:
    group of statements 2;
    break;
  default:
    default group of statements;
}
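To make the skeleton concrete, here is a small sketch (the function and its grading scheme are illustrative, not from the original text):

```cpp
// Converts a letter grade to grade points using switch:
// the matching case runs, and break prevents fall-through.
int grade_points(char grade) {
    switch (grade) {
        case 'A':
            return 4;
        case 'B':
            return 3;
        default:
            return 0;   // any other letter
    }
}
```

Without a break (or, as here, a return) at the end of a case, execution would fall through into the next case's statements.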
Using functions we can structure our programs in a more modular way, accessing all the potential that
structured programming can offer to us in C++.
A function is a group of statements that is executed when it is called from some point of the program.
The following is its format:

type name ( parameter1, parameter2, ... ) { statements }

where:
• type is the data type specifier of the data returned by the function.
• name is the identifier by which the function can be called.
• parameters (as many as needed): each parameter consists of a data type specifier followed by an
identifier, like any regular variable declaration (for example: int x), and acts within the function as a
regular local variable. Parameters allow arguments to be passed to the function when it is called. The
different parameters are separated by commas.
• statements is the function's body: a block of statements surrounded by braces { }.
// function example
#include <iostream>
using namespace std;

int addition (int a, int b)
{
  int r;
  r = a + b;
  return (r);
}

int main ()
{
  int z;
  z = addition (5, 3);
  cout << "The result is " << z;
  return 0;
}

Executed program: The result is 8
you will see that the declaration begins with a type, that is the type of the function itself (i.e., the type of
the datum that will be returned by the function with the return statement). But what if we want to
return no value?
Imagine that we want to make a function just to show a message on the screen. We do not need it to
return any value. In this case we should use the void type specifier for the function. This is a special
specifier that indicates absence of type.
// void function example
#include <iostream>
using namespace std;

void printmessage ()
{
  cout << "I'm a function!";
}

int main ()
{
  printmessage ();
  return 0;
}
If we want a function to be able to modify the variables passed to it, we must pass them by reference,
appending an ampersand (&) to the parameter type. For example, the following function doubles each
of its three arguments:

// passing parameters by reference
#include <iostream>
using namespace std;

void duplicate (int& a, int& b, int& c)
{
  a *= 2;
  b *= 2;
  c *= 2;
}

int main ()
{
  int x = 1, y = 3, z = 7;
  duplicate (x, y, z);
  cout << "x=" << x << ", y=" << y << ", z=" << z;
  return 0;
}

Executed program: x=2, y=6, z=14
When declaring a function we can specify a default value for each of the last parameters. This value will
be used if the corresponding argument is left blank when calling the function. To do that, we simply
use the assignment operator and a value for the arguments in the function declaration. If a value for
that parameter is not passed when the function is called, the default value is used; but if a value is
specified, the default value is ignored and the passed value is used instead. For example:
// default values in functions
#include <iostream>
using namespace std;

int divide (int a, int b = 2)
{
  int r;
  r = a / b;
  return (r);
}

int main ()
{
  cout << divide (12);      // b takes its default value, 2
  cout << "\n";
  cout << divide (20, 4);   // b takes the passed value, 4
  return 0;
}

Executed program: 6
5
In C++ two different functions can have the same name if their parameter types or number are different.
That means that you can give the same name to more than one function if they have either a different
number of parameters or different types in their parameters. For example:
// overloaded function
#include <iostream>
using namespace std;

int operate (int a, int b)
{
  return (a * b);
}

float operate (float a, float b)
{
  return (a / b);
}

int main ()
{
  int x = 5, y = 2;
  float n = 5.0, m = 2.0;
  cout << operate (x, y) << "\n";   // calls the int version
  cout << operate (n, m) << "\n";   // calls the float version
  return 0;
}

Executed program: 10
2.5
Until now, we have defined all of the functions before the first appearance of calls to them in the source
code. These calls were generally in function main which we have always left at the end of the source
code. If you try to repeat some of the examples of functions described so far, but placing the function
main before any of the other functions that were called from within it, you will most likely obtain
compiling errors. The reason is that to be able to call a function it must have been declared in some
earlier point of the code, like we have done in all our examples.
But there is an alternative way to avoid writing the whole code of a function before it can be used in
main or in some other function. This can be achieved by declaring just a prototype of the function
before it is used, instead of the entire definition. This declaration is shorter than the entire definition,
but significant enough for the compiler to determine its return type and the types of its parameters.
It is identical to a function definition, except that it does not include the body of the function itself (i.e.,
the function statements that in normal definitions are enclosed in braces { }) and instead of that we end
the prototype declaration with a mandatory semicolon (;).
The parameter enumeration does not need to include the identifiers, but only the type specifiers. The
inclusion of a name for each parameter as in the function definition is optional in the prototype
declaration. For example, we can declare a function called protofunction with two int parameters with
any of the following declarations:

int protofunction (int first, int second);
int protofunction (int, int);

Anyway, including a name for each parameter makes the prototype more legible.
// declaring function prototypes
#include <iostream>
using namespace std;

void odd (int a);    // prototype
void even (int a);   // prototype

int main ()
{
  int i;
  do {
    cout << "Type a number (0 to exit): ";
    cin >> i;
    odd (i);
  } while (i != 0);
  return 0;
}

void odd (int a)
{
  if ((a % 2) != 0)
    cout << "Number is odd.\n";
  else
    even (a);
}

void even (int a)
{
  if ((a % 2) == 0)
    cout << "Number is even.\n";
  else
    odd (a);
}

Sample output:
Number is odd.
Number is even.
Number is even.
Number is even.
4.16 Arrays
When declaring a regular array of local scope (within a function, for example), if we do not specify
otherwise, its elements will not be initialized to any value by default, so their content will be
undetermined until we store some value in them. The elements of global and static arrays, on the other
hand, are automatically initialized with their default values, which for all fundamental types this means
they are filled with zeros.
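The zero-initialization of global arrays described above can be checked directly; a small sketch (the names are illustrative, not from the original text):

```cpp
// Global array: its elements are automatically zero-initialized,
// unlike an uninitialized local array, whose content is undetermined.
int global_scores[4];

int first_global_score() {
    return global_scores[0];   // guaranteed to be 0
}
```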
In both cases, local and global, when we declare an array we have the possibility to assign initial values
to each one of its elements by enclosing the values in braces { }. For example:

int billy [5] = { 16, 2, 77, 40, 12071 };
In any point of a program in which an array is visible, we can access the value of any of its elements
individually as if it was a normal variable, thus being able to both read and modify its value. The format
is as simple as:
name[index]
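Both reading and modifying an element with the name[index] form can be sketched as follows (billy is an illustrative array name, and the values are arbitrary):

```cpp
// Writes to and then reads back one element of an array.
int third_element() {
    int billy[5] = { 16, 2, 77, 40, 12071 };
    billy[2] = 75;     // modify the third element (index 2)
    return billy[2];   // read it back, as if it were a normal variable
}
```

Note that indices start at 0, so billy[2] is the third element.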
Multidimensional arrays can be described as “arrays of arrays”. For example, a bidimensional array can
be imagined as a bidimensional table made of elements, all of them of a same uniform data type.
jimmy represents a bidimensional array of 3 per 5 elements of type int. The way to declare this array in
C++ would be:

int jimmy [3][5];
and, for example, the way to reference the second element vertically and fourth horizontally in an
expression would be:
jimmy[1][3]
Multidimensional arrays are not limited to two indices (i.e., two dimensions). They can contain as many
indices as needed. But be careful! The amount of memory needed for an array rapidly increases with
each dimension. For example:

char century [100][365][24][60][60];

declares an array with a char element for each second in a century; that is more than 3 billion chars, so
this declaration would consume more than 3 gigabytes of memory!
Multidimensional arrays are just an abstraction for programmers, since we can obtain the same results
with a simple one-dimensional array just by putting a factor between its indices: with the declaration
above, jimmy[n][m] addresses the same element as a plain array of 15 ints accessed at index [n*5+m].
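The "factor between the indices" idea can be sketched as a small helper (the 3-by-5 shape matches the jimmy example; the function name is illustrative):

```cpp
const int WIDTH = 5;   // number of columns, as in int jimmy[3][5]

// Maps a (row, column) pair of a 2-D array onto the offset
// of the equivalent flat one-dimensional array.
int flat_index(int n, int m) {
    return n * WIDTH + m;
}
```

So the element jimmy[1][3] corresponds to offset 8 of the flat array: both name the same position in row-major memory order.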
At some moment we may need to pass an array to a function as a parameter. In C++ it is not possible to
pass a complete block of memory by value as a parameter to a function, but we are allowed to pass its
address. In practice this has almost the same effect and it is a much faster and more efficient operation.
In order to accept arrays as parameters, the only thing we have to do when declaring the function is
to specify in its parameters the element type of the array, an identifier, and a pair of empty brackets [].
For example, the function:

void procedure (int arg[])

accepts a parameter of type "array of int" called arg. In order to pass to this function an array declared
as:

int myarray [40];

it would be enough to write a call like this:

procedure (myarray);
// arrays as parameters
#include <iostream>
using namespace std;

void printarray (int arg[], int length)
{
  for (int n = 0; n < length; n++)
    cout << arg[n] << " ";
  cout << "\n";
}

int main ()
{
  int firstarray[] = {5, 10, 15};
  int secondarray[] = {2, 4, 6, 8, 10};
  printarray (firstarray, 3);
  printarray (secondarray, 5);
  return 0;
}

Executed program: 5 10 15
2 4 6 8 10
As you may already know, the C++ Standard Library implements a powerful string class, which is very
useful to handle and manipulate strings of characters. However, because strings are in fact sequences of
characters, we can represent them also as plain arrays of char elements.
Because arrays of characters are ordinary arrays, they follow all the same rules. For example, if we want
to initialize an array of characters with some predetermined sequence of characters, we can do it just
like any other array:

char myword[] = { 'H', 'e', 'l', 'l', 'o', '\0' };

In this case we would have declared an array of 6 elements of type char initialized with the characters
that form the word "Hello" plus a null character '\0' at the end. But arrays of char elements have an
additional method to initialize their values: using string literals:

char myword[] = "Hello";

In both cases the array of characters myword is declared with a size of 6 elements of type char: the 5
characters that compose the word "Hello" plus a final null character ('\0') which specifies the end of the
sequence and that, in the second case (when using double quotes), is appended automatically.
Null-terminated sequences of characters are the natural way of treating strings in C++, so they can be
used as such in many procedures. In fact, regular string literals have this type (char[]) and can also be
used in most cases.
For example, cin and cout support null-terminated sequences as valid containers for sequences of
characters, so they can be used directly to extract strings of characters from cin or to insert them into
cout. For example:
// strings and user input
#include <iostream>
using namespace std;

int main ()
{
  char yourname[80];
  cout << "What's your name? ";
  cin >> yourname;
  cout << "Hello, " << yourname << "!";
  return 0;
}

Executed program (for input John): Hello, John!
4.18 Pointers
ted = &andy;
This would assign to ted the address of variable andy, since when preceding the name of the variable
andy with the reference operator (&) we are no longer talking about the content of the variable itself,
but about its reference (i.e., its address in memory).
We have just seen that a variable which stores a reference to another variable is called a pointer.
Pointers are said to “point to” the variable whose reference they store.
Using a pointer we can directly access the value stored in the variable which it points to. To do this, we
simply have to precede the pointer’s identifier with an asterisk (*), which acts as dereference operator
and that can be literally translated to “value pointed by”.
beth = *ted;   // beth gets the value pointed to by ted
Due to the ability of a pointer to directly refer to the value that it points to, it becomes necessary to
specify in its declaration which data type a pointer is going to point to. It is not the same thing to point
to a char as to point to an int or a float.
type * name;
where type is the data type of the value that the pointer is intended to point to. This type is not the type
of the pointer itself! But the type of the data the pointer points to. For example:
int * number;
char * character;
float * greatnumber;
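Putting the reference operator (&) and the dereference operator (*) together, and reusing the andy/ted names from the explanation above:

```cpp
// Stores the address of a variable in a pointer, then reads the
// pointed-to value back through it.
int pointed_value() {
    int andy = 25;
    int* ted = &andy;   // ted holds the address of andy
    return *ted;        // "value pointed by" ted, i.e. the content of andy
}
```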
4.19 Classes
Classes are generally declared using the keyword class, with the following format:
class class_name {
access_specifier_1:
member1;
access_specifier_2:
member2;
} object_names;
Where class_name is a valid identifier for the class, object_names is an optional list of names for objects
of this class. The body of the declaration can contain members, that can be either data or function
declarations, and optionally access specifiers.
This is all very similar to the declaration of data structures, except that we can now also include
functions and this new thing called an access specifier. An access specifier is one of the following
three keywords: private, public or protected. These specifiers modify the access rights that the members
following them acquire:
• private members of a class are accessible only from within other members of the same class or from
their friends.
• protected members are accessible from members of their same class and from their friends, but also
from members of their derived classes.
• Finally, public members are accessible from anywhere where the object is visible.
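A minimal sketch of these access rules (Counter is a hypothetical class, not from the text): the private member can only be reached through the public member functions.

```cpp
class Counter {
  private:
    int value;                        // not accessible from outside the class
  public:
    void set(int v) { value = v; }    // public interface to write value
    int get() { return value; }       // public interface to read value
};
```

Given Counter c; the calls c.set(3) and c.get() are legal, while c.value = 3 would not compile, because value is private.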
Objects generally need to initialize variables or assign dynamic memory during their process of creation
to become operative and to avoid returning unexpected values during their execution. For example,
what would happen if in the previous example we called the member function area() before having
called function set_values()? Probably we would have gotten an undetermined result since the members
x and y would have never been assigned a value.
In order to avoid that, a class can include a special function called constructor, which is automatically
called whenever a new object of this class is created. This constructor function must have the same
name as the class, and cannot have any return type; not even void.
// example: class constructor
#include <iostream>
using namespace std;

class Crectangle {
    int width, height;
  public:
    Crectangle (int, int);
    int area () {
      return (width * height);
    }
};

Crectangle::Crectangle (int a, int b) {
  width = a;
  height = b;
}

int main () {
  Crectangle rect (3, 4);
  Crectangle rectb (5, 6);
  cout << "rect area: " << rect.area() << "\n";
  cout << "rectb area: " << rectb.area() << "\n";
  return 0;
}

Executed program:
rect area: 12
rectb area: 30
In principle, private and protected members of a class cannot be accessed from outside the same class
in which they are declared. However, this rule does not affect friends.
If we want to declare an external function as friend of a class, thus allowing this function to have access
to the private and protected members of this class, we do it by declaring a prototype of this external
function within the class, and preceding it with the keyword friend:
// friend functions
#include <iostream>
using namespace std;

class Crectangle {
    int width, height;
  public:
    void set_values (int, int);
    int area () {
      return (width * height);
    }
    friend Crectangle duplicate (Crectangle);
};

void Crectangle::set_values (int a, int b) {
  width = a;
  height = b;
}

Crectangle duplicate (Crectangle rectparam)
{
  Crectangle rectres;
  rectres.width = rectparam.width * 2;
  rectres.height = rectparam.height * 2;
  return (rectres);
}

int main ()
{
  Crectangle rect, rectb;
  rect.set_values (2, 3);
  rectb = duplicate (rect);
  cout << rectb.area();
  return 0;
}

Executed program: 24
Just as we have the possibility to define a friend function, we can also define a class as friend of another
one, granting that first class access to the protected and private members of the second one.
// friend class
#include <iostream>
using namespace std;

class Csquare;

class Crectangle {
    int width, height;
  public:
    int area () {
      return (width * height);
    }
    void convert (Csquare a);
};

class Csquare {
  private:
    int side;
  public:
    void set_side (int a) {
      side = a;
    }
    friend class Crectangle;
};

void Crectangle::convert (Csquare a) {
  width = a.side;
  height = a.side;
}

int main () {
  Csquare sqr;
  Crectangle rect;
  sqr.set_side (4);
  rect.convert (sqr);
  cout << rect.area();
  return 0;
}

Executed program: 16
Classes that are derived from others inherit all the accessible members of the base class. That means
that if a base class includes a member A and we derive it to another class with another member called B,
the derived class will contain both members A and B.
In order to derive a class from another, we use a colon (:) in the declaration of the derived class, using
the following format:

class derived_class_name: public base_class_name { /* ... */ };
// derived classes
#include <iostream>
using namespace std;

class Cpolygon {
  protected:
    int width, height;
  public:
    void set_values (int a, int b) {
      width = a;
      height = b;
    }
};

class Crectangle: public Cpolygon {
  public:
    int area () {
      return (width * height);
    }
};

class Ctriangle: public Cpolygon {
  public:
    int area () {
      return (width * height / 2);
    }
};

int main () {
  Crectangle rect;
  Ctriangle trgl;
  rect.set_values (4, 5);
  trgl.set_values (4, 5);
  cout << rect.area() << "\n";
  cout << trgl.area() << "\n";
  return 0;
}

Executed program: 20
10
In C++ it is perfectly possible that a class inherits members from more than one class. This is done by
simply separating the different base classes with commas in the derived class declaration. For example,
if we had a specific class to print on screen (Coutput) and we wanted our classes Crectangle and
Ctriangle to also inherit its members in addition to those of Cpolygon we could write:
// multiple inheritance
#include <iostream>
using namespace std;

class Cpolygon {
  protected:
    int width, height;
  public:
    void set_values (int a, int b) {
      width = a;
      height = b;
    }
};

class Coutput {
  public:
    void output (int i) {
      cout << i << "\n";
    }
};

class Crectangle: public Cpolygon, public Coutput {
  public:
    int area () {
      return (width * height);
    }
};

class Ctriangle: public Cpolygon, public Coutput {
  public:
    int area () {
      return (width * height / 2);
    }
};

int main () {
  Crectangle rect;
  Ctriangle trgl;
  rect.set_values (4, 5);
  trgl.set_values (4, 5);
  rect.output (rect.area());
  trgl.output (trgl.area());
  return 0;
}

Executed program: 20
10
When dealing with classes we usually split the program into three files. The first, called main.cpp and
known as the implementation file, contains the function calls and the core parts of the program,
including the variables entered by the user or the programmer. The second file is named after the class
and follows the formula Class-name.cpp; it contains the definitions of the class's member functions,
constructor, and destructor. The third and last file follows the formula Class-name.h and is called the
header file; it simply contains the class declaration and the prototypes of the member functions,
constructor, and destructor.
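The three-file split can be sketched in a single listing, with comments marking where each part would live (Rectangle is a hypothetical class name used only for illustration):

```cpp
// ---- Rectangle.h (header file: class declaration and prototypes) ----
class Rectangle {
    int width, height;
  public:
    Rectangle(int, int);   // constructor prototype
    int area();            // member function prototype
};

// ---- Rectangle.cpp (class file: definitions of the members) ----
Rectangle::Rectangle(int w, int h) {
    width = w;
    height = h;
}

int Rectangle::area() {
    return width * height;
}

// ---- main.cpp (implementation file: the core program) ----
// int main () {
//     Rectangle r(2, 3);
//     // use r.area() here
//     return 0;
// }
```

In a real project each marked part goes in its own file, main.cpp and Rectangle.cpp both #include "Rectangle.h", and the two .cpp files are compiled and linked together.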
Chapter 5
Theoretical background about OpenCV
OpenCV is an open-source computer vision and machine learning software
library. OpenCV was built to provide a common infrastructure for computer
vision applications and to accelerate the use of machine perception in
commercial products. Being a BSD-licensed product, OpenCV makes it easy for
businesses to utilize and modify the code.
This OpenCV course shows you how to modify the nature of an image; how to
load videos saved on your computer, taken from a live capture device, or
streamed live from a website, and feed them into your own program so that you
can do live processing. You'll learn how to compute the DFT (Discrete Fourier
Transform) of an image so that you can process it more easily; how to separate
a colored image into three gray-scale images representing the red, blue, and
green components of the original image; and how to merge them again into one
single image, or merge two of them into one three-channel image. The course
also covers how to set up OpenCV in your Visual Studio project and how to add
libraries and separate classes that have special uses in the image processing
field. You'll learn how to calibrate a camera, which is vital for any real-life
image processing application, as well as the principles of detecting specific
objects and determining the three main dimensions (x, y, z) of the ArUco
markers you insert into your program.
5.2 Building OpenCV from Source Using CMake, Using the Command Line
1. Create a temporary directory, which we denote as , where you want to put the generated Makefiles
and project files, as well as the object files and output binaries.
For example
cd ~/opencv
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make
• The easiest way of using OpenCV in your code is to use CMake. A few advantages (taken from the
Wiki):
1. No need to change anything when porting between Linux and Windows
2. Can easily be combined with other tools by CMake (i.e. Qt, ITK and VTK)
• If you are not familiar with CMake, check out the tutorial on its website
Steps:
#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char** argv)
{
    if (argc != 2) { printf("usage: DisplayImage <Image_Path>\n"); return -1; }
    Mat image = imread(argv[1], 1);          // load the image given on the command line
    if (!image.data) { printf("No image data\n"); return -1; }
    imshow("Display Image", image);          // show it in a window
    waitKey(0);                              // wait for a key press
    return 0;
}
Now you have to create your CMakeLists.txt file. It should look like this:
cmake_minimum_required(VERSION 2.8)
project( DisplayImage )
find_package( OpenCV REQUIRED )
add_executable( DisplayImage DisplayImage.cpp )
target_link_libraries( DisplayImage ${OpenCV_LIBS} )
This part is easy, just proceed as with any other project using CMake:
cd <DisplayImage_directory>
cmake .
make
4. Result
By now you should have an executable (called DisplayImage in this case). You just have to run it giving
an image location as an argument, i.e.:
./DisplayImage lena.jpg
3. Make sure you have admin rights. Unpack the self-extracting archive.
4. You can check the installation at the chosen path as you can see below.
5. To finalize the installation, go to the Set the OpenCV environment variable and add it to the system's
path section.
5.3.2 Installation by Making Your Own Libraries from the Source Files (Building the library)
1. Make sure you have a working IDE with a valid compiler. In case of the Microsoft Visual Studio just
install it and make sure it starts up.
2. Install CMake. Simply follow the wizard; no need to add it to the path. The default install options are
OK.
3. Download and install an up-to-date version of msysgit from its official site. There is also a portable
version, which you only need to unpack to get access to the console version of Git; for some users that
could be quite enough.
4. Install TortoiseGit. Choose the 32 or 64 bit version according to the type of OS you work in. While
installing, locate your msysgit (if it doesn’t do that automatically). Follow the wizard – the default
options are OK for the most part.
5. Choose a directory in your file system where you will download the OpenCV libraries to. I
recommend creating a new one that has a short path and no special characters in it, for example
D:/OpenCV. For this tutorial I suggest you do so; if you use your own path and know what you're
doing, that's OK.
(a) Clone the repository to the selected directory. After clicking Clone button, a window will appear
where you can select from what repository you want to download source files
(https://github.com/opencv/opencv.git) and to what directory (D:/OpenCV).
(b) Push the OK button and be patient as the repository is quite a heavy download. It will take some
time depending on your Internet connection.
(a) Download the Python libraries and install them with the default options. You will need a couple of
other Python extensions. Luckily, installing all these may be automated by a nice tool called Setuptools.
Download and install it as well.
(b) Installing Sphinx is easy once you have installed Setuptools. Setuptools contains a little application
that will automatically connect to the Python package index and download the latest version of many
Python scripts. Start up a command window (enter cmd into the Windows start menu and press Enter)
and use the cd command to navigate to your Python folder's Scripts sub-folder. Here, just pass the
name of the program you want to install as an argument to easy_install.exe; add the sphinx argument.
(c) The easiest way to install NumPy is to just download its binaries from its SourceForge page. Make
sure you download and install exactly the binary for your Python version (so for version 2.7).
(d) Download MiKTeX and install it. Again, just follow the wizard. At the fourth step make sure you
select the Yes option for "Install missing packages on-the-fly", as you can see in the image below.
Again, this will take quite some time, so be patient.
(e) For the Intel® Threading Building Blocks (TBB), download the source files and extract them inside a
directory on your system, for example D:/OpenCV/dep. For installing the Intel® Integrated
Performance Primitives (IPP) the story is the same. For extracting the archives I recommend using the
7-Zip application.
(f) In case of the Eigen library it is again a case of download and extract to the D:/OpenCV/dep directory.
(h) For the OpenNI prebuilt binaries you need to install both the development build and the
PrimeSensor Module.
(i) For CUDA you again need two modules: the latest CUDA Toolkit and the CUDA Tools SDK.
Download and install both of them with the complete option, using the 32- or 64-bit setups according to
your OS.
(j) In case of the Qt framework you need to build the binary files yourself (unless you use Microsoft
Visual Studio 2008 with the 32-bit compiler). To do this, go to the Qt Downloads page and download the
source files (not the installers!):
7. Now start CMake (cmake-gui). You may again enter it in the start menu search or get it from
All Programs → CMake 2.8 → CMake (cmake-gui). First, select the directory for the source files of the
OpenCV library (1). Then, specify a directory where you will build the binary files for OpenCV (2).
5.4 Image Watch: viewing in-memory images in the Visual Studio debugger
5.4.1 Prerequisites
3. Ability to create and build OpenCV projects in Visual Studio (Tutorial: How to build applications with
OpenCV inside the Microsoft Visual Studio).
5.4.2 Installation
Download the Image Watch installer. The installer comes in a single file with extension .vsix (Visual
Studio Extension). To launch it, simply double-click on the .vsix file in Windows Explorer. When the
installer has finished, make sure to restart Visual Studio to complete the installation.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    if( argc != 2)
    {
        cout << " Usage: display_image ImageToLoadAndDisplay" << endl;
        return -1;
    }

    Mat image = imread(argv[1], CV_LOAD_IMAGE_COLOR);  // read the file

    if( !image.data )                                  // check for invalid input
    {
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }

    imshow( "Display window", image );                 // show the image
    waitKey(0);                                        // wait for a keystroke
    return 0;
}
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;

int main( int argc, char** argv )
{
    Mat image = imread( argv[1], 1 );
    if( argc != 2 || !image.data )
    {
        printf( " No image data \n " );
        return -1;
    }
    Mat gray_image;
    cvtColor( image, gray_image, CV_BGR2GRAY );   // convert to gray-scale
    imwrite( "Gray_Image.jpg", gray_image );      // save the result
    imshow( "Gray image", gray_image );
    waitKey(0);
    return 0;
}
When you run your program you should get something like this:
When it comes to performance you cannot beat the classic C style operator[] (pointer) access.
Therefore, the most efficient method we can recommend for making the assignment is:
Mat& ScanImageAndReduceC(Mat& I, const uchar* const table)
{
    CV_Assert(I.depth() == CV_8U);      // accept only uchar matrices

    int nRows = I.rows;
    int nCols = I.cols * I.channels();
    if (I.isContinuous()) { nCols *= nRows; nRows = 1; }

    for (int i = 0; i < nRows; ++i) {
        uchar* p = I.ptr<uchar>(i);
        for (int j = 0; j < nCols; ++j)
            p[j] = table[p[j]];
    }
    return I;
}
Mat& ScanImageAndReduceIterator(Mat& I, const uchar* const table)
{
    CV_Assert(I.depth() == CV_8U);   // accept only uchar matrices

    switch(I.channels())
    {
    case 1:
        for (MatIterator_<uchar> it = I.begin<uchar>(); it != I.end<uchar>(); ++it)
            *it = table[*it];
        break;
    case 3:
        for (MatIterator_<Vec3b> it = I.begin<Vec3b>(); it != I.end<Vec3b>(); ++it)
        {
            (*it)[0] = table[(*it)[0]];
            (*it)[1] = table[(*it)[1]];
            (*it)[2] = table[(*it)[2]];
        }
        break;
    }
    return I;
}
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
if( I.empty())
return -1;
Mat complexI;
dft(complexI, complexI); // this way the result may fit in the source matrix
log(magI, magI);
// rearrange the quadrants of Fourier image so that the origin is at the image center
int cx = magI.cols/2;
int cy = magI.rows/2;
Mat q0(magI, Rect(0, 0, cx, cy));   // Top-Left – Create a ROI per quadrant
Mat q1(magI, Rect(cx, 0, cx, cy));  // Top-Right
Mat q2(magI, Rect(0, cy, cx, cy));  // Bottom-Left
Mat q3(magI, Rect(cx, cy, cx, cy)); // Bottom-Right

Mat tmp;                            // swap quadrants (Top-Left with Bottom-Right)
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);

q1.copyTo(tmp);                     // swap quadrants (Top-Right with Bottom-Left)
q2.copyTo(q1);
tmp.copyTo(q2);
normalize(magI, magI, 0, 1, CV_MINMAX); // Transform the matrix with float values into a viewable image form (float between values 0 and 1)
waitKey();
return 0;
#include <sstream>
#include <string>
#include <iomanip>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
help();
if (argc != 5)
return -1;
stringstream conv;
conv << argv[3] << endl << argv[4]; // put in the strings
char c;
if (!captRefrnc.isOpened())
cout << "Could not open reference " << sourceReference << endl;
return -1;
if (!captUndTst.isOpened())
cout << "Could not open case test " << sourceCompareWith << endl;
return -1;
(int) captRefrnc.get(CV_CAP_PROP_FRAME_HEIGHT)),
(int) captUndTst.get(CV_CAP_PROP_FRAME_HEIGHT));
if (refS != uTSi)
// Windows
namedWindow(WIN_RF, CV_WINDOW_AUTOSIZE);
namedWindow(WIN_UT, CV_WINDOW_AUTOSIZE);
cout << "Reference frame resolution: Width=" << refS.width << " Height=" << refS.height
double psnrV;
Scalar mssimV;
if (frameReference.empty() || frameUnderTest.empty())
cout << " < < < Game over! > > > ";
break;
++frameNum;
psnrV = getPSNR(frameReference,frameUnderTest);
<< " R " << setiosflags(ios::fixed) << setprecision(2) << mssimV.val[2] * 100 << "%"
<< " G " << setiosflags(ios::fixed) << setprecision(2) << mssimV.val[1] * 100 << "%"
<< " B " << setiosflags(ios::fixed) << setprecision(2) << mssimV.val[0] * 100 << "%";
imshow(WIN_RF, frameReference);
imshow(WIN_UT, frameUnderTest);
c = (char)cvWaitKey(delay);
if (c == 27) break;
return 0;
Mat s1;
return 0;
else
return psnr;
int d = CV_32F;
i2.convertTo(I2, d);
sigma1_2 -= mu1_2;
sigma2_2 -= mu2_2;
sigma12 -= mu1_mu2;
t1 = 2 * mu1_mu2 + C1;
t2 = 2 * sigma12 + C2;
Mat ssim_map;
VideoCapture captRefrnc(sourceReference);
// or
VideoCapture captUndTst;
captUndTst.open(sourceCompareWith);
if ( !captRefrnc.isOpened())
cout << "Could not open reference " << sourceReference << endl;
return -1;
(int) captRefrnc.get(CV_CAP_PROP_FRAME_HEIGHT)),
cout << "Reference frame resolution: Width=" << refS.width << " Height=" << refS.height
// now a read operation would read the frame at the set position
Settings s;
if (!fs.isOpened())
cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
return -1;
fs["Settings"] >> s;
if (!s.goodInput)
return -1;
for(int i = 0;;++i)
Mat view;
view = s.nextImage(); //----- If no more image, or got enough, then stop calibration and show result -------
if( mode == CAPTURING && imagePoints.size() >= (unsigned)s.nrFrames )
mode = CALIBRATED;
else
mode = DETECTION;
if(view.empty()) // If no more images then run calibration, save and stop loop.
break;
if( s.flipVertical )
vector<Point2f> pointBuf;
bool found;
case Settings::CHESSBOARD:
break;
case Settings::CIRCLES_GRID:
break;
case Settings::ASYMMETRIC_CIRCLES_GRID:
break;
// we will draw the found points on the input image using findChessboardCorners function.
Mat viewGray;
if( mode == CAPTURING && // For camera only take new samples after delay time
{ imagePoints.push_back(pointBuf);
prevTimestamp = clock();
blinkOutput = s.inputCapture.isOpened();
int baseLine = 0;
if(s.showUndistorsed)
else
if( blinkOutput )
bitwise_not(view, view);
//If we ran calibration and got camera’s matrix with the distortion coefficients we may want to correct
//the image using undistort function:
break;
s.showUndistorsed = !s.showUndistorsed;
mode = CAPTURING;
imagePoints.clear();
if(view.empty())
continue;
char c = waitKey();
break;
5.12.1 Code
vector<float> reprojErrs;
double totalAvgErr = 0;
cout << (ok ? "Calibration succeeded" : "Calibration failed") << ". avg re projection error = " << totalAvgErr;
return ok;
corners.clear();
switch(patternType)
case Settings::CHESSBOARD:
case Settings::CIRCLES_GRID:
break;
case Settings::ASYMMETRIC_CIRCLES_GRID:
break;
vector<Point2f> imagePoints2;
int i, totalPoints = 0;
perViewErrors.resize(objectPoints.size());
distCoeffs, imagePoints2);
int n = (int)objectPoints[i].size();
totalPoints += n; }
5.12.2 Results
Chapter 6
Theoretical background about openFrameworks
It’s an open source C++ toolkit designed to assist the creative process by providing
a simple and intuitive framework for experimentation.
When you hit Generate, these addons will be added to the project.
You can also import an existing project and change the addons it is using, for example
adding or removing addons it needs.
ofxXmlSettings
ofxGui
If you want to save many files, each file will need to have its own unique file name. A quick
way of doing this is to use the current timestamp, because it is never the same. So instead
of naming the file "myFile.xml", which will overwrite itself every time you save, you can
do "myFile_" + ofGetTimestampString() + ".xml" to give each file its own name.
You can save a file anywhere in your application, but you may want to trigger it at a
specific moment. You might want your file to save every time you press a specific key.
if(key == 's'){
Note that exit() will fire automatically when you close your app or press Esc, but not if you stop
the app from the IDE.
A Text File
in the header file (.h)
ofFile myTextFile;
myTextFile.open("text.txt", ofFile::WriteOnly);
myTextFile.open("text.txt", ofFile::Append);
To add text.
XML Settings
in the header file (.h)
Include the XML addon at the top:
#include "ofxXmlSettings.h"
ofxXmlSettings XML;
XML.setValue("settings:number", 11);
Save it:
XML.saveFile("mySettings.xml");
An Image
in the header file (.h)
ofImage img;
//in draw
ofSetColor(255,130,0);
ofFill();
ofDrawCircle(100,100,50);
// in keyPressed
img.grabScreen(0,0,300,300);
Then trigger a save in your location of choice. Perhaps in the keyPressed or exit functions.
You can save as either a .png or a .jpg.
img.save("myPic.jpg");
Optionally you can specify the quality at which you wish to save by adding an additional
parameter.
img.save("myPic.jpg", OF_IMAGE_QUALITY_LOW);
The ofSetupOpenGL method allows you to specify how you want your project displayed on
screen.
The first two parameters specify the width and height of the window:
With the third parameter, you can specify how you want the window to be displayed using
three possible modes:
OF_WINDOW mode will create a free floating window of the size specified by width and height.
OF_FULLSCREEN mode will display your project in the top left corner of the screen at the size
specified by width and height, with the rest of the screen a solid grey.
Alternatively, you can set the window size (and position) in the setup function of ofApp,
which gets called right at the start as the app launches:
void ofApp::setup(){
ofSetWindowShape(500, 500);
ofSetWindowPosition(10, 10);
Console output
The following examples will show you how you can create output at the console.
Using std::cout
Probably the simplest way is to use std::cout. This command lets you combine different
types of values with strings. Appending endl creates a line break.
Output:
value: 0.2
Using std::printf
printf can be used to force all sorts of different output formats. %f is a placeholder for a float
variable you are appending. %.0f and %.3f set the decimal places of the printed
value. \n creates a line break. Have a look at the reference for details and examples.
Output:
value: 0.200000
value: 0
value: 0.200
Using ofLog()
The best way to integrate with the openFrameworks workflow is to use the built-in
logging functions. There are different log levels and different ways of usage – have a
look at the ofLog() documentation. Here is one example:
Output:
ofLogToFile("myLogFile.txt", true);
Graphical output
Drawing Text
Drawing text to the screen is as simple as this:
void draw() {
ofBackground(ofColor::black);
void draw() {
ofSetColor(ofColor::white);
Using ofxGui
Another nice way of viewing your variable that also gives you the ability to change it is
using ofParameter and the core addon ofxGui. Read how to add an existing addon for
details on how to add ofxGui to your project.
In the header file, wrap your variable with ofParameter. You can still work with this variable
as you are used to, but it also lets you add listeners to the variable or add the
variable to a GUI that will interact with it.
//ofApp.h
#include "ofxGui.h"
#include "ofMain.h"
..
ofParameter<float> value;
ofxPanel gui;
In the source file, you can give the value a name, a default value and minimal / maximal
borders (in case of numerical values). You have to setup the GUI and add the value, draw
the GUI. You will then be able to interact with the value.
//ofApp.cpp
void setup() {
gui.setup();
gui.add(value);
void draw(){
gui.draw();
You can use the new project generator on most platforms (osx, windows, linux)
It’s recommended that you make your project within the openFrameworks folder in a
subfolder of “apps”. For example, I could make a project called simpleSketch and it could go
in apps/myApps so the full path from the root of openFrameworks is
apps/myApps/simpleSketch. You can also create folders inside “apps” to organize, for
example if you have different projects you are working on.
You may have to adjust the settings as to where openFrameworks is located on your hard
drive
One important note is that openFrameworks projects work relatively, meaning, the project
looks for the libs folder in openFrameworks in a relative manner, ie ../../../libs. If you create a
project, it should always live that level deep from the root of the openFrameworks folder.
Alternatively, you can update it using the project generator if you want to change its folder
height.
Creating a GUI slider is very simple. You simply generate a project with the GUI addon,
initialize an ofxFloatSlider and gui, draw the gui, and link the slider to a specific variable.
When you open your app in Xcode, you should see the GUI addon source files here:
#include "ofxGui.h"
Initialize a slider and a panel. Here we will use an ofxFloatSlider named radius to control the size of
a circle. If you wish to work with integers, use ofxIntSlider.
ofxFloatSlider radius;
ofxPanel gui;
void ofApp::setup(){
gui.setup();
For the sake of example, draw a circle in the draw() function and pass the variable ‘radius’
as the third parameter.
void ofApp::draw(){
gui.draw();
When you run the app, move the radius slider back and forth to change the size of the
circle.
6.5 Graphics
Select images to load and display. Images can be of .gif, .jpg, or .png file format.
Create a new folder in the bin/data folder of your OF project, name it “images” and drop
your images in it.
Add an instance variable of type ofImage for each image you wish to load.
ofImage bikers;
ofImage bikeIcon;
Load the images by calling the load() method of ofImage, with the relative path to the
image:
example:
void ofApp::setup(){
bikers.load("images/bikers.jpg");
bikeIcon.load("images/bike_icon.png");
Display the images by calling the draw() method of ofImage, positioning them on the
stage by specifying their horizontal and vertical coordinate positions. The coordinate
positions reference by default the top left corner of the image.
imageName.draw(xPosition, yPosition)
example:
void ofApp::draw(){
bikers.draw(0, 0);
bikeIcon.draw(190, 490);
Additionally, you can resize images by specifying the new width and height of the displayed
image.
example:
void ofApp::draw(){
Since 0.9.0 openFrameworks has had the ability to set the alpha mask for a texture with
another texture. In this example, we draw a path into an FBO (offscreen image) and then
pass the result to the image we want to mask.
ofPath path;
ofImage img;
ofFbo fbo;
void setup(){
path.lineTo(…);
fbo.begin();
path.draw();
fbo.end();
img.getTexture().setAlphaMask(fbo.getTexture());
void draw(){
img.draw();
Creating a screenshot of your work is very simple. You simply initialize an ofImage, draw
something, and then use img.grabScreen(); to capture what you drew.
ofImage img;
void ofApp::draw(){
Next, trigger grabbing and saving the screen. Here, when “x” is pressed, a rectangle starting
at point (0,0) with a width and height of ofGetWidth() and ofGetHeight() is grabbed and
saved.
if(key == 'x'){
img.save("screenshot.png");
After adding this to any of your apps, press “x” and a screenshot of your work will save to
the bin >> data folder within your specific app folder.
Note!: This page is useful only if you are working with the current master branch. You can
skip this information if you are working with the version 0.9.3 that you have downloaded
from this website.
The next version of openFrameworks will replace the internal math library with GLM. GLM is
a solid C++ library used for all the math operations needed when doing vector and
matrix operations. The use of this library implies some changes in the syntax used to
declare vectors and to execute vector operations. The legacy mode is still supported, but
the new mode, enabled by default, uses the new glm syntax.
If you are not interested using this library and you want to continue using the syntax you
were used to, or if you want to run an old project using the last openFrameworks master
branch, you can define the OF_USE_LEGACY_MESH constant in ofConstants.h. Doing
this, glm will be disabled for ofPolyline and ofMesh.
Instead, if you want to use GLM and prepare yourself for what will be the future syntax
adopted by openFrameworks, these are the things that are changed:
ofVec3f myVector;
ofVec2f my2dVector;
To:
glm::vec3 myVector;
glm::vec2 my2dVector;
When the vectors come, for example, from a mesh, you don't have to worry about the type
declaration: since C++11 you have the wonderful auto keyword, which tells the compiler
to figure out the type for you. For example, if you have code like:
And you want to migrate it to the new mode, just use auto
Method names
Methods are now plain C functions. Most have the exact same name but without
camel case, and others are slightly different:
v.length()
becomes
glm::length(v)
And
v.squaredLength()
becomes
glm::length2(v)
And
a.getInterpolated(b, 0.5);
becomes
glm::mix(a, b, 0.5);
And
v.getMiddle(v1)
becomes
glm::mix(v, v1, 0.5f)
Containers
When using the pure glm mode (enabled by default right now) you’ll usually need to change
any container of ofVec to a container of glm::vec.
vector<ofVec3f> myContainer
becomes
vector<glm::vec3> myContainer
As said before, if it's in a function, using auto is the best solution, since the compiler can
automatically detect the type and it will work for both modes, the glm one and the legacy
one.
Transitional typedefs
There are also some typedefs that can be used to make containers compatible with both the legacy and the GLM mode:
std::vector<ofDefaultVec3>
_________________________________________________________________________
150
Chapter 6
______________________________________________________________________________
camera.getGlobalPosition().distance(node.getGlobalPosition());
becomes
glm::distance(camera.getGlobalPosition(), node.getGlobalPosition());
or
cameraPos.distance(node.getGlobalPosition());
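glm::distance(a, b) simply returns the length of the difference of the two points. A standalone sketch of that computation (the Vec3 struct is a stand-in for glm::vec3):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// What glm::distance(a, b) computes: the Euclidean distance between two
// points, i.e. the length of (b - a).
float distance(Vec3 a, Vec3 b) {
    float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```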
Create a new folder in the bin/data folder of your OF project, name it "movies", and drop your video into it.
Add an instance variable of type ofVideoPlayer for the video you wish to load.
ofVideoPlayer fingerMovie;
Load the video by calling the load() method of ofVideoPlayer, with the relative path to the
video:
videoPlayer.play();
Example:
void ofApp::setup(){
    fingerMovie.load("movies/fingers.mov");
    fingerMovie.play();
}
videoPlayer.update();
Example:
void ofApp::update(){
    fingerMovie.update();
}
Loading and playing sound is very simple. You simply initialize an ofSoundPlayer, load the sound file, and play it.
ofSoundPlayer mySound;
void ofApp::setup(){
    mySound.load("openFram.mp3");
}
Next, play the sound file. If you add this to the setup method, the sound will play once right when you start your app. You can also set your sound to loop if you want it to play continuously.
void ofApp::setup(){
    mySound.load("openFram.mp3");
    mySound.play();
}
You can also trigger the play function on a mouse press, key press, mouse drag, etc. For example:
if(key == 'p'){
    mySound.play();
}
Additional Resources
For more information on how to manipulate sound files using OF, see the sound chapter of the OF book.
#include "ofxAssimpModelLoader.h"
ofxAssimpModelLoader yourModel;
Then, in your ofApp.cpp file, you load the model and draw it like this:
void ofApp::setup(){
    yourModel.loadModel("squirrel/NewSquirrel.3ds", 20);
}
void ofApp::draw(){
    yourModel.drawFaces();
}
Introduction
In order to be able to listen to an event in your application you need three things: a listener, an event, and a handler. The listener tells your application to listen for a certain event, the event is the action that you want to catch, and the handler tells your application what to do when that event is triggered. To add a listener to your app, you have to define it in the setup method of your ofApp.cpp file, using the method ofAddListener.
In the ofApp.cpp file, you also have to define what myHandler does.
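The listener/event/handler triad can be sketched in plain C++. This illustrates the pattern only, not openFrameworks' actual API (ofAddListener wires a member function to an ofEvent); the names Event, addListener, and notify below are made up for the sketch:

```cpp
#include <functional>
#include <utility>
#include <vector>

// A tiny event: a list of handlers to call when the event fires.
struct Event {
    std::vector<std::function<void(int)>> handlers;

    // The "listener" step: register a handler for this event.
    void addListener(std::function<void(int)> h) {
        handlers.push_back(std::move(h));
    }

    // The "trigger" step: notify every registered handler.
    void notify(int arg) {
        for (auto& h : handlers) h(arg);
    }
};
```

With the real openFrameworks API you would instead call something like ofAddListener(someEvent, this, &ofApp::myHandler) in setup(), but the control flow is the same: registration first, then the framework invokes your handler when the event fires.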
Default events
Any openFrameworks app comes, out of the box, with handlers for a lot of events:
You don't need to add the listeners for these handlers, because they are already there for you, ready to use.
Resources
If you want to create a listener for your own class (not in ofApp.cpp), have a look at the example examples/events/SimpleEventsExample. In general, the folder examples/events contains a lot of useful resources about events, including how to create custom events.
The openFrameworks documentation has a section for events.
Chapter 7
Software and codes
7.2.1 ransac_ellipse
7.2.2 testApp
We'll build these class files, creating the classes needed for our program, equipping them with functions, and defining their variables.
#include <math.h>
#include <stdlib.h>
#include "svd.h"

//svd function
#define SIGN(u, v) ( (v)>=0.0 ? fabs(u) : -fabs(u) )
#define MAX(x, y) ( (x) >= (y) ? (x) : (y) )

static double radius(double u, double v)
{
  double w;
  u = fabs(u);
  v = fabs(v);
  if (u > v) {
    w = v / u;
    return (u * sqrt(1. + w * w));
  } else {
    if (v) {
      w = u / v;
      return (v * sqrt(1. + w * w));
    } else
      return 0.0;
  }
}

/*
 Given matrix a[m][n], m>=n, using svd decomposition a = p d q'
 to get p[m][n], diag d[n] and q[n][n].
*/
void svd(int m, int n, double **a, double **p, double *d, double **q)
{
  int flag, i, its, j, jj, k, l, nm, nm1 = n - 1, mm1 = m - 1;
  double c, f, h, s, x, y, z;
  double anorm = 0, g = 0, scale = 0;
  //double *r = tvector_alloc(0, n, double);
  double *r = (double*)malloc(sizeof(double)*n);

  for (i = 0; i < m; i++)
    for (j = 0; j < n; j++)
      p[i][j] = a[i][j];
  //for (i = m; i < n; i++)
  //  p[i][j] = 0;

  /* Householder reduction to bidigonal form */
  for (i = 0; i < n; i++)
  {
    l = i + 1;
    r[i] = scale * g;
    g = s = scale = 0.0;
    if (i < m)
    {
      for (k = i; k < m; k++)
        scale += fabs(p[k][i]);
      if (scale)
      {
        for (k = i; k < m; k++)
        {
          p[k][i] /= scale;
          s += p[k][i] * p[k][i];
        }
        f = p[i][i];
        g = -SIGN(sqrt(s), f);
        h = f * g - s;
        p[i][i] = f - g;
        if (i != nm1)
        {
          for (j = l; j < n; j++)
          {
            for (s = 0.0, k = i; k < m; k++)
              s += p[k][i] * p[k][j];
            f = s / h;
            for (k = i; k < m; k++)
              p[k][j] += f * p[k][i];
          }
        }
        for (k = i; k < m; k++)
          p[k][i] *= scale;
      }
    }
    d[i] = scale * g;
    g = s = scale = 0.0;
    if (i < m && i != nm1)
    {
      for (k = l; k < n; k++)
        scale += fabs(p[i][k]);
      if (scale)
      {
        for (k = l; k < n; k++)
        {
          p[i][k] /= scale;
          s += p[i][k] * p[i][k];
        }
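The radius() helper in the listing above computes sqrt(u² + v²) while guarding against overflow by factoring out the larger magnitude first; it behaves like the standard hypot(). A standalone copy of it, usable to check that equivalence:

```cpp
#include <cmath>

// Same overflow-safe sqrt(u*u + v*v) as the radius() helper in svd.c:
// factor out the larger magnitude so the squared term stays near 1.
double radius(double u, double v) {
    u = std::fabs(u);
    v = std::fabs(v);
    if (u > v) {
        double w = v / u;
        return u * std::sqrt(1.0 + w * w);
    }
    if (v != 0.0) {
        double w = u / v;
        return v * std::sqrt(1.0 + w * w);
    }
    return 0.0;
}
```

Naively computing sqrt(u*u + v*v) with u = v = 1e200 would overflow to infinity; this formulation, like std::hypot, does not.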
          g = g * c - x * s;
          h = y * s;
          y = y * c;
          for (jj = 0; jj < n; jj++)
          {
            x = q[jj][j];
            z = q[jj][i];
            q[jj][j] = x * c + z * s;
            q[jj][i] = z * c - x * s;
          }
          z = radius(f, h);
          d[j] = z; /* rotation can be arbitrary if z=0 */
          if (z)
          {
            z = 1.0 / z;
            c = f * z;
            s = h * z;
          }
          f = (c * g) + (s * y);
          x = (c * y) - (s * g);
          for (jj = 0; jj < m; jj++)
          {
            y = p[jj][j];
            z = p[jj][i];
            p[jj][j] = y * c + z * s;
            p[jj][i] = z * c - y * s;
          }
        }
        r[l] = 0.0;
        r[k] = f;
        d[k] = x;
      }
    }
    free(r);
    // dhli add: the original code does not sort the eigen value
    // should do that and change the eigen vector accordingly
}
7.3.3 cvEyeTracker
#define MAX_CONTOUR_COUNT 20

// Firewire Capture Variables
int dev;
int width=640,height=480,framerate=30;
FILE* imagefile;
dc1394_cameracapture cameras[2];
int numNodes;
int numCameras;
raw1394handle_t handle;
nodeid_t * camera_nodes;
dc1394_feature_set features;

// Load the source image.
IplImage *eye_image=NULL;
IplImage *original_eye_image=NULL;
IplImage *threshold_image=NULL;
IplImage *ellipse_image=NULL;
IplImage *scene_image=NULL;

// Window handles
const char* eye_window = "Eye Image Window";
const char* original_eye_window = "Original Eye Image";
const char* ellipse_window = "Fitted Ellipse Window";
const char* scene_window = "Scene Image Window";
const char* control_window = "Parameter Control Window";

char Feature_Names[9][30] = {
  "BRIGHTNESS",
  "EXPOSURE",
  "SHARPNESS",
  "WHITE BALANCE",
  "HUE",
  "SATURATION",
  "GAMMA",
  "SHUTTER",
  "GAIN"};

typedef struct {
  int offset_value;
  int value;
  int min;
  int max;
  int available;
  void (*callback)(int);
} camera_features;

camera_features eye_camera_features[9];

CvPoint pupil = {0,0};                //coordinates of pupil in tracker coordinate system
CvPoint corneal_reflection = {0,0};   //coordinates of corneal reflection in tracker coordinate system
CvPoint diff_vector = {0,0};          //vector between the corneal reflection and pupil
int corneal_reflection_r = 0;         //the radius of corneal reflection

int view_cal_points = 1;
int do_map2scene = 0;
int number_calibration_points_set = 0;
int ok_calibrate = 0;

CvPoint calipoints[CALIBRATIONPOINTS];      //conversion from eye to scene calibration points
CvPoint scenecalipoints[CALIBRATIONPOINTS]; //captured (with mouse) calibration points
CvPoint pucalipoints[CALIBRATIONPOINTS];    //captured eye points while looking at the calibration points in the scene
CvPoint crcalipoints[CALIBRATIONPOINTS];    //captured corneal reflection points while looking at the calibration points in the scene
CvPoint vectors[CALIBRATIONPOINTS];         //differences between the corneal reflection and pupil center

//scene coordinate interpolation variables
float a, b, c, d, e;       //temporary storage of coefficients
float aa, bb, cc, dd, ee;  //pupil X coefficients
float ff, gg, hh, ii, jj;  //pupil Y coefficients
float centx, centy;        //translation to center pupil data after biquadratics
float cmx[4], cmy[4];      //corner correction coefficients
int inx, iny;              //translation to center pupil data before biquadratics

int White,Red,Green,Blue,Yellow;
int frame_number=0;

#define FRAMEW 640
#define FRAMEH 480
int monobytesperimage=FRAMEW*FRAMEH;
int yuv411bytesperimage=FRAMEW*FRAMEH*12/8;
int cameramode[2]={MODE_640x480_MONO,MODE_640x480_YUV411};

const double beta = 0.2;  //hysteresis factor for noise reduction
double *intensity_factor_hori = (double*)malloc(FRAMEH*sizeof(double));  //horizontal intensity factor for noise reduction
double *avg_intensity_hori = (double*)malloc(FRAMEH*sizeof(double));     //horizontal average intensity

//parameters for the algorithm
int edge_threshold = 20;          //threshold of pupil edge points detection
int rays = 18;                    //number of rays to use to detect feature points
int min_feature_candidates = 10;  //minimum number of pupil feature candidates
int cr_window_size = 301;         //corneal reflection search window size

double map_matrix[3][3];
int save_image = 0;
int image_no = 0;
int save_ellipse = 0;
int ellipse_no = 0;
char eye_file[30];
char scene_file[30];
char ellipse_file[40];

#define YUV2RGB(y, u, v, r, g, b)\
  r = y + ((v*1436) >> 10);\
  g = y - ((u*352 + v*731) >> 10);\
  b = y + ((u*1814) >> 10);\
  r = r < 0 ? 0 : r;\
  g = g < 0 ? 0 : g;\
  b = b < 0 ? 0 : b;\
  r = r > 255 ? 255 : r;\
  g = g > 255 ? 255 : g;\
  b = b > 255 ? 255 : b

#define FIX_UINT8(x) ( (x)<0 ? 0 : ((x)>255 ? 255:(x)) )

//----------------------- Firewire Image Capture Code -----------------------//
void Open_IEEE1394()
{
  int i;
  handle = dc1394_create_handle(0);
  if (handle==NULL) {
    fprintf( stderr, "Unable to aquire a raw1394 handle\n\n"
             "Please check \n"
             " - if the kernel modules `ieee1394',`raw1394' and `ohci1394' are loaded \n"
             " - if you have read/write access to /dev/raw1394\n\n");
    exit(1);
  }
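The YUV2RGB macro in the listing above is a fixed-point colour conversion followed by clamping to the [0, 255] byte range. Written as a function rather than a macro it is easier to read and test; the arithmetic below mirrors the macro exactly:

```cpp
// Fixed-point YUV -> RGB conversion with clamping, mirroring the YUV2RGB
// macro from cvEyeTracker (u and v are signed offsets around zero).
void yuv2rgb(int y, int u, int v, int& r, int& g, int& b) {
    r = y + ((v * 1436) >> 10);            // ~1.402 * v in 10-bit fixed point
    g = y - ((u * 352 + v * 731) >> 10);   // ~0.344*u + 0.714*v
    b = y + ((u * 1814) >> 10);            // ~1.772 * u
    if (r < 0) r = 0;
    if (g < 0) g = 0;
    if (b < 0) b = 0;
    if (r > 255) r = 255;
    if (g > 255) g = 255;
    if (b > 255) b = 255;
}
```

A zero chroma offset (u = v = 0) leaves the pixel grey, and extreme chroma values are clamped rather than wrapping around.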
//Cramer's rule solution of 4x4 matrix */
den = -x2*y3*x52*y42-x22*y3*x4*y52+x22*y5*x4*y32-y22*x42*y3*x5-
x32*y22*x4*y5-x42*x2*y5*y32+x32*x2*y5*y42-y2*x52*x4*y32+
x52*x2*y4*y32+y22*x52*y3*x4+y2*x42*x5*y32+x22*y3*x5*y42-
x32*x2*y4*y52-x3*y22*x52*y4+x32*y22*x5*y4-x32*y2*x5*y42+
x3*y22*x42*y5+x3*y2*x52*y42+x32*y2*x4*y52+x42*x2*y3*y52-
x3*y2*x42*y52+x3*x22*y4*y52-x22*y4*x5*y32-x3*x22*y5*y42;

c = (-x32*x4*y22*X5+x32*x5*y22*X4-x32*y42*x5*X2+x32*X2*x4*y52+
x32*x2*y42*X5-x32*x2*X4*y52-x3*y22*x52*X4+x3*y22*x42*X5+
x3*x22*X4*y52-x3*X2*x42*y52+x3*X2*x52*y42-x3*x22*y42*X5-
y22*x42*x5*X3+y22*x52*x4*X3+x22*y42*x5*X3-x22*x4*X3*y52-
x2*y32*x42*X5+X2*x42*x5*y32+x2*X3*x42*y52+x2*y32*x52*X4+
x22*x4*y32*X5-x22*X4*x5*y32-X2*x52*x4*y32-x2*X3*x52*y42)/den;

e = -(-x3*y2*x52*X4+x22*y3*x4*X5+x22*y4*x5*X3-x3*x42*y5*X2-
x42*x2*y3*X5+x42*x2*y5*X3+x42*y3*x5*X2-y2*x42*x5*X3+
x32*x2*y4*X5-x22*y3*x5*X4+x32*y2*x5*X4-x22*y5*x4*X3+
x2*y3*x52*X4-x52*x2*y4*X3-x52*y3*x4*X2-x32*y2*x4*X5+
x3*x22*y5*X4+x3*y2*x42*X5+y2*x52*x4*X3-x32*x5*y4*X2-
x32*x2*y5*X4+x3*x52*y4*X2+x32*x4*y5*X2-x3*x22*y4*X5)/den;
}
int calx[10], caly[10];    //scene coordinate interpolation variables
int eye_x[10], eye_y[10];  //scene coordinate interpolation variables

// Place scene coordinates into calx and caly
for(i = 0; i < 9; i++) {
  calx[i] = scenecalipoints[i].x; caly[i] = scenecalipoints[i].y;
}

// Set the last "tenth" point
calx[9] = scenecalipoints[0].x; caly[9] = scenecalipoints[0].y;

// Store pupil into eye_x and eye_y
for(i = 0; i < 9; i++) {
  eye_x[i] = vectors[i].x;
  eye_y[i] = vectors[i].y;
}

// Solve Y biquadratic
dqfit((float)eye_x[0],(float)eye_y[0],(float)eye_x[1],(float)eye_y[1],(float)eye_x[2],
      (float)eye_y[2],(float)eye_x[3],(float)eye_y[3],(float)eye_x[4],(float)eye_y[4],
      (float)caly[0],(float)caly[1],(float)caly[2],(float)caly[3],(float)caly[4]);
ff = a; gg = b; hh = c; ii = d; jj = e;

// Biquadratic mapping of points
for(i = 0; i < 9; i++) {
  x = (float)(eye_x[i] - inx);
  y = (float)(eye_y[i] - iny);
  wx[i] = aa+bb*x+cc*y+dd*x*x+ee*y*y;
  wy[i] = ff+gg*x+hh*y+ii*x*x+jj*y*y;
}

// Shift screen points to center for quadrant compute
centx = wx[0];
centy = wy[0];

  cmx[i] = (calx[j]-wx[j]-centx)/(wx[j]*wy[j]);
  cmy[i] = (caly[j]-wy[j]-centy)/(wx[j]*wy[j]);
}
return 0;
}

void Draw_Cross(IplImage *image, int centerx, int centery, int x_cross_length, int y_cross_length, double color)
{
  CvPoint pt1,pt2,pt3,pt4;

  pt1.x = centerx - x_cross_length;
  pt1.y = centery;
  pt2.x = centerx + x_cross_length;
  pt2.y = centery;
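The calibration above maps each eye point to screen space with a biquadratic of the form w = a + b·x + c·y + d·x² + e·y², whose five coefficients come from dqfit(). The evaluation step is simple enough to isolate and test on its own:

```cpp
// Evaluate the biquadratic screen mapping used in the calibration code:
// w = a + b*x + c*y + d*x^2 + e*y^2
// The five coefficients are those produced by dqfit() (aa..ee for the X
// axis, ff..jj for the Y axis).
float biquadratic(float a, float b, float c, float d, float e,
                  float x, float y) {
    return a + b * x + c * y + d * x * x + e * y * y;
}
```

The same function is applied twice per point, once with the X-axis coefficients and once with the Y-axis ones, exactly as in the wx[i]/wy[i] loop above.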
//Make the eye image (in monochrome):
threshold_image = cvCloneImage(eye_image);

//Make the ellipse image (in RGB) :
ellipse_image=cvCreateImageHeader(cvSize(640,480), 8, 3 );
ellipse_image->imageData=(char *)malloc(640*480*3);

//Make the scene image:
scene_image=cvCreateImageHeader(cvSize(640,480), 8, 3 );
scene_image->imageData=(char *)malloc(640*480*3);

//Create the windows
cvNamedWindow(control_window, 1);
cvNamedWindow(ellipse_window, 0);
cvNamedWindow(scene_window, 0);
cvNamedWindow(eye_window, 0);
cvNamedWindow(original_eye_window, 0);

//setup the mouse call back funtion here for calibration

cvCreateTrackbar("Edge Threshold", control_window, &pupil_edge_thres, 255, NULL );
cvCreateTrackbar("Rays Number", control_window, &rays, 180, NULL );
cvCreateTrackbar("Min Feature Candidates", control_window, &min_feature_candidates, 30, NULL );
cvCreateTrackbar("Corneal Window Size", control_window, &cr_window_size, FRAMEH, NULL );

//Init colors
White = CV_RGB(255,255,255);
Red = CV_RGB(255,0,0);
Green = CV_RGB(0,255,0);
Blue = CV_RGB(0,0,255);
Yellow = CV_RGB(255,255,0);
}

void Close_GUI()
{
  cvDestroyWindow(eye_window);
  cvDestroyWindow(original_eye_window);
  cvDestroyWindow(ellipse_window);
  cvDestroyWindow(scene_window);

  cvReleaseImageHeader(&eye_image );
  cvReleaseImageHeader(&threshold_image );
  cvReleaseImageHeader(&original_eye_image );
  cvReleaseImageHeader(&ellipse_image );
  cvReleaseImageHeader(&scene_image );

  cvReleaseImage(&eye_image);
  cvReleaseImage(&threshold_image);
  cvReleaseImage(&original_eye_image);
  cvReleaseImage(&ellipse_image);
  cvReleaseImage(&scene_image);
}

void Open_Logfile(int argc, char** argv)
{
  char defaultlogfilename[]="logfile.txt";
  char *logfilename;

  if (argc>1) {
    logfilename=argv[1];
  } else {
    logfilename=defaultlogfilename;
  }
  logfile=fopen(logfilename,"w+");

  if (logfile!=NULL) {
    fprintf(logfile,"Timestamp (seconds)\t pupil X\t pupil Y\t Scene X\t Scene Y\n");
  } else {
    fprintf(stderr,"Error opening logfile %s.",logfilename);
    exit(-1);
  }
}

void Close_Logfile()
{
  fclose(logfile);
}

void Open_Ellipse_Log()
{
  static char *ellipse_log_name = "./Ellipse/ellipse_log.txt";
  ellipse_log = fopen(ellipse_log_name,"w+");

  if (logfile!=NULL) {
    fprintf(logfile,"Timestamp (seconds)\t a\t pupil b\t centerx\t centery\t theta\n");
  } else {
    fprintf(stderr,"Error opening logfile %s.", ellipse_log_name);
    exit(-1);
  }
}

int main( int argc, char** argv )
{
  char c;

  Open_IEEE1394();
  Open_GUI();
  Open_Logfile(argc,argv);
  Start_Timer();

  int i, j;
  double T[3][3], T1[3][3];
  for (j = 0; j < 3; j++) {
    for (i = 0; i < 3; i++) {
      T[j][i] = j*3+i+1;
    }
  }
  T[2][0] = T[2][1] = 0;
  printf("\nT: \n");
  for (j = 0; j < 3; j++) {
    for (i = 0; i < 3; i++) {
      printf("%6.2lf ", T[j][i]);
    }
    printf("\n");
  }
  affine_matrix_inverse(T, T1);
  printf("\nT1: \n");
  for (j = 0; j < 3; j++) {
    for (i = 0; i < 3; i++) {
      printf("%6.2lf ", T1[j][i]);
    }
    printf("\n");
  }

  while ((c=cvWaitKey(50))!='q') {
    if (c == 's') {
      sprintf(eye_file, "eye%05d.bmp", image_no);
      sprintf(scene_file, "scene%05d.bmp", image_no);
      image_no++;
      cvSaveImage(eye_file, eye_image);
      cvSaveImage(scene_file, scene_image);
      printf("thres: %d\n", pupil_edge_thres);
    } else if (c == 'c') {
      save_image = 1 - save_image;
      printf("save_image = %d\n", save_image);
    } else if (c == 'e') {
      save_ellipse = 1 - save_ellipse;
      printf("save_ellipse = %d\n", save_ellipse);
      if (save_ellipse == 1) {
        Open_Ellipse_Log();
      } else {
        fclose(ellipse_log);
      }
    }
    if (start_point.x == -1 && start_point.y == -1)
      Grab_Camera_Frames();
    else
      process_image();
    if (frame_number%1==0)
      Update_Gui_Windows();
  }

  Close_Logfile();
  Close_GUI();
  Close_IEEE1394();

  return 0;
}
7.3.4 remove_corneal_reflection
  if (crar > biggest_crar) {
    printf("(corneal) size wrong! crx:%d, cry:%d, crar:%d (should be less than %d)\n", crx, cry, crar, biggest_crar);
    cry = crx = -1;
    crar = -1;
  }

  if (crx != -1 && cry != -1) {
    printf("(corneal) startx:%d, starty:%d, crx:%d, cry:%d, crar:%d\n", startx, starty, crx, cry, crar);
    crx += startx;
    cry += starty;
  }
}

int fit_circle_radius_to_corneal_reflection(IplImage *image, int crx, int cry, int crar, int biggest_crar, double *sin_array, double *cos_array, int array_len)
{
  if (crx == -1 || cry == -1 || crar == -1)
    return -1;

  double *ratio = (double*)malloc((biggest_crar-crar+1)*sizeof(double));
  int i, r, r_delta=1;
  int x, y, x2, y2;
  double sum, sum2;
  for (r = crar; r <= biggest_crar; r++) {
    sum = 0;
    sum2 = 0;
    for (i = 0; i < array_len; i++) {
      x = (int)(crx + (r+r_delta)*cos_array[i]);
      y = (int)(cry + (r+r_delta)*sin_array[i]);
      x2 = (int)(crx + (r-r_delta)*cos_array[i]);
      y2 = (int)(cry + (r-r_delta)*sin_array[i]);
      if ((x >= 0 && y >=0 && x < image->width && y < image->height) &&
          (x2 >= 0 && y2 >=0 && x2 < image->width && y2 < image->height)) {
        sum += *(image->imageData+y*image->width+x);
        sum2 += *(image->imageData+y2*image->width+x2);
      }
    }
    ratio[r-crar] = sum / sum2;
    if (r - crar >= 2) {
      if (ratio[r-crar-2] < ratio[r-crar-1] && ratio[r-crar] < ratio[r-crar-1]) {
        free(ratio);
        return r-1;
      }
    }
  }

  free(ratio);
  printf("ATTN! fit_circle_radius_to_corneal_reflection() do not change the radius\n");
  return crar;
}

void interpolate_corneal_reflection(IplImage *image, int crx, int cry, int crr, double *sin_array, double *cos_array,
                                    int array_len)
{
  if (crx == -1 || cry == -1 || crr == -1)
    return;

  if (crx-crr < 0 || crx+crr >= image->width || cry-crr < 0 || cry+crr >= image->height) {
    printf("Error! Corneal reflection is too near the image border\n");
    return;
  }

  int i, r, r2, x, y;
  UINT8 *perimeter_pixel = (UINT8*)malloc(array_len*sizeof(int));
  int sum=0, pixel_value;
  double avg;
  for (i = 0; i < array_len; i++) {
    x = (int)(crx + crr*cos_array[i]);
    y = (int)(cry + crr*sin_array[i]);
    perimeter_pixel[i] = (UINT8)(*(image->imageData+y*image->width+x));
    sum += perimeter_pixel[i];
  }
  avg = sum*1.0/array_len;

  for (r = 1; r < crr; r++) {
    r2 = crr-r;
    for (i = 0; i < array_len; i++) {
      x = (int)(crx + r*cos_array[i]);
      y = (int)(cry + r*sin_array[i]);
      *(image->imageData+y*image->width+x) = (UINT8)((r2*1.0/crr)*avg + (r*1.0/crr)*perimeter_pixel[i]);
    }
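interpolate_corneal_reflection() paints over the bright glint by blending, along each ray, the average perimeter intensity with the pixel on the circle's rim: at radius r inside a reflection of radius crr, the perimeter pixel gets weight r/crr and the average gets the remaining weight. A minimal sketch of that blend for a single sample:

```cpp
// Blend used by interpolate_corneal_reflection(): at radius r inside a
// reflection of radius crr, mix the ring-average intensity with the
// perimeter pixel, weighting the perimeter more near the rim so the
// filled-in disc fades smoothly into the surrounding iris.
double radial_blend(int r, int crr, double avg, double perimeter_pixel) {
    int r2 = crr - r;
    return (r2 * 1.0 / crr) * avg + (r * 1.0 / crr) * perimeter_pixel;
}
```

At the center (r = 0) the result is exactly the ring average; at the rim (r = crr) it is exactly the perimeter pixel, so there is no visible seam.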
For all class files there must be header files containing the prototypes of their functions, so we'll go back to the C++ part, specifically to functions, to recall how to write a function prototype.
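As a reminder, a prototype restates the function's return type, name, and parameter types and ends with a semicolon; the definition can then live in a separate .cpp file. A minimal sketch (the header name and function below are made up for illustration, not taken from the project's actual headers):

```cpp
// --- what would go in the header (e.g. a hypothetical tracker.h) ---
// Prototype: declares the function so other files can call it.
int clampThreshold(int value, int lo, int hi);

// --- what would go in the matching .cpp file ---
// Definition: the actual body.
int clampThreshold(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}
```

Any .cpp file that includes the header can call clampThreshold() without seeing its body; the linker connects the call to the definition.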
7.4.1 ransac_ellipse
7.4.2 remove_corneal_reflection
7.4.3 svd
7.4.4 testApp
7.4.5 timing
When you illuminate the eye with IR light and observe it through an IR-sensitive camera with a visible-light filter, the iris of the eye turns almost completely white and the pupil stands out as a high-contrast black dot. This makes tracking the eye much easier. In order to provide some IR illumination, we have made a quick-and-dirty IR LED circuit using connecting wires, IR LEDs, and a 2x AAA battery holder.
7.5.2 Software
The EyeWriter software is two parts or actually two separate software; an eye-tracking software
designed for use with our low-cost glasses used for calibrating and testing that everything works
fine and the other is theoretically the same but with some changes and a keyboard to actually
type with it. The software for both parts has been developed using openframeworks, OpenCv for
images processing and a c++ library for creative development.
1. Filtering: the RGB color image is converted to a greyscale image to make it easier to process.
2. Another panel applies a threshold function, in which pixels above the (adjustable) threshold value are set to white and those below it to black, to distinguish the pupil from the rest of the eye.
output: the outline of all the objects found in the binary image.
2. An option for ALS patients to communicate easily and with high accuracy based on the rig.
1. A wireless rig, instead of the wired one, which will be more reliable and easier for the patient to use.
2. A smaller camera and rig, so that it is lighter and the patient doesn't feel the burden of wearing or using it.
Sources
1. http://www.instructables.com/id/The-EyeWriter/
2. https://www.youtube.com/playlist?list=PLAE85DE8440AA6B83
3. https://www.youtube.com/playlist
4. https://www.edx.org/cour…/introduction-c-microsoft-dev210x-5
5. https://www.youtube.com/watch?v=Rub-JsjMhWY
6. http://eyewriter.org/videos/
7. http://fffff.at/eyewriter/The-EyeWriter.pdf
8. https://opencv.org/
9. http://www.instructables.com/id/The-EyeWriter-20
10. http://www.instructables.com/id/Eye-Writer-30
11. https://www.mediafire.com/folder/0zxtaeqtquziu/Main_Basic
12. http://www.cplusplus.com/files/tutorial.pdf
13. https://docs.opencv.org/2.4/opencv_tutorials.pdf
14. https://openframeworks.cc/