
What You Will Learn Today

 Bluetooth-controlled robot
 Camera calibration
 Lane detection and Arduino robot steering
 Algorithm for building a self-driving car
 LIDAR in an autonomous robot
About PANTECH

 Started in the year 2004
 Lab equipment and sensor interfaces
 Manufacturer of the Brainsense EEG headset
 Reconfigurable algorithms on AI
 Manufacturer of AI development boards
 Power electronics, fuel cell, and renewable energy trainers

www.pantechsolutions.net
About Me

My Primary Expertise
Microcontroller architectures: 8051, PIC, AVR, ARM, MSP430, PSoC 3
DSP architectures: Blackfin, C2000, C6000, 21065L SHARC
FPGA: Spartan, Virtex, Cyclone
Image processing algorithms: image/scene recognition, machine learning, computer vision, deep learning, pattern recognition, object classification, image retrieval, image enhancement and denoising
Neural networks: SVM, RBF, BPN
Cryptography: RSA, DES, 3DES, Elliptic Curve, Blowfish, Diffie-Hellman
Tools: Keil, VisualDSP++, CCS, Xilinx Platform Studio, ISE, MATLAB, OpenCV
https://www.linkedin.com/in/jeevarajan/
Announcement
Attendance link at 9 pm.
Minimum attendance required for an e-certificate is 3 days.
The attendance link will be valid for 1 hour after the event.

https://nvda.ws/2YjE5vK
Circuit Diagram of the Arduino Robot
Bluetooth Controlled Robot

Components Required
•Arduino UNO
•L298N Motor Driver Module
•HC-05 Bluetooth Module
•Robot Chassis
•4 x 5V Geared Motors
•Connecting Wires
•Battery Holder
•Power Supply
•Android Phone
•Bluetooth Controller App
Step 1 – Pairing the Bluetooth Car with MATLAB

clc; clear;

% Confirm the HC-05 module is visible to the Bluetooth interface
instrhwinfo('Bluetooth','HC-05');

% Create a Bluetooth object on channel 1 and open the link
bt = Bluetooth('HC-05', 1);
fopen(bt);

% Send a single-character command ('F' = forward in the Arduino sketch)
fwrite(bt,'F');

fclose(bt);
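Between fopen and fclose, each single-character write selects one motion case in the Arduino sketch on the next slides (F/B/L/R/S). A minimal usage sketch; the pause durations are an assumption:

fwrite(bt, 'F'); pause(2);   % drive forward for ~2 s
fwrite(bt, 'L'); pause(1);   % turn left (the sketch self-stops after 500 ms)
fwrite(bt, 'S');             % stop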
Step 2 – Arduino Code
#define BAUDRATE 57600

// L298N motor driver pins
const int MotR_A = 6;  const int MotR_B = 9;   // right motor direction pins
const int MotL_A = 5;  const int MotL_B = 3;   // left motor direction pins
const int ENA = 10;    const int ENB = 11;     // PWM enable pins
const int spdR = 100;  const int spdL = 120;   // PWM duty (0-255) per side

byte GetValue;

void setup()
{
  Serial.begin(BAUDRATE);
  pinMode(MotR_A, OUTPUT);
  pinMode(MotR_B, OUTPUT);
  pinMode(MotL_A, OUTPUT);
  pinMode(MotL_B, OUTPUT);
  pinMode(ENA, OUTPUT);
  pinMode(ENB, OUTPUT);
}
Step 2 – Arduino Code
byte ReadOneByte() // one-byte read function: blocks until a byte arrives
{
  int ByteRead;
  while (!Serial.available());
  ByteRead = Serial.read();
  return ByteRead;
}
Step 2 – Arduino Code
void loop()
{
  GetValue = ReadOneByte();
  Serial.print(GetValue);   // echo the numeric value of the command byte
  switch (GetValue)
  {
    case 'L':
      Robot_Left();
      delay(500);           // turn for 500 ms, then stop
      Robot_Stop();
      break;

    case 'R':
      Robot_Right();
      delay(500);
      Robot_Stop();
      break;

    case 'F':
      Robot_Forward();
      break;

    case 'B':
      Robot_Reverse();
      break;

    case 'S':
      Robot_Stop();
      break;
  }
}
Step 2 – Arduino Code
void Robot_Forward()
{
  analogWrite(ENB, spdL);   // set left-side speed
  analogWrite(ENA, spdR);   // set right-side speed
  digitalWrite(MotR_A, HIGH);
  digitalWrite(MotR_B, LOW);
  digitalWrite(MotL_A, HIGH);
  digitalWrite(MotL_B, LOW);
}

void Robot_Reverse()
{
  digitalWrite(MotR_A, LOW);
  digitalWrite(MotR_B, HIGH);
  digitalWrite(MotL_A, LOW);
  digitalWrite(MotL_B, HIGH);
}
Step 2 – Arduino Code
void Robot_Left()
{
  digitalWrite(MotL_A, LOW);
  digitalWrite(MotL_B, HIGH);
  digitalWrite(MotR_A, HIGH);
  digitalWrite(MotR_B, LOW);
}

void Robot_Right()
{
  digitalWrite(MotR_A, LOW);
  digitalWrite(MotR_B, HIGH);
  digitalWrite(MotL_A, HIGH);
  digitalWrite(MotL_B, LOW);
}
Step 2 – Arduino Code
void Robot_Stop()
{
  digitalWrite(MotR_A, LOW);
  digitalWrite(MotR_B, LOW);
  digitalWrite(MotL_A, LOW);
  digitalWrite(MotL_B, LOW);
}
Step 3 – Read a Frame from the Camera
% Stream frames from the IP Webcam app running on the phone
url = 'http://192.168.1.4:8080/shot.jpg';
i = 1;
while (1)
    a = imread(url);                              % grab the current frame
    filename = strcat('frame', num2str(i), '.jpg');
    imwrite(a, filename);                         % save the frame to disk
    imshow(a); drawnow;
    i = i + 1;
end
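If a USB webcam is used instead of the phone stream, the same loop works with MATLAB's webcam object; a minimal sketch (the camera index 3 matches the later calibration slide and is an assumption about your setup):

camObj = webcam(3);          % assumed: third camera in the system list
while true
    a = snapshot(camObj);    % grab one frame
    imshow(a); drawnow;
end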
Step 4 – Camera Calibration
 Follow this workflow to calibrate your camera using the app:
 Prepare the images, camera, and calibration pattern.
 Add the images and select the standard or fisheye camera model.
 Calibrate the camera.
 Evaluate the calibration accuracy.
 Adjust the parameters to improve the accuracy (if necessary).
 Export the parameters object (used as in the sketch below).
 If the default values work well, you do not need to make any adjustments before exporting the parameters.
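Once the parameters object is exported to the workspace, it can be applied straight away. A minimal sketch, assuming the app exported a variable named cameraParams and that 'test.jpg' is a new image from the same camera (both names are assumptions):

originalImage = imread('test.jpg');
undistortedImage = undistortImage(originalImage, cameraParams);
imshowpair(originalImage, undistortedImage, 'montage');
title('Original (left) vs. undistorted (right)');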
Camera Calibration – Step 1
% Preview the camera and capture the checkerboard images used in Step 2
clear;
camObj = webcam(3);   % select the camera to calibrate
preview(camObj);

Camera Calibration – Step 2
% Define images to process
imageFileNames = {'C:\camera\Image1.png',...
'C:\camera\Image2.png',...
'C:\camera\Image3.png',...
'C:\camera\Image4.png',...
'C:\camera\Image5.png',...
'C:\camera\Image6.png',...
'C:\camera\Image7.png',...
'C:\camera\Image8.png',...
'C:\camera\Image9.png',...
'C:\camera\Image10.png',...
'C:\camera\Image11.png',...
'C:\camera\Image12.png',...
'C:\camera\Image13.png',...
'C:\camera\Image14.png',...
'C:\camera\Image15.png',...
'C:\camera\Image16.png',...
'C:\camera\Image17.png',...
'C:\camera\Image18.png',...
'C:\camera\Image19.png',...
'C:\camera\Image20.png',...
};
% Detect checkerboards in the images
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
imageFileNames = imageFileNames(imagesUsed);
Camera Calibration – Step 3
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
imageFileNames = imageFileNames(imagesUsed);

% Read the first image to obtain the image size
originalImage = imread(imageFileNames{1});
[mrows, ncols, ~] = size(originalImage);

% Generate world coordinates of the corners of the squares
squareSize = 25; % in millimeters
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% Calibrate the camera
[cameraParams, imagesUsed, estimationErrors] = estimateCameraParameters(imagePoints, worldPoints, ...
    'EstimateSkew', false, 'EstimateTangentialDistortion', false, ...
    'NumRadialDistortionCoefficients', 2, 'WorldUnits', 'millimeters', ...
    'InitialIntrinsicMatrix', [], 'InitialRadialDistortion', [], ...
    'ImageSize', [mrows, ncols]);
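To evaluate the calibration accuracy (step 4 of the workflow), the estimated parameters can be inspected directly; a short sketch using the variables produced above:

% Bar chart of mean reprojection error per image
figure; showReprojectionErrors(cameraParams);
% 3-D view of the checkerboard and camera poses
figure; showExtrinsics(cameraParams, 'CameraCentric');
% Standard errors of the estimated parameters
displayErrors(estimationErrors, cameraParams);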
Camera Calibration – Reference Link

https://in.mathworks.com/help/vision/ug/using-the-single-camera-calibrator-app.html
Objective of the Self-Driving Robot

 Follow the lane path autonomously and steer.
 Identify vehicles, pedestrians, and traffic lights.
 Stop and start the vehicle based on the traffic light.
 Collision avoidance and lane-changing algorithm.
Get the road printed on a flex sheet.
Algorithm steps
1) Get the feed from the camera and stereo camera (construct a low-cost stereo camera using an ESP32).
2) Calibrate the camera and calibrate the drive of the robot.
3) Capture lanes and train using AlexNet.
4) Capture vehicles, traffic lights, and pedestrians and train using R-CNN or YOLO.
5) Detect the lanes, estimate the steering angle, and send commands to the Arduino via Bluetooth.
6) Detect pedestrians and the distance of each object and send commands to the robot.
7) If a vehicle is present in the current lane, steer into the other lane; halt if that lane also has vehicles.

Cons
1) Illumination changes are not taken into account.
2) Collision avoidance is done only for stationary obstacles.
3) The robot uses a simple differential drive.
Lane Detection
Curve Tracking
Lane Detection and Control Commands for the Robot
% Open the Bluetooth link to the robot
instrhwinfo('Bluetooth','HC-05');
bt = Bluetooth('HC-05', 1);
fopen(bt);
% Stream frames from the phone camera
url = 'http://192.168.1.4:8080/shot.jpg';
i = 1;
while (1)
    a = imread(url);
    filename = strcat('frame', num2str(i), '.jpg');
    imshow(a);
    % pause(0.5);
    COMMAND = lanedetection(a);   % classify the frame into a steering command (see the sketch after this code)
    switch lower(COMMAND)
        case 'f'
            disp('FORWARD');
            fwrite(bt,'F');
            i = i + 1;
        case 'b'
            disp('BACKWARD');
            fwrite(bt,'B');
            i = i + 1;
Lane Detection and Control Commands for the Robot
        case 'r'
            disp('RIGHT');
            fwrite(bt,'R');
            i = i + 1;
        case 'l'
            disp('LEFT');
            fwrite(bt,'L');
            i = i + 1;
        otherwise
            disp('STOP');
            fwrite(bt,'S');
    end
end
fclose(bt);
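The lanedetection() helper called above is not shown on the slides. A minimal sketch of one possible implementation (save as lanedetection.m), assuming the lane markings are brighter than the road and steering from the lane-centre offset in the bottom strip of the frame; the 40-pixel dead band is an assumption:

function COMMAND = lanedetection(a)
    gray = rgb2gray(a);
    bw = imbinarize(gray);                    % lane lines assumed brighter than road
    strip = bw(round(0.8*end):end, :);        % look only at the strip nearest the robot
    cols = find(any(strip, 1));               % columns containing lane pixels
    if isempty(cols)
        COMMAND = 's';                        % no lane visible: stop
        return;
    end
    offset = mean(cols) - size(strip, 2)/2;   % lane centre vs. image centre
    if offset > 40
        COMMAND = 'r';                        % lane centre is to the right: steer right
    elseif offset < -40
        COMMAND = 'l';
    else
        COMMAND = 'f';
    end
end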
Indoor Autonomous Robot Using LIDAR
What is SLAM
SLAM (simultaneous localization and mapping) is a method used in autonomous vehicles that lets you build a map of an unknown (or known) environment and localize the vehicle in that map at the same time.
What is SLAM
o The problem has two stages:
• Mapping
• Localization
o The paradox:
• In order to build a map, we must know our position.
• To determine our position, we need a map!
o The solution is to alternate between the two steps.
The Three SLAM Paradigms

Most SLAM algorithms are based on one of the following three approaches:

 Extended Kalman Filter SLAM (EKF SLAM)
 Particle Filter SLAM (FastSLAM)
 Graph-Based SLAM
Features of SLAM

 Landmark extraction
 Data association
 State estimation
 State update
 Landmark update
Toolbox Required
Step 1 – Load Laser Scan Data from File

load('offlineSlamData.mat');   % loads the cell array of lidar scans named 'scans'
Step 2 – Run the SLAM Algorithm, Construct the Optimized Map, and Plot the Trajectory of the Robot

maxLidarRange = 8;    % metres
mapResolution = 20;   % cells per metre
slamAlg = lidarSLAM(mapResolution, maxLidarRange);
slamAlg.LoopClosureThreshold = 210;
slamAlg.LoopClosureSearchRadius = 8;
for i = 1:10
    [isScanAccepted, loopClosureInfo, optimizationInfo] = addScan(slamAlg, scans{i});
    if isScanAccepted
        fprintf('Added scan %d \n', i);
    end
end
figure; show(slamAlg);
title({'Map of the Environment','Pose Graph for Initial 10 Scans'});
Step 3 – Observe the Effect of Loop Closures and the Optimization Process
firstTimeLCDetected = false;
figure;
for i = 10:length(scans)
    [isScanAccepted, loopClosureInfo, optimizationInfo] = addScan(slamAlg, scans{i});
    if ~isScanAccepted
        continue;
    end
    % Visualize the first detected loop closure; to see the complete
    % map-building process, remove the if condition below
    if optimizationInfo.IsPerformed && ~firstTimeLCDetected
        show(slamAlg, 'Poses', 'off');
        hold on;
        show(slamAlg.PoseGraph);
        hold off;
        firstTimeLCDetected = true;
        drawnow
    end
end
title('First loop closure');
Step 4 – Visualize the Constructed Map and the Trajectory of the Robot
figure
show(slamAlg);
title({'Final Built Map of the Environment', 'Trajectory of the Robot'});
Step 5 – Build the Occupancy Grid
[scans, optimizedPoses] = scansAndPoses(slamAlg);
map = buildMap(scans, optimizedPoses, mapResolution, maxLidarRange);
figure;
show(map);
hold on
show(slamAlg.PoseGraph, 'IDs', 'off');
hold off
title('Occupancy Grid Map Built Using Lidar SLAM');
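The returned map is an occupancyMap object, so it can be queried and saved directly; a small follow-on sketch (the query coordinates are placeholders):

p = getOccupancy(map, [2.0 1.5]);   % occupancy probability at world point (x, y) in metres
fprintf('Occupancy at (2.0, 1.5): %.2f\n', p);
save('builtMap.mat', 'map');        % keep the map for later navigation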
DEMO
Thank you
