
AUTOMATIC SHAPE IDENTIFICATION

BY

MBOKAZI M.
21924501
Final Report
Electronic Design Project 3B (EDPB301)
Submitted to the Faculty of Engineering: Department of Electronic Engineering
in Partial Fulfilment of the Requirements for the
Bachelor of Technology in Electronic Engineering
at the
Durban University of Technology
January 2022

____________________ _______________________________
17 - January - 2022
Signature of Student Date
PLAGIARISM DECLARATION

1. I know and understand that plagiarism is using another person’s work and pretending it
is one’s own, which is wrong.

2. This report is my own work.

3. I have appropriately referenced the work of other people I have used.

4. I have not allowed, and will not allow, anyone to copy my work with the intention of
passing it off as his/her own work.

_______________________
Mbokazi M. 21924501
___________________ _____________________
Surname and Initials Student Number Signature

17 - January - 2022
_________________________
Date

Abstract
People have the ability to classify the objects around them according to shape, size, and color, and
they can also identify the type of object in front of them. Humans use their eyes to provide the
vision needed to identify things around them. The same concept can be applied to computers;
however, unlike humans, computers do not have eyes with which to identify objects. To compensate
for this, computers are equipped with sensors and devices such as cameras to provide vision. Using
computer vision, computers can likewise identify the shape, color, and type of an object. Computer
vision is used in many applications, including medical systems, home security systems, driverless
cars, and robotics.

Shape identification and detection is an important branch of computer vision, as it plays a vital role
in object detection. This paper focuses on the branch of shape detection. Shapes make up the
environment around us, and we classify objects by their shapes: a ball is known to be spherical, the
mobile phones we use are rectangular, and the pyramids of Egypt have triangular faces. The approach
used in this paper is to classify whether an object belongs to one of the most common shapes, namely
the circle, the triangle, and the rectangle. The method of contour detection is used to classify the
shapes. In most previous work on shape and object identification, color/texture image binarization
and foreground extraction were used, and these two approaches can also be applied. Other methods
of shape identification are based on edge detection or generalized Hough transforms. The focus of
this paper is on contour-based detection, in which the edges of a shape in an image are found and
the type of shape is identified from the resulting contour.

Table of Contents

PLAGIARISM DECLARATION .......................................................................................................... II

Abstract ........................................................................................................................................ III

Table of Contents ......................................................................................................................... IV

List of Figures .............................................................................................................................. VI

List of Tables................................................................................................................................ VII

List of Abbreviations .................................................................................................................... IX

CHAPTER 1..................................................................................................................................... 1

1.1 INTRODUCTION ............................................................................................................. 1

1.1.1 Computer Vision........................................................................................................... 1

1.1.2 Shape Identification ..................................................................................................... 2

1.2 BACKGROUND AND SIGNIFICANCE ................................................................................. 3

1.3 PROBLEM STATEMENT .................................................................................................. 4

1.4 PROJECT OBJECTIVES .......................................................................................................... 4

1.5 PROJECT LIMITATIONS ........................................................................................................ 4

CHAPTER 2..................................................................................................................................... 6

2.1 LITERATURE SURVEY ........................................................................................................... 6

CHAPTER 3..................................................................................................................................... 9

3.1 COMPONENT ANALYSIS ...................................................................................................... 9

3.1.1 Arduino Mega 2560 Microcontroller Board ................................................................. 9

3.1.2 ESP32-Cam ................................................................................................................... 9

3.1.3 Liquid Crystal Display (LCD) ........................................................................................ 10

3.1.4 Components List and Prices ....................................................................................... 11

3.2 MACHINERY....................................................................................................................... 11

3.2.1 Software ..................................................................................................................... 11

3.3 MANPOWER AND TIME MANAGEMENT ........................................................................... 12

CHAPTER 4................................................................................................................................... 13

4.1 ENGINEERING EXECUTION ................................................................................................ 13

4.1.1 Block Diagram ............................................................................................................ 13

4.1.2 Programming Design .................................................................................................. 14

4.1.3 Shape Detection Process in Python OpenCV ............................................................. 17

4.1.4 Digital Image Processing ............................................................................................ 18

CHAPTER 5................................................................................................................................... 21

5.1 TECHNO-ECONOMIC ANALYSIS AND JUDGEMENT ........................................................... 21

5.2 IMPACT AND BENEFITS ..................................................................................................... 21

5.2.1 Socio-Economic Impacts and Benefits ....................................................................... 21

5.2.2 Common Applications ................................................................................................ 22

5.2.3 Alternatives in The Industry ....................................................................................... 23

CHAPTER 6................................................................................................................................... 24

6.1 SAFETY IN INDUSTRY ......................................................................................................... 24

6.1.1 Occupational Health and Safety (OHS) ...................................................................... 24

6.1.2 Safety in Industry Related Shape Detectors............................................................... 24

CHAPTER 7................................................................................................................................... 26

7.1 TESTING AND RESULTS ...................................................................................................... 26

7.1.1 Progress Testing ......................................................................................................... 27

7.1.2 Project Progress Results ............................................................................................. 28

7.1.3 Final Project Results ................................................................................................... 29

CHAPTER 8................................................................................................................................... 30

8.1 CONCLUSION ..................................................................................................................... 30

8.2 REASONS FOR USING PYTHON .......................................................................................... 30

REFERENCES ................................................................................................................................ 31

APPENDIX A ................................................................................................................................. 32

APPENDIX B ................................................................................................................................. 34

List of Figures
Figure 2.1: Resulting Image After Labelling ................................................................................... 6
Figure 2.2: Contour Detection Results .......................................................................................... 8
Figure 3.1: Arduino Mega 2560 Microcontroller Board................................................................ 9
Figure 3.2: ESP32-CAM Module .................................................................................................. 10
Figure 3.3: Liquid Crystal Display (LCD) ....................................................................................... 10
Figure 3.4: Gantt Chart Showing Project Timeline. ..................................................................... 12
Figure 4.1: Project Block/Flow Diagram ...................................................................................... 13
Figure 4.2: Project Flow Chart ..................................................................................................... 14
Figure 4.3: Developed GUI ........................................................................................... 15
Figure 4.4: GUI Code ................................................................................................................... 15
Figure 4.5: RGB Scales of an Image ............................................................................................. 18
Figure 7.1: Project Prototype ......................................................................................... 26
Figure 7.2: Testing of a Rectangle ............................................................................................... 27
Figure 7.3: Testing a Triangle ...................................................................................................... 27
Figure 7.4: Testing for Circle ....................................................................................................... 28
Figure 7.5: Progress Rectangle Results ....................................................................................... 28
Figure 7.6: Triangle Progress Results .......................................................................................... 28
Figure 7.7: Circle Progress Results .............................................................................................. 28

List of Tables
Table 3.1: List of Components and Prices ................................................................................... 11
Table 3.2: Project Task Dates ...................................................................................................... 12
Table 4.1: RGB Values from Image Plots ..................................................................................... 19
Table 7.1: Final Project Results ................................................................................................... 29

List of Equations
Equation [4.1] .............................................................................................................................. 18
Equation [4.2] .............................................................................................................................. 18
Equation [4.3] .............................................................................................................................. 19
Equation [4.4] ............................................................................................................................. 20
Equation [4.5] .............................................................................................................................. 20

List of Abbreviations

Acronym Definition
2D Two-Dimension
AI Artificial Intelligence
CCD Charge Coupled Device
CCTV Closed-Circuit Television
GPIO General-Purpose Input/Output
GUI Graphical User Interface
I2C Inter-Integrated Circuit
ICSP In-Circuit Serial Programming
IDE Integrated Development Environment
IDLE Integrated Development and Learning Environment
IoT Internet of Things
LCD Liquid Crystal Display
Mask R-CNN Mask Region-based Convolutional Neural Network
ML Machine Learning
NASA National Aeronautics and Space Administration
OHS Occupational Health and Safety
OSHA Occupational Safety and Health Administration
OpenCV Open-Source Computer Vision
PCBs Printed Circuit Boards
PDF Portable Document Format
PWM Pulse Width Modulation
QR Quick Response
SCL Serial Clock
SDA Serial Data
SE Structuring Element
TF Trans-Flash
UART Universal Asynchronous Reception and Transmission
USB Universal Serial Bus
Wi-Fi Wireless Fidelity

CHAPTER 1
1.1 INTRODUCTION
Humans have the ability to see the objects around them, identify them according to their shape, tell
their color and texture, count them, and estimate their quality; the main organs that enable humans
to do all of this are the brain and the eyes, which provide vision. In the era of the Fourth Industrial
Revolution, computers can also perform many of the tasks that humans perform, without assistance:
there are humanoid robots, driverless cars, automatic pick-and-place robots, security systems, and
home automation systems like Alexa. All of this is accomplished by providing computers with vision
to navigate the world around us.

1.1.1 Computer Vision


Vision is the sense with the greatest bandwidth; it provides a flow of information about the state
of the world and how to act on it. As a result, computer scientists have given computers vision,
creating the sub-field of computer vision. Its purpose is to enable computers to derive high-level
knowledge from digital photos and videos. Computer vision falls within the field of Artificial
Intelligence (AI). Using AI and Machine Learning (ML) algorithms, the technology helps to automate
visual learning from sequences of photos, videos, Portable Document Format (PDF) files, or text
images[1]. In other words, computer vision copies some of the functions of human vision, but faster
and sometimes more precisely.

Since the amount of image data is expanding at an exponential rate, detecting and analyzing photos
is becoming increasingly important for creating insights[1]. Computer vision uses software and
robotics to analyze hundreds of photos, videos, and documents, including PDFs, in order to extract
valuable information. It also allows for object detection, picture restoration, and scene
reconstruction.

Computer Vision Applications


The computer vision concept has been applied effectively to facial recognition. The technology is
used by Facebook to tag users in images, while Snapchat uses it to detect a person's face when
applying filters. More recently, computer vision has been applied to self-driving automobiles. The
technology enables driverless automobiles to recognize people, other cars, objects, motorcycles,
pedestrians, and so on while driving[1]. It is now commonly used to extract data from PDFs and
photos.

1.1.2 Shape Identification
The shape of an object is one of the key features that play a significant role in the identification
of the object, and it is the key information that the eye recognizes when an object is presented to
it. The concept of shape identification can also be applied to computers, since they also have
vision. Modern computer systems can identify shapes and distinguish between the shapes of different
objects, and they are not limited to identifying the most common geometric shapes such as the circle
or the square. Computers use images and video frames as input signals. The processing of image and
video-frame signals is called image processing: it processes the characteristics of the images or
video frames and transforms them into output signals.

Computer systems interpret a shape as a region enclosed by the outline of an object, which is the
most common way computers identify the shape of an object. In shape identification, the shape
information is extracted from images or video frames. There are many shape identification
approaches, including Curvature Scale Space, dynamic programming, shape context, Fourier
descriptors, and wavelet descriptors; however, the most common shape identification methods are
area-based and boundary-based techniques[2].

With the area-based identification method, the system takes into consideration all the pixels within
a region of an image to obtain the shape. Usually the area-based technique makes use of moment
descriptors to represent the shape. The boundary-based technique, on the other hand, uses the
boundary of the image[2]. The boundary-based method is more accurate than the area-based method,
as it represents the features of the object more clearly in order to identify the shape.

1.2 BACKGROUND AND SIGNIFICANCE
Computer vision research began in the 1960s at institutions that saw it as a stepping stone to AI.
Early scientists were highly hopeful about the future of these connected fields, and they pushed AI
as a technology with the potential to change the world[3]. Marvin Minsky made the first attempt to
simulate the human brain more than 50 years ago, encouraging additional study into computers'
ability to process information and make intelligent judgements. The technique of automating image
analysis led to the programming of algorithms over time. However, it was only around 2010 that deep
learning approaches began to gain traction[4].

Marvin Minsky instructed a graduate student in 1966 to attach a camera to a computer and have it
report what it visualized[4]. In 2001, two MIT researchers created the first real-time face
detection system. In 2010, Google introduced Goggles, an image recognition tool for searches based
on images captured by mobile devices, and in the same year Facebook introduced face recognition to
tag users in photos uploaded to the app. In 2012, Google Brain created a neural network of 16,000
computer processors that could recognize cat images using a deep learning method. Amazon sold its
real-time facial recognition system, Rekognition, to law enforcement agencies in 2018.

The concept of shape detection underlies all the above-mentioned developments; for instance, a face
detector may use the shape or structure of a person's face to identify them. Goggles and some search
engines make use of shape detection when searching based on an image taken by a mobile phone.
Object recognition systems also use shape detection, because an object is defined by its shape:
these systems must first understand the structure of the object, which is basically its shape.

Applications of Shape Detection and Computer Vision


• In robotics for pick and place.
• Automated vehicles/driverless vehicles.
• Security systems and home automation systems.
• Fingerprint analysis.
• Handwriting mapping.
• Face recognition.
• Remote sensors.
• Search engines like Google, where you search with an image.

1.3 PROBLEM STATEMENT
Humans recognize certain default shapes by eye, and vision is therefore an essential aspect of human
knowledge. The same technique can be carried out in machines and computers, since software must
detect a shape before performing any operation on it. Humans may make errors when identifying
shapes and may not be accurate, whereas computer vision can detect shapes consistently. Shape
detection is essential, especially for newer technologies such as driverless automobiles, home
security systems, and some robotics applications in the car manufacturing industry. The aim of the
project was to give a computer vision so that it can detect and identify shapes automatically.

1.4 PROJECT OBJECTIVES


The main objective was to develop an automatic shape detector using the Python Integrated
Development Environment (IDLE) and the Arduino IDE. The project was not to use any robot;
scanning had to be done using a camera. The following are the objectives of the project.

Project Objectives
• Develop Graphical User Interface (GUI) using Python.
• Implement serial communication between Python and the Arduino IDE.
• Implement Image Processing.
• Detect shapes on photos using the laptop webcam or any camera.

1.5 PROJECT LIMITATIONS


The system is limited to operating with serial communication only, which means it is not controlled
wirelessly. It detects shapes from live pictures; no pictures are read from the computer memory for
shape detection. The picture information is transferred from the camera to the Python IDLE for
processing, and the result is passed to the Arduino via serial communication. Another constraint is
that the system is only able to detect shapes drawn on a white sheet of paper or background, to
avoid over-segmentation and the detection of contours/lines that are not part of the shape.

There is a limitation on the shapes that can be detected, because the system can only detect shapes
that are defined in the Python code; the system cannot detect any shape that is not defined in the
code, or any irregular shape. The code can be modified so that the system detects additional shapes;
however, this does not mean that the system will detect any shape, because the shape has to be
defined in terms of area, and the approximation of polygons has to be taken into consideration, in
order for the shape on the paper to be detected. If the area or polygon approximation of a certain
shape is not defined in the code, the system will not be able to detect it.

Challenges Experienced
Initially the system was meant to use an ESP32-CAM; however, the ESP32-CAM provided had a problem
with one of its pins, so for this system the webcam of a laptop was used to replace the ESP32-CAM.
There was a noticeable difference between the results obtained using the ESP32-CAM and those
obtained using the laptop webcam. Using the laptop webcam also affected how the camera could be
switched off.

CHAPTER 2
2.1 LITERATURE SURVEY
Shikha Garg, a Computer Engineering student, and Gianetan Singh Sekhon, an assistant professor at
Punjabi University, Patiala, India, developed a method of shape recognition among different regular
geometrical shapes using morphological operations, implementing the shape detector in MATLAB. After
an introduction to the shape recognition concept, the process of extracting the boundaries of
objects was described in order to avoid over-segmentation. Their system read an image saved on the
computer and performed the operations needed to detect the shapes in the image[2]. The algorithm
detected the shapes in four cases: when there were different objects in the given image, when the
objects in the image were touching, when objects overlapped, and when one object was contained
inside another in the image that was read. After all four cases, with the help of the extracted
boundaries and shape properties, classification of the shapes was done.

Their proposed method was to avoid over-segmentation between different shapes such as the square,
rectangle, and circle with the use of morphological operations, and to label the shapes after they
had been identified. Over-segmentation was prevented, and the overlapping areas were segmented so
that the boundaries of the shapes in the image could be extracted. Once the shapes were segmented
from one another, they were identified and a filtering technique was applied. A database was created
in which the features of the shapes were preloaded; for instance, the circle had the features:
number of corners = 0, absolute difference between length and breadth < 25, and sensitivity
factor = 0.24. All the shapes' features were loaded in this manner[2]. The shapes in the input image
were matched against these preloaded features, and thus each shape was labelled and detected. The
following figure shows the output image labelled with the identified shapes.

Figure 2.1: Resulting Image After Labelling[2]


Raghav Puri, Archit Gupta, and Manas Sikri from Bharati Vidyapeeth's College of Engineering in New
Delhi, India developed a contour, shape, and color detection system using the Python Open-Source
Computer Vision (OpenCV) library. Using Python 2.7, OpenCV, and NumPy, their system detected the
contours, shapes, and colors of various geometrical figures in binary images. These key packages
were used to load and process the images and to detect the various shapes and colors within the
given sample images. The first step implemented was to perform object detection on the provided
image[5]. To implement the object detection, all the necessary Python 2.7 packages, namely
Matplotlib, Python 2.7.x, and NumPy, were downloaded and installed, and these modules were all
imported.

First, the image to be processed, which was saved on the computer, was read using the cv2.imread()
Python function, with the path of the input image passed as an argument; contour detection was then
applied to the image. In the sample images, the number of vertices of each shape's approximated
contour was found[5]. Using elementary geometry, if the number came out to be 4, a square was
assigned; if it came out to be 5, a pentagon was assigned; if it came out to be 3, a triangle was
assigned; and if it was not any of the three cases, a circle was assigned. The project started with
reading an image from the computer, then contour detection, then shape detection. Pixel detection
was then implemented, and from it the color was detected.

The challenge Puri and his colleagues faced during the progress of their project was determining the
shape and color when two figures overlapped; it was difficult to detect small figures that were
inside bigger ones. They began by detecting the small contours and their shapes, then proceeded to
the bigger contours and their shapes, and then moved on to detecting the colors. The results
obtained were displayed on the Python terminal/console, and the names of the assigned shapes were
written on the output image.

Using OpenCV and an Arduino Uno, Xhensila Poda and Olti Qirici designed a shape detection and
categorization system built around a conveyor with servo motors. A bin opened if a shape was
detected, and the shape was then sent to its specific bin by the conveyor. The essential image
processing techniques to be used were given high priority in order to complete the system. Poda and
Qirici went all the way from the basic image processing layers to the stage where, using a
mechanical arm, a hardware system (based on Arduino) could identify objects with shapes of
particular categories. For their system they used a webcam to scan for the shapes present in an
image[6]. Since OpenCV was used, its contour-finding algorithm was used for the system. To perform
shape detection, contour approximation was used once all the contours in an image had been detected.

The perimeter of the contour was computed in order to implement the contour approximation; the
actual contour approximation was developed after the perimeter was computed. Since a contour
consists of a set of vertices, the system used the number of entries in the list to identify an
object's shape. The following figure shows the results of the contour detection.

Figure 2.2: Contour Detection Results[6]

In this case, the sender of the data was a Python script, and the receiver was an Arduino; serial
communication was used to transmit data from Python to the Arduino. Python sent 'p' for pentagon to
the Arduino, so if a message from Python contained the letter 'p', a pentagon had been detected by
the Python script. The servo motor then opened 90 degrees, opening the bin for pentagons; this was
implemented for only two shapes because the hardware structure had only two containers.

CHAPTER 3
3.1 COMPONENT ANALYSIS
The following are the main components that were used for the project; the significance of each
component to the project is also discussed. The project does not require many components.

3.1.1 Arduino Mega 2560 Microcontroller Board


The Arduino Mega 2560 microcontroller board, which is based on the ATmega2560, is used in this
project. It has 54 digital I/O pins (of which 15 are Pulse Width Modulation (PWM) outputs), 4
Universal Asynchronous Reception and Transmission (UART) hardware serial ports, a 16 MHz crystal
oscillator, an In-Circuit Serial Programming (ICSP) header, 16 analog inputs, a Universal Serial
Bus (USB) connection, a power connector, and a reset button[7]. The figure below shows the Arduino
Mega 2560 board.

Figure 3.1: Arduino Mega 2560 Microcontroller Board[7]

The Arduino board is configured to display the results of the detected shape: data from Python is
received through serial communication and read, after which the Arduino commands the LCD to display
the data, which is the name of the identified shape.

3.1.2 ESP32-Cam
The ESP32-CAM is a compact, low-power camera module based on the ESP32. It has an OV2640 camera and
an inbuilt Trans-Flash (TF) card slot (shown in Figure 3.2). The ESP32-CAM is suitable for a broad
range of intelligent Internet of Things (IoT) applications, including wireless video monitoring,
Wireless Fidelity (Wi-Fi) picture upload, Quick Response (QR) code identification, and so on. The
device can operate on 3.3 V or 5 V[8]. Initially this camera module was going to be used to scan for
shapes, but it was damaged beyond repair, so a laptop webcam was used in its place. The Lenovo
Ideapad 330-15IGM laptop was used; its webcam has a 0.3 megapixel resolution and a fixed focus.

Figure 3.2: ESP32-CAM Module[8]

3.1.3 Liquid Crystal Display (LCD)


The 16x1 LCD is a relatively basic module that is frequently used in DIY projects and circuits. The
16x1 represents a display of 16 characters on 1 line, and each character is presented on a 5x7 pixel
matrix on this LCD. For this project the LCD is used to display the results of the detected shape;
it displays the results only when it receives a command from the Arduino. The following figure shows
a 16x1 LCD.

Figure 3.3: Liquid Crystal Display (LCD)

3.1.4 Components List and Prices
The following table shows the estimated prices of all the components that were bought for the
project. The prices were estimated using Mantech prices; Mantech is a common electronics supplier
in Durban.
Table 3.1: List of Components and Prices
Component Price
Arduino Mega 2560 R300.00
LED x 5 R2.55
ESP32-CAM R212.11
Male to male wires R28.00
Female to male wires R28.00
I2C Interface LCD Module R44.64
LCD R146.12
Resistors x 5 R4.50
Total R765.92

3.2 MACHINERY
3.2.1 Software
The project schematic was designed in Fritzing, and all the necessary libraries for the components
needed were installed in the Fritzing software. Fritzing is a creative ecosystem that includes a
software tool, a community website, and services that allow users to capture their prototypes, share
them with others, teach electronics, and make professional Printed Circuit Boards (PCBs). The
Arduino Integrated Development Environment (IDE) was used to run and execute the Arduino code. For
image processing and detecting the shape in the photo, the Python IDLE was used; the Python
environment also implements the serial communication between the Python script and the Arduino.
Python OpenCV is the most important module needed for image processing; it was installed using the
command 'pip install opencv-python' in the Anaconda command window. The block diagram and the flow
chart of the project were designed using an online tool called draw.io, which has all the required
flowchart shapes.

3.3 MANPOWER AND TIME MANAGEMENT
The table and Gantt chart below illustrate how the tasks were conducted and how time was managed for
each project task. It was necessary to complete each task within its allocated time so that time
could be managed.
Table 3.2: Project Task Dates

Task Start Date End Date Days to Complete

Brainstorming and Formulation 20-Sep-21 26-Sep-21 6
Literature Review 25-Sep-21 29-Sep-21 7
Project Proposal 17-Oct-21 10-Nov-21 11
Python Code Programming 22-Oct-21 10-Nov-21 27
Arduino Code Programming 30-Oct-21 08-Nov-21 14
GUI Programming 30-Nov-21 15-Dec-21 15
Progress Circuit Testing and Results Analysis 28-Dec-21 08-Jan-22 8
Progress Presentation 15-Dec-21 11-Jan-22 17
Final Report 28-Dec-21 17-Jan-22 20
Final Presentation and Demo 17-Jan-22 22-Jan-22 7

The following is the Gantt Chart showing the time management, project timeline and the
completion date for each task performed during the progress of the project. The bars on the
Gantt Chart represent the number of days it took each task to be completed.

Figure 3.4: Gantt Chart Showing Project Timeline.

CHAPTER 4
4.1 ENGINEERING EXECUTION
4.1.1 Block Diagram

Figure 4.1: Project Block/Flow Diagram

The figure above shows the block diagram of the project, with the process flow represented by the
arrows; the numbers represent the steps of the process flow. The ESP32-CAM was originally going to
be used, but, as mentioned above, the ESP32-CAM was damaged and the webcam of the computer had to be
used in its place. The project is controlled using the GUI, which turns the computer's webcam on and
off. As mentioned, the project was designed using Arduino and Python; to connect the two
environments, serial communication was implemented in the Python code. Python was used to develop
the shape detection algorithm; this program detects the shape drawn on a piece of paper in real time
using the OpenCV package, which is the main tool used in Python for computer vision.

The user turns on the webcam using the GUI; the webcam opens and scans for shapes in real time.
While scanning, if a shape is detected on the paper, Python identifies whether the shape is a
circle, a triangle, or a rectangle. A signal is created and communicated to the Arduino board
through serial communication. The Arduino reads the serial port for incoming data, which is the name
of the detected shape; after the data has been read successfully, the Arduino sends a command to the
LCD to print the name of the identified shape.

4.1.2 Programming Design
The flowchart below shows the programming approach used for the project. It is a visual
representation of the data flow; it helps in identifying the project's important steps and provides
a broader context of the process flow.

Figure 4.2: Project Flow Chart


I. OpenCV
OpenCV is a free and open-source library for computer vision applications such as video and
Closed-Circuit Television (CCTV) footage analysis, as well as picture analysis. The OpenCV module is
imported in Python with import cv2, which is required to access all of the OpenCV library features.
cv2.VideoCapture() is the OpenCV function for reading video; passing '0' as the function parameter
gives access to the default camera. The live video is then shown in a window on the computer screen.
The OpenCV module can read, write, blur, crop, rotate, and transpose images, show image information,
and detect edges in images.
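As an illustration of this capture loop, a minimal sketch (not the project's exact code; the window
name and the 'q' key handling are assumptions) of reading and displaying the live video is shown
below.

import cv2

cap = cv2.VideoCapture(0)          # 0 selects the default (laptop) webcam

while True:
    ret, frame = cap.read()        # grab one video frame
    if not ret:                    # stop if the camera returns no frame
        break
    cv2.imshow("Live video", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to close the window
        break

cap.release()
cv2.destroyAllWindows()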

II. Graphical User Interface (GUI)


A GUI is a user interface that enables the user to interact with electronic devices like smartphones
and computers by using icons, menus, and other visual indicators or representations (graphics). In
contrast to text-based interfaces, where data and commands are purely textual, GUIs show information
and the corresponding user controls graphically[9]. A pointing device, such as a mouse or a finger
on a touch screen, is used to operate the GUI elements. The system is controlled using a GUI, which
turns the camera on and off. The developed GUI is shown below; it has two buttons, one to turn the
camera on and one to turn it off.

Figure 4.3: Developed GUI


The GUI of the system was designed using the Tkinter Python package, which is a standard package for
building graphical interfaces in Python. The following figure shows the code used to develop the
GUI.

Figure 4.4: GUI Code
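Since the listing in Figure 4.4 is only available as a screenshot, the following is a minimal sketch
of such a two-button Tkinter GUI; the function and widget names here are assumptions for
illustration, not the project's actual code.

import tkinter as tk

def turn_cam_on():
    # In the project this would start the OpenCV capture and detection loop
    print("Camera ON")

def turn_cam_off():
    # In the project this would release the camera and close the video windows
    print("Camera OFF")

root = tk.Tk()
root.title("Automatic Shape Identification")

tk.Button(root, text="TURN ON CAM", command=turn_cam_on, width=20).pack(pady=10)
tk.Button(root, text="TURN OFF CAM", command=turn_cam_off, width=20).pack(pady=10)

root.mainloop()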


III. Serial Communication Between Arduino and Python
This section shows how the serial connection between the Arduino and Python was set up. Both Python
and Arduino have a library specifically for serial communication: PySerial for Python and Serial for
Arduino, respectively. In this case, the sender of the data is a Python script, and the receiver is
the Arduino. In the Python script a PySerial object was declared, which opened the communication
path through which the data was sent; the data written to the port by this object is received by the
Arduino. To implement serial communication between the two environments, the command 'arduino =
serial.Serial(port = 'COM5', baudrate = 9600, timeout = 0.1)' was declared in the Python script.
Data is transmitted bit by bit in serial communication. The command arduino.write() sends the name
of the detected shape from Python to the Arduino; the transmitted data is only an indication of the
shape that has been detected.

The Arduino code structure is made up of two main functions: setup(), which declares all of the
essential objects and variables, and loop(), which runs for as long as the Arduino is operational.
Serial.begin() is called in the body of setup(), which initiates the communication on the Arduino
side, and in loop() the command Serial.read() is used to receive the data from Python.
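On the Python side, the sending logic might look like the following minimal sketch; the port name
'COM5' and baud rate are the values given above, while the newline framing and the helper name
send_shape are assumptions for illustration.

import serial
import time

# Open the serial port to the Arduino (port and baud rate as used in the report)
arduino = serial.Serial(port='COM5', baudrate=9600, timeout=0.1)
time.sleep(2)                      # give the Arduino time to reset after the port opens

def send_shape(name):
    # Send the name of the detected shape to the Arduino, one line at a time
    arduino.write((name + "\n").encode("ascii"))

send_shape("TRIANGLE")             # example: a triangle has just been detected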

IV. The Working Principle of the Project


The working principle of the project is illustrated by the flow chart in Figure 4.2. The following
are the steps of the working principle of the project (a minimal sketch of the masking step is given
after this list).
• When the 'TURN ON CAM' button on the GUI is clicked, the camera turns ON; otherwise it stays OFF.
• The video frame is converted to Hue-Saturation-Value (HSV), the upper and lower arrays for the
color red are obtained, a mask is created for red, and the mask and video frame are shown on the
screen.
• Contours are obtained from the mask, and from each contour the area and the polygon approximation
are obtained.
• If the area is greater than 400 and the approximation length is 3, the shape is detected as a
TRIANGLE, and it is printed on the LCD by the Arduino.
• If the area is greater than 400 and the approximation length is 4, the shape is detected as a
RECTANGLE, and it is printed on the LCD by the Arduino.
• If the area is greater than 400 and the approximation length is greater than 10 but less than 25,
the shape is detected as a CIRCLE, and it is printed on the LCD by the Arduino.
• If none of these cases applies, the LCD prints 'NO SHAPE DETECTED'.
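The masking step referred to in the list above might be sketched as follows; the HSV bounds for red
are illustrative assumptions, not the exact values used in the project.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ret, frame = cap.read()

# Convert the frame to HSV and build a binary mask for red pixels
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower_red = np.array([0, 120, 70])       # assumed lower HSV bound for red
upper_red = np.array([10, 255, 255])     # assumed upper HSV bound for red
mask = cv2.inRange(hsv, lower_red, upper_red)

cv2.imshow("Frame", frame)
cv2.imshow("Mask", mask)                 # white foreground on a black background
cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()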

4.1.3 Shape Detection Process in Python OpenCV
The live video is first converted to grayscale, then converted to a mask, and then the mask is
eroded. Masking the video keeps all the red pixels in a frame and hides the others, producing a
binary video with a white foreground against a black background. The contours are found in the
masked video using the function '_, contours, _ = cv2.findContours(mask, cv2.RETR_TREE,
cv2.CHAIN_APPROX_SIMPLE)'. The function has three arguments; the most important is the third
argument, cv2.CHAIN_APPROX_SIMPLE, which determines how the points along a line are stored. Unlike
cv2.CHAIN_APPROX_NONE, which stores all the points along a line, cv2.CHAIN_APPROX_SIMPLE stores only
a few points and connects them to form the vertices.

I. Ramer-Douglas-Peucker Algorithm for Shape Detection


The algorithm used for shape detection in this project is contour approximation, which uses the
Ramer–Douglas–Peucker (RDP) algorithm. Given a threshold value, this algorithm simplifies a polyline
by reducing its vertices: given the start and end points of a curve, the algorithm finds the vertex
at the maximum distance from the line connecting the two reference points. The term "contour
approximation" refers to an algorithm for reducing the number of points in a curve by using a
reduced set of points. The concept behind contour approximation is that a curve can be approximated
by a sequence of short line segments. As a result, an approximated curve is constructed that
contains a subset of the points defined by the original curve.
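A minimal, self-contained sketch of the RDP simplification itself is shown below for illustration
only; the project uses OpenCV's built-in cv2.approxPolyDP rather than this hand-written version, and
the example outline is a made-up set of points.

import numpy as np

def rdp(points, epsilon):
    # Simplify a polyline (list of (x, y) points) with the Ramer-Douglas-Peucker algorithm
    if len(points) < 3:
        return list(points)
    start, end = np.asarray(points[0], float), np.asarray(points[-1], float)
    line = end - start
    norm = np.linalg.norm(line)
    # Perpendicular distance of each intermediate point from the start-end line
    dists = []
    for p in points[1:-1]:
        d = np.asarray(p, float) - start
        if norm == 0:
            dists.append(np.linalg.norm(d))
        else:
            dists.append(abs(line[0] * d[1] - line[1] * d[0]) / norm)
    idx = int(np.argmax(dists)) + 1
    if dists[idx - 1] > epsilon:
        # Keep the farthest vertex and recurse on both halves of the curve
        return rdp(points[:idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)
    return [points[0], points[-1]]       # all intermediate points lie within epsilon

# Example: a noisy triangular outline collapses to its three corners
outline = [(0, 0), (5, 1), (10, 0), (5, 9), (0, 0)]
print(rdp(outline, epsilon=2.0))         # [(0, 0), (10, 0), (5, 9), (0, 0)]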

Contour approximation in Python OpenCV was implemented by the function 'approx =
cv2.approxPolyDP(cnt, 0.02*cv2.arcLength(cnt, True), True)'. This has three arguments: cnt, the
contour which is approximated; epsilon, the maximum distance between the original curve and its
approximation; and closed, which, if 'True', means the approximated curve is closed, otherwise not.
The epsilon value is normally in the range of 1-5% of the original contour perimeter, see Equation
4.5. The function returns an approximated contour of the same type as the input curve. The contours
are drawn on the shapes using the function 'cv2.drawContours(frame, [approx], 0, (0, 0, 0), 4)',
which draws the approximated contours. In this project, shape detection was done by counting the
number of points in the approximated contour: if there are 3, the system prints 'TRIANGLE'; if there
are 4, the system prints 'RECTANGLE'; if there are more than 10 but fewer than 25, the system prints
'CIRCLE'; and if it is not any of these cases, the system prints 'NO SHAPE DETECTED', since the
system detects only these three shapes.
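Putting these pieces together, the detection and classification step could be sketched as follows.
This is a minimal sketch using the calls named above; the function name classify_shapes is an
assumption, the mask is assumed to have been created as described earlier, and the unpacking of
cv2.findContours is handled for both OpenCV 3.x and 4.x.

import cv2

def classify_shapes(mask, frame):
    # Approximate each contour in the binary mask and name the shape it encloses
    found = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]   # OpenCV 4.x vs 3.x return values

    names = []
    for cnt in contours:
        if cv2.contourArea(cnt) < 400:                     # ignore small, noisy contours
            continue
        epsilon = 0.02 * cv2.arcLength(cnt, True)          # 2% of the contour perimeter
        approx = cv2.approxPolyDP(cnt, epsilon, True)
        cv2.drawContours(frame, [approx], 0, (0, 0, 0), 4)

        vertices = len(approx)
        if vertices == 3:
            names.append("TRIANGLE")
        elif vertices == 4:
            names.append("RECTANGLE")
        elif 10 < vertices < 25:
            names.append("CIRCLE")
    return names or ["NO SHAPE DETECTED"]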

4.1.4 Digital Image Processing
An image is a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and
the amplitude of f at any pair of coordinates (x, y) is called the picture's intensity or gray level
at that location. The image is considered a digital image if x, y, and the intensity values of f are
all finite, discrete quantities[10]. Digital image processing is the process of using a computer to
process digital images. A digital image is made up of a finite number of elements, each with its own
location and value; these elements are called image elements or pixels, with 'pixel' being the term
most generally used to describe the elements of a digital image. In this project some image
processing techniques were applied, for instance contour detection, morphological techniques such as
erosion, grayscale conversion, and image segmentation. Color images have RGB channels, which are the
red, green, and blue channels stacked on top of each other. The figure below shows the three RGB
channels of a typical image.

Figure 4.5: RGB Scales of an Image[11]

Feature extraction was done to calculate the number of pixels in an image. Feature extraction is
part of the dimensionality reduction process, which divides and reduces a large set of raw data into
smaller groupings. The following equation can be applied to extract the number of features, which is
basically the number of pixels in the image (see the sketch after Equation 4.1).

𝑁𝑜. 𝑂𝑓 𝐹𝑒𝑎𝑡𝑢𝑟𝑒𝑠 = 𝑊𝑖𝑑𝑡ℎ × 𝐻𝑒𝑖𝑔ℎ𝑡 [4.1]
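As a quick illustration of Equation 4.1 (a sketch; the file name triangle.png is an assumption), the
image dimensions and the feature count can be obtained as follows.

import cv2

img = cv2.imread("triangle.png")        # assumed file name for illustration
height, width = img.shape[:2]           # image dimensions in pixels

num_features = width * height           # Equation 4.1: one feature per pixel location
print(width, height, num_features)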

The following formula can be used to find the number of pixels of an image; this is calculated using
the RGB values of the image.

𝑃𝑖𝑥𝑒𝑙𝑠 = 𝑅 × 𝐺 × 𝐵 [4.2]

The following are the three images plotted in Python. Before converting each image to grayscale
using the conversion formula, the RGB pixel values at certain points were extracted from the images;
these values are then used in the conversion formula to calculate the grayscale value. The following
table shows the plotted images with the pixel values extracted at certain points.
Table 4.1: RGB Values from Image Plots
Image Point [x, y] RGB Values [Red, Green, Blue]
Triangle [200, 200] [215, 60, 55]
Rectangle [200, 205] [173, 51, 66]
Circle [200, 150] [180, 48, 69]

I. Converting an Image to Grayscale


Grayscale images are single-channel images that are used to reduce the training complexity of models
in a variety of applications and techniques. The images were converted to grayscale and then masked.
The standard RGB to grayscale conversion formula shown below was used to convert the images to
grayscale.

𝐼𝑚𝑔𝐺𝑟𝑎𝑦 = 0.2989 × 𝑅 + 0.5870 × 𝐺 + 0.1140 × 𝐵 [4.3]

Using Equation 4.3 the grayscale value of an image can be calculated as follows. The images used are
shown in the Appendix.
For Triangle
𝐼𝑚𝑔𝐺𝑟𝑎𝑦 = 0.2989 × 215 + 0.5870 × 60 + 0.1140 × 55
𝐼𝑚𝑔𝐺𝑟𝑎𝑦 = 105.7535
The gray scale value of Triangle at the point [200, 200] using Python was found to be
105.75349999999999
For Rectangle
𝐼𝑚𝑔𝐺𝑟𝑎𝑦 = 0.2989 × 173 + 0.5870 × 51 + 0.1140 × 66
𝐼𝑚𝑔𝐺𝑟𝑎𝑦 = 89.1707
The gray scale value of Rectangle at the point [200, 205] using Python was found to be 89.1707.
For Circle
𝐼𝑚𝑔𝐺𝑟𝑎𝑦 = 0.2989 × 180 + 0.5870 × 48 + 0.1140 × 69
𝐼𝑚𝑔𝐺𝑟𝑎𝑦 = 89.844
The gray scale value of Circle at the point [200, 150] using Python was found to be 89.844
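These hand calculations can be reproduced with a short script; this is a sketch, with the pixel
coordinates and RGB values taken from Table 4.1.

# Reproduce the grayscale values of Equation 4.3 for the three sample points
def to_gray(r, g, b):
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

samples = {
    "Triangle  [200, 200]": (215, 60, 55),
    "Rectangle [200, 205]": (173, 51, 66),
    "Circle    [200, 150]": (180, 48, 69),
}

for name, rgb in samples.items():
    print(name, round(to_gray(*rgb), 4))
# Triangle  [200, 200] 105.7535
# Rectangle [200, 205] 89.1707
# Circle    [200, 150] 89.844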

II. Erosion Morphology


This procedure, as the name implies, erodes, or removes, pixels from the object boundary; this
helped in removing noise on the boundary of the images when the shapes were detected. The erosion
process determines whether the structuring element fits the object[12]: if the image pixels fit, the
output pixel is assigned 1, otherwise it is eroded (assigned 0). ("Fits" indicates that all the image
pixels underneath the Structuring Element (SE) should have the same value as the corresponding SE
pixels.) The erosion of a binary image A by some SE B is defined as illustrated below.

A ⊖ B = { z | (B)z ⊆ A } [4.4]

That is, the set of all points z such that B, translated by z, is a subset of A (contained in A).
The SE is shifted over the image, and all points where the SE has no common element with the
background are set to 1, while the remaining positions are eroded[12]. As a result, the object's
surface area shrinks; if the object has any holes, this process tends to enlarge the hole region.
Erosion may be easily applied to binary images by taking the minimum of the neighborhood described
by the SE. The epsilon value, which is the second parameter of cv2.approxPolyDP(), sharpens the
vertices of the contour to give a clear approximation and is used to calculate the area. It is
calculated as follows in Python, where arcLength is the contour perimeter and True means the contour
has to be closed.

𝐸𝑝𝑠𝑖𝑙𝑜𝑛 = 0.01 × 𝑐𝑣2. 𝑎𝑟𝑐𝐿𝑒𝑛𝑔𝑡ℎ(𝑐𝑛𝑡, 𝑇𝑟𝑢𝑒) [4.5]
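Returning to the erosion itself, the step used to clean up the mask might look like the following
minimal sketch; the 5x5 kernel size, the iteration count, and the file name mask.png are
assumptions, not the project's exact values.

import cv2
import numpy as np

# mask: binary image (white shape on a black background), e.g. from cv2.inRange()
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # assumed file name for illustration

kernel = np.ones((5, 5), np.uint8)      # 5x5 square structuring element (SE)
eroded = cv2.erode(mask, kernel, iterations=1)

# Pixels whose 5x5 neighbourhood is not entirely white are removed,
# which strips noise from the boundary of the detected shape.
cv2.imshow("Eroded mask", eroded)
cv2.waitKey(0)
cv2.destroyAllWindows()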


CHAPTER 5
5.1 TECHNO-ECONOMIC ANALYSIS AND JUDGEMENT
Computer vision plays a vital role in production industries like the car industry and the canned
food production industry, and in security systems. Shape detection can be applied in production
industries; however, there are some alternatives to shape detectors. Computer vision, on the other
hand, is common in many security industries, most importantly in IoT and automated homes. Most of
the security systems in automated homes use face recognition and object recognition, both of which
require computer vision.

5.2 IMPACT AND BENEFITS


5.2.1 Socio-Economic Impacts and Benefits
In the automotive industry, assembly robots employ a multi-image camera structure, which comprises
the installation of two or more cameras that record photo information from different angles. Each
image point has an X, Y, and Z coordinate, and using a series of images the robot can move, track,
and precisely locate moving objects in three-dimensional space. The method detects not only the
precise location of the car parts, but also their movement direction and speed in space. Unlike when
a part is mounted by a human, where human error can damage expensive metal parts, this type of
system saves cost since it minimizes labour costs and manual handling, which is the main reason
industries use robots for tasks like welding and lifting heavy objects. This system has no
environmental flaws; however, it has the disadvantage of taking the jobs of many employees, which
results in the retrenchment of many employees since their services will no longer be needed.

The use of computer vision has been adopted by many manufacturing industries, such as the car
manufacturing industry, which makes use of robots to mount car parts and lift heavy metals that
humans cannot lift. Packing companies also make use of computer vision: in most cases the conveyor
belts have a system that detects the objects that need to be packed. Shape detection systems, on the
other hand, have been used in many industries, such as the textile industry. Shape detection can be
used in the construction industry to detect whether workers are wearing helmets, so as to ensure
everyone wears their safety gear. Shape detection is also employed by many security industries with
systems that can detect weapons and metals, as can be seen in airports.

5.2.2 Common Applications
Medical Applications
Computer vision applications have proven to be quite useful in the health sector, particularly in
the precise diagnosis of brain cancers. Medical experts may employ computer vision technologies to
speed up and simplify the detection procedure[13]. In healthcare, computer vision techniques such as
Mask Region-based Convolutional Neural Networks (Mask R-CNN) can help in the identification of brain
cancers, significantly lowering the likelihood of an incorrect human diagnosis. Conventional
approaches may also pose a risk to patients, because x-rays are commonly used to detect health
issues like bone fractures, and too much exposure to x-rays may be a risk to the patient, as the
high-energy radiation can cause cell defects such as cancers and tumours.

Car Industry
In the manufacturing industry, manual handling has been minimized by using automated robots. Robots
can weld, paint, and even assemble car parts using image processing, shape recognition, and object
recognition; these robots pick up objects, assemble them correctly, and screw them in place. Most of
these robots use cameras and sensors[14] to record information about the object, such as its shape,
edges, and contours, using computer vision. Driverless automobiles also make use of computer vision:
they record information about their surroundings while they drive themselves.

Some robots may use a number of cameras to check that there are no defects on a metal part before it
is mounted on the car[15]. The camera captures an image of the working area or of the object that
the robot will pick up, and software scans the image for features that allow it to establish
position and orientation. This intelligent robot vision creates data that is supplied to the robot
controller, and the previously preset locations are modified accordingly.

Electronic Industry
The electronics sector is likely the most active in the use of automated visual inspection of goods
such as PCBs. The boards are inspected to identify defects such as shorts, opens, over-etching,
under-etching, and spurious metal[14]. To convert the PCB grey-level image to a binary image, a
simple thresholding algorithm is applied.
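Such a thresholding step might be sketched as follows; the threshold value of 127 and the file names
are assumed examples, not values from the cited work.

import cv2

gray = cv2.imread("pcb.png", cv2.IMREAD_GRAYSCALE)    # assumed file name for illustration

# Pixels brighter than the threshold become white (255), the rest become black (0)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

cv2.imwrite("pcb_binary.png", binary)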

Aeronautics And Space Research


The National Aeronautics and Space Administration (NASA) uses computer vision to keep track of the
earth's surroundings, and the robots sent into space are all equipped with machine vision for the
discovery of new life. Some of the alternatives to shape detectors are face detectors; in most cases
these systems need the user to register their face in order for it to be detected in future, using
facial features to register the face on the system. Other systems which are alternatives to shape
detection are object detectors, which are commonly used in automated home security systems.

5.2.3 Alternatives in The Industry


Object Detectors
Object detectors are also used in the AI of automated vehicles or self-driving cars; these cars
usually use computer vision to gather information about their surroundings. The contour shape
detector from a German car manufacturing company is one example of a common alternative to shape
detectors in the industry. The device detects the shape of an object using laser light sensors and
cameras. The system scans red-hot metal to check that there are no defects; it analyzes hot and cold
steel profiles using three-dimensional surface reconstruction. As a result, it captures geometrical
defects caused by rolling faults, such as scale seams, shells, or roller breakouts.

Laser Shape Detectors


A shape detection system based on a laser line and neural networks is sometimes used. In this
technique a laser line is used to scan an object, and a Charge Coupled Device (CCD) camera captures
a series of pictures during the scanning. The object's shape is retrieved by analyzing these
pictures: the topographic information in a picture is extracted by identifying the location of the
laser line in the image plane[13]. Neural networks are then used to determine the mathematical model
of the relationship between the laser line location and the object surface[16].

CHAPTER 6
6.1 SAFETY IN INDUSTRY
6.1.1 Occupational Health and Safety (OHS)
OHS is concerned with workplace safety, welfare, and health. OHS includes the rules, regulations,
and programs aimed at improving the workplace for workers/employees and for stakeholders such as
coworkers, family members, and consumers. Improving the occupational health and safety standards of
an organization supports good profitability, a stronger brand image, and increased staff morale.
Occupational health and safety regulations require the removal, reduction, or replacement of
job-site risks[17], and OHS programs should also contain content that assists in mitigating the
consequences of risks.

Employers and corporate executives must offer a safe working environment for all of their workers.
Every company has a responsibility under OHS to ensure that its employees work in safe conditions
and that their mental health is a key concern. Long hours, few breaks, little acknowledgement, and
unreasonable demands will rapidly leave employees exhausted, worried, and in poor mental health.

6.1.2 Safety in Industry Related Shape Detectors


A shape detector is a safe tool to use in industry: it works automatically and does not need any
human interference. There are many fatalities in the engineering industry, and many of these
fatalities occur as a result of workers not putting on their safety gear. Another thing that may
lead to injury risk in the industry is the incorrect use of equipment and machinery, because of poor
knowledge when it comes to operating the machinery. The advantage of this system is that it can work
automatically without any human interference.

Manual handling is always a problem in the workplace, especially in the engineering industry, but
much of that is taken care of since there are automated systems to assist workers. One of the most
common causes of accidents and fatalities in the industry is not abiding by the safety rules. These
rules are set under OHS; the employers are responsible for the safety of the employees, and they
have to make sure the OHS safety rules are not violated. Since the head is the most critical and
vulnerable part of the body, and any impact on it can cause serious injury or even death, it is
important for employees to wear helmets at all times. Some employees may not always follow the
Occupational Safety and Health Administration (OSHA) regulations, which require them to wear their
safety gear or helmets. As a result, strategies for improving safety performance measurement on
construction sites are critical. The detection of construction workers wearing or not wearing safety
equipment (such as a helmet) in construction surveillance images leads to the identification of
safety violations. The automatic shape detector can be used to detect helmets and automatically
alert the employers if their employees are violating the OHS and OSHA regulations.

CHAPTER 7

7.1 TESTING AND RESULTS


The diagram below shows the project prototype which was used to obtain the final results. The
system worked as follows: when the user clicked the ‘TURN ON CAM’ button on the GUI, the
system turned on the webcam of the computer. After the webcam was turned on, the user
was required to place a sheet of paper with either a triangle, a rectangle, or a circle in front of
the camera for the system to scan for the shape. If a shape was detected, the system used the
Ramer-Douglas-Peucker algorithm to identify which of the three shapes it was. Once the shape
was identified, the system sent the name of the identified shape to the Arduino, and when the
Arduino received it, it turned on LED 1. The system has two LEDs: one blinked every time there
was serial communication and the other blinked when a shape was detected. The Arduino then
sent a command to the LCD to display the result, which is the name of the identified shape.

Figure 7.1: Project Prototype
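
A minimal sketch of this flow is given below. It is not the project's actual code (which is listed
in Appendix A); the window titles, the use of Tkinter for the GUI, and the camera index are
assumptions made for illustration only.

import cv2
import tkinter as tk

def turn_on_cam():
    cap = cv2.VideoCapture(0)                  # 0 selects the computer's default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Each captured frame would be scanned for a shape at this point.
        cv2.imshow("Shape scan", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to stop scanning
            break
    cap.release()
    cv2.destroyAllWindows()

root = tk.Tk()
root.title("Automatic Shape Identification")
tk.Button(root, text="TURN ON CAM", command=turn_on_cam).pack(padx=20, pady=20)
root.mainloop()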

7.1.1 Progress Testing
The following are the shapes being detected and identified during the progress of the project.
These were obtained using the ESP32-CAM and show the three shapes masked and detected.
The contours are drawn around the shapes to calculate the area and obtain the approximation
points. The same procedure was followed for the final testing.

For the rectangle, the system drew contours around the image and calculated the area; if it
obtained four approximation points and the area was greater than 400, it connected the points
and detected a rectangle.

Figure 7.2: Testing of a Rectangle

For the triangle, the system drew contours around the image and calculated the area; if it
obtained three approximation points and the area was greater than 400, it connected the points
and detected a triangle.

Figure 7.3: Testing a Triangle

For the circle, the system drew contours around the image and counted the number of
approximation points, or vertices, on the image; if the number of approximation points was
between 10 and 25 and the area was greater than 400, the system identified the shape as a circle.

Figure 7.4: Testing for Circle
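
A minimal sketch of these decision rules is given below. The point counts and the area threshold
of 400 come from the descriptions above; the approxPolyDP tolerance (2% of the contour
perimeter) is an assumption, since the project's full code is listed in Appendix A.

import cv2

def classify_shape(binary_image):
    # Find the outer contours in a binarized (masked) image (OpenCV 4 signature).
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        area = cv2.contourArea(contour)
        if area <= 400:
            continue  # ignore small contours and noise
        # Approximate the contour with fewer points (Ramer-Douglas-Peucker).
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
        if len(approx) == 3:
            return "Triangle"
        if len(approx) == 4:
            return "Rectangle"
        if 10 <= len(approx) <= 25:
            return "Circle"
    return "No shape detected"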

7.1.2 Project Progress Results


The following are the results obtained during the progress testing of the project. The LCD
displayed the results 2 milliseconds after they were received by the Arduino through serial
communication. This short delay was important so that the Arduino could read the data reliably;
a sketch of this serial link is given after the result figures.

Figure 7.5: Progress Rectangle Results

Figure 7.6: Triangle Progress Results

Figure 7.7: Circle Progress Results
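
The serial transfer and the short delay mentioned above can be sketched as follows. The port
name, baud rate, and use of the pyserial module are assumptions made for illustration and are
not necessarily the settings used in the project.

import time
import serial  # pyserial module

# Open the serial link to the Arduino (port name and baud rate are assumed).
arduino = serial.Serial(port="COM3", baudrate=9600, timeout=1)
time.sleep(2)  # give the Arduino time to reset after the port is opened

def send_shape(name):
    # Send the identified shape name followed by a newline so the Arduino
    # can read it as a single line, then pause briefly before the next message.
    arduino.write((name + "\n").encode("utf-8"))
    time.sleep(0.002)

send_shape("Rectangle")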


7.1.3 Final Project Results
Table 7.1: Final Project Results

CHAPTER 8
8.1 CONCLUSION
Computer vision gives the machines seen in our daily lives a perception of the world. There
have been some good inventions employed by many companies in order to minimize labor costs
and manual handling. Cellphones, computers, robots, and cars are some of the greatest
inventions that have employed computer vision. Some industries use inventions such as
object detectors and shape detectors; for instance, self-driving automobiles use computer
vision to detect the shapes of traffic signs, road shapes, and road contours. In this project a
shape detection system was developed; it makes use of a camera and an Arduino, with the
Arduino IDE and Python IDLE as the programming environments. Images captured by the camera
are scanned for shapes, and the results are shown on the LCD.

8.2 REASONS FOR USING PYTHON


Python OpenCV is open source and easy to use. When compared to MATLAB, using
OpenCV libraries in Python for image processing functions is faster. This is mostly because the
OpenCV libraries are built in C/C++, so the code requires only a short time to execute.
When code is run, MATLAB uses more time since it is built on a number of wrappers.
MATLAB may take longer for processing, and it requires high-end computers[18]. Python is a free
programming tool which is easy to use, and most computers are compatible with Python. Python
has a number of libraries that offer a variety of algorithms and mathematical computations that
may be modified depending on the conditions, such as for image processing. Python includes a
number of algorithms that may be employed[9]. In order to interface Python with Arduino there
are no large packages that have to be installed, only a few modules, and they are easy to install
using the Anaconda Command Prompt.

8.3 IMPORTANCE OF CONSULTED TECHNICAL PAPERS


For this project four conference papers on shape detection and computer vision were consulted;
these papers helped in understanding shape recognition and identification algorithms and
computer vision as a whole. The papers all had the relevant information needed for shape
detection and identification, and different algorithms were used in them to detect and identify
shapes. This helped in laying out the project results and ensuring that all the relevant information
about the project is discussed. There are very few papers on the internet on shape detection;
however, the few that are available have enough information and give a good understanding of
the concept of shape identification.

REFERENCES
[1] J. Xuan, "Understanding the Computer Vision Technology," in Innoplexus vol. 2021, ed,
2018.
[2] S. Garg and G. S. Sekhon, "Shape Recognition based on Features matching using
Morphological Operations," Shape Recognition, vol. 2, no. 4, pp. 2290-2292, 04 July
2012. [Online]. Available: http://www.ijmer.com/.
[3] "How Artificial Intelligence Revolutionized Computer Vision: A Brief History."
https://www.motionmetrics.com/how-artificial-intelligence-revolutionized-computer-
vision-a-brief-history/ (accessed 05 January 2022).
[4] "History of computer vision: Timeline," in Verdict vol. 05 January 2022, ed, 2020.
[5] R. Puri, A. Gupta, and M. Sikri, "CONTOUR, SHAPE, AND COLOR DETECTION USING OPEN
CV-PYTHON," International Journal of Advances in Electronics and Computer Science,
vol. 5, no. 3, pp. 20-25, 02 March 2018. [Online]. Available: http://iraj.in/.
[6] X. Poda and O. Qirici, "Shape detection and classification using OpenCV and Arduino
Uno," vol. 02, no. 56, pp. 3-9, 23 September 2019.
[7] "Mega 2560 Rev3 | Arduino Documentation." https://docs.arduino.cc/hardware/mega-
2560 (accessed 05 January 2022).
[8] "ESP32-CAM, Camera Module Based on ESP32, OV2640 Camera Included."
https://www.waveshare.com/esp32-cam.htm (accessed 04 November 2021).
[9] J. Stoltzfus. "What is a Graphical User Interface (GUI)? - Definition from Techopedia." Techopedia. https://www.techopedia.com/definition/5435/graphical-user-interface-gui (accessed 04 November 2021).
[10] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 4th ed. Pearson, 2018.
[11] A. Singh, "Image Feature Extraction | Feature Extraction Using Python," in Analytics
Vidhya vol. 28 December 2021, ed.
[12] "erosion." https://theailearner.com/tag/erosion/ (accessed 01 January 2022).
[13] G. Boesch. "Top 10 Applications Of Deep Learning and Computer Vision In Healthcare."
https://viso.ai/applications/computer-vision-in-healthcare/ (accessed 02 November
2021).
[14] V. F. LEAVERS, Shape Detection in Computer Vision Using the Hough Transform, 1 ed.
Springer-Verlag London, 1992.
[15] "Robotic Vision Systems." https://www.acieta.com/automation-application/vision-
systems/ (accessed 01 November 2021).
[16] J. A. Muñoz-Rodríguez, "Shape detection by applying a laser line and neural networks,"
vol. 6046, 01 February 2006, doi: 10.1117/12.674558.
[17] "Occupational Health and Safety (OHS)." Safeopedia.
http://www.safeopedia.com/definition/439/occupational-health-and-safety-ohs
(accessed 04 November 2021).
[18] "How is image processing in Matlab different from Python?"
https://www.matlabsolutions.com/matlab/how-is-image-processing-in-matlab-
different-from-python.php (accessed 28 October 2021).

APPENDIX A
A.1 Python Code

A.2 Arduino Code

APPENDIX B
B.1 SHAPES TO BE DETECTED

B.2 GRAY SCALE IMAGES

