
Chapter 2 – Review of Related Literature and Studies

Conceptual Literature

Foreign literature

According to Hsien-Chou Liao (2009) in the article An Automatic Camera Calibration Method Using GPS-enabled Mobile Device: "When a moving object carried GPS-enabled device and moves into the field-of-view (FOV) of a camera, the object can be detected and marked directly on the real-time image according to its GPS coordinate." Based on the article, to trace the exact location of a moving object, Liao proposed incorporating a GPS-enabled mobile device into the UbiCam environment to provide a GPS-based visual tracking service with an automatic calibration method that tracks the object in real time even while it is moving. He believes that in the future the automatic calibration method will be combined with popular digital maps, e.g., Google Maps, to provide a worldwide visual tracking service.

As stated by Jae-Kil Lee (2016) in the article Development of all-in-one-type deep-sea camera for monitoring red snow-crab habitats: "Because the red snow crabs inhabit the deep waters in the east sea of South Korea, where the depth is approximately between 1,000 m and 1,800 m, an underwater camera that can endure high water pressures is required." Based on the article, the authors developed an underwater camera for monitoring red snow crabs in the East Sea of South Korea. The maximum depth capability of the camera was 2,000 m. The camera was attached to fishing traps and could operate automatically without external cable links to the boat. Using this camera, they successfully monitored and recorded videos of red snow crabs.

As explained by K. Fintzel (2003) in the article 3D vision system for vehicles: "Several works about automated driving systems have been dedicated to collision avoidance, and it has already been proved that image technology can be useful in this area." Based on the article, by providing a 3D representation of the images acquired from the camera, the driver becomes aware of his surroundings, and the representation can be used for warning and driving guidance.

According to Kenton O'Hara (2016) in Body Tracking in Healthcare, Synthesis Lectures on Assistive, Rehabilitative, and Health-Preserving Technologies: "Understanding the posture and motion of the human body has been concern within healthcare settings." The article highlights the exciting possibilities that sensor technologies are opening up in the healthcare space. From the assessment and monitoring of medical conditions, to new opportunities for rehabilitation, to innovations in interaction in the operating room, body-tracking technology makes possible a whole new world of applications and systems.

Local Literature

As mentioned by Cyrel O. Manlises (2015) in the article Real-Time Integrated CCTV Using Face and Pedestrian Detection Image Processing Algorithm for Automatic Traffic Light Transitions: "With the technology today, detection of pedestrian was introduced to decrease the accidents in the roads. It was implemented on the traffic lights and cars." Pedestrian crossings are among the places where CCTVs are installed; CCTV cameras are now equipped with a pedestrian-detection feature that detects the presence of people and helps the system determine when they may cross.

Conceptual Studies

Foreign Studies

According to Chung-Hua Chu (2017) in the article Camera Pose Trace Based on Motion Sensor in Mobile Devices: "The gyroscopes and accelerometers can provide the precise geometry data of the rotational and translational movements in the effect on picture taking." Based on this study, using Micro-Electro-Mechanical Systems (MEMS) gyroscopes and accelerometers is effective in achieving better visual quality and camera-path tracing. The authors also proposed an efficient and effective algorithm; their experiments showed better results compared with other existing works on mobile devices.

As claimed by Simone Milani (2015) in the article Three-dimensional reconstruction from heterogeneous video devices with camera-in-view information: "The reconstruction of 3D scenes from unsynchronized and uncalibrated cameras has been a flourishing research area during the last years." Based on this article, a 3D modelization of the surrounding environment is enabled by improvised ad-hoc camera networks of both static and mobile devices. A 3D reconstruction proves extremely helpful for accurate camera localization and scene understanding. By testing different scenarios, Milani presents a new algorithm to reconstruct 3D models from uncalibrated images generated by an ad-hoc network of cameras.

According to Yuki Kaneto (2016) in the article Space-sharing AR interaction on multiple mobile devices with a depth camera: "Many applications using AR technology on mobile devices such as smartphones have been developed and various kinds of applications, including ones designed for amusement are available. However, most current AR applications have been developed for a single user." Based on this study, the authors proposed a method of registering 3D point sets obtained from two depth cameras in a face-to-face situation. With this method, markerless AR can be realized in which multiple users share the augmented space and interact with the same virtual object in that space.

As stated by Hyunho Kim (2016) in the article Embedded camera module for automotive camera system: "A camera module is recently being used a lot under the influence of a high degree and automation of vehicle parts. Global camera module industry has reached $20.1B in 2014 and should reach $51B in 2020." Based on this study, the increasing demand for camera modules is due to their applications being directly linked to consumer safety, quality, and reliability; because of this, the author presents an embedded camera module using embedded assembly technology that provides many features to deliver a high-quality product to the automotive market.

According to Slavomir Matuska (2018) in the article Determination of the big mammals migration corridors in the particular areas using remotely-operating intelligent camera system: "With the growing rate of transportation, urbanization and industrialization across the all countries, the barriers in the country sides are created. Because of these barriers, the natural migration corridors of big mammals are endangered." Based on this study, mammal migration defines the animals' interaction with their environment and their survivability. The goal was to create a remotely operating intelligent camera system that can help identify the migration corridors of mammals, in order to help plan new roads in places that avoid their natural migration routes or to create special overpasses for the mammals.

As believed by Dan Ionescu (2013) in the article A new NIR camera for gesture control of electronic devices: "A camera which operates using an image sensor such that its output contains depth information has been the key element in obtaining the well-known six degrees of freedom in user interaction." Based on this study, by using image-processing algorithms to detect and track finger movements, a user can perform gestures similar to those on multi-touch surfaces to pinch and grab a 3D object within a virtual environment.

According to Tao Liu (2014) in the article Vision Guidance System for Autocollimator with Single Camera: "To reduce human impact, machine vision technology has been widely used in measurement system." Based on this article, a new vision guidance system for an autocollimator, based on a single camera and an automatic collimation method, was proposed and proved feasible. To improve the efficiency of the system, the image-processing algorithm needs further study to provide higher feature-extraction precision.


As stated by Jin-Yeong Park (2015) in the article Multi-legged ROV Crabster and an acoustic camera for survey of underwater cultural heritages: "Because of the fast ocean currents and low visibility, underwater survey and excavation by divers are strictly restricted. Especially, optical cameras and human's visual inspection become defective." Based on this article, the authors conducted an underwater survey experiment at sea using a remotely operated vehicle (ROV) named Crabster, driven by six artificial legs and fitted with a high-resolution acoustic camera. It can walk on the seafloor, and it approached objects of interest successfully with little disturbance to its surroundings.

According to Shashi Kumar (2013) in the article Face distance estimation from a monocular camera: "The fact that the camera and user's face are in constant motion; measuring the distance of the face from the camera becomes a hard problem." Based on this study, a user positions himself at a particular distance from the device to read the contents of the display. The idea is to give the user clarity in reading by adjusting the contents according to the distance of the face from the device.

According to MA Wenpeng (2012) in the article Research of Intelligent Search Engine Using Web Camera: "At the early date of 1990, there was no World Wide Web. However, around this time, there was still an Internet, and many files were scattered all over the vast network." Based on this article, the author introduced a new search engine system using a web camera. Computers have long been able to read letters, numbers, and even complex images through existing recognition technology; this technology only needs to be transferred to mobile devices so they can identify data instead of requiring input from users. When users see something that catches their interest or raises a question, they simply let their mobile device see it and receive relevant information immediately.

As stated by Manoj R. Rege (2011) in the article Using Participatory Camera Networks For Object Tracking: "Mobile devices come embedded with various sensors of which camera is most widely used, however operated by the device owner for the individual needs." Based on this article, participatory sensing involves people contributing and sharing interesting data using their mobile devices. The advantages of getting people to participate in object tracking are the mobility they lend to the camera devices they carry, the relatively ubiquitous coverage their presence offers, and the absence of deployment cost in using their already-existing camera devices.

According to Edwin Walsh (2017) in the article Assistive Pointing Device Based on a Head-Mounted Camera: "Due to computing devices gaining importance in our everyday activities, an increased need arises to improve on human–computer interaction (HCI) for people with a disability (that limits their ability to control computing devices), allowing them to participate in this growing trend." Based on this article, the author introduces and validates the performance of an alternative input device for people with limited hand/arm movement and control. A low-cost head-mounted camera could serve as an alternative human-interface device for such users, allowing them to participate in the ongoing trend of computing devices gaining importance in everyday activities.


As stated by Junjie He (2014) in the article Mobile-Based AR Application Helps to Promote EFL Children's Vocabulary Study: "Children's often have a number of problems when learning vocabulary, e.g. lacking of scenario creation, old teaching model and teachers' poor pronunciation." Based on this article, in traditional methods of teaching English, the interaction between teachers and students usually relies on gestures and discussions, yet it lacks interesting interaction. The authors therefore used augmented reality (AR) technology to design and develop mobile-based English-learning software for pre-school children, in order to address bored students and teachers' non-standard pronunciation. A vivid picture emerges when the mobile camera is used to identify an English word on a card, which improves children's interest in learning.

According to Behnoosh Hariri (2011) in the article Demo: Vision Based Smart in-Car Camera System for Driver Yawning Detection: "As driver fatigue and drowsiness is a major cause behind a large number of road accidents." Based on this article, one of the most significant causes of road accidents is driver fatigue and drowsiness, which impairs the driver's alertness and response time. In-car cameras used for real-time tracking and monitoring of the driver, in order to detect drowsiness based on yawning detection, can help prevent such accidents.

Local Studies

As explained by Jenel Luise C. Bolosan (2015) in the article Eye state analysis using EyeMap for drowsiness detection: "Drowsiness has become one of the many reasons of vehicular accidents." Based on her study, a person's state, whether drowsy or non-drowsy, can be detected in all three setups; the multi-camera setup is the most effective, although it is limited by the camera's ability to adapt to different lighting conditions.

Synthesis of the Study

The proponents believe that all the articles from the local and foreign studies and literature are connected to the concept of giving convenience to the user. Based on the table of comparison, it can be seen that the research works were done with the same goal, which is vision and detection, using different methods and levels of efficiency. Therefore, the proponents decided to improve the vision and detection of the camera by using front and rear cameras on the helmet to achieve complete vision, and to attach a sensor connected to the mobile application. Furthermore, the proponents designed a prototype that displays the view of the rear camera on the visor of the helmet and records and saves the view of both cameras to the database. The sensor on the helmet is triggered if an accident happens and sends an SMS notification to the receivers through mobile.
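The accident-alert flow described above (sensor reading exceeds a threshold, then an SMS notification is composed for the registered receivers) can be sketched in a minimal form. The threshold value, function names, and message format below are illustrative assumptions for clarity, not the proponents' actual implementation; the real prototype would read the helmet sensor and dispatch the SMS through a GSM or mobile module.

```python
# Hypothetical sketch of the helmet's accident-alert logic.
# The 4.0 g threshold and all names here are assumed for illustration.

IMPACT_THRESHOLD_G = 4.0  # assumed acceleration magnitude treated as a crash


def is_accident(accel_g: float) -> bool:
    """Flag an accident when the measured acceleration exceeds the threshold."""
    return accel_g >= IMPACT_THRESHOLD_G


def build_sms(rider: str, location: str) -> str:
    """Compose the SMS notification text for the registered receivers."""
    return f"ALERT: {rider} may have been in an accident near {location}."


def process_reading(accel_g: float, rider: str, location: str):
    """Return the SMS text if the reading indicates an accident, else None."""
    if is_accident(accel_g):
        return build_sms(rider, location)
    return None
```

Under these assumptions, a normal-riding reading such as `process_reading(1.0, "Rider", "EDSA")` produces no alert, while a high-impact reading returns the SMS text to be sent.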

In the review of related literature and studies considered for the innovation and development of the 360-degree vision camera for a helmet, the proponents used the following articles to give the readers an overview of how the design was innovated and developed using some of the gathered data, which helped in the construction of the prototype. Moreover, the reviewed articles gave the proponents a concise understanding of the advantages and disadvantages that may arise in the design of the device, which consists of hardware and software.
References

[1] Hsien-Chou Liao. "An Automatic Camera Calibration Method Using GPS-enabled Mobile Device," 2009 11th International Conference on Advanced Communication Technology. https://sci-hub.tw/https://ieeexplore.ieee.org/document/4810060

[2] Jae-Kil Lee. "Development of all-in-one-type deep-sea camera for monitoring red snow-crab habitats," OCEANS 2016 MTS/IEEE Monterey. https://sci-hub.tw/https://ieeexplore.ieee.org/document/7761046

[3] K. Fintzel. "3D vision system for vehicles," IEEE IV2003 Intelligent Vehicles Symposium Proceedings (Cat. No.03TH8683). https://sci-hub.tw/https://ieeexplore.ieee.org/document/1212904

[4] Kenton O'Hara. "Body Tracking in Healthcare," Synthesis Lectures on Assistive, Rehabilitative, and Health-Preserving Technologies, 5(1), 1–151. https://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7437541

[5] Cyrel O. Manlises. "Real-Time Integrated CCTV Using Face and Pedestrian Detection Image Processing Algorithm for Automatic Traffic Light Transitions," 2015 International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM). http://sci-hub.tw/https://ieeexplore.ieee.org/document/7393205

[6] Chung-Hua Chu. "Camera Pose Trace Based on Motion Sensor in Mobile Devices," 2017 Conference on Technologies and Applications of Artificial Intelligence (TAAI). doi:10.1109/taai.2017.44. https://ieeexplore.ieee.org/document/8356895

[7] Simone Milani. "Three-dimensional reconstruction from heterogeneous video devices with camera-in-view information," 2015 IEEE International Conference on Image Processing (ICIP). https://sci-hub.tw/https://ieeexplore.ieee.org/document/7351161

[8] Yuki Kaneto. "Space-sharing AR interaction on multiple mobile devices with a depth camera," 2016 IEEE Virtual Reality (VR). https://sci-hub.tw/https://ieeexplore.ieee.org/document/7504721

[9] Hyunho Kim. "Embedded camera module for automotive camera system," 2016 Pan Pacific Microelectronics Symposium (Pan Pacific). https://sci-hub.tw/https://ieeexplore.ieee.org/document/7428414

[10] Slavomir Matuska. "Determination of the big mammals migration corridors in the particular areas using remotely-operating intelligent camera system," 2018 ELEKTRO. https://sci-hub.tw/https://ieeexplore.ieee.org/document/8398257

[11] Dan Ionescu. "A new NIR camera for gesture control of electronic devices," 2013 IEEE 8th International Symposium on Applied Computational Intelligence and Informatics (SACI). http://sci-hub.tw/https://ieeexplore.ieee.org/document/6608963

[12] Tao Liu. "Vision Guidance System for Autocollimator with Single Camera," 2014 Fourth International Conference on Instrumentation and Measurement, Computer, Communication and Control. http://sci-hub.tw/https://ieeexplore.ieee.org/document/6995056

[13] Jin-Yeong Park. "ROV Crabster and an acoustic camera for survey of underwater cultural heritages," OCEANS 2015 - MTS/IEEE Washington. http://sci-hub.tw/https://ieeexplore.ieee.org/document/7401986

[14] Shashi Kumar. "Face distance estimation from a monocular camera," 2013 IEEE International Conference on Image Processing. http://sci-hub.tw/https://ieeexplore.ieee.org/document/6738729

[15] MA Wenpeng. "Research of Intelligent Search Engine Using Web Camera," 2012 IIAI International Conference on Advanced Applied Informatics. doi:10.1109/iiai-aai.2012.42. https://sci-hub.tw/https://ieeexplore.ieee.org/document/6337180

[16] Manoj R. Rege. "Using participatory camera networks for object tracking," 2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras. doi:10.1109/icdsc.2011.6042947. https://sci-hub.tw/https://ieeexplore.ieee.org/document/6042947

[17] Edwin Walsh. "Assistive Pointing Device Based on a Head-Mounted Camera," IEEE Transactions on Human-Machine Systems, 47(4), 590–597. doi:10.1109/thms.2017.2649884. https://sci-hub.tw/https://ieeexplore.ieee.org/document/7831458

[18] Junjie He. "Mobile-Based AR Application Helps to Promote EFL Children's Vocabulary Study," 2014 IEEE 14th International Conference on Advanced Learning Technologies. doi:10.1109/icalt.2014.129. https://sci-hub.tw/https://ieeexplore.ieee.org/document/6901503

[19] Behnoosh Hariri. "Demo: Vision based smart in-car camera system for driver yawning detection," 2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras. doi:10.1109/icdsc.2011.6042952. https://sci-hub.tw/https://ieeexplore.ieee.org/document/6042952

[20] Jenel Luise C. Bolosan. "Eye state analysis using EyeMap for drowsiness detection," TENCON 2015 - 2015 IEEE Region 10 Conference. https://sci-hub.tw/https://ieeexplore.ieee.org/document/7372984
