
Project Title: SMART CARS

Area: Image Processing


Project classification (Product/Research/Application/Review/Simulation):
Application
Project Team Members:
1) Aditya Thite
2) Santosh Dudhal
3) Tejas Bhoir
Abstract: At present, hand gesture recognition offers a natural and usable approach to human-computer interaction, and an automatic hand gesture recognition system provides a new way of interacting with a virtual environment. We will build a hand gesture recognition system able to control the driving of the car, the media player, and other in-car devices such as lights. Speech processing will be used along with image processing to perform these functions. Hand gestures are the key element for interacting with the smart system: speech recognition will activate the advanced functions, and hand gesture recognition will operate the computer media player, for instance volume up/down and next track. Image processing will be done using a camera, and speech processing will be done in software such as MATLAB. With this intelligent car infotainment system, the driver would no longer have to look at the media player and divert attention from the road while driving. It will also allow features such as calling and GPS to be used hands-free.

Objective: Over recent years, the automotive research community has focused mainly on the popular field of autonomous driving. Most of this work addresses the challenge of sensing the environment well enough to control a car autonomously, without any human intervention; in the future, passengers will operate the on-board computers for entertainment purposes only. Our vision of an intelligent car, however, includes not only the self-driving feature mentioned above but also an in-car infotainment system that acts autonomously based on the user's needs. By detecting environment-dependent regularities of use, an intelligent in-car infotainment system could proactively automate complex interaction tasks that are supposed to happen anyway. In this project we present a prototype of an intelligent in-car infotainment system that adapts itself to the context-dependent daily routines of the user. A newly developed real-world driving simulator is used in conjunction with the prototype to simulate the environment of the car, letting the user experience the personalization in real time.
Details of parameters to be monitored / specifications / real-time application
1) Gesture recognition
Using a camera and software we will perform image processing. Image processing applies mathematical operations, or any form of signal processing, to an input that is an image, a series of images, or a video (such as a photograph or video frame); the output may be either another image or a set of characteristics or parameters related to the image. Using these processed parameters we will recognize which gesture has been made and which corresponding action the controller must perform.
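The segment-then-classify idea above can be sketched in a few lines. This is a minimal illustration only, assuming a simple brightness threshold for hand segmentation and a toy left/right rule for choosing a media command; the function and command names are hypothetical, not part of the project's design.

```python
import numpy as np

def segment_hand(frame, threshold=128):
    """Binarise a grayscale frame: pixels brighter than the threshold
    are treated as the hand (a strong simplification; a real system
    would use skin-colour, background, or depth segmentation)."""
    return (frame > threshold).astype(np.uint8)

def classify_gesture(mask):
    """Toy classifier: compare the fraction of hand pixels in the
    left vs. right half of the frame to pick a media command."""
    h, w = mask.shape
    left = mask[:, : w // 2].mean()
    right = mask[:, w // 2 :].mean()
    if left < 0.01 and right < 0.01:
        return "none"          # no hand in view
    return "volume_up" if right > left else "volume_down"

# Synthetic "frame": a bright blob in the right half of the image.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:80, 100:140] = 200
print(classify_gesture(segment_hand(frame)))  # volume_up
```

In practice each camera frame would pass through this pipeline continuously, and the resulting command string would be sent to the controller.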
2) Spoken command recognition
Using a microphone and software we will perform voice command recognition and voice verification. Voice input from the user is processed in software, and the controller is then instructed to operate in the corresponding mode. For example, after voice verification is done, if the user wants to change the music in the media player, he or she simply needs to say "Media player"; the processor then directs the controller to change the output of the media player.
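Once a phrase has been recognized, mapping it to a controller mode is a simple lookup. The sketch below assumes hypothetical phrase and action names for illustration; the actual command set would be defined during implementation.

```python
# Hypothetical command table mapping a recognised phrase to a
# controller action; the entries are illustrative only.
COMMANDS = {
    "media player": "enter_media_mode",
    "next track": "media_next",
    "gps": "enter_gps_mode",
    "call": "enter_phone_mode",
}

def dispatch(transcript):
    """Return the controller action for a recognised phrase,
    ignoring case and surrounding whitespace; unknown phrases
    are ignored rather than acted on."""
    return COMMANDS.get(transcript.strip().lower(), "ignore")

print(dispatch("Media Player"))  # enter_media_mode
```

Keeping the table separate from the dispatch logic makes it easy to add or rename commands without touching the recognition code.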
Block Diagram:

Fig. 1: System block diagram. Inputs (microphone and camera) feed the processing unit (MATLAB, Arduino), which drives the controller. The controller operates the car driving system, media player, GPS system, and phone controlling system; a power supply powers the units.
The block diagram in Fig. 1 explains how our project works. It has three main blocks:
1. Input block
The input block interfaces the input devices to the main processing unit, i.e. the computer. In our case the inputs are a camera, to capture the image or gestures, and a microphone, to capture the voice commands.
2. Processing unit
The processing unit is the core of our project. The inputs from the input block are fed to this unit, which runs the program loaded into it and determines the current gesture made or the current command spoken by the user. Features such as voice recognition are also performed by this unit.
3. Output block
The output block is the control unit: a controller that performs the operations requested by the processing unit. It controls the operation of the output devices such as the GPS system, the media player system, and the driving systems.
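The three blocks above can be sketched as a simple pipeline. All function, gesture, and device names below are hypothetical stand-ins for the real hardware stages.

```python
def input_block():
    """Stand-in for the capture stage: in the real system these
    values would come from the camera and microphone."""
    return {"gesture": "swipe_right", "speech": "media player"}

def processing_unit(inputs):
    """Map raw inputs to a command for the controller.
    The gesture and command names are illustrative only."""
    if inputs["speech"] == "media player":
        return "next_track" if inputs["gesture"] == "swipe_right" else "prev_track"
    return "idle"

def output_block(command):
    """Controller stage: route the command to an output device."""
    devices = {"next_track": "media player",
               "prev_track": "media player",
               "idle": None}
    return devices[command]

cmd = processing_unit(input_block())
print(cmd, "->", output_block(cmd))  # next_track -> media player
```

The same structure holds for GPS and phone commands: only the command table in the controller stage grows.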
Hardware/Software tools:
1. MATLAB
2. Camera
3. Microphone
4. Controller
5. Media player
