GOVERNMENT
POLYTECHNIC COLLEGE, ATTINGAL
SEMINAR REPORT ON
MIND READING
2017-2020
SUBMITTED BY
AKSHAY S.B. (17150204)
CERTIFICATE
This is to certify that this seminar report entitled “MIND READING” is a bonafide record of the
work done by AKSHAY S.B., a final year diploma student in Computer Hardware Engineering with Reg
No: 17150204, in partial fulfilment of the requirements for the award of the diploma during the
academic year 2019-2020 from the Directorate of Technical Education, Govt. of Kerala.
ACKNOWLEDGEMENT
The seminar titled “MIND READING” was made possible through the co-operation of various
persons, to whom I wish to express my sincere thanks and gratitude. First, I thank my parents for
their support throughout the completion of this seminar, and I thank the almighty for its successful
completion. I express my sincere gratitude to our beloved Principal, Shri. P O Nizar, for his
academic support, and I wish to place on record my grateful thanks to Mrs. Bindu Raj, H.O.D of
CHE. Finally, I thank all other teaching and non-teaching staff, and my friends, for their
encouragement and sincere co-operation.
ABSTRACT
Drawing inspiration from psychology, computer vision and machine learning, the team in the
Computer Laboratory at the University of Cambridge has developed mind-reading machines -
computers that implement a computational model of mind-reading to infer mental states of people
from their facial signals. The goal is to enhance human-computer interaction through empathic
responses, to improve the productivity of the user and to enable applications to initiate interactions
with and on behalf of the user, without waiting for explicit input from that user.
There are difficult challenges: using a digital video camera, the mind-reading computer system
analyzes a person's facial expressions in real time and infers that person's underlying mental state,
such as whether he or she is agreeing or disagreeing, interested or bored, thinking or confused.
Prior knowledge of how particular mental states are expressed in the face is combined with analysis
of facial expressions and head gestures occurring in real time. The model represents these at different
granularities, starting with face and head movements and building those in time and in space to form
a clearer model of what mental state is being represented. Software from Nevenvision identifies 24
feature points on the face and tracks them in real time. Movement, shape and colour are then
analyzed to identify gestures like a smile or eyebrows being raised.
Combinations of these occurring over time indicate mental states. For example, a combination of a
head nod, with a smile and eyebrows raised might mean interest. The relationship between
observable head and facial displays and the corresponding hidden mental states over time is modeled
using Dynamic Bayesian Networks.
CONTENTS
INTRODUCTION
WORKING
ADVANTAGES
DISADVANTAGES
APPLICATIONS
CONCLUSION
REFERENCES
INTRODUCTION
HCI has been primarily implemented by monitoring direct manipulation of devices such as mice,
keyboards, pens, touch surfaces, etc. However, as digital information becomes more integrated into
everyday life, situations arise where it may be inconvenient to use hands to directly manipulate a
gadget. For example, a driver might find it useful to interact with a vehicle navigation system
without removing hands from the steering wheel. Further, a person in a meeting may wish to
invisibly interact with a communication device. Accordingly, in the past few years there have been
significant activities in the field of hands-free human-machine interface. It is predicted that the future
of HCI is moving toward compact and convenient hands-free devices.
Notably, in a recent report, IBM has predicted that at least in the next five years, mind-reading
technologies for controlling gadgets would be available in the communication market. In the IBM
report it is predicted that "if you just need to think about calling someone, it happens…or you can
control the cursor on a computer screen just by thinking about where you want to move it."
Accordingly, there is a need for enablers that can capture, analyze, process, and transfer
brain signals, and command a gadget based on the instructions a user has in mind. This paper
discusses an enabler that is insertable in a user’s ear to record electroencephalography (EEG)
signals from the brain while the user imagines various commands for controlling a gadget. The ear
could provide a relatively inconspicuous location; indeed, the ear is known as a site where brain
wave activity is detectable.
Certain areas of the ear, such as the ear canal, have proven to be better locations for
detecting brain wave activity. In particular, the upper part of the ear, called the triangular
fossa, shows high brain wave activity, especially near the skull; it is thought that the thinness of
the skull in this area facilitates a stronger reading of brain wave activity. The proposed enabler
could transmit the brain signals, for example wirelessly, to a processing unit inserted in the
gadget. The processing unit decodes the received brain signals using a pattern recognition technique.
Based on the decoded brain signals, the processing unit could control applications that are installed
in the gadget. The details of the device and system that could facilitate such brain-machine interface
are discussed in this paper. This paper addresses the current technologies in mind-reading systems,
the deficiencies and limits of the existing technologies, along with possible solutions to have a
practical device for brain-computer interaction, and the future plans to achieve such cutting-edge
technology.
A team in the Computer Laboratory at the University of Cambridge has advanced mind-reading
devices: computers that infer the mental states of people from their facial signals. The aim is to
improve the interaction between human and computer and to enhance the productivity of the user.
There are difficult challenges: using a digital video camera, the mind-reading computer system
examines a person’s face and infers the underlying mental state, such as whether the person is
interested or bored, agreeing or disagreeing, and so on. Prior knowledge of how mental states are
expressed in the face is combined with facial expressions and head movements observed in real
time. Software identifies twenty-four feature points on the face and tracks them in real time;
movement, shape, and colour are then analyzed to identify gestures such as a smile or raised
eyebrows. Combinations of such gestures occurring over time indicate the mental state.
People express their mental states, including emotions, thoughts, and desires, all the time through
facial expressions, vocal nuances and gestures. This is true even when they are interacting with
machines. Our mental states shape the decisions that we make, govern how we communicate with
others, and affect our performance. The ability to attribute mental states to others from their behavior
and to use that knowledge to guide our own actions and predict those of others is known as theory of
mind or mind-reading.
The mind-reading computer system presents information about your mental state as easily as a
keyboard and mouse present text and commands. Imagine a future where we are surrounded with
mobile phones, cars and online services that can read our minds and react to our moods. How would
that change our use of technology and our lives? We are working with a major car manufacturer to
implement this system in cars to detect driver mental states such as drowsiness, distraction and
anger. Current projects in Cambridge are considering further inputs such as body posture and
gestures to improve the inference. We can then use the same models to control the animation of
cartoon avatars. We are also looking at the use of mind-reading to support on-line shopping and
learning systems.
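The relationship between observable displays and hidden mental states over time, modelled with Dynamic Bayesian Networks as described in the abstract, can be illustrated with a minimal single-variable filtering sketch. The states, displays, and all probabilities below are invented for the illustration; they are not the Cambridge model's actual parameters.

```python
# Hidden mental states and observable display combinations (illustrative labels
# only; the real model's structure and probabilities are not given in this report).
states = ["interested", "bored"]
displays = ["nod+smile+raised_brows", "gaze_away", "neutral"]

# Assumed transition probabilities P(next state | current state).
T = {"interested": {"interested": 0.8, "bored": 0.2},
     "bored":      {"interested": 0.3, "bored": 0.7}}
# Assumed emission probabilities P(display | state), indexed like `displays`.
E = {"interested": [0.6, 0.1, 0.3],
     "bored":      [0.1, 0.6, 0.3]}

def update(belief, obs):
    """One filtering step: predict the next hidden state, then weight by the observation."""
    predicted = {s: sum(belief[p] * T[p][s] for p in states) for s in states}
    posterior = {s: predicted[s] * E[s][obs] for s in states}
    total = sum(posterior.values())
    return {s: posterior[s] / total for s in states}

# A head nod with a smile and raised eyebrows, seen twice in a row, shifts the
# belief toward "interested", as in the example from the abstract.
belief = {"interested": 0.5, "bored": 0.5}
for obs in [0, 0]:  # index 0 = "nod+smile+raised_brows"
    belief = update(belief, obs)
print(max(belief, key=belief.get))  # → interested
```

A full Dynamic Bayesian Network would track several such hidden variables and richer display evidence, but the predict-then-weight update above is the core inference step.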
The mind-reading computer system may also be used to monitor and suggest improvements in
human- human interaction. The Affective Computing Group at the MIT Media Laboratory is
developing an emotional-social intelligence prosthesis that explores new technologies to augment
and improve people's social interactions and communication skills.
This paper discusses an enabler for controlling a gadget based on signal analysis of brain activities
transmitted from the enabler to the gadget, in a system which could overcome the issues set forth
above in conventional devices. An advantage of the system disclosed in this paper is that it
provides an improved human-computer interface with many of the same capabilities as conventional
input devices, but which is hands-free, requires neither hand-operated electromechanical controls
nor microphone-based speech processing, and is easy to insert and comfortable to wear, enabling the
user to easily control gadgets such as mobile phones, personal digital assistants, media players,
etc. With the proposed enabler and system, these gadgets can be controlled without additional
hardware, in particular without additional electrodes outside the enabler.
The enabler includes a recorder that is insertable in an outer ear area of the user. The recorder
records electroencephalography signals generated in the brain. The recorded signals are transferred
to a processing unit inserted in the gadget for converting the signals to command applications in the
gadget. The proposed system is illustrated in Fig. 1, in which signals derived from the user’s ear
are used to decode brain activities and enable mental control of a gadget. As shown in the figure,
an HCI enabler is inserted in the ear of the user. The enabler uses electroencephalography
recordings from the canal of the external ear to capture brain activity, serving as a
brain-computer interface driven by signals of complex cognitive activity.
A recorder that is inserted in the enabler records the brain signals. The recorder has an electrode that
is located at the entrance of the ear, and could be mounted with an earplug. Signals can be amplified
and digitized for transmitting from the enabler. The enabler wirelessly transmits the recorded brain
signals to the processing unit that includes a decoder. A transmitting device installed in the enabler
produces a radio frequency signal corresponding to voltages sensed by the recorder and transmits the
radio frequency signal by radio frequency telemetry through a transmitting antenna. The transmitting
device could include the transmitting antenna, a transmitter, an amplifying device, a controller, and a
power supply unit, such as a battery. The amplifying device could include an input amplifier and a
bandpass filter. The amplifying device receives an electrode signal from the recorder.
The electrode signal is a response to changes in the electrical activity of the user's brain. The
input amplifier provides an initial gain to the electrode signal, and the bandpass filter provides
an additional gain, resulting in an output signal with an overall gain much higher than that of the
electrode signal. The controller is electrically connected to the bandpass filter, and the output
signal from the bandpass filter is fed into it. The controller conditions the output signal for
telemetry transmission; such conditioning includes analog-to-digital conversion. The controller
also controls the transmitter channel frequency,
thereby controlling the frequency of the radio frequency signal to be transmitted.
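A rough numerical sketch of this amplify-filter-digitize chain follows. The sampling rate, gain, band edges, bit depth, and signal frequencies are all assumptions for illustration; the paper does not specify them, and the bandpass stage is idealized as spectral masking rather than an analog filter.

```python
import numpy as np

fs = 256.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic "electrode signal": a 10 Hz EEG component plus 50 Hz mains interference.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 30e-6 * np.sin(2 * np.pi * 50 * t)

# Input amplifier: initial gain applied to the microvolt-level electrode signal.
amplified = 1000.0 * eeg

# Bandpass filter, idealized here by zeroing spectral components outside 1-30 Hz,
# giving the in-band signal an additional gain relative to out-of-band interference.
spectrum = np.fft.rfft(amplified)
freqs = np.fft.rfftfreq(len(amplified), 1.0 / fs)
spectrum[(freqs < 1.0) | (freqs > 30.0)] = 0.0
filtered = np.fft.irfft(spectrum, n=len(amplified))

# Controller-side signal conditioning: analog-to-digital conversion, modelled as
# quantization to a signed 12-bit range before radio-frequency telemetry.
digital = np.round(filtered / np.abs(filtered).max() * 2047).astype(np.int16)
```

After this stage, only the digitized samples need to be transmitted over the radio link, which is why the conditioning happens in the enabler rather than in the gadget.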
WORKING
In mind reading, a technology called functional near-infrared spectroscopy (fNIRS) is used to
measure the volume and oxygen level of the blood around the brain of the person, i.e. the subject.
The subject wears a headband that transmits light into the tissues of the head, where the light is
absorbed by active, blood-filled tissues. The results are comparable to those of an MRI. After
putting on the fNIRS sensors, the subject is asked to count squares or to perform other tasks; the
subject is then asked to rate the complexity of each task, and this rating is compared with the
workload detected by the fNIRS system.
Recently, visuomotor neurons called mirror neurons were discovered in the monkey's premotor
cortex. These neurons respond both when the monkey performs an action and when it observes the
same action performed by others. According to this theory, mental states are represented by
internally simulating the observed actions of others.
Futuristic headband
The mind reading actually involves measuring the volume and oxygen level of the blood around the
subject's brain, using technology called functional near-infrared spectroscopy (fNIRS).
The user wears a sort of futuristic headband that sends light in that spectrum into the tissues of the
head where it is absorbed by active, blood-filled tissues. The headband then measures how much
light was not absorbed, letting the computer gauge the metabolic demands that the brain is making.
The results are often compared to an MRI, but can be gathered with lightweight, non-invasive
equipment.
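The relation between the unabsorbed light and blood changes is commonly expressed through the modified Beer-Lambert law. The following single-wavelength sketch uses assumed values throughout; real fNIRS systems use at least two wavelengths to separate oxy- and deoxy-haemoglobin.

```python
import math

baseline_intensity = 1.00  # detected light intensity with tissue at rest (assumed)
task_intensity = 0.92      # less light returns when active, blood-filled tissue absorbs more

# Change in optical density between rest and task: the quantity the headband measures.
delta_od = math.log10(baseline_intensity / task_intensity)

extinction = 1.5   # chromophore extinction coefficient (assumed units)
separation = 3.0   # source-detector separation in cm (assumed)
dpf = 6.0          # differential pathlength factor for scattering tissue (assumed)

# Modified Beer-Lambert law: relative change in chromophore concentration,
# which the computer interprets as the brain's metabolic demand.
delta_conc = delta_od / (extinction * separation * dpf)
print(round(delta_od, 4), delta_conc > 0)
```

The key point the sketch captures is that a drop in returned light translates into a positive optical-density change, and hence into an estimate of increased blood-borne activity under the sensor.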
Wearing the fNIRS sensor, experimental subjects were asked to count the number of squares on a
rotating onscreen cube and to perform other tasks. The subjects were then asked to rate the
difficulty of the tasks, and their ratings agreed with the work intensity detected by the fNIRS system
up to 83 percent of the time.
For the first test of the sensors, scientists trained the software program to recognize six words -
including "go", "left" and "right" - and 10 numbers. Participants hooked up to the sensors silently
said the words to themselves and the software correctly picked up the signals 92 per cent of the
time.
Then researchers put the letters of the alphabet into a matrix with each column and row labeled with
a single-digit number. In that way, each letter was represented by a unique pair of number co-
ordinates. These were used to silently spell "NASA" into a web search engine using the program.
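The letter-to-co-ordinate scheme can be sketched as follows. The exact grid shape used in the experiment is not given in the report, so a layout of five rows of up to six letters is assumed here.

```python
import string

# Lay the 26 letters out in a grid whose rows and columns are labelled with
# single-digit numbers, so each letter gets a unique pair of number co-ordinates.
letters = string.ascii_uppercase
grid = {letters[i]: (i // 6 + 1, i % 6 + 1) for i in range(len(letters))}
decode = {coord: letter for letter, coord in grid.items()}

def spell(word):
    """Encode a word as the number co-ordinates a subject would silently think."""
    return [grid[ch] for ch in word.upper()]

coords = spell("NASA")
print(coords)                              # the silent "keystrokes"
print("".join(decode[c] for c in coords))  # → NASA
```

Because the sensors only had to distinguish ten silently spoken digits, spelling through co-ordinate pairs sidesteps the much harder problem of recognizing twenty-six letters directly.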
Mind-reading computer technology can be used to check the mental state of a person: the user does
not need to type or speak anything; the system understands by default. The University of Cambridge
is also working on a model of mind reading in which mental states are inferred by scanning facial
expressions. The machine is based on a digital video camera, which is used to analyze a person's
facial expressions in real time and to infer a mental state such as confused, thinking, happy, sad,
interested, or not interested. First, the head gestures and facial expressions that express a
particular mental state are stored in the machine's database; the observed expressions are then
matched against the stored data, and after a successful match the system returns the final result,
i.e. the mental state. This is represented as a model at different granularities that stores face
and head movements; these are represented in three-dimensional space to form a clear model
depicting the facial expressions. Using Nevenvision's software, twenty-four feature points are
identified and then tracked in real time. Different parameters and movements are then analyzed to
identify gestures such as an open mouth, eye movement, or half-closed eyes. When these combinations
come together, they depict mental states; for example, thinking is indicated by an inner raise of
the eyebrows.
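The matching step against the stored database can be sketched as a nearest-template lookup. The feature names, template vectors, and distance metric below are all assumptions for illustration, not the actual Cambridge system.

```python
import math

# Hypothetical feature vector: (inner_brow_raise, mouth_open, eye_closure, head_nod),
# each scaled to [0, 1]. The stored "database" pairs template vectors with mental
# states; every value here is illustrative.
database = {
    "thinking":   (0.9, 0.1, 0.2, 0.1),  # inner raise of the eyebrows dominates
    "interested": (0.3, 0.2, 0.0, 0.9),  # head nod dominates
    "tired":      (0.1, 0.4, 0.9, 0.0),  # half-closed eyes dominate
}

def classify(observed):
    """Return the stored mental state whose template is nearest to the observation."""
    return min(database, key=lambda state: math.dist(database[state], observed))

# An inner raise of the eyebrows with little other movement matches "thinking".
print(classify((0.85, 0.15, 0.1, 0.2)))  # → thinking
```

A probabilistic model over time, as in the abstract, would replace this single-frame distance with accumulated evidence, but the database-matching idea is the same.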
ADVANTAGES
It can be implemented on a wheelchair, so that the wheelchair can be moved through mind
control. A prototype mind-controlled wheelchair developed at the University of
Electro-Communications in Japan works by mapping the brain waves produced when you think
about moving left, right, forward or back, and then assigning each pattern to the
corresponding wheelchair command. The result is that you can move the wheelchair solely
with the power of your mind.
It permits people who cannot use normal wheelchairs due to their disability to get around
more easily.
It can aid spacewalking astronauts and physically disabled persons.
Such a system could send instructions to rovers on other planets and also help injured
astronauts control devices.
It can be used to exchange information on the sly; people could use it on crowded buses
without the problem of being overheard.
It could eliminate the ability to lie.
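The brain-wave-to-command mapping behind the wheelchair prototype can be sketched as a simple lookup, assuming an upstream classifier (not shown) has already decoded the signals into one of four intention labels; the label and command names below are invented for the sketch.

```python
# Map decoded movement intentions to wheelchair commands. In the real prototype
# the left-hand side would come from classifying the user's brain waves.
COMMANDS = {
    "think_left": "turn_left",
    "think_right": "turn_right",
    "think_forward": "move_forward",
    "think_back": "move_backward",
}

def drive(intention):
    """Translate a decoded intention into a wheelchair command; stop if unsure."""
    return COMMANDS.get(intention, "stop")

print(drive("think_forward"))  # → move_forward
print(drive("unrecognized"))   # → stop; failing safe matters when decoding is uncertain
```

Defaulting to "stop" for unrecognized intentions reflects the safety concern inherent in driving a chair from noisy brain signals.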
DISADVANTAGES
The biggest disadvantage of this system is that confidential data can be hacked by
unauthorized persons, so privacy can be breached; terrorists and hackers could steal
important data, which makes the technology dangerous. Moreover, 100% accuracy cannot be
achieved: the accuracy of a mind-reading computer can reach about 86.4%.
Researchers from the Max Planck Institute for Human Cognitive and Brain Sciences,
along with scientists from London and Tokyo, asked subjects to secretly decide in advance
whether to add or subtract two numbers they would later be shown. Using computer algorithms
and functional magnetic resonance imaging (fMRI), the scientists were able to determine with
70 percent accuracy what the participants' intentions were, even before they were shown the
numbers. The popular press tends to over-dramatize such scientific advances in mind reading.
fMRI results have to account for heart rate, respiration, motion and a number of other
factors that might cause variance in the signal. Also, individual brains differ, so
scientists need to study a subject's patterns before they can train a computer to identify
those patterns or make predictions; before deploying such a system, scientists must train it
on all the relevant patterns.
Because of this scientific development, scholars are questioning theories underlying the
criminal justice system.
APPLICATIONS
Lie detection
Langleben (2008) argues that blood oxygenation level-dependent (BOLD) fMRI could be
sensitive to differences between lies and truth. The key, he claims, is that BOLD fMRI can
only compare states rather than positively identify deception, and he discusses how many
popular-science articles overstate what fMRI can do.
Mertens & Allen (2008) discuss whether ERP-based procedures could detect deception,
instead of or in addition to fMRI.
Moreno (2009) discusses ethical issues in lie detection and how the law should be influenced
by cognitive neuroscience, specifically in cases where neuroimaging could be used to
determine truth, lies, and guilt.
Pain detection
Marquand et al. (2010) suggest that supervised machine learning algorithms can be used to
decode fMRI data. They use this kind of technique to show that fMRI can be used to predict
participants’ subjective pain ratings, and propose that it will be a useful method for producing
quantitative predictions about brain states.
Brain-computer interfaces
Direct brain communication in paralysis, motor restoration in stroke – Birbaumer & Cohen
(2007) evaluate the use of EEG and fMRI in brain-computer interfaces, focusing on
applications for paralyzed patients and for motor restoration in the case of stroke. Although
currently, our understanding of the information flow in the brain that is required for such
interfaces to work is incomplete, such interfaces will eventually be able to be used for direct
brain communication and will allow otherwise “locked-in” patients to interact with the world.
Daly & Wolpaw (2008) also discuss advances in the analysis of brain signals and training
patients to control those signals, focusing on EEG techniques specifically for patients with
severe motor disabilities.
Pattern analysis and future research
Norman et al. (2006) argue that fMRI data can be used in conjunction with sophisticated
pattern-classification algorithms to decode the exact information represented in a patient’s
brain at a particular moment in time. They discuss factors that would boost the performance
of this method — it is possibly the most promising research toward actual mind-reading.
CONCLUSION
This paper describes how a computer can read the mind of a person using a mind-reading
system. In this technology, real-time video is compared with stored video of facial expressions
representing mental states. Another way of detecting the mental state is with the help of a
futuristic headband which sends near-infrared light into the head's tissues, where active tissues
absorb the light.
One of the major obstacles in this journey appears to be in the area of pattern recognition of
the mind signals, considering the limited understanding of a user’s brain and its electrical activities,
since the accuracy of a mind signal detection could be degraded as the number of mind states
increases, such as when the user thinks about a series of words to implement a task. In this paper a
system was proposed that includes an enabler for controlling gadgets based on signal analysis of
brain activities transmitted from the enabler to the gadget. The enabler could be inserted in the user’s
ear and includes a recorder that records brain signals. A processing unit of the system commands the
device based on decoding the recorded brain signals. The proposed enabler could provide a compact,
convenient, and hands-free device to facilitate a brain-machine interface to control the gadget from
electroencephalography signals in the user’s brain.
Tufts University researchers have begun a three-year research project which, if successful,
will allow computers to respond to the brain activity of the computer's user. Users wear futuristic-
looking headbands that shine light on their foreheads, and then perform a series of increasingly
difficult tasks while the device reads which parts of the brain are absorbing the light. That
information is then transferred to the computer, which can adjust its interface and functions to
each individual. One professor gave the following example of a real-world use: "If it knew which air
traffic controllers were overloaded, the next incoming plane could be assigned to another controller."
REFERENCES
[1] https://www.google.com
[2] https://www.youtube.com
[3] https://www.livescience.com/53535-computerreads-thoughts-instantaneously.html
[4] https://www.seminarsonly.com/Labels/MindReading-Computer-Advantages.php
[5] https://www.1000projects.org.mind-readingcomputer-seminar-report.html
[6] https://www.123seminarsononly.com/seminarReports/014/mind-reading-computer.html
[7] https://www.seminarprojects.com/Threadtechnical-seminar-on-mind-reading-computer