
Group 15 Report

Caini Liu
Shaoyu Wang
Ziqi Fang
Contents

1. Summary of Concept and System

2. Aesthetic design

3. Mechanism design

4. Hardware design

5. Software design

6. Machine learning

7. References
Summary of Concept and System

According to Sato and Matsue (2009), the hanger reflex is a phenomenon in which a person's head gently and involuntarily rotates when a wire hanger is worn sideways on the head so that the temporal region is clamped by the wire. Based on this research, we designed and prototyped a head-worn device that gives users the feeling of turning their head in a specific direction. The turning is triggered by the facial expressions of the people around them, so that users unconsciously pay attention to the emotional state of people standing in their blind spots; in this way, neglect in social scenarios can be reduced by enhancing the wearer's sensory ability and facilitating emotional communication. In this project, a group discussion scenario was chosen as the example for demonstrating the function.

Project Background

In Sato and Matsue's measurements, the pressure distribution of a head clamped by a hanger was reported as three-point forces (see figures 2 and 3). We tested and analysed the phenomenon ourselves and found that the forces involve both normal pressure and shear. The head-worn device was therefore designed to reproduce the same forces in order to achieve the same effect. Five participants were invited (a low ethical-risk activity, with consent obtained from each participant), and the following three methods were tested in our experiments:
a. Three bars pushing the head in a specific direction

Four bars, each with a soft pad on the surface touching the head, were mounted in a geared module. A servo with 2.5 kg-cm stall torque and a servo with 9.4 kg-cm stall torque were both tested to drive the bars. We found that a force of at least 5 N was needed before a participant felt an urge to turn his or her head, while a force above 10 N began to cause slight pain to the skin. A relatively wide soft pad, however, spread the load and made the participants feel comfortable (see the pressure estimate below). Moreover, if the servo is not strong enough, the stiffness of the head stops the gear from pushing the bar further, which can stall and damage the servo. Therefore, the 9.4 kg-cm servo was chosen to generate the force, and the soft pad was used to keep the pressure on the skin in a comfortable range.
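
A rough pressure estimate illustrates why the wide pad helps (the contact areas below are assumed for illustration, not measured):

pressure = force / contact area
10 N over ~1 cm^2 (bare bar tip)  ≈ 100 kPa
10 N over ~8 cm^2 (wide soft pad) ≈ 12.5 kPa

The same force therefore feels far gentler through the pad, while still producing the normal and shear components needed for the hanger reflex.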

However, the direction of the bars was hard to control. To push a bar in a specific direction, a very rigid structure was needed to hold the head in place and prevent it from wobbling or shifting, which greatly increased the forces applied to the head in addition to the three-point pressure. We also found that if the headset was too heavy, or applied too much extra force at other locations on the head, it weakened the user's sensation of turning the head. This method therefore could not meet our requirements.

b. Utilise the shape of the hanger

We assumed that using the hanger in its original form would let the user feel the head-turning force to the greatest extent possible. We experimented with having the head-mounted device drive a hanger up and down and left and right, i.e. the device's robot arms helped the user to wear the hanger in the right position and orientation. The lo-fi prototype was tested successfully; however, the structural strength needed to be improved.

c. Soft robot

A fabric matrix, a balloon and an air pump were used to apply shear force to the head. However, because of the large contact area, the force produced was far below the required level.

Following the results of the above three experiments, method b was chosen for further development, and the soft pad solution from method a also proved helpful.
Project logic
Our example scenario is a group discussion. Two cameras on the left and right sides of the headset capture the facial expressions of the group members on either side of the user. A command is triggered when a yawning or unhappy expression is detected on a group member. The headset then automatically rotates to the corresponding angle and the hanger is lowered onto the user's head, giving the user a tendency to turn his or her head in that direction. The user then notices the emotional state of the people around them and can attend to them.
Aesthetic design

The appearance follows a steampunk style, with all structures exposed rather than hidden. The hanger was redesigned into an aesthetically appealing shape that avoids looking exactly like an ordinary hanger. We chose this style to give the device a futuristic feeling, as if it were generating a sixth sense for the user.
Mechanism design
We tried three directions to achieve our goal and finally decided to simulate a real hanger. The following sections show how we did that.

Robot arm modification

We changed the strength, direction and sequence of movement of the motors used in the original robot arm and combined them with a linkage structure. All of the motors were replaced with MG996R servos. When the movement starts, the top servo motor first rotates the larger frame horizontally by 180 degrees; the four horizontally mounted motors attached to the frame then drive the smaller frame vertically up and down through gearing, and the rotation range of these motors is limited so that the travel of the frame suits the function.

Five servo motors are used in total, shown as the purple cubes in the figure.
Structure and material selection

Our headset consists of three parts. The first is the structure fixed to the head, printed in PLA. A layer of sponge padding sits between the PLA and the head for comfort, secured with an adjustable strap. This structure carries a frame for mounting the cameras and servo motors (the pink part is the camera fixture). A servo motor on the top of the head controls the rotation of the entire headset.

The second part consists of four servo motors and gearboxes. They are suspended from the head ring and secured with M3x12 screws. Each gearbox converts the rotary motion of its servo into linear motion, which is used to lift and lower the hanger ring. The hanger ring and this servo assembly are also fixed together with M3x12 screws. All of these parts are printed in PLA, which has good toughness and stiffness.
The third part is a ring that simulates a hanger. In many tests we tried soft robotics, direct motor pushing and other methods, but could not faithfully reproduce the force a hanger exerts on the head, so we finally decided to build a structure similar to a real hanger. We 3D printed the hanger ring in PLA and threaded steel wire through it. The way the steel wire is fixed to the hanger ring is shown in figure 3: the ring has two connected holes; the steel wire passes through the larger hole, and an M3x6 self-tapping screw is inserted into the smaller hole so that the screw clamps the ring onto the wire. Experiments showed that this structure is very stable.
Reproduction instructions
Steps:

1. 3D print all the STL files below (the quantity is indicated in each file name).
2. Screw bill of materials:
M3x12 x35
M3x6 x6 (self-tapping screws)
M6x45 x4
3. Assemble the first part.

4. Assemble the four gearboxes and attach them to the head ring.

5. Assemble the hanger ring and attach it to the gearboxes.

Documentation
Group 15_Document/Mechanism document
Hardware design
Component selection

A servo with 2.5 kg-cm stall torque and a servo with 9.4 kg-cm stall torque were both tested for the device. The motor needs to produce enough force to drive the bar that pushes the hanger, so that the hanger wire overcomes the resistance of the head and its own deformation and finally clamps the head properly. However, the 2.5 kg-cm servo can produce a maximum of only approximately 5 N at the bar, which is not strong enough, so the servo with 9.4 kg-cm stall torque was chosen.
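
A rough torque-to-force check is consistent with these figures, assuming the bar contacts the head roughly 5 cm from the servo axis (an assumed lever arm, not a measured one):

2.5 kg-cm ≈ 0.245 N·m → F ≈ 0.245 N·m / 0.05 m ≈ 4.9 N
9.4 kg-cm ≈ 0.92 N·m  → F ≈ 0.92 N·m / 0.05 m ≈ 18 N

The first value matches the ~5 N limit quoted above, while the second comfortably exceeds the threshold needed to trigger the head-turning feeling, which is why the soft pad is also needed to keep the skin pressure comfortable.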

The HBV-1716WA camera was chosen because it offers a clear 1920x1080 (2-megapixel) resolution. Its 140° wide-angle field of view covers the user's left-side and right-side peripheral vision as required, while keeping image distortion acceptable.
Reproduction instructions

Steps:

1. Connect an external camera. Edit the camera device information in FaceOSC's setting.xml so that FaceOSC uses the external camera instead of the computer's built-in one.

2. Send OSC from FaceOSC to Wekinator. Set the port on which Wekinator receives OSC to 8338, the same port FaceOSC sends on. Then set the OSC input message to /raw with 132 inputs, set the number of outputs to 1 with type "All Classifiers", and set the output port to 12000.

3. Send the output of Wekinator to Processing. Wekinator trains a model on the recorded expression classes and, when running, sends the detected class as its output. The Processing sketch receives this OscMessage, checks the received class (receivedPose), and transmits the emosig value over the serial port. The code is shown in the following figure, and a minimal sketch of this step is given after this list.

4. Arduino receives the signal and drives the motors. The Arduino reads the emosig value from the serial port and drives the motors differently depending on its value. The code is shown in the document below.
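
As a reference for steps 3 and 4, the following minimal Processing sketch illustrates this glue code. It is a sketch of the idea rather than our exact code: the /wek/outputs address is Wekinator's default output message, while the serial port index and the emosig encoding (0 = neutral, 1 = yawning) are assumptions chosen for illustration.

import oscP5.*;
import netP5.*;
import processing.serial.*;

OscP5 oscP5;
Serial arduino;

void setup() {
  oscP5 = new OscP5(this, 12000);                      // listen on Wekinator's output port
  arduino = new Serial(this, Serial.list()[0], 9600);  // adjust the index to your Arduino port
}

void draw() {
  // nothing to draw; all work happens in oscEvent()
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/wek/outputs")) {
    int receivedPose = int(msg.get(0).floatValue());   // class 1 = neutral, class 2 = yawning
    int emosig = (receivedPose == 2) ? 1 : 0;          // only signal the headset for the yawning class
    arduino.write(emosig);                             // one byte per classification, read by the Arduino
    println("class " + receivedPose + " -> emosig " + emosig);
  }
}

On the Arduino side, the sketch reads this byte with Serial.read() and selects the corresponding motor movement, as described in step 4.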
Power safety
The device is powered from two external 5 V 1 A supplies, and we control the rotation frequency of the motors to keep them operating properly within this budget.
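
A rough current budget explains this constraint (the servo current figure is an assumption based on typical MG996R-class specifications, not a measurement): two 5 V 1 A supplies provide about 2 A in total, while each MG996R can draw on the order of 1 A under load and more near stall. Moving all five servos hard at the same time could therefore exceed the supply, which is why the rotation frequency of the motors is limited.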
Software design
We used FaceOSC to capture facial expression data as input and Wekinator to train the model.

Pipeline

First, the camera module captures the facial expression as input. FaceOSC tracks the face and transmits the 132 raw face-mesh values to Wekinator via port 8338; Wekinator is trained on these inputs with two output classes (for example yawning versus not yawning). The OSC signal from Wekinator is then sent to Processing via port 12000, and the data is finally received by the Arduino via the serial port to drive the motors.
Reproduction instructions
Steps:

1. Connect an external camera.

2. Open FaceOSC and switch it to the external camera.

3. Open Wekinator and modify the input and output messages.

4. Run FaceOSC and Processing, and train the model in Wekinator.

5. Open the Arduino sketch and receive the message sent by Processing over the serial port.

6. Run the Arduino sketch to drive the motors.

Documentation
Group 15_Document/Software document
Machine learning
The input is the 132 face-mesh values received from FaceOSC, and the output is set to two classes. We set the closed-mouth state to output class 1 and the open-mouth (yawning) state to output class 2, recorded as many training examples as possible to train the model, and sent the output message to Processing. During the machine learning process, to check the model's success immediately, we modified the Processing code to print a specific statement whenever the yawning class is identified.
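
Before training, it is also worth confirming that the raw input actually arrives. The following minimal diagnostic sketch (an illustration, not our original code) prints how many values each /raw message carries; it should report 132, i.e. the face-mesh points as x, y pairs. Close Wekinator while running it, since only one program can listen on UDP port 8338 at a time.

import oscP5.*;

OscP5 oscP5;

void setup() {
  oscP5 = new OscP5(this, 8338);   // FaceOSC's default send port
}

void draw() {
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/raw")) {
    // expect 132 values: the face mesh sent as x, y coordinate pairs
    println("received /raw with " + msg.typetag().length() + " values");
  }
}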

Model choice
OSC.client was also used in the experiments. It made it possible to train each area of facial expression and movement separately in Wekinator. However, when 14 regions of the face were trained separately, the resulting perception of expressions became confused, because the system failed to combine and judge the variations of the different parts as one expression. Moreover, facial expressions differ between people. We therefore abandoned this initial training method and instead trained one specific emotion at a time (for example a sad face), rather than training each facial movement separately. This approach uses the FaceOSC raw data.

Reproduction instructions
See the software reproduction instructions.

Documentation
Group 15_Document/Machine learning document
References
Article & grey literature
https://ieeexplore.ieee.org/document/5326327

FaceOSC
http://golancourses.net/2018_60212f/daily-notes/sep-28/faceosc/
https://github.com/kylemcdonald/ofxFaceTracker/releases
https://openframeworks.cc/zh_cn/documentation/
Wekinator
http://www.wekinator.org/walkthrough/
https://www.uni-weimar.de/kunst-und-gestaltung/wiki/GMU:Tutorials/Performance_Platform/Recognizing_Gestures_with_Wekinator

Processing
library:
https://github.com/ernestum/VSync
https://fabacademy.org/2022/labs/kannai/students/yukiya-yamane/assignments/week15/
https://blog.csdn.net/TonyIOT/article/details/106353827?ops_request_misc=&request_id=&biz_id=102&utm_term=processing%20arduino&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduweb~default-5-106353827.142
