Mūdies.
Empathic AI Sound Interfaces
Applying Emotional Artificial Intelligence on Sound Interfaces
2019
UI AND UX SOUND INDUSTRY RECOMMENDATIONS
University of Otago
Thank you to all the experts for your participation. Without your feedback and
input, this report would not have been possible.
I would also like to mention two fantastic books that inspired my thoughts:
1. Emotional AI: The Rise of Empathic Media by Andrew McStay
2. Brand Machines, Sensory Media and Calculative Culture by Sven
Brodmerkel and Nicholas Carah.
Table of Contents
Your roadmap.
Methodology
Context
Emily’s Pains
Emily’s Goals
Desires & Worries of EAI
Assertions
UI & UX Implications
MOODS
HABITS
INTIMACY
Methodology
Data-driven & expert-informed intelligence.
Context
Iterative research revealed an opportunity for UI and UX
designers and innovators to reinvent sound consumption.
Emily’s Pains
Emily was created to represent the pains of Gen Z, a.k.a. the Mūdies.
She appeared throughout the expert consultation, highlighting the
pains and goals of the Mūdies.
Emily has lost count of how many times she has scrolled
through Facebook. She is tired and should head to bed
but continues to watch TikTok videos and cute animal
memes.
Emily’s Goals
Desires & Worries of EAI
The main forces that shaped the assertions formed
throughout the expert consultation.
The consultation revealed several desires and worries about applying EAI
to sound interfaces. On the vertical axis, more desired factors are placed
higher on the scale, and vice versa. On the horizontal axis, factors are
grouped into the three overarching themes of moods, habits and intimacy,
though some factors overlap. A larger font size indicates that a factor was
discussed by more experts relative to the rest.
WORRIES: Inaccuracy, Lack of Control, Privacy, Manipulation
Assertions
After three rounds of iterative consultation, ten assertions
were agreed upon by the experts.
MOODS
EAI should use both passive (e.g. predefined playlists) and active (e.g. text,
voice, emojis) ways to sense your moods in the moment.
If EAI interacts with consumer moods, it would allow for greater personalisation of
content than current AI systems.
Initially, I would be all right with informing EAI of my moods, as I will experience
the benefits later.
HABITS
It is more enjoyable if EAI is designed for randomness & spontaneity to break
repetitiveness.
If my listening habits are challenged, EAI should hint at the amount of sound
content not yet discovered (e.g. ‘Check out the 1,908 people who are feeling
lonely’ or ‘Curious? Hear how families deal with illness’).
INTIMACY
EAI should be forgiven when it makes mistakes.
If EAI is trustworthy (e.g. data managed on the device instead of the cloud), I
would allow access to my instantaneous emotions.
If EAI responds with personality, it should be centred solely on sound content
(e.g. the EAI asks, ‘Heya Natasha, wanna listen to something local?’).
Besides EAI responding about sound content, the only other acceptable
responses are about my well-being (e.g. ‘It has been a good 3 hours, take a break’).
‘…if you’re able to label yourself, if you’re able to annotate,
you’re able to actively engage with the construction of
your media services.’
‘You could definitely have the EAI play maybe a few seconds of
different pieces of content? And, it could maybe ask Emily what she
thinks of this?...And you get this kind of whole interaction without
the screen.’
‘I think what’s lacking with Google Home and Amazon’s Alexa is the
inability to predict mood and suggest options/encourage/listen...
when we can’t connect with humans, we will find some way to
connect with our devices.’
UI & UX Implications
Implications for UI and UX designers of sound interface
brands and brands that integrate sound capabilities.
MOODS
For the empathic sound interface to meet Emily’s goals, it needs to interact
with her moods to curate content for her moment.
‘So I think for me, what would annoy me is if
there were suggestions that seemed arbitrary.’
Content plays a huge part in our lives, as the recent Joker film showed.
More needs to be done in the UI and UX of these systems.
HABITS
To help Emily consistently discover new content, the empathic AI sound
interface needs to intervene in her listening habits to create serendipitous
encounters. Our research identified three traits of serendipity:
randomness, unexpectedness and emotivity.
‘There is much more interest in edge based computing which
pushes the processing back out to devices themselves so if you
push the process back to device, you’ve got a lot more control
over where the data goes.’
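The edge-computing idea in the quote above can be sketched in code: raw signals are processed on the device itself, and only a coarse, derived mood label is ever sent to the sound service. This is a hypothetical illustration, not part of any real EAI system; all names, thresholds and signals here are invented for the sketch.

```python
# Hypothetical sketch of on-device ("edge") mood inference:
# raw sensor data never leaves the device; only a coarse
# mood label is shared with the sound service.

from dataclasses import dataclass


@dataclass
class SensorReading:
    heart_rate_bpm: float   # raw signal, stays on the device
    hour_of_day: int        # raw signal, stays on the device


def infer_mood(reading: SensorReading) -> str:
    """Toy on-device classifier: maps raw signals to a coarse label."""
    if reading.heart_rate_bpm > 100:
        return "energetic"
    if reading.hour_of_day >= 22 or reading.hour_of_day < 6:
        return "winding-down"
    return "neutral"


def payload_for_cloud(reading: SensorReading) -> dict:
    """Only the derived label leaves the device, never the raw data."""
    return {"mood": infer_mood(reading)}


reading = SensorReading(heart_rate_bpm=72.0, hour_of_day=23)
print(payload_for_cloud(reading))  # {'mood': 'winding-down'}
```

The design choice the expert describes is exactly this boundary: because classification happens before anything is transmitted, the consumer retains control over where the sensitive raw data goes.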
‘In order for like EAI to work, it senses your mood and has
access to like, your heart rate, your facial recognition, and
expressions and your location, all that to actually seem like
smart and empathetic to your situation. But how much are
consumers willing to give?’
INTIMACY
For Emily to reveal intimate data to the empathic sound interface, there needs
to be trust in the brand and the overall listening experience.
University of Otago
Thank you again to all the experts for your participation.
AUTHOR
Natasha Joe
natatajoe@gmail.com
nnj.myportfolio.com
www.linkedin.com/in/natasha-joe

CREATIVE INDUSTRIES MENTOR
Dr. Roel Wijland
roel.wijland@otago.ac.nz
www.homeofhopepunk.co.nz
This Mūdie broke out of the listening bubble with Empathic AI Sound Interfaces!