
“BUILD A BOT”

EMERGING TECH EXPERIMENTS FOR K-12 STUDENTS

2020 | Stanford d.school K-12 Lab


Ariam Mogos (ariam@dschool.stanford.edu), Futurist Fellow // July 2020

What’s inside?
This facilitation guide includes a set of activities for children, families and educators to
experiment with the potential and peril of AI assistants. The document contains three
workshops with facilitator guides, slide decks, worksheets and other materials. All of the
activities are analog and do not require a computer.
Purpose:
Emerging technologies have the potential to make great contributions to society, and at
the same time there is an urgent need to address the embedded bias, dominant
narratives and the replication of real-world structural inequities they perpetuate. It’s
critical that all students, educators and families are knowledgeable about the ethical
implications of emerging technologies and have the agency to design, reflect and
participate in decision-making processes. The goal of this project is to provide K-12
communities with learning experiences that provoke everyone to ask informed
questions about emerging technologies, and to interrogate and reflect on how our
positionalities are embedded in our design work.

Activity Usage
License: CC-BY-NC-SA under Creative Commons

These activities are licensed as CC-BY-NC-SA under Creative Commons. This license
allows you to remix, change, and build upon these activities non-commercially as long
as you include acknowledgment of the creators. Derivative works should include
acknowledgment of the authors and be licensed as CC-BY-NC-SA.

To acknowledge the creators, please include the text, “This Build a Bot Curriculum was
created by Ariam Mogos with key contributions from Laura McBain, Megan Stariha and
Carissa Carter from the Stanford d.school.” More information about the license can be
found at: https://creativecommons.org/licenses/by-nc-sa/3.0/

If you are interested in using this work for commercial, for-profit purposes, please contact
Ariam Mogos (ariam@dschool.stanford.edu) and Laura McBain
(laura@dschool.stanford.edu).

How to Access Materials:

To use and edit the activities in this document, make a copy of it:
1. Make sure you are logged into your Google Account.
2. Go to File > Make a copy.
3. Name and save the materials to your Drive when prompted.

All materials can be found here:


https://drive.google.com/drive/folders/1wknBmVh1B911KIXmYGoTFfdD_RbgrdHH

To access the slides, follow the steps above to add them to your Google Drive account.

Thank you to our colleagues at the MIT Media Lab Personal Robots Group for this
fantastic facilitation guide template and their work around AI + Ethics ; )

Our Key Questions:

1. How might we support all children, families and educators to ask informed
questions about emerging technologies and participate in the design and
decision-making processes of emerging technologies?
2. How might we support children, families and educators in learning about
discriminatory design and how our positionalities (encompassing our identity and
social status) influence the creation of technology?

Goals for the project:

For youth:
● Explore curiosity about the way emerging technology works, and the implications
it has for different communities and society.
● Create analog solutions using concepts that underpin emerging technology, and
examine the role of positionality (encompassing our identity and social status) in
design work.
● Embrace the creation of emerging technology grounded in social justice, which
centers the perspectives and experiences of non-dominant communities (Black
and Brown communities, persons with disabilities, LGBTQ, etc.) and
acknowledges the harm inflicted on them (a goal for everyone in K-12).

For educators and families:


● Engage in playful and impactful ways to broach topics like race, gender and
ability in the creation of emerging technology.
● Remix and adapt these activities with different learning communities.

For the broader K-12 community:


● Broaden the understanding of technology and computing literacy to include
reflections on positionality (encompassing our identity and social status) and the
implications of emerging technology design on society.
● Collaborate with others in the field to understand how we can continue to design
experiences in K-12 that focus on the implications of emerging technology.

Activities:

1. Design lines for a bot. (Designed for ages 14-18)
In this hands-on activity, participants build experiences for their own AI assistant,
all while considering the various social implications for users.

2. Design datasets for a bot. (Designed for ages 14-18)
In this hands-on activity, participants build datasets for their own AI assistant, all
while considering the various social implications for users.

3. Design rules for a bot. (Designed for ages 14-18)
In this hands-on activity, participants build rules for their own AI assistant, all
while considering the various social implications for users.

Questions & Feedback


If you have any additional questions or feedback, please send them to Ariam
Mogos (ariam@dschool.stanford.edu). You can also fill out this short survey or tweet at us
@k12lab and @stanforddschool. We’d love to see what you make, what ideas you have
and how we can build on this experiment!

Acknowledgments
Thank you to all the Black and Brown women whose exceptional scholarship, fight for
liberation and continuous advocacy have influenced and inspired this work. This work
honors those who came before: Safiya Noble, Ruha Benjamin, Simone Browne, Timnit
Gebru, Joy Buolamwini, Deb Raji, Rediet Abebe and many others.

BUILD A BOT.
Help! I need Some-bot-y.

EXPERIMENT #1: Design lines for a bot.

Overview:
In this hands-on activity, participants build experiences for their own AI assistant, all
while considering the various social implications for users.

● Time: 90-minute activity


● Ages: 14-18

Activity Outline:

● (35 minutes) Make a bot say anything.


● (15 minutes) How lines are designed in the real world and the implications.
● (25 minutes) Design your own lines for a bot.
● (15 minutes) Debrief and reflect.

Learning Objectives:

● The interactions we have with AI assistants like Alexa, Siri and Google Home are
designed by real people who create content in order to engage us, meet our
needs and influence our behavior.
● Social attachment, gender stereotypes, abuse, etc. are all critical issues to
consider when designing how our AI assistants interact with users.
● It’s important for us to examine how our positionalities (encompassing our identity
and social status) influence how we design experiences with technology, and the
unintended consequences it may have on people who have different lived
experiences.

Materials Needed:

● Facilitation Deck/Full deck can be found here


● Request Card Deck (print 8.5 x 11 inches, letter size)
○ (Everyday cards)
● d.school movie lines card deck (print 11 x 17 inches, tabloid size, and cut)
● Worksheets (print 11 x 17 inches, tabloid size)
● Positionality and Power Guide
● Pens/Pencils
● Post-its

Facilitation Guide:

1. (5 minutes) Pull up the deck or ask participants if they’re familiar with Alexa, Siri
or Google Home. What do these technologies help us do? If participants are
unsure, list a few and ask them to popcorn a few more:
a. Play music
b. Turn the lights on
c. Make a grocery list

2. (15 minutes) How cool would it be to design lines for our own AI assistants and
play the bots ourselves? What if we could make them say anything? We’re going to
experiment rapidly and try it out!

Ask participants to pair up. Hand each pair:


● 1 copy of the AI Assistant card deck
● 1 copy of the d.school’s best movie lines card deck
● 2 copies of the design your own lines worksheet

Each team will have 15 minutes to experiment with the Everyday Request Cards
(front side only) and pick different movie lines for how they want their AI
assistant to respond to a user request. Once they pick a few lines for their request
cards, they can record their partner acting out the lines in bot mode and test it out!
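For facilitators who want to show what “designing lines” looks like computationally (the activity itself stays analog), a minimal sketch in Python follows. It assumes nothing about how real assistants are built; the requests, lines and function names are hypothetical placeholders, not content from the card deck.

    # A minimal, hypothetical sketch: a bot's "lines" are responses a real person
    # wrote in advance, mapped to the user requests they answer.
    designed_lines = {
        "play music": "Cue the soundtrack. This could be the start of something big.",
        "turn the lights on": "Let there be light!",
        "make a grocery list": "As you wish. What should we add first?",
    }

    def respond(request):
        # Normalize the request, then fall back to a neutral line
        # when no response has been designed for it.
        key = request.lower().strip()
        return designed_lines.get(key, "Hmm, nobody has written a line for that yet.")

    print(respond("Play music"))

Whoever writes the dictionary decides how the bot sounds for everyone who uses it, which is exactly the design decision participants are practicing.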

Show participants the worksheet and the worksheet example for reference before they get started.

3. (15 minutes) Once participants have completed the activity, refer back to the deck
or prompt them with the reflection: “What if some of these movie lines were
actual responses? What might happen?” Give participants 10 minutes to
brainstorm on post-its or paper. Popcorn out ideas or facilitate a short activity
clustering participant ideas, and debrief.

4. (5 minutes) Share with them how real companies hire comedians, playwrights,
screenwriters and people good with language to write these lines for AI
assistants like Alexa, Siri and Google Home. Share some of the qualifications of
the job description:

Sample job description for writing for Siri (published on Apple Jobs, 2020).

These people are hired to write engaging experiences for users, just like screenwriters for movies.

Our designs always have implications.

It’s important to consider the implications of the experiences we design first and how they might affect users, because when we don’t, our designs can be harmful once they exist in the world.

5. (10 minutes) Share two examples with participants of how the design of AI
assistants can cause harm to users:

a. Social attachment: This is when we design AI assistants to be so human
and engaging that users don’t understand that the AI assistant is a
machine, and users can become very attached. This can create an
unhealthy set of boundaries for a user.

b. Gender stereotypes: This is when we design AI assistants with traditionally
female names, voices and gendered responses, which can reinforce gender
stereotypes and encourage sexist behavior from users. It can also exclude
people who are gender non-conforming.

6. (25 minutes) Tell participants that now that we’ve thought about a couple of
implications, we’re going to take a different approach to designing our lines! Ask
participants to turn over their request cards and to read them over.

On the back of each request card, participants can evaluate the implications of a pre-populated design, which can help inform their own brainstorm and design.

They also have an opportunity to reflect on why they made their design decision, their positionality and how they believe it will impact users.

Provide them with the pre-made example or create an example of your own.

Some participants may have a sense of how their identity and social status influence
how they view the world, and many participants may not. Hand out and review the
Power and Positionality Guide to help students examine and reflect on their designs
during the activity.

Popcorn out or ask participants to hang up their designs and facilitate a gallery
walk.

7. (10 minutes) Lead participants into a short debrief around the activity. Here are a
few starter questions:

a. How did thinking about the implications affect the way you designed the
experience?
b. How did thinking about your positionality affect the way you designed the
experience?
c. What impact might that have on different users?
d. What other issues do you think are important to consider when designing
these types of experiences and interactions for users? Who do you think is
not considered or left out when these experiences are designed? Who else
do you think can be harmed?

8. (5 minutes) Close out the activity on post-its with “I used to think... Now I believe...”.

BUILD A BOT.
Help! I need Some-bot-y.

EXPERIMENT #2: Design datasets for a bot.

Overview:
In this hands-on activity, participants build datasets for their own AI assistant, all while
considering the various social implications for users.

● Time: 100-minute activity


● Ages: 14-18

Activity Outline:

● (40 minutes) What’s your tree dataset?


● (10 minutes) How datasets are designed in the real world and the implications.
● (35 minutes) Design your own datasets for a bot.
● (15 minutes) Debrief and reflect.

Learning Objectives:

● A dataset is a collection of different types of information that are related to each other, and organized or structured.
● All designers create datasets which reflect our identities, values, perspectives and
biases. Some datasets have been the dominant narrative or default view accepted
by the world.
● It’s important for us to examine how our positionalities (encompassing our identity
and social status) influence how we design datasets with technology, and the
unintended consequences it may have on people who have different lived
experiences.

Materials Needed:

● Facilitation Deck/Full deck can be found here


● Request Card Deck (print 8.5 x 11 inches, letter size)
○ (Category cards)
● Worksheets
● Positionality and Power Guide
● Pens/Pencils
● Post-its

Facilitation Guide:

1. (5 minutes) Pull up the deck or share with participants:

A dataset is a collection of different types of information (data) that are related to each other, and organized or structured.

For example, when we get our annual physical at school, a dataset is created for
each of us that includes our height, weight, pulse, blood pressure and more.
These are different types of information, but they’re all related or “correlated” to
help us understand whether we’re in good physical health. Datasets tell stories
(including the stories that are not being told). For example, what information is not
part of our annual physical that could also tell us about our health, and why
is it left out?
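For facilitators who want a concrete illustration of this definition, here is a minimal sketch in Python of the annual-physical example above. The field names and values are invented for illustration only.

    # A minimal, hypothetical sketch of the annual-physical dataset described above:
    # each record collects different but related types of information about one student.
    physical_exam_dataset = [
        {"student": "A", "height_cm": 150, "weight_kg": 45, "pulse_bpm": 72, "blood_pressure": "110/70"},
        {"student": "B", "height_cm": 162, "weight_kg": 55, "pulse_bpm": 68, "blood_pressure": "115/75"},
    ]

    # Notice what the dataset leaves out: there is no field for sleep, stress,
    # nutrition or access to care, so those stories cannot be told with this data.
    for record in physical_exam_dataset:
        print(record["student"], record["pulse_bpm"])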

“What if a user asked your AI assistant, what’s a tree? How would you design
your AI assistant to respond? What would the dataset include?”

Hand out the four perspectives worksheet to participants.

2. (10 minutes) Ask participants to only complete their own perspective. Call on
participants and popcorn out their datasets. Show participants the pre-made
dataset found in the slide deck:

3. (15 minutes) Ask participants to complete the perspectives of an ant, bird and
bear.

“If any of these animals were asked what’s a tree, how would they respond?”

Call on participants and popcorn out their datasets, or pin them up and facilitate a
gallery walk. Show participants the pre-made dataset, found in the slide deck, of how
the Māori people of New Zealand might define a tree:

4. (10 minutes) Lead participants into a short debrief around the activity. Here are a
few starter questions:

a. How did all the datasets differ?
b. Is any one of them the only right dataset or perspective? If not, why? If
so, why?
c. Is there one that you might categorize as a broadly accepted or dominant
perspective in the world?
d. How do you think this might perpetuate inequity? How might this relate to
our AI assistants?

Share with participants that we all create “datasets” which reflect our identities,
values, perspectives and biases. Some datasets have become the dominant
narrative, or primary view accepted by the world, and this has done a lot of
harm to non-dominant communities. Show participants the dominant or default
perspective:

How might we change that? AI assistants are goldmines of information. Share
with participants that examples of datasets we can find in our AI assistants come
from Wikipedia, Yelp, Google search results and more. Who decides where the
information comes from and whose perspective does that information represent?
What influence can this have on the world?

Our designs always have implications.

It’s important to consider the implications of the datasets we design first and how they might affect users, because when we don’t, our designs can be harmful once they exist in the world.

5. (5 minutes) One example of how the design of datasets for AI assistants can
cause harm to users:

a. Misinformation: “Iris” (Siri spelled backwards) is a popular voice assistant
for Android phones. Iris has given women misleading information about
emergency contraception and abortion services, and when women search
or ask for these services, it has quoted the Bible.

6. (35 minutes) Tell participants we’re going to dive into another part of the card
deck and think about the implications of how we evaluate, select and curate
datasets for our AI assistants. Ask participants to pull out the category cards and
pick one category (food and culture/news/health/history and geography). Select a
request card from the category and turn it over to review!

On the back of each request card, participants can evaluate the implications of a pre-populated design, which can help inform their own brainstorm and design.

They also have an opportunity to reflect on why they made their design decision, their positionality and how they believe it will impact users.

Provide them with the pre-made example or create an example of your own.

Some participants may have a sense of how their identity and social status influence
how they view the world, and many participants may not. Hand out and review the
Power and Positionality Guide to help students examine and reflect on their designs
during the activity.

Participants can use the Brainstorm bank for helpful data sources and guidance.

7. (10 minutes) Lead participants into a short debrief around the activity. Here are a
few starter questions:

a. How did thinking about the implications affect the way you searched for,
selected and curated data sources?
b. How did thinking about your positionality affect the way you searched for,
selected and curated data sources?
c. What impact might that have on a user(s)?
d. What other issues do you think are important to consider when selecting
and curating data sources for users?
e. Did the brainstorm bank help? If so, how? If not, why?
f. What reflections do you have about the way you think about technology
and information? How might it differ from others?

8. (5 minutes) Close out the activity on post-its with “I used to think... Now I believe...”.

BUILD A BOT.
Help! I need Some-bot-y.

EXPERIMENT #3: Design rules for a bot.

Overview:
In this hands-on activity, participants build rules for their own AI assistant, all while
considering the various social implications for users.

● Time: 90-minute activity


● Ages: 14-18

Activity Outline:

● (35 minutes) What are your locker rules?


● (10 minutes) How rules for a bot are designed in the real world and the implications.
● (35 minutes) Design your own rules for a bot.
● (15 minutes) Debrief and reflect.

Learning Objectives:

● AI assistants like Alexa, Siri and Google Home have different rules that are
designed by real people, and those rules do not always keep our data safe.
● Every question we ask and every conversation we have with our AI assistants is
data. It might not seem important to us, but that data can be used to help us, to
influence our behavior, or even against us. Our AI assistants can also collect data
when we’re not engaging with them, and potentially use that data.
● It’s important for us to examine how our positionalities (encompassing our identity
and social status) influence how we design rules for technology, and the
unintended consequences it may have on people who have different lived
experiences.

Materials Needed:

● Facilitation Deck/Full deck can be found here


● Request Card Deck (print 8.5 x 11 inches, letter size)
○ (Rules cards)
● Worksheets
● Positionality and Power Guide
● Pens/Pencils
● Post-its

Facilitation Guide:

1. (15 minutes) Pull up the deck or ask participants:

“Do you think all our questions and conversations stay between us and our AI
assistants?”

Share with participants that AI assistants can be like lockers at school. They can
store all kinds of data. Some data is more sensitive than other data.

Hand out the school data worksheet to participants.

Ask participants to brainstorm their most private and precious data or information. Popcorn out participant responses.

2. (10 minutes) Share with participants that an announcement has been made over
the school loudspeaker. Pick a premade announcement or design your own.

Ask participants to brainstorm on post-it notes all the ways their data could be used, and to design a new rule for their locker.

3. (10 minutes) Lead participants into a short debrief around the activity. Here are a
couple of starter questions:
a. How did it feel to have no control over what happens with your data at
school? What did it make you want to do?
b. What type of rules did you design?

Share with participants that every question we ask and every conversation we
have is data. It might not be data that’s important to us, but it can be used to
influence us or even used against us.
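For facilitators who want to make this concrete (the activity itself stays analog), here is a minimal sketch in Python of what a designer-set “rule” for data could look like: a policy that decides whether a request is stored and for how long. The rule names and values are hypothetical and do not describe any real assistant.

    # A minimal, hypothetical sketch of designer-set rules for a bot's data:
    # the policy decides what gets stored about each request, and for how long.
    retention_rules = {
        "store_audio": False,            # never keep the raw recording
        "store_text": True,              # keep the transcribed request...
        "days_to_keep": 30,              # ...but only for 30 days
        "share_with_third_parties": False,
    }

    def handle_request(text, rules=retention_rules):
        # Apply the rules before doing anything else with the request.
        if not rules["store_text"]:
            return None
        return {"text": text, "days_to_keep": rules["days_to_keep"]}

    print(handle_request("What's the weather tomorrow?"))

A different designer could set different values, and users would never see the difference, which is why the rules themselves are a design decision worth examining.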

Our designs always have implications.

It’s important to consider the implications of the rules we design first and how they might affect users, because when we don’t, our designs can be harmful once they exist in the world.

4. (10 minutes) Share with participants one example of how the design of data rules
for AI assistants can cause harm to users:

a. Data privacy: According to VRT News, people are hired to listen to audio
files recorded by Google Home smart speakers and the Google
Assistant smartphone app. These audio files help Google improve
its search engine. While most of these recordings were made knowingly,
some contained sensitive information and were never meant to
be recorded.

5. (35 minutes) Tell participants we’re going to dive into another part of the card
deck and think about the implications of the systems and rules we design for our
AI assistants. Ask participants to pull out the Rules cards and select a request
card. Turn it over and review!

On the back of each request card, participants can evaluate the implications of a pre-populated design, which can help inform their own brainstorm and design.

They also have an opportunity to reflect on why they made their design decision, their positionality and how they believe it will impact users.

Provide them with the pre-made example or create an example of your own.

Some participants may have a sense of how their identity and social status influence
how they view the world, and many participants may not. Hand out and review the
Power and Positionality Guide to help students examine and reflect on their designs
during the activity.

6. (10 minutes) Lead participants into a short debrief around the activity. Here are a
few starter questions:

a. How did thinking about the implications affect the way you designed rules
for your AI assistant?
b. How did thinking about your positionality affect the way you designed
rules for your AI assistant?
c. What impact might that have on a user(s)?
d. What other issues do you think are important to consider when designing
rules for AI assistants?
e. What reflections do you have about the way you think about technology
and rules or systems?

7. (5 minutes) Close out the activity on post-its with “I used to think... Now I believe...”.

Put it all together…

Ask participants to reflect on one positive impact their bot might make in the world.

What difference did thinking about implications and positionality make?

