Sources:
1. https://emerj.com/ai-sector-overviews/artificial-intelligence-at-netflix/
2. https://netflixtechblog.com/ml-platform-meetup-infra-for-contextual-bandits-and-reinforcement-learning-4a90305948ef
The presence of AI in today's society is increasingly ubiquitous, particularly as large companies like Netflix, Amazon, Facebook, Spotify, and many others continually deploy AI-driven solutions that interact with consumers every day, often behind the scenes.
When properly applied to business problems, these AI-driven systems can deliver solutions that scale and improve over time, creating significant impact for both the business and the user.
But what does it mean to "properly apply" an AI solution? Does that mean there is a wrong way? From a product perspective, the short answer is yes, and we'll explain why later in this article as we dig deeper.
Overview: First, we will outline 5 use cases of data science and machine learning at Netflix. We'll then discuss some of the business needs and technical considerations a Product Manager would weigh. Finally, we will dive a little deeper into what is perhaps the most interesting of these 5 use cases and identify the business problem it seeks to solve.
5 Use Cases of AI/Data/Machine Learning at Netflix
Machine learning
Recommendations
Analytics
Computer vision
In this article, we'll look at how Netflix has explored AI applications for its
business and industry through two unique use cases:
Netflix uses the video below to show how, without artwork, much of the visual
interest—and engagement—of the company’s experience is removed.
Data generation
Model development
In another example, the Netflix Tech Blog explores how an image is chosen to
represent the movie "Good Will Hunting." The post explains that a viewer
whose viewing history includes romance movies may see a thumbnail image of
Matt Damon and Minnie Driver together, while a viewer who watches a lot of
comedies may instead be shown a thumbnail image of Robin Williams.
Source: Netflix
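The genre-based artwork choice described above can be illustrated with a toy sketch. Everything here is hypothetical: the file names, the genre tags, and the rule itself. Netflix's real system learns these choices with machine-learning models (such as contextual bandits), not hand-written rules.

```python
# Illustrative sketch only: pick a title's thumbnail from the viewer's
# most-watched genre. All names and data are invented for illustration.
from collections import Counter

# Hypothetical artwork candidates for "Good Will Hunting", keyed by the
# genre affinity each image is meant to appeal to.
ARTWORK = {
    "romance": "damon_driver_together.jpg",
    "comedy": "robin_williams.jpg",
}
DEFAULT_ART = "theatrical_poster.jpg"

def pick_thumbnail(viewing_history):
    """Return the artwork keyed to the viewer's most-watched genre."""
    if not viewing_history:
        return DEFAULT_ART
    top_genre, _ = Counter(viewing_history).most_common(1)[0]
    return ARTWORK.get(top_genre, DEFAULT_ART)

print(pick_thumbnail(["romance", "romance", "drama"]))  # damon_driver_together.jpg
print(pick_thumbnail(["comedy", "comedy", "drama"]))    # robin_williams.jpg
```

The point of the sketch is only the shape of the decision: one title, several artwork candidates, and a per-viewer signal that selects among them.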
While our research did not identify specific results tying these technologies to
increased viewings of particular titles, Netflix does disclose that it has seen
positive results in its own A/B testing, with the biggest benefits coming from
promoting less well-known titles. Given these results, Netflix is now exploring
further customization in how it presents its selections to viewers by adapting
on-screen areas like:
Synopsis
Evidence
Row Title
Metadata
Trailer
Before Netflix can choose which thumbnail images best engage which viewers,
the company must generate multiple images for each of the thousands of titles
the service offers to its members. In the early days of the service, Netflix
sourced title images from its studio partners, but soon concluded that these
images did not sufficiently engage viewers in a grid format where titles live side
by side.
Netflix explains: “Some were intended for roadside billboards where they don’t
live alongside other titles. Other images were sourced from DVD cover art
which don’t work well in a grid layout in multiple form factors (TV, mobile,
etc.).”
As a result, Netflix began to develop its own thumbnail images: stills drawn
from the "static video frames" of the source content itself, according to the
Netflix Tech Blog. The scale is substantial, however: if a one-hour episode of
"Stranger Things" contains some 86,000 static video frames, and each of the
show's first three seasons has eight episodes, Netflix could have more than two
million static video frames to analyze and choose from.
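The back-of-the-envelope arithmetic above checks out directly, assuming a typical frame rate of roughly 24 frames per second (our assumption; the article only cites the ~86,000 figure):

```python
# Rough frame-count estimate for the "Stranger Things" example.
frames_per_second = 24           # typical film/TV frame rate (assumption)
seconds_per_episode = 60 * 60    # a one-hour episode
frames_per_episode = frames_per_second * seconds_per_episode  # 86,400 ~ "some 86,000"

episodes = 3 * 8                 # first three seasons, eight episodes each
total_frames = frames_per_episode * episodes

print(f"{frames_per_episode:,} frames per episode")  # 86,400
print(f"{total_frames:,} frames in total")           # 2,073,600
```

At about 2.07 million frames, the "more than two million" figure follows immediately.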
Netflix states that AVA, its image-discovery tool, scans each frame of every
title in the Netflix library to evaluate contextual metadata and identify
"objective signals" that ranking algorithms then use to select frames meeting
the service's "aesthetic, creative, and diversity objectives" required before
they can qualify as thumbnail images.
This Frame Annotation process focuses on frames that represent the title and
the interactions between its characters, while setting aside frames with
unfortunate traits such as blinking, motion blur, or characters caught
mid-speech, according to a Netflix Research presentation.
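A minimal sketch of that filtering step might look like the following. The field names, schema, and blur threshold are entirely invented for illustration; Netflix's actual annotation pipeline is not public.

```python
# Illustrative only: discard frames with the "unfortunate traits" the
# article describes (blinking, blur, characters caught mid-speech).
# Field names and the threshold are hypothetical.

def is_candidate(frame):
    """Keep a frame only if it has faces, no blinks/mid-speech, low blur."""
    faces = frame.get("faces", [])
    if not faces:
        return False
    for face in faces:
        if face["eyes_closed"] or face["mid_speech"]:
            return False
    return frame["blur_score"] < 0.3  # lower = sharper (assumed convention)

frames = [
    {"faces": [{"eyes_closed": False, "mid_speech": False}], "blur_score": 0.1},
    {"faces": [{"eyes_closed": True,  "mid_speech": False}], "blur_score": 0.1},
    {"faces": [],                                            "blur_score": 0.1},
    {"faces": [{"eyes_closed": False, "mid_speech": False}], "blur_score": 0.8},
]
candidates = [f for f in frames if is_candidate(f)]
print(len(candidates))  # 1 — only the first frame survives the filter
```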
A convolutional neural network (CNN) also evaluates the prominence of each
character, measuring how frequently the character appears alone and alongside
other characters in the title. This helps "prioritize main characters and
de-prioritize secondary characters or extras," Netflix claims.
Through this analysis, each frame receives a score that represents the strength
of its candidacy as a thumbnail image; per Netflix, AVA weighs multiple
elements when it forms the final list of images that best represent each title.
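The scoring-and-ranking step can be sketched as a weighted combination of per-frame signals. The weights and signal names below are invented for the sketch; Netflix's real ranking models are not public.

```python
# Illustrative only: combine per-frame signals into a candidacy score,
# then rank frames and shortlist the best, as the article describes.
# Weights and signal names are hypothetical.
WEIGHTS = {"sharpness": 0.3, "face_quality": 0.4, "main_character": 0.3}

def score(frame):
    """Weighted sum of the frame's signals (all in [0, 1])."""
    return sum(WEIGHTS[k] * frame[k] for k in WEIGHTS)

frames = [
    {"id": "f1", "sharpness": 0.9, "face_quality": 0.8, "main_character": 1.0},
    {"id": "f2", "sharpness": 0.7, "face_quality": 0.9, "main_character": 0.2},
    {"id": "f3", "sharpness": 0.4, "face_quality": 0.3, "main_character": 0.1},
]
ranked = sorted(frames, key=score, reverse=True)
shortlist = [f["id"] for f in ranked[:2]]
print(shortlist)  # ['f1', 'f2']
```

The design point is that ranking reduces millions of frames to a small, ordered candidate list that creative teams (or a personalization layer) can then choose from.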
While our research did not identify any results specific to AVA's use within
Netflix, the company hopes that AVA will save creative teams time and
resources by surfacing the best stills to consider as thumbnail candidates. It
also hopes the technology will drive more and better options to present to
viewers during that crucial minute before they lose interest and search for
another way to spend their time.