
Video Fundamentals

Intel Confidential—Internal Use Only

Please note: The information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions, and typographical errors. It is intended to be a general overview of media and video technology in its current state, intended for use by Intel employees, and is not intended to be a complete and definitive treatise. E-mail competition@intel.com with any questions.

Contents 
1. Introduction
2. Video Distribution Methods
3. What Enables Video Playback on a PC?
4. How is Video Displayed?
5. The Technology Behind High Quality Video Playback
6. The Auditory Experience
7. Future Technologies
8. Summary



1. Introduction
Video playback is one of the most common usage models on a PC. From watching videos on YouTube to downloading full-length movies from online retailers or viewing the latest hit Blu-ray movies, the user desires the best video experience possible. The PC video playback experience is dependent on the hardware, post-processing technologies, display, and audio capabilities of a user's notebook or desktop PC. Intel® HD Graphics with Intel® Clear Video HD technology has hardware acceleration for high definition (HD) playback as well as post-processing capabilities that enhance the PC video viewing experience with sharper images, precise color control, and advanced support for a wide range of digital displays.

This document focuses on a PC's ability to enhance and provide the best user experience when viewing videos on a display. Many of the technologies presented here also apply to photograph viewing and editing; however, the purpose of this document is to educate a wide audience on the concepts of video playback and related technologies and to serve as a reference. This document describes video post-processing, display, and audio technologies as well as possible future directions of video technology. It should be used in conjunction with other material available on competition.intel.com to provide a detailed resource enabling effective conversations about PC video playback capabilities.

2. Video Distribution Methods 
The primary difference in how a consumer views his or her videos is the delivery method
of the video content. Common delivery methods for video are physical storage mediums
(DVDs and Blu-ray discs), television (cable and satellite providers), online video, or
personal content a consumer created with a camcorder.

Both standard definition (SD) and high definition (HD) videos can be distributed on
physical storage mediums which are read and played by a Blu-ray player, HD-DVD
player, DVD player, or VCR. Videos distributed on physical storage mediums are
typically professionally created movies but can also be documentaries, home videos,
television series, or other types of video. Physical storage mediums are portable and can
generally be used on any television or PC with hardware capable of reading the storage
medium.

Television content refers to a broad distribution of movies, television programming, sporting events, and many other types of video. Television content is delivered over a cable land line or satellite link to the consumer and can be either SD or HD, depending on the preferences of the consumer.

Online video is any video content which is downloaded or streamed from the Internet in
order to be viewed, generally (although not exclusively) on a PC. Streamed video can be
either live or pre-recorded, but in both cases, the video is ‘streamed’ from the Internet for
immediate viewing. Downloaded video content is pre-recorded and downloaded fully
onto a user's PC to be viewed at a later time. Examples of online content providers are Netflix, which provides movie and television programming over the Internet, and
YouTube, which provides user generated video content. Online content is sometimes
lower quality than standard definition video; however, more and more online content
providers are switching some or all of their content to higher resolutions and even full
HD at the cost of higher storage and bandwidth requirements.

Personal video content is created by the end user. Examples of personal video content are wedding videos, home movies, recordings of sporting events, etc. Personal video content is not typically recorded in a professional manner, so video post-processing plays a greater role in providing the best viewing experience possible.

With a PC, a user can enjoy video distributed in many ways: online video streaming and
downloading, physical storage mediums like Blu-ray or DVD movies, creating and
editing personal video content, viewing 3D movies with a 3D enabled display, and
watching live TV from a satellite or broadband cable connection by using a TV tuner
card.

3. What Enables Video Playback on a PC?  
A PC’s video playback capability typically resides as a part of a PC’s graphics solution or
2D graphics engine. The 2D graphics engine should not be confused with the PC’s 3D
graphics capability, which is the ability to render and output objects to a display in real-
time for immediate viewing. There is a common misconception that 3D gaming and
graphics performance is tied to video playback performance; however, watching a video
does not require real-time rendering since it is playing back pre-recorded content.

Graphics solutions, in general terms, have two parts: the 3D graphics capability and the 2D video capability. On most graphics solutions the 3D and 2D capabilities are entirely separate, as many of the 2D capabilities such as the video decode pipeline and other video post-processing technologies are implemented as fixed function hardware. Fixed function hardware is hardware designed specifically to perform one task. In contrast, 3D graphics is very similar to the CPU in that it uses general purpose hardware such as the shader pipelines to perform a wide range of tasks. Some graphics solutions do use the shader pipelines and other 3D graphics components to run common video post-processing procedures, but because the 3D hardware is general purpose, its performance is likely to lag behind that of dedicated fixed function hardware.

Regardless of what hardware is used to perform video playback, it is important to understand and remember that the performance is not related to 3D graphics performance and cannot be measured or assessed by metrics like 3DMark Vantage from Futuremark. For more information please refer to Graphics Fundamentals and CPU Relevance.



4. How is Video Displayed? 
In order to have the best video viewing experience, a high quality display must be part of
a user’s PC. The display can show video in many different formats and resolutions. Some
of the most commonly known display resolution categories are SD, HD, and low
resolution (e.g. handheld devices).

Standard definition is the most common resolution in which video is displayed. The
resolution of standard definition video is 480p (640 x 480 pixels) and is still the most
widely used viewing resolution for television. An example of a standard definition
display is a cathode-ray tube TV, popular before the introduction of LCD flat panel
displays.

High definition videos are displayed at any resolution higher than standard definition. Current high definition displays come in resolutions of 720p, 1080i, or 1080p. Examples of high definition displays are HDTVs and PC monitors that support resolutions above 640x480. Bandwidth requirements for transmitting high definition content scale with the number of pixels being transferred over the connection. For example, the bandwidth required to transfer 1080p video is approximately 2.25 times that of 720p video (2,073,600 vs. 921,600 pixels per frame) and more than six times the bandwidth required for standard definition 480p video. Video broadcasting companies have standard definition and high definition versions of many channels; however, full HD 1080p is not yet being used by broadcast companies due to bandwidth limitations.

Low resolution devices are a broad category of devices which have display resolutions
lower than standard definition 480p. SD and HD videos have to be scaled down in order
to be properly displayed on these low resolution devices. Examples of devices with low
screen resolution are mobile phones, PDAs, and smartphones. Low resolution devices
are becoming one of the main avenues for video consumption as they allow the user to
view videos anywhere instead of being confined to the living room or home office.

In addition to the common display resolutions above, many videos also come in other resolutions. In order to watch a video in its intended resolution, the proper display must be used. For example, if the desired output and visual experience is a 1080p high definition video, then an HD display that supports 1080p resolution must be used. If a display smaller than 1080p is used, then only a portion of the video (equal to the resolution of the smaller display) is seen. To address this, the original video must be scaled down from the HD resolution to the resolution of the smaller display (resulting in data loss). This is generally called downscaling. If a 720p HD video is shown on a display with 1080p resolution, then the video must be scaled up from the original 720p resolution to the resolution of the larger display; the scaled video cannot contain more detail than the 720p original, so overall quality is lower than native 1080p content. This capability is generally called upscaling.

In summary, there are many different video display resolutions; however, video
processing can take care of displaying video on any size of display by upscaling, or
downscaling, the video to the proper resolution which the display supports.



Display Concepts and Technologies 
There are many terms that are used to describe a display and determine the video
capabilities of that display. The following terms are commonly found on any description
of a display and its associated hardware. These terms are used to describe the technology
in the display as well as the capabilities of the display which can ultimately have an
impact on the video viewing experience.

 Aspect Ratio: The aspect ratio is important in a display because it describes the
format in which video is seen or the ratio at which the screen is drawn. Typical
aspect ratios are widescreen 16:9 and standard screen 4:3. A widescreen aspect
ratio of 16:9, for example, means that for every 16 units of image width there are
9 units of image height.

 Gamut: The gamut of a display describes the total range of colors the display is able to produce, affecting the quality of images presented on the display. Gamut describes the range of colors that can be created by a three color Red, Green, and Blue (RGB) display. For example, a 30-bit display with 10 bits of color information each for Red, Green, and Blue can display 2^30, or approximately 1.073 billion, unique colors.

 Touch: Touch displays recognize the touch of human skin and have the ability to
register the exact location of the touch and map the location with specific
functions such as buttons where the touch acts much like a PC mouse. From a
compute resource perspective, a touch display is not much different from a mouse,
since the touch display provides the XY coordinates instead of the mouse.

 Interlaced: Interlaced describes a way in which an image can be drawn on the display. When drawing an image on a display using interlaced scanning, the rows for odd lines are drawn top to bottom before the rows for even lines are drawn top to bottom. Odd lines are interlaced with the even lines for a completed picture. 480i and 1080i are examples of interlaced video display formats. The benefit of interlaced content is that the bandwidth required to transmit it is halved compared to progressive formats, because the odd lines are sent in one field and the even lines in the following field.

 Progressive: Progressive describes a way in which an image can be drawn on the display. When drawing an image on a display using progressive scanning, the screen is drawn from top to bottom in logical order. 480p, 720p, and 1080p are examples of progressive video display formats.

 Stereo 3D (stereoscopic): Stereo 3D or stereoscopic 3D refers to the ability to project or display the illusion of a 3D image. Displays enabled for 3D media are able to display two different visual streams offset from one another: one for the right eye, and one for the left eye. The combination of the streams on the display and viewing the display with polarized glasses gives the illusion to the human eye of a 3D image with depth.



As mentioned previously, the display is one of the most important aspects of the video viewing experience. Regardless of the video processing techniques used, a poor quality display can negate any enhancements made to the video. The following technologies can affect how video is displayed. This section focuses on display inputs, display types, and various display technologies which affect video playback.

Display Inputs 
The following is a list of commonly found display inputs and connection types used when
connecting a computing solution to a display in order to transmit video and audio data.
Intel® Clear Video HD technology currently supports all of the following technologies with the exception of HDMI 1.4; however, that capability will be supported in the future. This is by no means a comprehensive list of all connection types that are capable of transmitting video.

High Definition Multimedia Interface (HDMI): HDMI media transmission allows for
the transfer of high-definition video, eight channels of digital audio, and a Consumer
Electronics Control (CEC) connection which allows HDMI devices to control one
another when needed. HDMI can encrypt signals using HDCP in order to provide content
protection between the transmitting device and receiving device.[1] HDMI is fast becoming
the preferred connection type for high definition media. In order for a company to license
HDMI, it must pay a royalty to HDMI Licensing, LLC.[2]

 HDMI Version 1.3: HDMI version 1.3 was released June 22, 2006 and has a total bandwidth of 10.2 Gbit/s and a maximum resolution of 2560x1600p75 (2560x1600 at 75Hz). (Note: Some High Speed HDMI 1.3 cables can support all HDMI 1.4 features except HDMI Ethernet.)

Figure 1: HDMI 1.3 cable[3]

o HDMI 1.3 Deep Color: HDMI 1.3 supports 30-bit, 36-bit, and 48-bit Deep
Color in the following formats: xvYCC, sRGB, and YCbCr.

[1] Source: http://en.wikipedia.org/wiki/HDMI, February 2, 2010
[2] Source: http://www.hdmi.org/manufacturer/terms.aspx, February 2, 2010
[3] Source: http://shop.factorydirectav.com/images/hdmi3.jpg, February 2, 2010



o HDMI 1.3 Audio: HDMI 1.3 supports output of DTS-HD Master Audio as
well as Dolby TrueHD.

o HDMI 1.3 3D: HDMI 1.3 currently only supports 3D stereoscopic playback
up to 1080i resolution.

 HDMI Version 1.4: HDMI version 1.4 was released May 28, 2009 and supports resolutions up to 4096x2160p24 (4096x2160 at 24Hz), a resolution used in digital theaters that HDMI 1.4 now brings to the home. In addition to HDMI 1.3 features, HDMI 1.4 has the following enhancements:

Figure 2: World's first HDMI 1.4 cable[4]

o HDMI 1.4 Ethernet: Supports 100 Mb/s Ethernet between HDMI devices.

o HDMI 1.4 Automotive Connection System: HDMI 1.4 introduces a cable which is specifically made for connections within an automotive system and can withstand the stresses of the automotive environment. HDMI 1.4 also uses a Type E connector for use within a vehicle.

o HDMI 1.4 Audio: HDMI 1.4 adds support for an Audio Return Channel to
the specifications of HDMI 1.3.

o HDMI 1.4 3D: HDMI 1.4 supports Full HD 3D playback in 1080p24 (1080p
at 24Hz) and in many other formats.

 DisplayPort: DisplayPort is a royalty-free standard for transferring digital data. One key differentiating factor of DisplayPort versus HDMI or DVI is that it supports both external (PC to TV) and internal (laptop display) connections. DisplayPort is capable of using HDCP content protection in order to protect and encrypt data transmission between devices.

[4] Source: http://en.wikipedia.org/wiki/File:101783-UKHDMI-6-GB.jpg, February 2, 2010



Figure 3: DisplayPort cable[5]

 DVI: DVI is a digital video output connection which is used mostly with personal computer displays. DVI is a video-only format and has no ability to transfer audio. DVI is capable of using HDCP content protection.

o Dual-Link DVI: Dual-Link DVI provides greater bandwidth in the form of a second data link for displays which output at high resolution.

Figure 4: DVI connection, top, and HDMI 1.3 connection, bottom[6]

 Video Graphics Array (VGA): VGA is an analog graphics output standard introduced by IBM in 1987. The VGA connector has 15 pins and can output 640x480 video with no audio. VGA is the baseline standard which all PC graphics devices currently support, although support may be phased out as digital displays become more common. Expect native VGA support to be phased out in favor of newer display standards like DisplayPort in the next few years.

Figure 5: VGA cable[7]


[5] Source: http://www.belkin.com/pressroom/releases/uploads/assets/media/hi-res/F2CD00X-XX.jpg
[6] Source: http://www.smelectronics.com/images2/S-HDMI-DVI-1.jpg, February 2, 2010
[7] Source: http://www.i-chart.co.nz/images/belkin-VGA-cable.jpg, February 22, 2010



Display Types 
Videos are displayed on many different devices which can be broadly categorized as either televisions or PC monitors. Although there are some similarities between televisions and monitors, there are some important differences. Monitors are typically displays which are connected directly to a PC, with their only purpose being to display the data transferred to them. Monitors therefore assume that all post-processing of the image is completed by the PC to which they are connected. Televisions typically make no assumption that the driving input takes care of video post-processing or any other type of video enhancement. Modern televisions usually have their own video post-processor and TV tuner hardware embedded as part of their hardware platform.

There are many display types on which videos are viewed. Displays range from bulky, heavy, legacy Cathode Ray Tubes to ultra-thin, lightweight LED-backlit LCD displays:

 Cathode Ray Tube (CRT): Invented in 1897, the CRT is the oldest display type which can display media.[8] CRTs were the standard viewing display for video until relatively recently. CRTs are primarily being replaced by LCD displays.

 Liquid Crystal Display (LCD): LCDs are the most commonly found type of display today. LCDs are known to be lightweight and portable, and can be constructed in larger sizes than is practical with a CRT.[9]

 LED-backlit LCD: LED-backlit LCDs are currently also called LED TVs by
television makers such as Samsung and Panasonic. LED-backlit LCDs should not
be confused with true LED displays. LED-backlit LCDs are much thinner than
current Cold Cathode Fluorescent Lamp-backlit LCDs because the amount of
space required for LED backlighting is much smaller than CCFL. LED-backlit
LCDs also have lower power consumption, greater dynamic contrast, and a wider
color gamut than traditional CCFL-backlit LCD displays.

 Digital Light Processing (DLP): DLP displays contain an optical semiconductor called a DLP chip, invented by Texas Instruments. The DLP chip has up to two million mirrors (1920 x 1080 resolution (1080p) = 2,073,600 pixels) which are coordinated to reflect an image onto a screen.

 Liquid Crystal on Silicon (LCoS): LCoS is a technology similar to DLP but instead of using mirrors, LCoS uses liquid crystals. Compared to other display technologies, LCoS technology allows for lower cost, higher resolution displays.
 
[8] Source: http://en.wikipedia.org/wiki/Cathode_ray_tube#History, February 8, 2010
[9] Source: http://en.wikipedia.org/wiki/Liquid_crystal_display, February 2, 2010

Content Protection



Content protection is the ability for devices to protect the copyright of content by encrypting the data when it is transferred from one device to another, such as from a PC to its associated monitor. There are several different content protection schemes which help protect the copyright of content and provide data encryption:

 Protected Audio Video Path (PAVP): PAVP protects the data path within a PC during playback by encrypting the compressed video data when it is sent to the chipset, and ensures that the hardware decode acceleration on PCs which have it is utilized. PAVP reduces processor utilization by off-loading decode functions from the CPU to the chipset.

 High-bandwidth Digital Content Protection (HDCP): HDCP protects audio and video content from being copied as it is transferred over a number of different interfaces. HDCP supports and protects content being transferred over DVI, HDMI, UDI, GVIF, and DisplayPort as of version 1.3.

 DisplayPort Content Protection (DPCP): DPCP is a 128-bit AES encryption content protection scheme designed specifically for DisplayPort and was developed by Philips.

Other Display Technologies 
The following technologies may be built into a display or describe capabilities of a display which affect video quality:

 Dual Simultaneous HDMI: Dual Simultaneous HDMI is the ability of a video device to output two HDMI signals simultaneously to two different displays or other devices.

 Bit Color Depth: The Bit Color Depth is the number of bits which are used to represent the color encoding of a single pixel in an image. The bit color depth is related to the total number of displayable colors by Colors = 2^BCD.

 Brilliant Color: Brilliant Color is a technology from Texas Instruments. Displays typically only have three color channels: red, green, and blue. Brilliant Color adds three more: cyan, magenta, and yellow, for greater color accuracy.

 De-Flicker Filtering: Flickering occurs on screens which display interlaced content. When interlaced content is displayed, half the scene is shown and then the other half is shown immediately after. The refresh rate of the display is generally fast enough that the human eye does not notice the difference between the two halves and perceives a single, blended image. Flickering occurs when the two halves do not appear to blend into a single image but alternate fast enough that only a flicker is perceived. De-flicker filtering removes flicker from interlaced media playback on a display.



 Spatial Temporal Dithering: Dithering is the process of making an image which has a small color palette (e.g. 4-bit color) give the illusion of being in a much larger color palette (e.g. 32-bit color). Dithering inserts colors which are part of the palette in such a way that, to the human eye, the way in which they blend together gives the illusion of a much richer color palette than is actually present in the image. In the figure below, the image of the cat on the left uses a very limited color palette while the image on the right is the same image with dithering applied. A sketch of one classic dithering algorithm follows the figure.

Figure 6: Dithering[10]
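For illustration, here is a minimal sketch of one classic spatial dithering algorithm, Floyd-Steinberg error diffusion (the two-level quantization, grayscale input, and NumPy representation are assumptions for the example; spatial temporal dithering additionally varies the pattern from frame to frame):

    import numpy as np

    def floyd_steinberg_dither(gray, levels=2):
        # Quantize each pixel to the nearest of `levels` shades, then push
        # the rounding error onto not-yet-visited neighbors so that, on
        # average, the local brightness of the original image is preserved.
        img = gray.astype(np.float64).copy()
        h, w = img.shape
        step = 255.0 / (levels - 1)
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = min(max(round(old / step) * step, 0.0), 255.0)
                img[y, x] = new
                err = old - new
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    img[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
        return img.astype(np.uint8)

    # A mid-gray patch becomes a checker-like mix of black and white pixels
    print(floyd_steinberg_dither(np.full((4, 4), 127.0)))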

 Underscanning/Overscanning: Underscanning is where the image displayed is smaller than the actual screen area of the display. Underscanned images have black margins between the edge of the image and the edges of the display area. Graphics devices which output underscanned images output an image at a lower resolution so it can fully fit onto the display area. Overscanning, on the other hand, occurs when the image output to the display is larger than the display and only a portion of the full image is seen.

5. The Technology Behind High Quality Video Playback 
A number of key terms come up in any discussion on video capabilities on a PC platform.
It is important to understand basic definitions from color theory as well as basic
definitions relating to video post-processing in order to understand what technologies
such as Intel® Clear Video HD can actually do. This section will help with understanding
the concepts of video playback, the video decoding pipeline, and the technologies in the
video decoding pipeline on a graphics solution which affect a user’s video playback
experience.

Video Concepts 
The following concepts relate to the way in which color is described in a video as well as
ways in which video can be changed or enhanced by video post processing.

 Hue: Hue describes the overall color tone as the color closest to red, green, blue, yellow, or any combination of two of the four. Figure 7 shows an image in six different hue tones.

[10] Source: http://en.wikipedia.org/wiki/Dither#Which_types_to_use, February 3, 2010



Figure 7: Image with different hue tones[11]

 Colorfulness: Colorfulness is the perception of how much hue is present in an image. An image with an abundance of hue would be described as very colorful; in Figure 7 the image on the middle right would be described as more colorful than the image on the top left.

 Chroma: Chroma refers to how colorful a color is in reference to white. In Figure 8 the image on the right has its chroma increased 50% in relation to the image on the left.

Figure 8: Chroma[12]

 Saturation: Saturation describes whether the color in an image is intense or dull. In Figure 9 the original image is at the top left. Increasing saturation is seen from top right to bottom right. Decreasing saturation is seen from top left to bottom left.

Figure 9: Image at different saturation levels[13]


[11] Source: http://en.wikipedia.org/wiki/File:Hue_shift_six_photoshop.jpg, January 26, 2010
[12] Source: http://en.wikipedia.org/wiki/File:Surfing_in_Hawaii_unmodified.jpg, January 27, 2010



 Lightness: Lightness describes whether the color in an image is light or dark. Lightness is related to the brightness in an image. One example of color going from dark to light is grayscale, as seen in Figure 10.

Figure 10: Grayscale[14]

 Brightness: Brightness describes how much an object seems to be radiating or reflecting light. Figure 11 shows a full image with increased brightness on half of it.

Figure 11: Image with increased brightness on the right side versus left[15]

 Contrast: Contrast describes the difference in color and brightness between objects in an image. Images with high contrast have very apparent differences in color, and it is easy to tell the difference between one object and another. In images with low contrast it is harder to distinguish one object from another, and in some cases they blend completely together. Figure 12 shows six different levels of contrast, with decreasing contrast from top left to bottom left and increasing contrast from top right to bottom right.

[13] Source: http://commons.wikimedia.org/wiki/File:Saturation_change_photoshop.jpg, January 26, 2010
[14] Source: http://www.imagingassociates.com.au/color/testpatterns.jspx, January 26, 2010
[15] Source: http://www.borisfx.com/images/fx/brightness_contrast.jpg, January 26, 2010



Figure 12: Image at different levels of contrast[16]

 Gamma: Gamma is a variable which controls the nonlinear mapping between a pixel's stored value and the brightness shown on screen; adjusting it changes the apparent brightness of an image, most visibly in the midtones. Figure 13 shows four different levels of gamma and their effect on an image. A sketch of a typical gamma adjustment follows the figure.

Figure 13: Image at different levels of gamma[17]
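As an illustration of how a gamma adjustment is typically applied, here is a minimal sketch (not any particular display's implementation; the normalization to the 0-255 range is an assumption for 8-bit images):

    import numpy as np

    def adjust_gamma(image, gamma=2.2):
        # Normalize 8-bit values to [0, 1], raise to 1/gamma, and scale back.
        # gamma > 1 brightens midtones; gamma < 1 darkens them.
        normalized = image.astype(np.float64) / 255.0
        corrected = np.power(normalized, 1.0 / gamma)
        return np.clip(corrected * 255.0, 0, 255).astype(np.uint8)

    # Example: brighten a dark 8-bit frame; values of 64 rise to about 136
    frame = np.full((2, 2), 64, dtype=np.uint8)
    print(adjust_gamma(frame, gamma=2.2))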

 Artifact: Artifacts are departures from the expected or desired output of a display. Artifacts are anomalies which are not linked to a software failure but to the failure of the hardware to correctly output or display the video as it was originally created. Examples of artifacts range from bad pixels (wrong color), to jagged lines in an image, to completely corrupt screens which are unusable.

[16] Source: http://commons.wikimedia.org/wiki/File:Contrast_change_photoshop.jpg, January 26, 2010
[17] Source: http://en.wikipedia.org/wiki/File:GammaCorrection_demo.jpg, January 26, 2010
 
Video Codecs and Formats 
Video compression is necessary to reduce the amount of storage space required for video. Not only does compressed video take less space to store, but it requires less bandwidth to distribute, transmit, or download from the PC to the TV or over the Internet. There are two types of video compression: lossy compression and lossless compression. In a lossy compression algorithm some data is lost in the compression process in order to maximize the storage space reduction. The data that is lost cannot be recovered upon decompression. In lossless compression, the original video is completely retrievable from the compressed video without any data loss. Typically, videos compressed into a lossy format take less storage space than videos compressed with a lossless compression algorithm. Some newer lossy compression algorithms are, however, so efficient that the amount of data lost is very small while a large storage space benefit is still realized.

The common algorithms used for compressing video files are called codecs. Codec stands for COmpressor-DECompressor (or coder-decoder) and is a full algorithm for the encoding and decoding of video and audio data. Video encoding is the process of taking the raw video format and compressing it to a smaller size using a codec. Video decoding is the playback process where the compression is essentially reversed and the video file is sent to the display.

A related video concept is called transcoding. Video transcoding is the process of converting one compressed video format to another. This involves first decoding the file and then encoding it in the new format. One usage example would be converting a downloaded video from VC-1 format to H.264 in order to view the video on a player that only supports H.264.

In addition to codecs, software programs sometimes use a wrapper around a codec, such as DivX, which uses H.264 as its codec but stores other file information in the wrapper. The following video codecs and wrappers are typically used in video distribution:

 DivX: DivX is a video codec software package which takes care of the encoding and decoding of video and audio. The current version of DivX is based on the H.264 codec whereas previous versions relied on older codecs such as MPEG-2.

 MPEG-4 AVC: MPEG-4 AVC (Advanced Video Coding) is a group of technologies defining the compression of video and audio digital data.[18]

 H.264: H.264, also known as MPEG-4 Part 10, is a codec used in modern video products such as Blu-ray Disc.

 MVC: MVC (Multiview Video Coding) is an extension to the H.264 video format which allows for the transmission of stereoscopic 3D video.

 MPEG-2: MPEG-2 is a group of technologies defining the compression of video and audio digital data. MPEG-2 is the standard for digital video broadcasting over cable and satellite as well as movies on DVD.

 DV: Digital Video format is used primarily in video cameras.

[18] Source: http://ip.hhi.de/imagecom_G1/assets/pdfs/csvt_overview_0305.pdf, January 27, 2010



 VC-1: VC-1 is a video codec which is an alternative to the MPEG-4 H.264 standard. It was developed originally by Microsoft and distributed as an open standard. VC-1 is known for its ability to be decoded faster than H.264. VC-1 is found in Blu-ray Discs, Windows Media Video 9, and Microsoft Silverlight.[19]

Typically, a video container is used to pack together all files which are vital to video playback, including the subtitles, video streams, audio streams, and chapter information. Examples of video containers are .mov and .avi. In order to play back a video, the files in the container must be decoded so they can be displayed.

Video Compression and Decompression Techniques 
The following algorithms are found in video codecs and are responsible for the compression and decompression of video. For example, in the video encoding-decoding pipeline, a Discrete Cosine Transform (DCT) is applied to the video on the encoding side when the video is first stored onto a disc. To watch the video, the Inverse Discrete Cosine Transform is applied to the video stream from the disc, undoing the Discrete Cosine Transform so that the video can be output to a display and viewed. A code sketch of this round trip follows the list below.

 Inverse Discrete Cosine Transform (iDCT): Allows for the decompression of audio and video files which were compressed with a discrete cosine transform.

 Context-Adaptive Variable-Length Coding (CAVLC): Lossless compression technology found in H.264/MPEG-4 AVC video encoding.

 Context-Adaptive Binary Arithmetic Coding (CABAC): Lossless compression technology found in H.264/MPEG-4 AVC video encoding.
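To make the DCT/iDCT round trip concrete, here is a minimal sketch using SciPy (the 8x8 block size follows common practice in DCT-based codecs; the coarse rounding step is a simplified stand-in for a real codec's quantization matrix):

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        # 2D DCT: transform rows, then columns (type-II, orthonormal)
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(coeffs):
        # 2D inverse DCT undoes dct2
        return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

    # Encode side: transform an 8x8 pixel block and coarsely quantize it
    block = np.arange(64, dtype=np.float64).reshape(8, 8)
    coeffs = np.round(dct2(block) / 16) * 16   # lossy step: precision discarded

    # Decode side: the iDCT reconstructs an approximation of the block
    reconstructed = idct2(coeffs)
    print(np.max(np.abs(block - reconstructed)))  # small reconstruction error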

The Video Decoding Pipeline 
The video decoding pipeline is the process applied to play back a video stored on a disc, video being broadcast over cable, video streaming from the Internet, or any other method of viewing videos. The video decoding pipeline takes the encoded video from all of these sources, applies the correct decoding algorithm based on the video's codec, applies video post-processing techniques, scales the video to the appropriate aspect ratio and resolution, and outputs the video to the display. A high level block diagram of the video decoding pipeline is shown below in Figure 14.

[19] See http://en.wikipedia.org/wiki/Microsoft_Silverlight



Figure 14: The video decoding pipeline

Video Decoding Technologies  
The term video decoding can be used interchangeably with the term video playback. Video decoding refers to the process of taking compressed video and uncompressing it to its original form to be sent to a display; historically this was always performed on the CPU. Most graphics devices have adopted fixed function hardware in order to accelerate specific video codecs and take the decoding burden off of the CPU. If a codec is not specifically accelerated by the graphics, the CPU still decodes the video, but most mainstream video codecs such as H.264, MPEG-2, and VC-1 are accelerated by both integrated and discrete graphics solutions. Video decoding occurs in real time; poor video decoding hardware will cause the video to stutter, which has a direct negative effect on the user experience.

There are a variety of video processing techniques that can be applied to a video to make the video playback experience on a PC as good as possible. If a user starts watching a video and experiences stutter, or sees many jagged lines and blurring effects on the screen, the user's overall experience will be impacted dramatically. Video post-processing and decoding technologies are meant to make the video appear as close as possible to the original video when it is decoded from its compressed state and output to a display. This section details technologies located in the video decoding pipeline which affect the way in which a video is sent to a display, and post-processing technologies which affect the quality of the video. These technologies are applied to video in addition to the decoding algorithm. This section also represents a list of common technology names which are seen in current specifications and marketing feature lists.

 Picture-in-Picture (PIP): Picture-in-Picture is the ability of the hardware to output two video streams simultaneously, one HD and one SD, to a display. One example of typical use of PIP is a sports enthusiast watching one sporting event in HD while superimposing a smaller video of another sporting event in SD somewhere else on the display. Most recently, PIP also refers to the ability to output two HD streams at the same time.



 Dual-Stream Playback: Dual-Stream Playback is the ability of the hardware to
output two video streams at the same time. Dual-Stream Playback is used in 3D
stereoscopic video where one stream being output to the display is the left eye and
the other stream being output to the display is the right eye.

 Motion Compensation: Motion Compensation is a technology used in video compression and decompression. Motion compensation works on the idea that consecutive frames in a video have very little difference between them. Motion compensation takes a reference frame and then makes it possible to reconstruct a number of following frames by recording the direction in which parts of the frame move. Recording the direction of movement instead of the entire frame takes less space to store. Upon decompression of the video, the video decoder takes the reference frame and the directional information to reconstruct the following frames accurately even though they were not specifically stored. There are many different algorithmic variations of motion compensation but the concept stays the same: Global, Block, Variable Block-Size, Quarter and Half Pixel, and Overlapped Block Motion Compensation are a few of the different motion compensation algorithms; a sketch of the block-based variant follows.
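As an illustration of the block-based variant, the sketch below reconstructs a frame from a reference frame plus per-block motion vectors (a minimal decoder-side example; the block size and the vector layout are assumptions, not any real codec's bitstream format):

    import numpy as np

    def motion_compensate(reference, motion_vectors, block=16):
        # Rebuild a frame by copying each block from the reference frame,
        # displaced by that block's motion vector (dy, dx).
        h, w = reference.shape
        out = np.zeros_like(reference)
        for by in range(0, h, block):
            for bx in range(0, w, block):
                dy, dx = motion_vectors[by // block][bx // block]
                # Clamp the source block so it stays inside the reference frame
                sy = min(max(by + dy, 0), h - block)
                sx = min(max(bx + dx, 0), w - block)
                out[by:by + block, bx:bx + block] = reference[sy:sy + block, sx:sx + block]
        return out

    # Example: a 32x32 frame whose lower-right block moved up-left by 2 pixels
    ref = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
    vectors = [[(0, 0), (0, 0)], [(0, 0), (-2, -2)]]
    frame = motion_compensate(ref, vectors)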

Video Post‐Processing Technologies 
Video processing refers to the portion of the video decoding pipeline which takes the
decoded video stream, runs post-processing procedures on the video stream, and then
sends the resulting processed video stream to a display. One example benefit of video
post-processing is playing standard definition video on a high definition display. Many
digital standard definition videos contain extra noise as a result of their conversion from
analog to digital format, and when displayed on an HD display, that noise is especially
noticeable. When the video is sent through a noise reduction filter as part of the video
post-processing pipeline, the noise in the video is reduced and the resulting video is
clearer after the filter than before.

The following technologies are found in video post-processors which are found on almost
all modern integrated and discrete graphics solutions. These technologies affect the
perceived quality of video in various ways from enhancing or correcting the color of an
image to increasing the apparent sharpness of an image. All technologies in this section
are meant to give an overall better perceived video quality to the viewer of the video. The
compute requirements for these technologies are dependent on the specific hardware
implementations of each technology. A comparison of the capabilities of various
integrated and discrete graphics solutions will be published by the end of Q2’10 in a
video playback competitive positioning guide on competition.intel.com.

 Telecine / Inverse Telecine: Telecine is the process in which a motion picture taken at 24 frames per second on film is converted into the National Television System Committee (NTSC) standard of 30 frames per second, stored digitally as 60 interlaced fields, for distribution on television or DVD. Inverse Telecine is the process of detecting the 3:2 pulldown (see Pulldown Detection) in telecined videos and converting the video back into its original 24 frames per second version to be viewed on a display as it was originally intended.

 Pulldown Detection: A special case of de-interlacing deals with pulled down content. Used when broadcasting a typical Hollywood movie over NTSC TV, 3:2 pulldown converts 24 progressive frames per second into 60 interlaced fields per second. This is accomplished by splitting the progressive source content into fields and then replicating the fields every other frame. Converting film that runs at 24 frames per second to 30 frames (60 fields) of video presents an obvious conversion problem. Splitting each film frame into two fields will yield 48 fields per second. As analog television runs at approximately 60 fields per second, simply transferring each film frame onto each video frame would result in a film running about 25% faster than intended. So not only must the progressive film be interlaced, the frame rate must also be converted. The first frame is stored as 3 fields (with one field being used twice), and the second frame is stored as 2 fields; hence the name, 3:2 pulldown.

Likewise, other film-to-TV specific cadences emerge when dealing with film content that has been edited, or filmed at various other frame rates such as those used for documentaries and anime content. The number of cadences that could be encountered is further multiplied when considering both NTSC (60Hz) and PAL (50Hz) target television refresh rates. Playing back such an encoded stream using typical de-interlacing methods misses an opportunity to achieve significantly enhanced visual quality. By detecting the repetitive 3:2 cadence, it is possible to recreate the original progressive frames. By working with the original progressive content, artifacts are minimized.

In addition to the 3:2 pulldown cadence, other cadences are often introduced into video streams as the result of editing, non-NTSC broadcasts, and numerous other influences.[20] A sketch of the basic 3:2 cadence follows.
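For illustration, here is a minimal sketch of how 3:2 pulldown turns four progressive film frames into ten fields (five interlaced frames); field dominance and exact field ordering are simplified relative to real telecine equipment, but detecting this repeating pattern is what lets a decoder recover the original frames:

    def pulldown_32(frames):
        # Split each frame into a top field (even lines) and a bottom field
        # (odd lines); frames alternately contribute 3 and 2 fields, so
        # 4 film frames become 10 fields (24 fps -> 60 fields per second).
        fields = []
        for i, frame in enumerate(frames):
            top = frame[0::2]      # even-numbered lines
            bottom = frame[1::2]   # odd-numbered lines
            if i % 2 == 0:
                fields += [top, bottom, top]   # 3 fields: one field repeated
            else:
                fields += [bottom, top]        # 2 fields
        return fields

    # Example: four tiny 4-line "frames" expand into 10 fields
    frames = [[f"frame{i}-line{j}" for j in range(4)] for i in range(4)]
    print(len(pulldown_32(frames)))  # 10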

 Film Mode Detection and Cadence Detection: Film Mode Detection and
Cadence Detection are the same as Pulldown Detection.

Figure 15: Cadence Detection[21]

[20] Source: http://smcr.intel.com/SMCRDocs/WW3708_CSE_BoulderCreek_Montevina_Introduction_Rev1_0.ppt
[21] Source: http://www.hqv.com/index.cfm?page=tech.cadence, January 28, 2010



 Cadence Correction: The cadence of a video is the same as its pulldown or film mode (e.g. 3:2). Cadence correction is the ability of the hardware to change the cadence of a video (e.g. 3:2 to 2:2:2:4, or 5:4) so that it can be seen at a different cadence without loss of data or a negative effect on the user experience.

 De-Interlacing: Many modern displays use progressive scanning in order to display video; however, Cathode Ray Tubes (CRTs) use interlacing to display video, and the majority of video is still stored and broadcast in interlaced formats. In order to be displayed on progressive displays, interlaced content must first be converted to progressive. De-interlacing is the method by which interlaced video is converted to progressive video. De-interlacing is one of the most important aspects of a video processor, as a poorly de-interlaced video can have a large negative effect on the user experience. There are many different types of de-interlacing in which the algorithm differs but the concept is the same. Some types are Vector Adaptive Per-Pixel, Advanced Pixel Adaptive, Spatial-Temporal, Motion Adaptive, and Non-Motion Adaptive De-Interlacing. Simple sketches of two basic approaches follow the figure.

Figure 16: De-Interlacing[22]
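Two of the simplest de-interlacing strategies, shown here as a minimal sketch: "weave" combines the two fields directly (good for static scenes), while "bob" interpolates the missing lines of a single field (good for motion). Real Vector Adaptive or Motion Adaptive de-interlacers choose between such strategies per pixel; the NumPy representation here is an assumption for the example:

    import numpy as np

    def weave(top_field, bottom_field):
        # Interleave two fields into one progressive frame (static scenes)
        h, w = top_field.shape
        frame = np.empty((h * 2, w), dtype=top_field.dtype)
        frame[0::2] = top_field
        frame[1::2] = bottom_field
        return frame

    def bob(field):
        # Build a progressive frame from one field by averaging neighboring
        # lines to fill the missing rows (avoids combing during motion)
        f = field.astype(np.float64)
        h, w = f.shape
        frame = np.empty((h * 2, w))
        frame[0::2] = f
        frame[1:-1:2] = (f[:-1] + f[1:]) / 2.0  # interpolate between lines
        frame[-1] = f[-1]                       # repeat the final line
        return frame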

 Noise Reduction: When working with analog video streams, capturing, converting, and duplicating the content will inevitably inject analog noise into the stream, thus degrading the overall video quality. Digital video streams can also exhibit similar artifacts as a result of their original capture or their subsequent compression. Noise artifacts are most noticeable in regions of the image that contain large areas of solid colors.

Traditional de-noise algorithms often suppress fine detail within an image by mistaking the detail for noise. However, it is possible to leverage motion detection algorithms to dramatically reduce the appearance of randomized noise in video streams while accurately preserving fine detail. Because noise artifacts are nondeterministic in their motion, de-noise filters are able to differentiate between noise and valid video data. Figure 17 shows an image before noise reduction on the top and with noise reduction applied on the bottom.[23] A minimal sketch of the idea follows the figure.

[22] Source: http://www.hqv.com/index.cfm?page=tech.de-interlacing, January 28, 2010
[23] Source: http://smcr.intel.com/SMCRDocs/WW3708_CSE_BoulderCreek_Montevina_Introduction_Rev1_0.ppt



Figure 17: Noise Reduction[24]
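A minimal sketch of motion-adaptive temporal noise reduction (the blend factor and motion threshold are illustrative values, not any vendor's tuning): pixels that barely change between frames are averaged over time to cancel random noise, while pixels that moved are left alone to preserve detail.

    import numpy as np

    def temporal_denoise(previous, current, motion_threshold=12, blend=0.5):
        # Average each pixel with the previous frame only where the frames
        # are similar; differences above the threshold are treated as motion.
        prev = previous.astype(np.float64)
        curr = current.astype(np.float64)
        still = np.abs(curr - prev) < motion_threshold  # True where no motion
        out = np.where(still, blend * prev + (1 - blend) * curr, curr)
        return np.clip(out, 0, 255).astype(np.uint8)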

 ProcAMP Color Control: ProcAMP Color Control is an Intel® technology package which allows for the adjustment of hue, saturation, brightness, and contrast in an image. A sketch of such an adjustment follows.
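For illustration, here is a minimal sketch of the brightness and contrast half of such an adjustment (operating on 8-bit pixel values; the pivot at mid-gray 128 is an assumption, not Intel's implementation):

    import numpy as np

    def procamp_adjust(image, brightness=0.0, contrast=1.0):
        # Apply brightness (additive offset) and contrast (scaling around
        # mid-gray) to an 8-bit image, clamping back to the valid range.
        pixels = image.astype(np.float64)
        pixels = (pixels - 128.0) * contrast + 128.0 + brightness
        return np.clip(pixels, 0, 255).astype(np.uint8)

    # Example: slightly brighter, noticeably punchier image
    frame = np.array([[0, 64], [128, 255]], dtype=np.uint8)
    print(procamp_adjust(frame, brightness=10, contrast=1.2))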

 Total Color Control: Total Color Control allows for adjustment of saturation
levels for six colors: Red, Green, Blue, Magenta, Yellow, and Cyan. Figure 18
shows an example of total color control.

Figure 18: Total Color Control

 Color Vibrance: Color Vibrance is the ability of hardware to even out the color
of more saturated and less saturated areas in an image by increasing the apparent
color of the less saturated areas. Figure 19 shows an image on the left and the
same image on the right with increased color vibrance. The increase of color
vibrance is most noticeable in the color of the dog’s fur.

[24] Source: http://www.ambery.com/di10visccohd.html, January 29, 2010



Figure 19: Color Vibrance[25]

 Color Correction: Color Correction is the process in which hardware alters the
overall color of the light, or the color temperature. A higher color temperature
provides a bluish hue to an image whereas a lower color temperature gives off a
yellow or red hue.

 Color Space Conversion: Color Space Conversion is the process in which one color space is converted to another. A color space is a description of how all colors can be created. For example, a PC monitor, which emits light, uses the RGB color space because all colors can be created from a combination of Red, Green, and Blue. In PC printing, however, a different color space is used to create all the colors: CMYK (cyan, magenta, yellow, black) is the color space used by printers. Color space conversion is the ability to convert one color space such as RGB to another color space like CMYK so that different hardware maintains the same representation of an image. A sketch of one common conversion follows.
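In video, the most common conversion is between RGB and YCbCr (luminance plus two chroma channels). Here is a minimal sketch using the BT.601 coefficients that apply to standard definition video, with full-range 8-bit values assumed:

    def rgb_to_ycbcr(r, g, b):
        # Convert one 8-bit RGB pixel to full-range YCbCr (BT.601 weights)
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
        return y, cb, cr

    # Pure white keeps maximum luminance with neutral chroma
    print(rgb_to_ycbcr(255, 255, 255))  # (255.0, 128.0, 128.0)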

 Dynamic Tone Enhancement (Flesh Tone): Tone enhancement, specifically flesh tone enhancement, refers to the ability to recognize a specific range of typical flesh tones in an image and enhance the areas in which those colors exist in order to give flesh a more realistic and vibrant appearance. Figure 20 shows an original image on the left and the same image with skin tone enhancement applied on the right.

Figure 20: Skin Tone Enhancement (Flesh Tone)

[25] Source: http://www.amazon.com/Photoshop-CS4-Missing-Lesa-Snider/dp/0596522967, February 8, 2010



 Adaptive Contrast Enhancement (Dynamic Contrast): Adaptive contrast
enhancement increases the contrast of local areas of an image by decreasing the
overall brightness of the image and amplifying the local features resulting in an
overall greater contrast enhancement. Adaptive contrast enhancement can be used
in videos where the setting of the video is dark in order for objects to be seen.
Figure 21 shows an image without adaptive contrast enhancement on the left and
with adaptive contrast enhancement on the right.

Figure 21: Adaptive Contrast Enhancement

 Edge Enhancement: Edge Enhancement increases the contrast of edges in a video, which increases its apparent sharpness. Sharpness enhancement filters reduce the appearance of artifacts by identifying and operating on the edges within an image. By applying noise reduction algorithms specifically on the edges of shapes and improving contrast ratios in these specific regions, it is possible to mitigate artifacts that typically accompany high scale ratios.[26] In Figure 22, the image on the left has edge enhancement applied whereas the image on the right does not.

Figure 22: Edge Enhancement

 Video Scaling (Up-Conversion/Down-Conversion): Video scaling is the process of converting a video from one resolution and/or aspect ratio to another resolution and/or aspect ratio. There are quite a few different types of video scaling which differ only algorithmically, but the concept is the same: Advanced Video Scaling, High Quality Video Scaling, Horizontal and Vertical Scaling, Panel Fitting, and Polyphase Scaling.

[26] Source: http://smcr.intel.com/SMCRDocs/WW3708_CSE_BoulderCreek_Montevina_Introduction_Rev1_0.ppt



Up-Conversion is the process in which a movie in one resolution is up-converted to another resolution, e.g. 480p SD is converted to 720p HD. Up-Conversion first scales the original video to the new resolution and then fills in the missing data. Missing data is filled in by scanning the image and determining what pixel values fit best in what locations. Down-Conversion is where a higher resolution video is scaled down to a lower resolution to be viewed on a lower resolution display. A minimal scaling sketch follows.
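As an illustration of the "fill in the missing data" step, here is a minimal bilinear upscaler (real polyphase scalers use more filter taps and better kernels; this is the simplest interpolation that invents plausible in-between pixels, and the grayscale NumPy input is an assumption for the example):

    import numpy as np

    def bilinear_upscale(image, new_h, new_w):
        # Upscale a grayscale image by interpolating between the four
        # nearest source pixels for every destination pixel.
        src = image.astype(np.float64)
        h, w = src.shape
        ys = np.linspace(0, h - 1, new_h)
        xs = np.linspace(0, w - 1, new_w)
        y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
        x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
        wy = (ys - y0)[:, None]   # vertical interpolation weights
        wx = (xs - x0)[None, :]   # horizontal interpolation weights
        top = src[np.ix_(y0, x0)] * (1 - wx) + src[np.ix_(y0, x1)] * wx
        bot = src[np.ix_(y1, x0)] * (1 - wx) + src[np.ix_(y1, x1)] * wx
        return (top * (1 - wy) + bot * wy).astype(np.uint8)

    # Example: scale a 480-line SD frame up to 720 lines
    sd = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    hd = bilinear_upscale(sd, 720, 1280)
    print(hd.shape)  # (720, 1280)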

 De-Blocking: When viewing lower quality video on a large monitor, jagged edges sometimes appear between different 'blocks' of the video, which gives the appearance of pixelation in video playback. De-blocking filters smooth out sharp edges between blocks when block coding techniques such as the discrete cosine transform are used.

Figure 23: De-Blocking[27]

 Whites Processing (Blue Stretch): Blue stretch increases the blue component of white and near-white pixels in video while avoiding hard transitions and without affecting flesh-tone colors.[28]

 Video Gamma Control: Video gamma control is necessary because monitors and displays have different gamma ranges. Video gamma control ensures that video is output at the correct gamma levels so that the video appears the same regardless of what display it is seen on, despite the wide variation of gamma levels among displays.

 Chroma Subsampling Format Conversion: Chroma subsampling format conversion is the conversion from one rate of chroma subsampling to another.[29] When a video stream is output, the data is conveyed in two forms: lightness (luminance) and color (chroma). The human eye perceives changes in luminance much more readily than it perceives changes in color. Chroma subsampling is a scheme in which the full luminance information is sent but color information is either filtered or averaged to reduce bandwidth requirements. To the human eye, the resulting image changes are barely perceptible. A minimal subsampling sketch follows.
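For illustration, here is a minimal sketch of 4:2:0 chroma subsampling, which keeps every luma sample but averages each 2x2 block of chroma samples down to one value (simple averaging is the most basic choice; real encoders may filter differently):

    import numpy as np

    def subsample_420(chroma):
        # Reduce a full-resolution chroma plane to quarter resolution by
        # averaging each 2x2 block (the luma plane is untouched in 4:2:0).
        h, w = chroma.shape
        blocks = chroma.astype(np.float64).reshape(h // 2, 2, w // 2, 2)
        return blocks.mean(axis=(1, 3)).astype(np.uint8)

    # A 4x4 chroma plane becomes 2x2; chroma bandwidth drops by 75%
    cb = np.arange(16, dtype=np.uint8).reshape(4, 4)
    print(subsample_420(cb))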

[27] Source: http://www.via.com.tw/en/images/products/chipsets/p4-series/img_videodeblock.jpg
[28] Source: http://www.faqs.org/patents/app/20090115906, February 8, 2010
[29] Source: http://www.poynton.com/PDFs/Chroma_subsampling_notation.pdf, February 9, 2010



 Detail Enhancement: Detail enhancement increases the apparent detail, or
sharpness, in an image. Blurry areas of video become sharp when detail
enhancement is correctly implemented. Figure 24 shows an image with detail
enhancement on the bottom and without detail enhancement on the top.

Figure 24: Detail Enhancement[30]

 Bad Edit Correction: Bad edits occur when a video is edited and the edits also
make changes to the pulldown cadence of the video. Bad edit correction is where
the hardware detects the changes in the pulldown cadence and is able to recover
the original 24 FPS video so that the changed cadence doesn’t affect the viewing
experience.

6. The Auditory Experience 
High quality audio which is well synced with the video is important in order to enable the best user experience possible when watching video. In many cases, such as Hollywood movies, having a crisp and clear picture is not enough to fully immerse the user in the viewing experience. A user who views online video, however, may have less need for high quality audio than someone who is using their computing solution to drive a home theater with surround sound speakers.

Audio Codecs and Formats
The following audio codecs and formats affect the quality of audio which can be transferred from the PC to the monitor, television, or speaker system. Many modern PCs are able to perform dynamic range control, which enables the hardware to limit the range of the audio volume in a video. Dynamic range control increases the volume of quiet scenes and decreases the volume of loud scenes so that the entire video stays within the specified 'dynamic range' and certain scenes are not too quiet or too loud; a sketch of the idea follows.
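A minimal sketch of the idea, compressing audio samples toward a target window (the threshold and ratio are illustrative values, not any decoder's actual tuning; this shows only the peak-limiting half, and a full implementation would also apply make-up gain to quiet passages):

    import numpy as np

    def dynamic_range_control(samples, threshold=0.5, ratio=3.0):
        # Compress audio peaks: portions of the signal above the threshold
        # are scaled down by `ratio`, narrowing the loud-to-quiet span.
        x = np.asarray(samples, dtype=np.float64)
        over = np.abs(x) > threshold
        compressed = np.sign(x) * (threshold + (np.abs(x) - threshold) / ratio)
        return np.where(over, compressed, x)

    # A loud 0.9 peak is pulled down; quiet samples pass through unchanged
    print(dynamic_range_control([0.1, 0.9, -0.95]))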

[30] Source: http://www.tomshardware.com/reviews/avivo-vs-purevideo,1492-7.html, February 1, 2010



Audio transfer can range from scaled-down versions of a soundtrack to the original studio
recording at full fidelity. Poor audio quality can ruin a video viewing experience, even if
the user is viewing very high quality video. The following audio formats are found in
video playback:

 Blu-ray: Blu-ray is a standard format for high-definition video playback. The following is a comprehensive list of the audio formats which are allowed in Blu-ray audio playback:[31]

o Dolby Digital / AC-3: Dolby Digital is the most commonly found audio format used on DVDs and is the base standard for Blu-ray.

o Dolby Digital Plus: Dolby Digital Plus is an expanded version of Dolby Digital which allows for higher bit rate audio and more efficient compression of audio data, and supports up to 7.1 channels of surround sound audio (seven speakers and one subwoofer).

o DTS: DTS is an audio format found on DVDs and is a required specification for Blu-ray high definition playback hardware. The Blu-ray specification for DTS allows for higher bit rate audio, which allows for greater sound fidelity.

o DTS-HD High Resolution: DTS-HD High Resolution is an expanded version of DTS which allows for a higher bit rate and more efficient compression of audio data.

o PCM: PCM is a complete copy of the studio master audio track and, when used on video discs, is stored in an uncompressed form. PCM has the best sound quality of any format due to its lack of compression; however, it takes a significantly larger amount of space to store.

o Dolby TrueHD / DTS-HD Master Audio: Dolby TrueHD and DTS-HD Master Audio are both lossless compression codecs. The lossless compression of the audio track means that they take up less space than a PCM track, but upon decompression they are identical to the PCM studio master.

 Other Formats: Two other prominent audio formats used in audio encoding are MP3, used in devices such as MP3 players, and AAC, which is used on devices such as the iPhone, iPod, and PlayStation 3.

o MP3: MP3 is a lossy audio compression format which is the de facto standard in portable audio playback. MP3 is also known as MPEG-1 Audio Layer 3. MP3s are also found in digital audio distribution online, such as music download websites. MP3 was recognized for being able to reproduce audio quality close to the original soundtrack while at the same time providing good compression to save a significant amount of storage space.

[31] Source: http://www.highdefdigest.com/news/show/1064, February 3, 2010

o AAC: AAC is a lossy compression codec for digital audio. AAC is seen as the
successor to MP3 as the standard audio encoding format. AAC is part of both
the MPEG-2 and MPEG-4 specifications.

Audio Connections 
Audio is distributed along with video as either part of the same connection or part of a
separate connection. Two connections used in modern video playback are HDMI and
TOSLINK. These two connections do not represent the entire range of audio connections.
Other connections such as coax cables are used in audio data transfer but are not typically
used in high definition audio data transfer as part of the video playback experience.

 HDMI: Along with carrying video data, HDMI connections are also capable of
carrying Dolby TrueHD and DTS-HD Master Audio. HDMI is one of the only
connection types which allow for the transfer of both audio and video data. VGA
and DVI connections, both useful for transferring video data, are not able to
transfer audio data.

 TOSLINK: TOSLINK is an optical fiber connection created by Toshiba and is used mostly in audio data transfer. TOSLINK can carry PCM audio but it cannot carry Dolby Digital Plus, Dolby TrueHD, or DTS-HD Master Audio.

7. Future Technologies 
The video industry is continuously evolving with new technologies enhancing the end-
user experience. This section explores several new technologies in the immediate future
for video: wireless display (Intel® Wireless Display), higher resolution for video
playback (Quad HDTV), new storage mediums for video (Holographic Versatile Discs),
and new ways to play more realistic video (Stereoscopic 3D).

Intel® Wireless Display 
Intel® Wireless Display is already in production and provides an easy way to wirelessly link devices such as laptops and televisions in order to play back video. Intel Wireless Display makes it easy to share video content by streaming it seamlessly to a large screen television at the same time as it plays on a laptop. Intel Wireless Display also has possible uses in business applications, where a projector is wirelessly linked with a laptop for easier use than before. Intel Wireless Display is just the start of new ways to deliver video to the user. For more information on Intel Wireless Display, see the Intel Wireless Display Product Brief.




Quad HDTV 
The video experience improved dramatically when storage technology made it possible for the masses to move from standard definition to high definition video because of the dramatic increase in resolution. The next evolutions in video viewing will more than likely include higher resolutions than today. One possible next step for the video industry is adopting very high resolutions such as Quad HDTV 2160p. 2160p has four times the resolution of today's 1080p (3840 x 2160 pixels vs. 1920 x 1080 pixels). Four times the resolution means that television makers can either make displays which are four times larger at the same pixels per inch, or pack twice as many pixels per inch into the same size (pixels per inch is a linear measure, so four times the pixel count doubles it). Going to higher resolutions has its problems, however, most notably bandwidth limitations and storage limitations on discs.

Holographic Storage Technology 
Recently, wide scale 1080p distribution was made possible by the advent of disc storage technology which could hold an entire feature length movie in high definition on just one disc. Quad HDTV 2160p could possibly be enabled by a similar advancement in storage technology, such as Holographic Versatile Discs (HVDs). Holographic Versatile Discs are an optical storage technology which records information not only on the surface of a disc, as Blu-ray does, but throughout the entire depth of the disc as well. Using the entire volume of a disc to record information allows for orders of magnitude more data storage than the current Blu-ray format. Dual-layer Blu-ray Discs have a capacity of 50GB, whereas a Holographic Versatile Disc such as the one Maxell is releasing in 2010 has up to 1.6TB of capacity.[32]

Stereoscopic 3D Video Playback 
Stereoscopic 3D video playback is a fast growing segment in video viewing. Stereoscopic
3D has also been referred to as stereo 3D, stereographic 3D, 3D stereo, NVIDIA® 3D
Vision™, NVIDIA 3D Vision Surround, or just 3D. Stereoscopic 3D video is video
playback where the illusion of a 3D image is created on a display when viewed with
specific glasses which are typically either polarized, or controlled with proprietary device
drivers and connected to a PC. Many movie theaters are already playing 3D movies and
the next step for the industry is to bring the 3D movie experience to the home theater.
Stereoscopic video enabled televisions are already being developed and manufactured by
many television companies and PC platforms are moving towards enabling stereoscopic
3D playback for personal PCs and home theater systems. 3D monitors are beginning to be embedded in laptops and all-in-ones, as well as appearing as standalone monitors for desktops.

Although the ability to create the illusion of 3D images has existed for quite a long time (the first still 3D photograph, or stereogram, was created in 1840[33]), it wasn't until the last decade that wide scale production of 3D Hollywood movies became possible, due to the extreme amount of compute power required to render a Hollywood movie in 3D. As a result of the increase in 3D movies being seen in movie theaters, home stereoscopic 3D video playback is gaining momentum. Intel® Core™ processors are poised to be able to provide the full spectrum of the home 3D experience, from creating and editing 3D video content to viewing it and sharing it with friends and family. Expect stereo 3D video technology to gain in popularity throughout 2010 and 2011.

[32] Source: http://www.maxellcanada.com/pdfs/c_media/optical_stor_tech.pdf, February 9, 2010
[33] Source: Welling, William. Photography in America, page 23

This section discussed only four new and current technologies which are pushing the boundaries of the video experience, but many more can be expected.

8. Summary 
Video playback is one of the most common forms of entertainment on a PC. Videos come
in many different formats and from many different channels of distribution, making PC
video playback capabilities very important to achieving the best possible experience.
Intel® Core™ 2010 Processors with Intel® HD Graphics contain Intel® Clear Video HD
Technology which enables smooth HD video playback, premium audio quality, and
implements many of the video post-processing technologies described in this document.
