Contents
1. Introduction
2. Video Distribution Methods
3. What Enables Video Playback on a PC?
4. How is Video Displayed?
5. The Technology Behind High Quality Video Playback
6. The Auditory Experience
7. Future Technologies
8. Summary
1. Introduction
This document focuses on a PC’s ability to enhance and provide the best user experience
when viewing videos on a display. Many of the technologies presented in this document
can be used to describe different aspects of photograph viewing and editing; however, the
purpose of this document is to educate and be a reference for a wide audience on the
concepts of video playback and related technologies. This document will describe video
post-processing, display, and audio technologies as well as possible future directions of
video technology. This document should be used in conjunction with other material
available on competition.intel.com to provide a detailed resource enabling effective
conversations about PC video playback capabilities.
2. Video Distribution Methods
The primary difference in how a consumer views his or her videos is the delivery method
of the video content. Common delivery methods for video are physical storage mediums
(DVDs and Blu-ray discs), television (cable and satellite providers), online video, or
personal content a consumer created with a camcorder.
Both standard definition (SD) and high definition (HD) videos can be distributed on
physical storage mediums which are read and played by a Blu-ray player, HD-DVD
player, DVD player, or VCR. Videos distributed on physical storage mediums are
typically professionally created movies but can also be documentaries, home videos,
television series, or other types of video. Physical storage mediums are portable and can
generally be used on any television or PC with hardware capable of reading the storage
medium.
Online video is any video content which is downloaded or streamed from the Internet in
order to be viewed, generally (although not exclusively) on a PC. Streamed video can be
either live or pre-recorded, but in both cases, the video is ‘streamed’ from the Internet for
immediate viewing. Downloaded video content is pre-recorded and downloaded fully
onto a user’s PC to be viewed at a later time.
Personal video content is created by the end user. Examples of personal video content are
wedding videos, home movies, home videos, recording of sporting events, etc. Personal
video content is not typically recorded in a professional manner and video post-
processing plays a greater role in providing the best viewing experience possible.
With a PC, a user can enjoy video distributed in many ways: online video streaming and
downloading, physical storage mediums like Blu-ray or DVD movies, creating and
editing personal video content, viewing 3D movies with a 3D enabled display, and
watching live TV from a satellite or broadband cable connection by using a TV tuner
card.
3. What Enables Video Playback on a PC?
A PC’s video playback capability typically resides as a part of a PC’s graphics solution or
2D graphics engine. The 2D graphics engine should not be confused with the PC’s 3D
graphics capability, which is the ability to render and output objects to a display in real-
time for immediate viewing. There is a common misconception that 3D gaming and
graphics performance are tied to video playback performance; however, watching a video
does not require real-time rendering since it is playing back pre-recorded content.
Graphics solutions, in general terms, have two parts to them: the 3D graphics capability
and the 2D video capability. On most graphics solutions the 3D and 2D capabilities are
entirely separate as many of the 2D capabilities such as the video decode pipeline and
other video post-processing technologies are implemented as fixed function hardware.
Fixed function hardware is hardware which is designed specifically to perform one task.
In contrast, 3D graphics is very similar to the CPU in that it uses general purpose
hardware such as the shader pipelines in order to perform a wide range of tasks. Some
graphics solutions do use the shader pipelines and other 3D graphics components to run
common video post-processing procedures but because the 3D hardware is general
purpose, performance is likely to lag behind using specific fixed function hardware.
4. How is Video Displayed?
Standard definition is the most common resolution in which video is displayed. The
resolution of standard definition video is 480p (640 x 480 pixels) and is still the most
widely used viewing resolution for television. An example of a standard definition
display is a cathode-ray tube TV, popular before the introduction of LCD flat panel
displays.
High definition videos are displayed at any resolution higher than standard definition.
Current high definition displays come in resolutions of 720p, 1080i, or 1080p. Examples
of high definition displays are HDTVs and PC monitors that support resolutions above
640x480. Bandwidth requirements for transmitting high definition content scale with the
number of pixels being transferred over the connection.
For example, the bandwidth required to transfer 1080p video is approximately double
that of 720p video and more than four times the bandwidth required for standard
definition 480p video. Video broadcasting companies have standard definition and high
definition versions of many channels; however, full HD 1080p is not yet being used by
broadcast companies due to bandwidth limitations.
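The pixel-count arithmetic behind these bandwidth ratios can be checked directly. The short Python sketch below is illustrative only; raw pixel count is a proxy, since real broadcast bandwidth also depends on frame rate, bit depth, and compression:

```python
# Illustrative pixel-count comparison for the resolutions named above.
resolutions = {
    "480p": (640, 480),
    "720p": (1280, 720),
    "1080p": (1920, 1080),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["1080p"] / pixels["720p"])   # 2.25 -> roughly double 720p
print(pixels["1080p"] / pixels["480p"])   # 6.75 -> more than four times 480p
```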
Low resolution devices are a broad category of devices which have display resolutions
lower than standard definition 480p. SD and HD videos have to be scaled down in order
to be properly displayed on these low resolution devices. Examples of devices with low
screen resolution are Mobile Phones, PDAs, and Smartphones. Low resolution devices
are becoming one of the main avenues for video consumption as they allow the user to
view videos anywhere instead of being confined to the living room or home office.
In addition to the common display resolutions above, many videos also come in other
resolutions. In order to watch a video in its intended resolution, the proper display must
be used. For example, if the desired output and visual experience is a 1080p high
definition video, then an HD display that supports 1080p resolution must be used. If a
smaller display than 1080p is used then only a small portion of the video (equal to the
resolution of the smaller display) is seen. To address this, the original video must be
scaled down from the HD resolution to the resolution of the smaller display (resulting in
data loss). This is generally called downscaling. If a 720p HD video is shown on a
display with 1080p resolution then the video must be scaled up from the original 720p
resolution to the resolution of the larger display; upscaling cannot add detail beyond
what the original 720p video contains. This capability is generally called upscaling.
In summary, there are many different video display resolutions; however, video
processing can take care of displaying video on any size of display by upscaling, or
downscaling, the video to the proper resolution which the display supports.
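As a rough illustration of what downscaling and upscaling do, the Python sketch below implements nearest-neighbor sampling, the simplest possible scaler; real video hardware uses higher-quality filters such as bilinear or bicubic interpolation, so treat this purely as a demonstration of the data loss involved:

```python
def scale_nearest(frame, new_w, new_h):
    """Scale a frame (a list of rows of pixel values) to new_w x new_h using
    nearest-neighbor sampling, the simplest possible scaler."""
    old_h, old_w = len(frame), len(frame[0])
    return [
        [frame[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# Downscaling a 4x4 frame to 2x2 discards three of every four samples --
# the "data loss" mentioned above. Upscaling it back merely repeats the
# surviving samples; the discarded detail cannot be recovered.
frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
small = scale_nearest(frame, 2, 2)      # [[1, 3], [9, 11]]
big = scale_nearest(small, 4, 4)        # detail from `frame` is gone
```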
Aspect Ratio: The aspect ratio is important in a display because it describes the
format in which video is seen or the ratio at which the screen is drawn. Typical
aspect ratios are widescreen 16:9, or standard screen 4:3. A widescreen aspect
ratio of 16:9, for example, means that for every 16 units of image width there are
9 units of image height.
Gamut: The gamut of a display describes the total amount of colors the display is
able to produce affecting the quality of images presented on the display. Gamut
describes the range of colors that can be created by a three color Red, Green, and
Blue (RGB) display. For example, a 30-bit display with 10-bits of color
information each for Red, Green, and Blue can display 2^30 or approximately
1.073 billion unique colors.
Touch: Touch displays recognize the touch of human skin and have the ability to
register the exact location of the touch and map the location with specific
functions such as buttons where the touch acts much like a PC mouse. From a
compute resource perspective, a touch display is not much different than a mouse
since the touch display provides the XY coordinates instead of the mouse.
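The aspect ratio of any display mode can be found by reducing its width:height fraction. A small Python sketch (the function name is our own, for illustration):

```python
from fractions import Fraction

def aspect_ratio(width, height):
    """Reduce a display mode's width:height to its simplest ratio."""
    r = Fraction(width, height)
    return (r.numerator, r.denominator)

print(aspect_ratio(1920, 1080))  # (16, 9) -> widescreen
print(aspect_ratio(640, 480))    # (4, 3)  -> standard screen
```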
Display Inputs
The following is a list of commonly found display inputs and connection types used when
connecting a computing solution to a display in order to transmit video and audio data.
Intel® Clear Video HD technology currently supports all of the following technologies
with the exception of HDMI 1.4; however, that capability will be supported in the future.
This list is by no means a comprehensive list of all connection types that are capable of
transmitting video.
High Definition Multimedia Interface (HDMI): HDMI media transmission allows for
the transfer of high-definition video, eight channels of digital audio, and a Consumer
Electronics Control (CEC) connection which allows HDMI devices to control one
another when needed. HDMI can encrypt signals using HDCP in order to provide content
protection between the transmitting device and receiving device.1 HDMI is fast becoming
the preferred connection type for high definition media. In order for a company to license
HDMI, it must pay a royalty to HDMI Licensing, LLC.2
HDMI Version 1.3: HDMI version 1.3 was released June 22, 2006 and has a
total bandwidth of 10.2 Gbit/s and a maximum resolution of 2560x1600p75
(2560x1600 at 75Hz). (Note: Some High Speed HDMI 1.3 cables can
support all HDMI 1.4 features except for HDMI Ethernet.)
o HDMI 1.3 Deep Color: HDMI 1.3 supports 30-bit, 36-bit, and 48-bit Deep
Color in the following formats: xvYCC, sRGB, and YCbCr.
1. Source: http://en.wikipedia.org/wiki/HDMI, February 2, 2010
2. Source: http://www.hdmi.org/manufacturer/terms.aspx, February 2, 2010
o HDMI 1.3 3D: HDMI 1.3 currently only supports 3D stereoscopic playback
up to 1080i resolution.
HDMI Version 1.4: HDMI version 1.4 was released May 28, 2009 and supports
resolutions up to 4096x2160p24, a resolution used in digital theaters which
HDMI 1.4 brings to the home. In addition to HDMI 1.3 features, HDMI 1.4
has the following enhancements:
o HDMI 1.4 Ethernet: Supports 100 Mb/s Ethernet between HDMI devices.
o HDMI 1.4 Audio: HDMI 1.4 adds support for an Audio Return Channel to
the specifications of HDMI 1.3.
o HDMI 1.4 3D: HDMI 1.4 supports Full HD 3D playback in 1080p24 (1080p
at 24Hz) and in many other formats.
DVI: DVI is a digital video output connection which is used mostly with
personal computer displays. DVI is a video only format and contains no ability to
transfer audio. DVI is capable of using HDCP content protection.
There are many display types on which videos are viewed. Displays range from bulky,
heavy, legacy Cathode Ray Tubes to ultra-thin and lightweight LED-backlit LCD
displays:
Cathode Ray Tube (CRT): Invented in 1897, the CRT is the oldest display type
which can display media.8 CRTs were the standard viewing display for video until
relatively recently. CRTs are primarily being replaced by LCD displays.
Liquid Crystal Display (LCD): LCDs are the most commonly found types of
display today. LCDs are known to be lightweight and portable, and can be
constructed in larger sizes than is practical with a CRT.9
LED-backlit LCD: LED-backlit LCDs are currently also called LED TVs by
television makers such as Samsung and Panasonic. LED-backlit LCDs should not
be confused with true LED displays. LED-backlit LCDs are much thinner than
current Cold Cathode Fluorescent Lamp-backlit LCDs because the amount of
space required for LED backlighting is much smaller than CCFL. LED-backlit
LCDs also have lower power consumption, greater dynamic contrast, and a wider
color gamut than traditional CCFL-backlit LCD displays.
8. Source: http://en.wikipedia.org/wiki/Cathode_ray_tube#History, February 8, 2010
9. Source: http://en.wikipedia.org/wiki/Liquid_crystal_display, February 2, 2010
Protected Audio Video Path (PAVP): PAVP protects the data path within a PC
during playback by encrypting the compressed video data when it is sent to the
chipset and ensures that hardware decode acceleration is utilized on PCs that
support it. PAVP reduces processor utilization by off-loading decode functions
from the CPU to the chipset.
Other Display Technologies
These technologies may be located on the display itself or describe specific
aspects of a display that affect video quality:
Dual Simultaneous HDMI: Dual Simultaneous HDMI is the ability of a video
device to output two HDMI signals simultaneously to two different
displays or other devices.
Bit Color Depth: The Bit Color Depth is the number of bits which are used to
represent the color encoding of a single pixel in an image. The bit color depth is
related to the total number of displayable colors by Colors = 2^BCD.
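The Colors = 2^BCD relationship can be demonstrated with a one-line Python function (illustrative only), which also reproduces the ~1.073 billion colors of the 30-bit display mentioned earlier:

```python
def total_colors(bit_color_depth):
    """Colors = 2 ** BCD, the relationship stated above."""
    return 2 ** bit_color_depth

print(total_colors(24))  # 16777216 -- 24-bit "true color"
print(total_colors(30))  # 1073741824 -- ~1.073 billion colors for a 30-bit display
```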
Figure 6: Dithering10
5. The Technology Behind High Quality Video Playback
A number of key terms come up in any discussion on video capabilities on a PC platform.
It is important to understand basic definitions from color theory as well as basic
definitions relating to video post-processing in order to understand what technologies
such as Intel® Clear Video HD can actually do. This section will help with understanding
the concepts of video playback, the video decoding pipeline, and the technologies in the
video decoding pipeline on a graphics solution which affect a user’s video playback
experience.
Video Concepts
The following concepts relate to the way in which color is described in a video as well as
ways in which video can be changed or enhanced by video post processing.
Hue: Hue describes the overall color tone as the color closest to red, green, blue,
yellow, or any combination of two of the four. Figure 7 shows an image in
six different hue tones.
10. Source: http://en.wikipedia.org/wiki/Dither#Which_types_to_use, February 3, 2010
Figure 8: Chroma
Figure 11: Image with increased brightness on the right side versus left15
15. Source: http://www.borisfx.com/images/fx/brightness_contrast.jpg, January 26, 2010
The common algorithms used for compressing video files are called codecs. Codec
stands for COder-DECoder and describes a full algorithm for the encoding and
decoding of video and audio data. Video encoding is the process of taking the raw video
format and compressing it to a smaller size using a Codec. Video decoding is the
playback process where the compression is essentially reversed and the video file is sent
to the display.
In addition to Codecs, software programs sometimes use a wrapper around a codec such
as DivX which uses H.264 as its codec, but stores other file information in the wrapper.
The following video codecs and wrappers are typically used in video distribution:
DivX: DivX is a video codec software package which takes care of the encoding
and decoding of video and audio. The current version of DivX is based on the
H.264 codec whereas previous versions relied on older codecs such as MPEG-2.
H.264: H.264, also known as MPEG-4 Part 10, is a codec used in modern
video products such as Blu-ray Disc.
MVC: MVC is an extension to the H.264 video format which allows for the
transmission of stereoscopic 3D video.
Typically, a video container is used to pack together all of the files vital to video
playback, including subtitles, video streams, audio streams, and chapter information.
Examples of video containers are .mov and .avi. In order to play back videos, the files in
the container must be decoded before they can be displayed.
Video Compression and Decompression Techniques
The following algorithms are found in video codecs and are responsible for the
compression and decompression of video. For example, in the video encoding-decoding
pipeline, a Discrete Cosine Transform (DCT) is applied to the video on the encoding side
when the video is first stored onto a disc. To watch the video, the Inverse Discrete Cosine
Transform is applied to the video stream from the disc undoing the Discrete Cosine
Transform so that the video can be output to a display and viewed.
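The encode/decode round trip described above can be sketched with the textbook one-dimensional DCT pair (type-II forward, type-III inverse). Real codecs apply a two-dimensional block transform followed by quantization, which this illustration omits:

```python
import math

def dct(samples):
    """One-dimensional type-II DCT -- the transform applied at encode time."""
    n = len(samples)
    return [
        sum(samples[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
            for i in range(n))
        for k in range(n)
    ]

def idct(coeffs):
    """One-dimensional type-III DCT, which inverts dct() at playback time."""
    n = len(coeffs)
    return [
        coeffs[0] / n + (2 / n) * sum(
            coeffs[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
            for k in range(1, n))
        for i in range(n)
    ]

row = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel values
restored = idct(dct(row))                # matches `row` up to rounding error
```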
The Video Decoding Pipeline
The video decoding pipeline is the process which is applied to playback a video stored on
a disc, video being broadcast over cable, video streaming from the Internet, or any other
method of viewing videos. The video decoding pipeline takes the encoded video from all
of these sources and applies the correct decoding algorithm based on the video’s codec,
applies video post processing techniques, scales the video to the appropriate aspect ratio
and resolution, and outputs the video to the display. A high level block diagram of the
video decoding pipeline is shown below in Figure 14.
Video Decoding Technologies
The term video decoding can be used interchangeably with the term video playback.
Video decoding refers to the process of taking compressed video and uncompressing it to
its original form to be sent to a display and historically was always performed on the
CPU. Most graphics devices have adopted fixed function hardware in order to accelerate
specific video codecs to take the decoding burden off of the CPU. If a codec is not
specifically accelerated by the graphics, the CPU still decodes the video, but most
mainstream video codecs such as H.264, MPEG-2, and VC-1 are accelerated by both
integrated and discrete graphics solutions. Because video decoding occurs in real time,
poor video decoding hardware will cause the video to stutter, which has a direct negative
effect on the user experience.
There are a variety of video processing techniques that can be applied to a video to make
the video playback experience on a PC as good as possible. If a user starts watching a
video and experiences stutter or sees many jagged lines and blurring effects on the screen,
the user’s overall experience will be impacted dramatically. Video post-processing and
decoding technologies are meant to make the video appear as close as possible to the
original video when it is decoded from its compressed state and output to a display. This
section details technologies which are located in the video decoding pipeline which affect
the way in which a video is sent to a display, and post-processing technologies which
affect the quality of the video. These technologies are applied to video in addition to the
decoding algorithm. This section also represents a list of common technology names
which are seen in current specifications and marketing feature lists.
Video Post‐Processing Technologies
Video processing refers to the portion of the video decoding pipeline which takes the
decoded video stream, runs post-processing procedures on the video stream, and then
sends the resulting processed video stream to a display. One example benefit of video
post-processing is playing standard definition video on a high definition display. Many
digital standard definition videos contain extra noise as a result of their conversion from
analog to digital format, and when displayed on an HD display, that noise is especially
noticeable. When the video is sent through a noise reduction filter as part of the video
post-processing pipeline, the noise in the video is reduced and the resulting video is
clearer after the filter than before.
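As a minimal illustration of such a noise reduction filter, the Python sketch below applies a sliding-window median to one scanline of sample values; production post-processors use far more sophisticated spatial and temporal filters:

```python
def median_filter(samples, radius=1):
    """Sliding-window median: a classic noise reduction filter that removes
    isolated noisy samples while largely preserving edges."""
    out = []
    for i in range(len(samples)):
        window = sorted(samples[max(0, i - radius):i + radius + 1])
        out.append(window[len(window) // 2])
    return out

# One scanline with a single noisy spike; the filter removes it.
noisy = [10, 10, 10, 200, 10, 10, 10]
clean = median_filter(noisy)             # [10, 10, 10, 10, 10, 10, 10]
```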
The following technologies are found in the video post-processors present on almost
all modern integrated and discrete graphics solutions. These technologies affect the
perceived quality of video in various ways from enhancing or correcting the color of an
image to increasing the apparent sharpness of an image. All technologies in this section
are meant to give an overall better perceived video quality to the viewer of the video. The
compute requirements for these technologies are dependent on the specific hardware
implementations of each technology. A comparison of the capabilities of various
integrated and discrete graphics solutions will be published by the end of Q2’10 in a
video playback competitive positioning guide on competition.intel.com.
Likewise, other film-to-TV specific cadences emerge when dealing with film
content that has been edited, or filmed at various other frame rates such as those
used for documentaries and anime content. The number of cadences that could be
encountered is further multiplied when considering both NTSC (60Hz) as well as
PAL (50Hz) target television refresh rates. Playing back such an encoded stream
using typical de-interlacing methods misses an opportunity to achieve
significantly enhanced visual quality. By detecting the repetitive 3:2 cadence, it is
possible to recreate the original progressive frames. By working with the original
progressive content, artifacts are minimized.
In addition to the 3:2 pulldown cadence, other cadences are often introduced into
video streams as the result of editing, non-NTSC broadcasts, and numerous other
influences.20
Film Mode Detection and Cadence Detection: Film Mode Detection and
Cadence Detection are the same as Pulldown Detection.
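The 3:2 cadence and its reversal can be sketched in a few lines of Python. Here each field carries a frame identifier purely for illustration; real cadence detection has to infer the repeats by comparing field contents:

```python
def pulldown_32(frames):
    """Apply 3:2 pulldown: 4 progressive film frames become 10 interlaced
    fields ('t' = top field, 'b' = bottom field) in a 3-2-3-2 pattern."""
    fields = []
    for i, frame in enumerate(frames):
        pattern = ["t", "b", "t"] if i % 2 == 0 else ["b", "t"]
        fields.extend((frame, p) for p in pattern)
    return fields

def reverse_pulldown(fields):
    """Inverse-telecine sketch: once the cadence is detected, the repeated
    fields collapse back into the original progressive frames."""
    frames = []
    for frame_id, _ in fields:
        if not frames or frames[-1] != frame_id:
            frames.append(frame_id)
    return frames

film = ["A", "B", "C", "D"]          # 4 frames of 24 FPS film
fields = pulldown_32(film)           # 10 fields of 60 Hz interlaced video
assert reverse_pulldown(fields) == film
```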
20. Source: http://smcr.intel.com/SMCRDocs/WW3708_CSE_BoulderCreek_Montevina_Introduction_Rev1_0.ppt
Total Color Control: Total Color Control allows for adjustment of saturation
levels for six colors: Red, Green, Blue, Magenta, Yellow, and Cyan. Figure 18
shows an example of total color control.
Color Vibrance: Color Vibrance is the ability of hardware to even out the color
of more saturated and less saturated areas in an image by increasing the apparent
color of the less saturated areas. Figure 19 shows an image on the left and the
same image on the right with increased color vibrance. The increase of color
vibrance is most noticeable in the color of the dog’s fur.
Color Correction: Color Correction is the process in which hardware alters the
overall color of the light, or the color temperature. A higher color temperature
provides a bluish hue to an image whereas a lower color temperature gives off a
yellow or red hue.
Color Space Conversion: Color Space Conversion is the process in which one
color space is converted to another. A color space is a description of how all
colors can be created. For example, a PC monitor, which emits light, uses the
RGB color space because all of its colors can be created from a combination of
Red, Green, and Blue. In PC printing, however, a different color space is used:
CMYK (cyan, magenta, yellow, black) is the color space used by printers. Color
space conversion is the ability to convert one color space, such as RGB, to
another, like CMYK, so that different hardware can maintain the same
representation of an image.
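As a minimal example of color space conversion, the Python sketch below uses the common naive RGB-to-CMYK formula; production conversions also account for device gamuts and color profiles, which this illustration ignores:

```python
def rgb_to_cmyk(r, g, b):
    """Convert an RGB pixel (0-255 per channel) to CMYK (0.0-1.0 per
    channel) using the common naive formula."""
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)        # pure black: use only the K ink
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    k = min(c, m, y)                       # amount of black ink to pull out
    return tuple(round((x - k) / (1 - k), 3) for x in (c, m, y)) + (round(k, 3),)

print(rgb_to_cmyk(255, 0, 0))   # (0.0, 1.0, 1.0, 0.0) -- pure red
```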
Whites Processing (Blue Stretch): Blue stretch increases the blue component of
white and near white pixels in video while avoiding hard transitions and without
affecting flesh-tone colors.28
28. Source: http://www.faqs.org/patents/app/20090115906, February 8, 2010
Bad Edit Correction: Bad edits occur when a video is edited and the edits also
make changes to the pulldown cadence of the video. Bad edit correction is the
process by which the hardware detects the changes in the pulldown cadence and
recovers the original 24 FPS video so that the changed cadence doesn’t affect the
viewing experience.
6. The Auditory Experience
High quality audio that is well synced with the video playback is important in order to
enable the best user experience possible when watching video. In many cases, having a
crisp and clear video is not enough to fully immerse the user in content such as
Hollywood movies. A user who views online video, however, may have less need for
high quality audio than someone who is using their computing solution to drive a home
theater with surround sound speakers.
Audio Codecs and Formats
The following audio codecs and formats affect the quality of audio which can be
transferred from the PC to the monitor, television, or speaker system. Many modern PCs
are able to perform dynamic range control, which enables the hardware to limit the
range of audio volume in a video. Dynamic range control increases the volume of
quiet scenes in a video and decreases the volume of loud scenes so that the
entire video stays in the specified ‘Dynamic Range’ and certain scenes are not too quiet
or too loud.
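A toy illustration of dynamic range control in Python: samples below a quiet floor are boosted up to it and samples above a loud ceiling are attenuated down to it. Real implementations apply smooth compression curves rather than the hard clamps used here, and the threshold values are arbitrary:

```python
def dynamic_range_control(samples, floor=0.2, ceiling=0.8):
    """Clamp each sample's magnitude into [floor, ceiling] while keeping
    its sign, so the whole track stays inside the target range."""
    return [min(max(abs(s), floor), ceiling) * (1 if s >= 0 else -1)
            for s in samples]

loud_and_quiet = [0.05, 0.5, 0.95, -0.95, -0.05]
print(dynamic_range_control(loud_and_quiet))
# [0.2, 0.5, 0.8, -0.8, -0.2]
```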
Blu-ray: Blu-ray is a standard format for high-definition video playback. The
following audio formats are allowed in Blu-ray audio playback:31
o Dolby Digital / AC-3: Dolby Digital is the most commonly found audio
format used in DVDs and is the base standard for Blu-ray.
o PCM: PCM is a complete copy of the studio master audio track and, when
used on video discs, is stored in uncompressed form. PCM has the best
sound quality of any format due to its lack of compression; however, it takes a
significantly larger amount of space to store.
Other Formats: Two other prominent audio formats used in audio encoding are
MP3 used in devices such as MP3 players and AAC which is used on devices
such as the iPhone, iPod, and PlayStation3.
31. Source: http://www.highdefdigest.com/news/show/1064, February 3, 2010
o AAC: AAC is a lossy compression codec for digital audio. AAC is seen as the
successor to MP3 as the standard audio encoding format. AAC is part of both
the MPEG-2 and MPEG-4 specifications.
Audio Connections
Audio is distributed along with video as either part of the same connection or part of a
separate connection. Two connections used in modern video playback are HDMI and
TOSLINK. These two connections do not represent the entire range of audio connections.
Other connections such as coax cables are used in audio data transfer but are not typically
used in high definition audio data transfer as part of the video playback experience.
HDMI: Along with carrying video data, HDMI connections are also capable of
carrying Dolby TrueHD and DTS-HD Master Audio. HDMI is one of the few
connection types that allow for the transfer of both audio and video data. VGA
and DVI connections, both useful for transferring video data, are not able to
transfer audio data.
7. Future Technologies
The video industry is continuously evolving with new technologies enhancing the end-
user experience. This section explores several new technologies in the immediate future
for video: wireless display (Intel® Wireless Display), higher resolution for video
playback (Quad HDTV), new storage mediums for video (Holographic Versatile Discs),
and new ways to play more realistic video (Stereoscopic 3D).
Intel® Wireless Display
Intel® Wireless Display is already in production and provides an easy way to wirelessly
link devices such as laptops and televisions in order to play back video. Intel Wireless
Display allows video content to be streamed seamlessly to a large screen television at the
same time as it plays on a laptop. Intel Wireless Display also has possible uses in
business applications, where a projector can be wirelessly linked with a laptop for easier
usage than before. Intel Wireless Display is just the start of new ways to deliver video to
the user. For more information on Intel Wireless Display, see the Intel Wireless Display
Product Brief.
Holographic Storage Technology
Recently, wide scale 1080p distribution was made possible with the advent of disc
storage technology which could hold an entire feature length movie in high definition on
just one disc. Quad HDTV 2160p could possibly be enabled with the same advancement
in storage technology with technologies such as Holographic Versatile Discs.
Holographic Versatile Discs are an optical storage technology which takes advantage of
the ability to not only record information on the surface of a disc like Blu-ray but HVDs
can record information in the entire depth of the disc as well. Using the entire volume of
a disc to record information allows for orders of magnitude more data storage than the
current Blu-ray format. Dual-Layer Blu-ray Discs have a capacity of 50GB whereas a
Holographic Versatile Disc such as the one which Maxell is releasing in 2010 has up to
1.6TB of capacity.32
Stereoscopic 3D Video Playback
Stereoscopic 3D video playback is a fast growing segment in video viewing. Stereoscopic
3D has also been referred to as stereo 3D, stereographic 3D, 3D stereo, NVIDIA® 3D
Vision™, NVIDIA 3D Vision Surround, or just 3D. Stereoscopic 3D video is video
playback where the illusion of a 3D image is created on a display when viewed with
specific glasses which are typically either polarized, or controlled with proprietary device
drivers and connected to a PC. Many movie theaters are already playing 3D movies and
the next step for the industry is to bring the 3D movie experience to the home theater.
Stereoscopic video enabled televisions are already being developed and manufactured by
many television companies and PC platforms are moving towards enabling stereoscopic
3D playback for personal PCs and home theater systems. 3D monitors are beginning to
be embedded in laptops, all-in-ones, as well as appearing as standalone monitors for
desktops.
Although the ability to create the illusion of 3D images has been around for quite a long
time (the first still 3D photograph, or stereogram, was created in 184033), it wasn’t until
the last decade that wide scale production of 3D Hollywood movies became possible, due
to the extreme amount of compute power required to render a Hollywood movie in 3D.
As a result of the increase in 3D movies being seen in movie
32. Source: http://www.maxellcanada.com/pdfs/c_media/optical_stor_tech.pdf, February 9, 2010
33. Source: Welling, William. Photography in America, page 23
This section discussed only four new and current technologies or advances which are
pushing the boundaries of the current video experience, but it can be expected that there
might be many more.
8. Summary
Video playback is one of the most common forms of entertainment on a PC. Videos come
in many different formats and from many different channels of distribution, making PC
video playback capabilities very important to achieving the best possible experience.
Intel® Core™ 2010 Processors with Intel® HD Graphics contain Intel® Clear Video HD
Technology, which enables smooth HD video playback and premium audio quality and
implements many of the video post-processing technologies described in this document.