by Philip De Lancie
As a distribution medium, DVD-Video offers the potential for quality that is far closer to the original than VHS. But to realize that potential, the look and feel that has been painstakingly created in production and post must be maintained through the DVD title preparation process. That process might include steps that are familiar, such as transfer from film to video (telecine), standards conversion (NTSC to PAL or vice versa), editing for home video release, or even shooting additional material for value-added “featurettes.” But it will also include the relatively new and often unfamiliar step of video compression, which is a critical determinant of the ultimate quality. Filmmakers armed with an understanding of compression and the factors that affect it will know what to expect when they see the results on their own work, and will be better positioned to influence the fidelity with which that work is translated to DVD.
Why Compression?
Video compression basically means using fewer bits to store and transmit digital video information. The data-rate of uncompressed “studio quality” digital video (ITU-R BT.601-5) is upwards of 100 Megabits per second, which vastly exceeds the speed at which a DVD player can retrieve video information from a disc (9.8 Mbps). Storing a two-hour program at this rate takes over 90 Gigabytes, while the storage capacity of DVD ranges from 4.7 to 17 GB. The answer to this dilemma is MPEG compression. DVD-Video supports both MPEG-1 (also used for Video CD in Asia) and MPEG-2. MPEG-2 is universally regarded as yielding higher image quality, and is the norm for most DVD-Video titles.
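The arithmetic behind these figures is easy to check. A quick sketch in Python (the frame dimensions and rates below are assumed NTSC-rate 4:2:2 values, chosen for illustration):

```python
# Back-of-the-envelope check of the numbers above, assuming 8-bit 4:2:2
# video with a 720x480 active picture at 30 frames per second.
bits_per_frame = 720 * 480 * 2 * 8        # ~2 bytes per pixel in 4:2:2
uncompressed_bps = bits_per_frame * 30    # ~166 Mbit/s of active video

two_hours_s = 2 * 60 * 60
storage_gb = uncompressed_bps * two_hours_s / 8 / 1e9   # ~149 GB

dvd_max_bps = 9.8e6          # DVD-Video ceiling for the video stream
compression_needed = uncompressed_bps / dvd_max_bps     # ~17:1 just to fit

print(f"{uncompressed_bps/1e6:.0f} Mbit/s, {storage_gb:.0f} GB for 2 hours, "
      f"needs at least {compression_needed:.0f}:1 compression")
```

Even this conservative reckoning (active picture only, no blanking) lands well above the article’s “upwards of 100 Mbps” and “over 90 Gigabytes,” which is the whole point: without compression, two hours of studio-quality video simply does not fit on a disc.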
One underlying assumption of MPEG-2 compression is that motion pictures contain lots of redundancy, both within each frame and between a series of consecutive frames. Another is that there is some information in each frame that may be discarded without noticeably affecting the way that picture is perceived when played back. MPEG-2 reduces the overall volume of data both by discarding such “un-needed” information and by storing redundant data more efficiently.
To realize these efficiencies, MPEG-2 first performs intra-frame compression, similar to the techniques used in still-image formats such as JPEG. Next comes inter-frame compression, in which a series of adjacent frames are compared, and only the information necessary to describe the differences between successive frames is retained. When the encoded material is played back, a decoder reconstructs a complete set of discrete frames from the stored information.
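The payoff of keeping only frame-to-frame differences can be seen in a toy experiment, using Python’s zlib as a crude stand-in for an entropy coder (MPEG does not use zlib; the “frames” here are synthetic):

```python
import random
import zlib

random.seed(0)
# Two toy 64x64 8-bit "frames"; frame2 differs from frame1 only where a
# small object has changed -- everything else is frame-to-frame redundancy.
frame1 = bytes(random.randrange(256) for _ in range(64 * 64))
f2 = bytearray(frame1)
for i in range(2000, 2100):               # ~100 changed pixels
    f2[i] = (f2[i] + 50) % 256
frame2 = bytes(f2)

# "Intra": compress the whole second frame on its own.
intra_size = len(zlib.compress(frame2))
# "Inter": compress only the difference from frame1 (mostly zeros).
residual = bytes((b - a) % 256 for a, b in zip(frame1, frame2))
inter_size = len(zlib.compress(residual))

print(intra_size, inter_size)   # the residual is dramatically smaller
```

The difference signal is almost all zeros, so it compresses to a small fraction of the size of the full frame, which is exactly the redundancy that inter-frame coding exploits.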
The result of the MPEG-2 encoding process is a video stream. The basic unit of the stream is a “Group of Pictures” (GOP), made up of three picture types: I, B, and P. I-pictures (intra) are compressed using intra-frame techniques only, meaning that the information stored is complete enough to decode the frame without reference to any adjacent frames. For P (predictive) and B (bi-directional) pictures, however, only “difference information” (frame-to-frame changes) is stored, which generates much less data. These pictures can only be reconstructed by referring to nearby anchor pictures: a P-picture refers back to the preceding I- or P-picture, while a B-picture can refer to anchors both before and after it. This is why the different picture types are grouped into GOPs.
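A typical NTSC DVD GOP runs 15 frames with an anchor every third frame; those exact values are a common convention rather than a requirement, and encoders vary. A small sketch of the display-order pattern:

```python
def gop_pattern(length=15, anchor_spacing=3):
    """Display-order picture types for one GOP: an I-picture first,
    then a P-picture at every anchor position, B-pictures in between.
    length=15, anchor_spacing=3 are common NTSC DVD choices."""
    types = []
    for i in range(length):
        if i == 0:
            types.append("I")
        elif i % anchor_spacing == 0:
            types.append("P")
        else:
            types.append("B")
    return "".join(types)

print(gop_pattern())   # IBBPBBPBBPBBPBB
```

Note that this is display order; in the actual bitstream the pictures are reordered so that each B-picture arrives after both of the anchors it depends on.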
VBR Encoding
Video that has been professionally encoded in the MPEG-2 format may be virtually indistinguishable from the uncompressed video source. The extent to which the original image quality is maintained depends mostly on two factors: the bit-rate used and the nature of the material.
Remember that MPEG-2 works by storing redundant information more efficiently. A relatively static scene (two people talking in front of a wall) will have far greater frame-to-frame redundancy than a fast-paced action scene (a high-speed car chase). With fewer redundant bits to discard, the action scene will require a higher bit-rate. Looked at another way, a low bit-rate will cause more compression artifacts (blockiness, for example) in the action scene than in the static scene. Variable bit-rate (VBR) encoding takes advantage of this by varying the bit-rate from scene to scene: bits saved on easy material are spent on demanding material, while the average rate stays within the disc’s budget.
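A toy sketch of VBR-style bit allocation makes the idea concrete (this is not a real rate-control algorithm; the average and ceiling figures are typical DVD-Video values, and the complexity scores are made up):

```python
def vbr_allocate(complexities, avg_bps=4.5e6, max_bps=9.8e6):
    """Give each scene a bit-rate proportional to its relative
    complexity, scaled so the overall average hits the target,
    clamped to the DVD-Video ceiling. Purely illustrative."""
    mean = sum(complexities) / len(complexities)
    return [min(max_bps, avg_bps * c / mean) for c in complexities]

# 0.5 = static dialogue scene, 2.0 = high-speed car chase
rates = vbr_allocate([0.5, 1.0, 2.0])
print([f"{r/1e6:.1f} Mbit/s" for r in rates])
```

The car chase ends up with roughly four times the bit-rate of the dialogue scene, yet the disc still averages the same overall rate it would have used for constant bit-rate encoding.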
If a program has been off-line edited on a non-linear system that uses compression, Deelo suggests that it be edited on-line to create the source master for the DVD release. At the least, the NLE output should be evaluated before the project is compressed. “Look for smooth gradations from dark to light,” Deelo says, “such as in the falloff from a light shining on a wall. You don’t want to see any banding. And watch for ‘mosquito-ing’ around titles—weird spider effects where the edges are not clearly defined.”
Even when an NLE system does not use compression, Deelo says outputting to D1 or Digital Betacam may still be preferable to “transcoding” directly from the edited NLE file to MPEG. That’s not due to any weakness in software-based encoding, which he says can yield results comparable to hardware encoding. Instead, Deelo is concerned about pre-processing, which is used to optimize the video signal before compression. “DVNR [digital video noise reduction] is not just grain removal,” he says. “There is also a series of filtering and enhancements we do, such as brick-wall filtering to take out high frequencies before they hit the encoder.” Deelo has not found software-based pre-processing that he feels gives as good results as hardware DVNR.
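The principle behind that pre-filtering can be illustrated, very crudely, with a moving-average low-pass filter; real DVNR hardware is far more sophisticated, so treat this only as a sketch of why removing high frequencies helps the encoder:

```python
def lowpass(samples, taps=5):
    """Moving-average low-pass over a 1-D line of pixel values.
    Softens fine detail the encoder would otherwise spend bits on."""
    half = taps // 2
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

# A harsh black/white alternating line (the highest possible frequency)...
line = [0, 255] * 8
smooth = lowpass(line)
# ...comes out much flatter: cheaper to encode, at some cost in sharpness.
```

The trade-off is exactly the one Deelo describes: frequencies the encoder would struggle to represent are removed up front, before they can turn into artifacts.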
Perhaps the most welcome observation that Deelo has to offer is that filmmakers can set aside some of the concerns they have acquired from seeing their work prepared for NTSC broadcast or VHS release. He says he has had cinematographers tell him that “they like bright, saturated colors, but they know that when they are in telecine they can’t use the colors they want, because they are making a master for television. But that’s not true for DVD. Go for the color, go for the sharpness; make the master look how you want it to look. If you like what you see coming out of telecine, we can preserve those qualities when we encode it for DVD.”