
A Guide to Understanding Today's Digital Video


Brian A. Rutz
JVC Professional Products Company, West Coast Region

Why are we doing this?

From the beginning, our color television standard, NTSC, was a compromise. It was
necessary to add color to the existing black and white television signal to maintain
compatibility with the thousands of black and white television sets already in use.
The creators of NTSC had no idea what was to become of this fledgling industry. In the
mid 1950s, video recording became a practical reality with the introduction of the Ampex
two-inch videotape machine. Though at first only capable of black and white recording,
the door had been opened to the huge business of video acquisition and post-production as we know it today.
Analog video by its very nature compromises picture quality. Processes such as editing,
tape to tape color correction and duplication can quickly degrade picture integrity.
Manufacturers of video equipment have for years been trying to improve overall
performance but the limitations of analog recording are legion.
Surely, being able to record the signal in the digital domain holds the promise of much
better performance. But is digital in itself a panacea?
Getting it to Digital
We must realize we live in an analog world. Light is analog, as is sound. Even television,
as we know it, is analog. We want to record our video in the digital domain in order to
realize the apparent advantages of a digital signal.
We must bridge the gap between the analog and digital worlds. To do this we must first
digitize the signal.
The operation is fairly simple:
1. Capture the original picture. This may be a live camera shot or something
already recorded on, for example, an analog videotape.
2. Sample the input signal. This simulates the analog signal in the digital domain.
3. Quantize the signal. This gives each sample a numeric value.
4. Compress the signal. The overall amount of data is reduced to a more
reasonable size.
5. Record the signal. Once digitized, the signal may be recorded on a
videocassette or a computer's hard disk drive (HDD).
Though the process for recording an analog signal in the digital domain is fairly simple, all
digital is not created equal.
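The five steps above can be sketched in miniature. The following Python fragment is purely illustrative — the simulated signal, the 720-sample line length and the `digitize` function are stand-ins of ours, not part of any format specification:

```python
import math

# Toy end-to-end sketch of the digitizing steps; all numbers illustrative.
# (Step 4, compression, is covered later in this guide and is skipped here.)

def digitize(analog, levels=256):
    """Quantize readings in [0.0, 1.0] to one of `levels` integer codes."""
    return [min(levels - 1, int(v * levels)) for v in analog]

# Steps 1-2: pretend we captured and sampled one line of video as floats.
line = [0.5 + 0.5 * math.sin(2 * math.pi * x / 720) for x in range(720)]

codes = digitize(line)        # step 3: quantize (8 bits -> codes 0..255)
recorded = bytes(codes)       # step 5: store the result as raw data

print(len(recorded))          # 720 samples -> 720 bytes for the line
```

Real equipment performs these stages in dedicated hardware, of course; the point here is only the order of operations.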

Understanding Today's Digital Video

Page 1

It is important to understand, just because it's digital, it doesn't mean it's perfect. As in
analog recording, problems arise which affect the quality of the recorded image. Digital
recordings can show artifacts in areas of motion and subtle colors. Passing the signal
through successive, differing stages of compression, known as concatenating compression,
can further degrade the picture. This may be seen, for example, when a signal recorded with a high amount of
compression is decompressed then re-compressed for transmission, for transfer to a CD-ROM or DVD, or for downloading into a non-linear editing system.
Bridging the Technologies
The first step in the digitizing process is sampling. The trick is to sample the signal
rapidly to produce the illusion of remaining in the analog domain. Remember, we live in
an analog world with varying light levels and colors.
So, just how many samples are required to maintain this illusion? Clearly, too few samples
will result in major artifacts when we try to recover our original picture. On the other
hand, too many samples will result in a huge amount of data, which would be, at best, very
difficult to handle.
When we try to determine the number of samples we require we must look at the
components of the video signal. The most important component is the luminance as it
gives us all the detail absolutely necessary in the picture. As a result, we must sample
luminance at a very high rate, 13.5 Megahertz (million times per second).
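As a quick sanity check on that 13.5MHz figure, dividing it by the NTSC line frequency shows it was chosen to yield a whole number of samples per scanning line. This arithmetic is just a check on the standard, not something the reader needs to perform:

```python
# 13.5 MHz divided by the NTSC line frequency gives an exact whole
# number of luminance samples per total line (active plus blanking).
luma_rate_hz = 13.5e6
line_freq_hz = 525 * 30000 / 1001      # 525 lines at ~29.97 frames/sec

samples_per_line = luma_rate_hz / line_freq_hz
print(round(samples_per_line))         # 858 total; 720 of them are active
```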
Color errors are less recognizable so we can get away with fewer samples, or so it would
seem. But life is not that simple!
Sampling Standards
Today's component digital technology provides us with two methods of sampling and
recording a digital signal. We have a choice between 4:2:2 and 4:1:1 and we hear these
numbers quite often. But just what exactly do they mean?
The Numbers
Quite simply, they refer to the ratio of the number of luminance (Y) samples to the
samples of each of the two color difference signals used in component digital recording.
(While luminance is always referred to as Y, the color components may be referred to as
R-Y, B-Y or Cr, Cb or sometimes U, V.)

[Figure: Component Digitization - luminance, R-Y (Cr) and B-Y (Cb) samples]

These are two technologies for the digital era and each has its suitable application. It is,
however, important to realize there are differences between them and one must be aware
of trying to do too much with too little.



Earlier, we mentioned the luminance signal is sampled at 13.5MHz. Therefore, using the
noted ratios, in a 4:1:1 component digital sample the color information must be sampled
at 3.375MHz, or one quarter the number of luminance samples, for each of the color difference signals.
Conversely, a 4:2:2 sampling ratio results in a color sampling frequency of 6.75MHz or
half that of the luminance sample.
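In other words, the middle and last digits of the ratio simply scale the 13.5MHz luminance clock. A one-line sketch (illustrative only):

```python
# Chroma sampling frequency implied by each ratio; luminance ("4") = 13.5 MHz.
luma_hz = 13.5e6

def chroma_hz(ratio):                 # ratio: 2 for 4:2:2, 1 for 4:1:1
    return luma_hz * ratio / 4

print(chroma_hz(2) / 1e6)             # 6.75  MHz per color-difference signal
print(chroma_hz(1) / 1e6)             # 3.375 MHz per color-difference signal
```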
Perhaps a graphic will help you understand this process.
Figure 1 shows the upper left-hand corner of a television picture. We see there are 720
picture elements or pixels per line and there are 483 active lines in our TV standard.
In the first pixel we sample the luminance, Cr (R-Y) and Cb (B-Y). In the second pixel we
sample only the luminance. In the third, we once again sample luminance and the two
color components and in the fourth pixel we sample only the luminance. As you can see,
there are four luminance samples for every two Cr and Cb samples. This is a ratio of 4:2:2.
Now look at 4:1:1.
Figure 2 shows graphically the sampling structure of a 4:1:1 component digital signal.
Once again, there are 720 pixels per line and 483 lines.
In the first pixel we sample luminance (Y) and the color components Cr and Cb. In the
next three pixels we sample only the luminance; four Y samples for each Cr and Cb
sample, or 4:1:1.
As can be readily seen, there is a significant reduction in color information or detail as
compared to a 4:2:2 component digital sample. It is important to understand this is
information which has been discarded and therefore cannot be recovered later.
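The two pixel patterns described above can be generated programmatically. In this illustrative sketch, "YC" marks a pixel carrying luminance plus both color-difference samples, and "Y-" a luminance-only pixel:

```python
# One video line's sample pattern for a given chroma spacing:
# chroma_every=2 gives 4:2:2, chroma_every=4 gives 4:1:1.

def pattern(chroma_every, pixels=8):
    return " ".join("YC" if x % chroma_every == 0 else "Y-"
                    for x in range(pixels))

print("4:2:2 ", pattern(2))   # YC Y- YC Y- YC Y- YC Y-
print("4:1:1 ", pattern(4))   # YC Y- Y- Y- YC Y- Y- Y-
```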


So what does this mean?

Quite simply, the color depth of a 4:2:2 component digital signal is twice that of a 4:1:1
signal and, from the standpoint of color bandwidth, is twice that of today's popular
component analog formats. This means better color performance, particularly in areas
such as special effects, chroma keying, alpha keying (transparencies) and computer
generated graphics.
As can be seen in Figure 3, the greater the number of samples, the more closely the
resultant information resembles the original video waveform. When we "connect the
dots" we can easily determine which of the two waveforms is most desirable.

4:2:2 component digital video is one thing to which all equipment manufacturers can and
do agree. The bottom line is 4:2:2 has clearly superior signal integrity. It has more robust
chroma performance and is fully compatible with the higher end digital domain. It is
worth noting the ITU-R BT.601-4 Standard defining 4:4:4 and 4:2:2 component digital
recording has been with us for many years and most high quality digital devices from
character generators to non-linear editing (NLE) systems are built upon this standard.
It is also important to remember that once a signal is sampled at 4:1:1 it can never become
4:2:2 even when dubbed to a 4:2:2 digital videotape format or dumped to an NLE.
The next step in the digitizing process is quantizing. In digital video we do not record
video as we do in the analog world, but rather a series of numbers which give us a
reference as to what the initial analog video signal was. As you can imagine, this is critical
if we are to fully recover our original picture.
In the Digital S format, JVC utilizes 8-bit processing which assigns one of 256 (2 to the 8th
power) levels to the signal value of each sample. When we look at a video signal
displayed on a waveform monitor we can see the signal at any moment in time and can
determine the voltage of the signal very precisely. Quantizing does much the same thing
except, instead of assigning a voltage value, a numeric value is given.
The quantizing number could be considered as being somewhat analogous to a map
reference. When you know the reference number or coordinates of the place you are
trying to locate, the job of finding it is much easier.
The next figure (fig.4) shows the approximate relationship between the quantizing value
and the waveform display.
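To make the map-reference analogy concrete, here is a minimal 8-bit quantizer. The 0.0 to 0.7 volt range is assumed purely for illustration (it approximates the active portion of an analog video signal); note that the 601 standard actually reserves some codes at the extremes, which this sketch ignores:

```python
# 8-bit quantizing: assign each sampled voltage one of 256 integer codes.
# The 0.0-0.7 V range is an illustrative assumption, not a specification.

def quantize(volts, v_min=0.0, v_max=0.7, bits=8):
    levels = 2 ** bits                       # 256 levels for 8-bit
    step = (v_max - v_min) / levels          # volts per code
    code = int((volts - v_min) / step)
    return max(0, min(levels - 1, code))     # clamp into 0..255

print(quantize(0.0))    # 0   (black)
print(quantize(0.35))   # 128 (mid-gray)
print(quantize(0.7))    # 255 (peak, clamped to the top code)
```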

Dealing with Data

We have now sampled our original analog signal and have quantized it to give it a numeric
value. We now have a digital signal in the form of data, and what a huge amount of data it is!
To determine the amount of data in a 4:2:2 component digital signal the calculation is as follows:
Y component: 720 pixels/line X 482 lines/frame X 30 frames/sec. X 8 bits/pixel = 83.0Mbs

Cr component: 360 pixels/line X 482 lines/frame X 30 frames/sec. X 8 bits/pixel = 41.5Mbs

Cb component: 360 pixels/line X 482 lines/frame X 30 frames/sec. X 8 bits/pixel = 41.5Mbs

Total 4:2:2 bit rate = 166Mbs

To determine the amount of data in a 4:1:1 component digital signal substitute 20.75 Mbs
(half as many samples) for the chroma components of the formula and the result is:
Total 4:1:1 bit rate = 124.5Mbs
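The arithmetic above can be checked in a few lines. The totals here come out slightly higher than the rounded figures in the text because each component is kept at full precision:

```python
# Reproducing the bit-rate arithmetic above (8-bit samples, 482 lines,
# 30 frames/sec, exactly as in the figures quoted in the text).

def component_mbps(pixels_per_line, lines=482, fps=30, bits=8):
    return pixels_per_line * lines * fps * bits / 1e6

y  = component_mbps(720)      # luminance: 720 samples per line
cr = component_mbps(360)      # each chroma component at 4:2:2
cb = component_mbps(360)

print(round(y + cr + cb, 1))            # 4:2:2 total, about 166 Mb/s
print(round(y + cr / 2 + cb / 2, 1))    # 4:1:1 halves the chroma again
```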


Dealing with this amount of data can be problematic when it comes to storing it, whether it
be on videotape or a hard disk. It would seem, therefore, it would make sense to reduce
the overall size of the data to make it easier to handle. Once again, life is not quite that simple.
We must consider carefully the amount and the method of compression used. Higher
levels of compression will adversely affect picture performance. A number of years ago it
was determined 4:1 was the dividing line between visually "lossless" (less than 4:1) and "lossy"
(more than 4:1) compression. This is a subjective analysis.
The following chart serves to demonstrate various degrees of compression, the associated
data rate and the amount of data stored on hard disk drives of differing sizes.
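The arithmetic behind such a chart is simple: recording time is drive capacity divided by the compressed data rate. A sketch, in which the 9GB drive size and decimal units are illustrative assumptions:

```python
# Recording time = drive capacity / compressed data rate.
# Decimal units are assumed throughout (1 GB = 8000 megabits).

def minutes_per_drive(drive_gb, compression_ratio, raw_mbps=166):
    rate_mbps = raw_mbps / compression_ratio   # off-disk data rate
    seconds = drive_gb * 8e3 / rate_mbps       # capacity in Mb / (Mb/s)
    return seconds / 60

print(round(minutes_per_drive(9, 3.3), 1))    # 9 GB at 3.3:1 -> about 24 min
print(round(minutes_per_drive(9, 10.0), 1))   # harsher 10:1 stores far more
```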

There are a number of different types of compression in use today. Some of the more
commonly used are:
DCT - Discrete Cosine Transform, used in the Digital S, Digital Betacam and the DV
family of formats.
JPEG - Joint Photographic Experts Group, developed for still frame transmission such
as news-wire photos.
M-JPEG - a motion oriented variant of JPEG used in most non-linear editing systems.
MPEG - an inter-frame compression type, meaning only certain frames are fully
compressed while the intervening frames are predicted as to their content. MPEG-1 is used
primarily in CD-ROM systems.
MPEG-2 is also an inter-frame compression method used for distribution, for example
in satellite transmission (up-link and down-link), digital television broadcasting and
DVD. Inter-frame compression makes frame accurate editing very difficult.
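The editing difficulty comes from the group-of-pictures (GOP) structure: only the occasional I frame is coded on its own, so a clean edit can begin only there. A toy sketch — the 15-frame GOP shown is a common pattern, not a requirement of the standard:

```python
# Inter-frame (GOP) sketch: I frames are coded independently; P and B
# frames are predicted from neighbors, so cutting mid-GOP is hard.

gop = "IBBPBBPBBPBBPBB"          # a typical 15-frame GOP (illustrative)

def editable_cut_points(sequence):
    """Frame indexes where a clean cut can begin: the I frames."""
    return [i for i, f in enumerate(sequence) if f == "I"]

print(editable_cut_points(gop + gop))   # [0, 15] - one entry point per GOP
```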

Contribution Quality
As noted above, the higher the compression rate the poorer the picture performance. In
his book Your Essential Guide to Digital, one of the Snell and Wilcox Handbook
Series, author John Watkinson writes the following:
If post-production is going to be done, then a contribution
quality compression will be needed, allowing a comfortable
performance margin. Around 30 to 40 Mbs (minimum) delivers
contribution quality.
This means functions such as editing, the addition of special effects or titles and color
correction require an off-tape or off-disk data rate better than 40Mbs. In general terms,
the higher the data rate and the lower the compression, the better the quality.
We have charted some of the current digital videotape formats and their respective data rates.
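As a rough sketch of how such a chart is derived, the off-tape rate is just the uncompressed rate for the format's sampling structure divided by its fixed compression ratio. The base rates and ratios below are the figures quoted elsewhere in this Guide; real formats round them in format-specific ways:

```python
# Off-tape data rate = uncompressed rate / fixed compression ratio.
# 40 Mb/s is the contribution-quality floor Watkinson cites above.

def off_tape_mbps(base_mbps, ratio):
    return base_mbps / ratio

formats = {                        # name: (uncompressed Mb/s, ratio)
    "Digital Betacam": (166.0, 2.0),
    "Digital S (D-9)": (166.0, 3.3),
    "DV":              (124.5, 5.0),   # 4:1:1 sampling base rate
}

for name, (base, ratio) in formats.items():
    rate = off_tape_mbps(base, ratio)
    grade = "contribution" if rate >= 40 else "distribution"
    print(f"{name}: {rate:.0f} Mb/s ({grade})")
```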

The issue has been further defined by the SMPTE, which describes contribution quality as
4:2:2 sampled video at a minimum data rate of 50Mbs. This is the required input signal
for MPEG-2 transmission (SDTV) and DVD authoring.
So what have we learned?
Digital videotape formats such as DV and its industrial variants, utilizing 4:1:1 sampling and a
high rate of compression, are better suited to image acquisition, but with careful attention
to those picture elements which may cause artifacts and picture quality problems. These
include small moving items such as leaves, scenes of high detail such as crowds of people,
rapid subject or camera motion and the faithful reproduction of computer generated
graphics. The DV formats are primarily limited to simple image mixing and low multi-generation signal paths.

On the other hand, 4:2:2 studio quality component digital formats are ideally suited for
all applications from industrial teleproduction through to programs for television
broadcast. High quality image manipulation such as off-tape chroma keying, alpha keying,
computer graphics and animation are easily handled. Multi-layer imaging in excess of 20
layers is possible and the 4:2:2 formats are ideal for archiving and backing-up digitized
video from high quality non-linear editing systems.
Digital Video Production, Transmission and Multimedia
The obvious purpose of field acquisition is to gather on videotape the raw material needed
to finish or post-produce a video production. This is accomplished through traditional
linear editing, non-linear editing using computer hard disk drives (HDD), or a combination
of both methods.
In the analog video world we are constantly aware of the limitations of our format
of choice and utilize it to its fullest extent. One-inch videotape with its direct color
recording method offered great multi-generation performance but was not nearly as
portable as the component analog video formats widely used in production today. These
latter formats in turn offered some advantages over S-VHS and the other professional
formats. But each and every one has its application or applications in production.
The same is true in the digital domain.
Earlier in this Guide, the sampling structure for both 4:1:1 and 4:2:2 component digital
video was explained. We learned as well about the ITU-R BT.601-4 serial digital video
standard, commonly referred to as CCIR 601 or 601, which is, by definition,
uncompressed. This structure is shown in Figure 1.
Linear Post-Production
Conventional post-production devices designed for digital video are built around the 601
standard. These include character generators, digital video effects (DVE) systems, color
correctors and switchers. With the advent of lower cost digital videotape formats, more
and more of these devices are coming on the market.
Each of these products requires an input signal compliant with the 601 digital video
standard, which defines 4:2:2 (and 4:4:4) sampling only. The SDI (Serial Digital
Interface) available on the D1, D5, Digital Betacam and Digital S 4:2:2 component digital
production formats is fully compliant with this standard. What then of the 4:1:1 formats?
Can these not be used with these higher-end digital video production devices?
There are half as many color samples in 4:1:1 sampled video (Fig. 2) as compared to the
4:2:2 sampling structure (Fig.1). Clearly, the signal as it is recorded is not compliant with
the 601 standard. This in itself does not preclude the use of 4:1:1 formats with serial
digital peripherals.

The manufacturers of the professional products based on the DV (consumer) format offer
an optional SDI, serial digital interface, which allows these formats to be used with digital
video equipment. The 4:1:1 sampling structure as recorded must be altered to make it
appear as a 4:2:2 (601) signal and it must be uncompressed.
It is therefore necessary to interpolate this missing information and create new color data
based on the original two color samples. Figure 7 demonstrates the structure of the
reconstituted 4:1:1 digital video at the output of the SDI.
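A minimal sketch of that interpolation for one line of chroma samples is shown below. Linear averaging is used here only for illustration; an actual SDI option board may use a different filter:

```python
# Interpolating missing 4:1:1 chroma samples up to a 4:2:2 grid.
# Simple midpoint averaging; real hardware may filter differently.

def chroma_411_to_422(samples_411):
    """One line of Cr (or Cb) samples at 4:1:1 spacing -> 4:2:2 spacing."""
    out = []
    for i, c in enumerate(samples_411):
        out.append(c)                                    # original sample
        nxt = samples_411[i + 1] if i + 1 < len(samples_411) else c
        out.append((c + nxt) / 2)                        # interpolated value
    return out

print(chroma_411_to_422([100, 120]))   # [100, 110.0, 120, 120.0]
```

The interpolated values are plausible but, as the text stresses, they are estimates — the discarded original color detail cannot be recovered.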
As the post-production process continues, each time this interpolated 4:2:2 signal is
recorded, it must be re-sampled to 4:1:1 and re-compressed at a ratio of 5:1, the format
standard. If it is again necessary to record this re-sampled video, it will require the signal
be uncompressed and converted to another interpolated signal, processed through the
system (switcher, DVE, etc.), re-sampled to 4:1:1 and compressed once again. This can
cause problems with the integrity of the original video, most especially computer graphics
and animation.
Non-linear Post-Production
Today's non-linear editing systems are based upon the CCIR-601 serial digital video
standard. This of course means any video must be sampled at 4:2:2 regardless of the
signal source. Digital video must be uncompressed at the input of the NLE, assuming the
NLE has a serial digital interface (SDI) which, by definition, must meet the 601 standard.
In a non-linear editing system, once the video is in the NLE there may be little or no
degradation to quality with the exception of the amount of compression applied to the
video, the type of compression used, and any required rendering of effects. Higher
compression means poorer quality.
The SDI output from the NLE will be an uncompressed digital signal conforming to the
601 standard. The sampling structure will be the same as that shown in Figure 7 if 4:1:1
sampled digital video was the original source material.
A computer interface standard, IEEE-1394, has been developed by Apple Computer and
has found an application in video. Some DV-based equipment has the option of using this
standard as a way of getting compressed 4:1:1 sampled video in the form of a data stream,
into and out of an NLE system equipped with an IEEE-1394 (often referred to as FireWire
or i.LINK) interface. This reduces the potential for signal degradation. Once incorporated
into the data stream the signal cannot be modified (video levels, audio levels, etc.) during
the transfer process. A clone is made of the data in a way similar to copying from one
computer disk to another.
The non-linear editing system must convert the data from the IEEE-1394 interface into its
own native digital video format in order to perform effects, add titles, etc. This is done to
the 601 digital standard so re-sampling as demonstrated in Figure 7 may be necessary.
Transmission and Multimedia
Television broadcasters are now making widespread use of digital video to route
programming material from place to place via satellite. This is done using MPEG-2, a
distribution standard developed to improve upon the quality of MPEG-1 used extensively
in multimedia production of CD-ROMs.
MPEG-2 is a set of tools developers
may use in a number of different ways,
based upon certain profiles at specific
levels. For example, the MPEG-2
standard for transmission and
multimedia (DVD) is defined as Main
Profile at Main Level and is expressed as
MPEG-2 MP@ML. This denotes a
4:2:0 sampling structure (MP) based
upon the 601 serial digital video
standard. This is demonstrated in Figure
8. The Betacam SX format uses
MPEG-2 4:2:2P@ML (4:2:2Profile at
Main Level), another example of the
tools within the MPEG-2 structure.
When compared to the CCIR 601 4:2:2 reference master (Figure 1), it can be
easily seen how the MPEG-2 sampling structure is closely aligned with this standard. The
experts who developed MPEG-2 felt the color information required for television program
transmission could be accommodated with just half the vertical color resolution of 601.
The color information (R-Y, B-Y) sampled from one pixel is shared among four luminance pixels.
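The difference between the sampling structures can be reduced to a single fraction: how many Cr/Cb sample pairs exist per luminance sample. A short sketch (illustrative only):

```python
# Chroma sample pairs per luminance sample, given the horizontal and
# vertical subsampling divisors of each structure.

def chroma_fraction(h_div, v_div):
    return 1 / (h_div * v_div)

print(chroma_fraction(2, 1))   # 0.5  -> 4:2:2 (halved horizontally)
print(chroma_fraction(4, 1))   # 0.25 -> 4:1:1 (quartered horizontally)
print(chroma_fraction(2, 2))   # 0.25 -> 4:2:0 (halved both ways)
```

Note that 4:1:1 and 4:2:0 carry the same total amount of color information; they simply shape the loss differently, horizontally versus both horizontally and vertically.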
MPEG-2 employs a compression method different from DV, Digital Betacam and Digital
S videotape formats. While compression on the tape formats is fixed at a specific rate,
(10:1 for Betacam SX, 5:1 for DV, 3.3:1 for Digital S, and 2:1 for Digital Betacam),
MPEG-2 allows for continuously variable compression making it a very economical
method of transmission. Because MPEG-2 is very destructive to the original video, a
minimum off-tape data rate of 50Mbs and a 4:2:2 sampling structure is required for
inputting to the MPEG-2 process.
MPEG-2 is also being used with DVD for program distribution (movies) and multimedia.

We have noted the MPEG-2 sampling structure is based upon the
601 digital video standard. When a
DV-based format is used in the
creation of a master videotape
designed for MPEG-2 transmission
or for incorporation into a DVD, it
first must be re-sampled to create
the interpolated color pixels
required. The resulting sampling
structure is shown in Figure 9.
Every second pair of color pixels (horizontally) is interpolated from the first two color
pixels and then shared among four luminance pixels. If, prior to MPEG-2 encoding, the
video has been re-sampled into a CCIR-601 compliant signal for post-production, be it
linear or non-linear, it is difficult to accurately recreate color information, particularly
with computer graphics and animation.
MPEG-2 is, in the digital domain, a method of distribution, much as VHS is in the
analog world. We need the highest quality master to ensure excellent results when the
video is dubbed to our distribution format.
The MPEG-2 structure for distribution is very different from that of videotape formats
using MPEG-2. The result of converting one form of MPEG-2 to another is problematic,
resulting in less than acceptable picture quality for program production applications. In
fact, the actual recorded results are not up to the quality level expected from today's
component analog formats.
John Watkinson, referring to the findings of the Fall 1998 SMPTE/EBU Joint Committee
on Digital Video report writes:
1. MPEG is optimized to be a delivery technology where asymmetrical coding is an advantage.
2. Production recording works best with symmetrical coding. Thus at low bit rates DV
(4:1:1/25) outperforms the MPEG based format (Betacam SX) and at 50Mb/sec., D-9
outperforms MPEG.
3. The requirement for an MPEG production chain is a myth.
4. DV is a cost effective choice for acquisition and low-complexity production.
5. D-9 (JVC Professional's 4:2:2 digital format) essentially delivers the same
performance as Digital Betacam and represents a cost-effective choice for mainstream
television production.

Digital videotape formats offer a great deal of potential and can eliminate many of the
problems long associated with analog formats. We must be very much aware of the final
application to which the digital video we acquire in the field will be put, and choose a
format carefully, as the implications, though different, are as critical as they are in the
analog domain.
The JVC Professional Digital S System
With the Digital S format, recently given the SMPTE designation D-9, JVC has achieved
what was considered impossible just a few years ago: high quality, 4:2:2 component digital
recording at a very affordable price. With compression at a mild rate of 3.3:1, an off-tape
data rate of 50Mbs, and a two-hour recording capability, the need for a production format
providing contribution quality is comfortably met with D-9.
D-9 was designed to satisfy the requirements of a great variety of video recording
applications and is a perfect complement to professional analog recording products and
systems currently in service. The format possesses exceptional features, performance, and
specifications to satisfy the video recording needs of broadcasters and video professionals
alike, at a level of economy previously unknown in digital videotape recording.
JVC has demonstrated further extensions of the D-9 format to address the requirements of
digital television broadcasting (ATSC). Current machines are compatible with most of the
SDTV (Standard Definition) television standards. Future machines will be capable of
100Mbs for the 480-60P, 720P and 1080i HDTV (High Definition) standards while
maintaining complete upward compatibility with current recordings.
JVC Professional Products Company
West Coast Region
5665 Corporate Ave. Cypress, CA 90630
Telephone: 800-995-4582 714-527-7500
Fax: 714-952-2391
