
RE:Match

RE:Match is a set of plug-ins to match one view of video or film to another: it transfers the appearance of a clip to be consistent with another, so that it looks as if it was shot with the same camera and settings.

RE:Match consists of three plug-ins: RE:Match Stereo and RE:Match Color (which also
includes RE:Match Basic). The Stereo plug-in assumes that the input is a pair of sequences
where the two views are shot near each other. The tool is, to a degree, insensitive to alignment.
The RE:Match Stereo plug-in assumption is that there are sufficient pixels in common to make
it worthwhile to perform a dense pixel correspondence (often called a disparity map, or motion
estimation) during the matching process. The RE:Match Color plug-in assumes that there are two (or more) cameras shooting the same scene, maybe at the same time, but from points of view that are too different for a map of pixel correspondences to be of much use… so its matching process works on a more whole-image, statistical basis.

While RE:Match Stereo gives you control over local area and pixel matching processing,
RE:Match Color gives you more controls over the global (whole image) corrections. RE:Match
Color can also be used before RE:Match Stereo in certain circumstances. In a way it’s the
expanded version of the Global Correction menu in RE:Match Stereo. We’ve provided some
practical examples within this manual.

Documentation for RE:Match Color exists in a separate PDF document.


RE:Match Stereo Controls

You have a sequence with two views, most probably shot with a stereoscopic rig. Other
specialized applications are possible but we will use the term stereo.

This tool requires two inputs to render a frame. The plug-in will return a red frame or do nothing
until the two views are connected.

We will call the main input (the view we want to change) the Target. Our result (destination) will be the Target transformed to match another sequence representing the other view. That other view, the view we transfer color and texture from, the one we match to, we call the Source or Reference view. We will avoid using the words “Left” and “Right” because sometimes you want to change the left view, sometimes the right one, and sometimes both.

In an application like After Effects or Premiere Pro, the Target is the sequence you apply the plug-in to, and the sequence we are trying to match to is specified via a menu item (in After Effects or Premiere Pro) or via an image well (in Final Cut Pro or Scratch). In a compositing application with an A/B, Foreground/Background user interface model (like Flame or Nuke, for example), our main input goes on the first input and you connect the second input for the Match To Reference view.

In nodal editors, pay attention to which input is the Match To input.

First-time usage: Apply the plug-in to an image sequence that needs to be adjusted to match
another sequence. Then set the other image you are trying to match. First try the different global
correction options (from the Global Mode menu), and if it looks like the matching process still
needs some work, then move on to the local mode controls to tweak the result. Remember to
play with the Mismatch Threshold % value to control the refinement process (the restoration of
fine details). Now we will explain the options in detail.
Match To Reference
You have a sequence, and you wish to “Get the Color From” another sequence so they match.
The Reference input allows you to specify the other source sequence.

Global Mode
The first step is to pick one of the options from this menu, which offers a variety of matching techniques that look at the whole image at one time or, said another way, perform Global correction. Global modes in general include techniques such as brightness, contrast, gamma and histogram corrections. The Global correction step of this plug-in might work well enough for many sequence pairs without your having to continue into the rest of the controls. Avoid Global Mode options that, for a particular sequence, crush the luminance more than desired or create over-bright artifacts. We provide a way to turn off the global processing because it is sometimes desirable to perform the global image matching with another tool or plug-in as a first pass (and this includes using RE:Match Color).

None: bypasses the global step.

Mean Shift: matches the color distribution of the two images by shifting the levels (exposure) of
one to the other.

Gain + Offset: scales and offsets (biases) the pixels of an image so that each of the RGB channels has the same range of values as the corresponding channel of the reference.

RGB Histogram, LAB Histogram: These modes color match based on relative distribution. The RGB methods work in RGB color space and the LAB methods work in LAB color space.

Wide Baseline Histogram: Use this newer matching technique when the images being matched are not shot at the same time and more or less show the same thing, but with somewhat different framing. It takes longer to process than normal histogram matching because correspondences are built between the images.

Use the LAB methods when the matching problem can benefit from something more like a hue offset. Note that the LAB formula does not behave well with values too far below 0.0 or above 1.0.

The Mean Shift and Gain + Offset modes are often adequate for simple exposure level differences (a rough sketch of both appears at the end of this section). If the difference between the two images is not linear in behavior, then use one of the Histogram modes; an example of a nonlinear case would be a difference in gamma. However, there are cases where a histogram match will fail too, in which case you may have to revert to one of the non-histogram modes, which are more exposure oriented in nature, and then do more tweaking using the Local Match Modes, described below.

Note: Histogram matching is known to fail especially when a large area of a particular color is
present in only one image.
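To make the simpler global modes more concrete, here is a minimal sketch of a Mean Shift style and a Gain + Offset style correction on floating point RGB images. This is only an illustration of the general idea using numpy, not the plug-in's actual implementation, and the function names are our own.

```python
import numpy as np

def mean_shift_match(target, reference):
    # Shift the target's per-channel mean levels (exposure) toward the reference's.
    # target, reference: float arrays of shape (height, width, 3)
    shift = reference.reshape(-1, 3).mean(axis=0) - target.reshape(-1, 3).mean(axis=0)
    return target + shift

def gain_offset_match(target, reference):
    # Scale and offset (bias) each channel of the target so that its range of
    # values matches the range of the corresponding reference channel.
    out = np.empty_like(target)
    for c in range(3):
        t_min, t_max = target[..., c].min(), target[..., c].max()
        r_min, r_max = reference[..., c].min(), reference[..., c].max()
        gain = (r_max - r_min) / max(t_max - t_min, 1e-6)
        out[..., c] = (target[..., c] - t_min) * gain + r_min
    return out
```

The histogram-based modes go further, remapping the full per-channel distributions rather than just the mean or range.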

Local Matching
There are many cases where a global match cannot properly match two image sequences. Let's imagine that in each of the two sequences of a stereo pair we have a green ball in
the foreground and the sky in the background. In one of the sequences the green ball is blue-ish
green, and in the other sequence the green ball is red-ish green. However, the cameras shooting
the scene were able to capture the same blue color in the sky. If we perform a simple exposure,
brightness or contrast global color correction, there is probably no correction that will match
both the sky and the ball in both images. If you perform the global correction to match the sky,
the ball in the image will necessarily be a different color. If you match the two balls, then the sky
will change color in one of the images. This is because you are applying the same global color
correction to each pixel.

Of course we are oversimplifying the process here, because a histogram correction in LAB space might be able to make the two images match. But hopefully you get the idea: taking the whole image into account may not be enough to make the images match, because there may be local variations that differ from what the global color correction process takes into account.

Because global color matching doesn’t always provide satisfactory results, RE:Match Stereo provides several options to further process each pixel individually by taking into account a local area around the pixel. How does this work? First, the plug-in internally calculates the best correspondence of each pixel in the target image to a pixel in the source image (this is automatic). Pixels at the edges may not be mapped to corresponding pixels in the source image because of the difference in placement of the two cameras or lenses. One camera might even have a different zoom, so that a larger portion of the scene is captured in one view than in the other, causing differences at the edges. The pixel-to-pixel correspondences are made using optical flow, which is commonly used for disparity mapping or motion estimation.
In a perfect world, every pixel in the target image would have a pixel correspondence in the source image that represents the same part or edge of an object. In this perfect world we could just copy over the source pixel to the corresponding pixel in the target. Because the source image pixel has the desired color, and because every pixel in the target would have an exact and correct pixel correspondence in the source, we could accomplish the color matching by simply copying over each pixel from the source to its corresponding pixel in the target!
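As a rough illustration of this "perfect world" copy, here is a minimal numpy sketch that, given a dense per-pixel correspondence map (such as one produced by disparity or optical flow estimation), pulls each target pixel's color from its corresponding source pixel. The names are illustrative only; this is not the plug-in's internal code.

```python
import numpy as np

def copy_from_correspondence(source, corr_x, corr_y):
    # source: float image of shape (height, width, 3), the view we match to
    # corr_x, corr_y: integer arrays of shape (height, width) giving, for each
    # target pixel, the column and row of its corresponding source pixel.
    corr_x = np.clip(corr_x, 0, source.shape[1] - 1)
    corr_y = np.clip(corr_y, 0, source.shape[0] - 1)
    # In a perfect world this straight copy would already be the color match.
    return source[corr_y, corr_x]
```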

Best correspondence for each pixel is calculated automatically within the plug-in. In a perfect world we would have correct pixel matches and could copy each pixel over from the source to its corresponding place in the target in order to color correct the target.

However, instead of copying each pixel directly, local tiles around each pixel are used to weight
the transfer of the color from the source pixel to the target pixel. Larger local tile sizes will
produce “softer” color transfers, and smaller local tile sizes will allow more color variation.

The size of the local tile is set using the Local Block Radius setting.


Instead of copying over each pixel directly, local tiles are used to weight the transfer
between the source and target pixels… this gives a more spatially coherent result.

Control of the local tile sizes is discussed in the next section.


Local Block Radius
The Local Block Radius determines the size of the local tile used at each pixel. The range is 0 to 200 pixels.

Setting the radius to zero will essentially cause the plug-in to transfer, pixel-to-corresponding-
pixel, the color differences from the source to the target. If the Local Block Radius is set higher
than zero, then instead of directly copying each pixel, local tiles around each pixel are used to
weight the transfer of the color from the source pixel to the target pixel. Larger radii will
produce softer color transfers, and smaller radii will allow more color variation.

Setting the radius to a small value may cause too much color variation to occur in the target
image, because smaller sized local tiles are being used. The smaller the tile, the more local color
correction is allowed to change from pixel to pixel. The larger the radius, the smoother the local
color correction will be across the image, but may introduce a smoothness to the color correction
that can be seen as soft areas, or even bleeding of color past an object’s boundaries.

Imagine the radius being set to the size of the image: you would then basically be performing a global color correction, because at each pixel ALL other pixels are being considered when doing the local color corrections. As such, you want to strike a balance by setting the Local Block Radius small enough that smaller local tiles allow differing objects, or other parts of the scene, to be considered separately… without being so small that the color correction changes from pixel to pixel in a way that introduces more artifacts.

The purpose of the radius setting is to attenuate unwanted color differences; more attenuation of color change occurs at higher radii. Setting this value too high may introduce more smoothness than is desired. A value of 1, 2 or 3 is usually used; higher values can be useful, but keep an eye out for softness that may occur. Remember: a value of 0 essentially turns off local color correction, so you can animate the radius from 0 (off) to a more appropriate radius value if the local color correction is only useful on certain frames.
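To picture how the local tile might soften the transfer, here is a hedged sketch in which the per-pixel correction (warped source minus target) is simply averaged over a square tile of side 2 x radius + 1 before being applied. This is our own reading of the control's behavior, not the plug-in's actual algorithm; warped_source is assumed to be the source image already pulled onto the target's pixel grid via the automatic correspondences.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_weighted_transfer(target, warped_source, radius):
    # target: globally corrected target image, float, shape (height, width, 3)
    # warped_source: source image warped to the target grid (see earlier sketch)
    # radius: plays the role of the Local Block Radius; 0 = direct pixel transfer
    correction = warped_source - target          # per-pixel color difference
    if radius > 0:
        size = 2 * radius + 1
        # Averaging the correction over the tile softens the transfer; larger
        # radii approach a single global correction, smaller radii allow more
        # local color variation.
        correction = uniform_filter(correction, size=(size, size, 1))
    return target + correction
```

Note how a radius of 0 reduces to copying the warped source (the pixel-to-corresponding-pixel transfer described above), while a radius as large as the image effectively becomes a global correction.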

Local Match Mode:


The various Local Match Mode settings will help you to deal with shots that a global match can’t
handle completely by itself. In general, this is often due to cameras capturing different color
variations of the same object(s) in the two views, or parts of some objects that may be present in
one view and not the other. However, local matching is also needed to undo “optical physics” phenomena caused by the optical capture system. These artifacts can be specular highlights, shadows, reflections, unbalanced (differently sized or shaped) flares, as well as on-lens artifacts.
The objective is that the two views should have the same look (something stereo viewing requires), and in addition to color variations in the two views, correction might also require making flares in the two views the same shape, or a specular highlight in the two views the same size, etc.

Off: This turns off Local Matching. This option is useful when the Global Match mode does its
job well enough.

Global Match Refinement Only: This mode primarily complements the Wide Baseline Histogram mode. It allows you to filter out (refine) areas where the global match misbehaves.

Local matching applies local color correction at each pixel based on the automatically calculated correspondence to the source image, using the local tile size specified by the Local Block Radius.

At this point you might wonder if all the pixel correspondences made by the plug-in are good…
that is, that a correspondence for a pixel represents the same part of an object (or even the same
object!)… and how do we work around the problems that may occur if we don’t take local errors
of the automatic pixel correspondences into account? With large disparities between images
there will be more errors in areas of object occlusion because parts of some objects will be seen
in one view and not in the other. The larger those areas of parts of objects that are in one view
and not the other, the more pixel correspondence errors will occur.

For now, let’s assume we have a map where there are good pixel correspondences between the
two images, and bad pixel correspondences (presumably because the pixels of the
correspondences don’t represent the same object, or the same part of an object). We call the
areas of bad pixel correspondences the Mismatched Areas. In the Mismatched Areas it’s not
appropriate to apply local color matching because the mismatched pixels do not represent the
same object (or part of the same object).

However, we also can’t ignore those areas, because if we just apply local color matching to areas
with good correspondences, then the nearby pixels of the Mismatched Areas will stand out
because they were not processed by any local color correction method. Good news: the plug-in
can propagate the changes we made to the pixels with good correspondences to the Mismatched
Areas using a refinement process, of which there are two methods:

Less Details -- Refine Mismatched Areas: Propagates the local color corrections of the matched areas to the mismatched areas using a fast technique. While this is the faster of the two refinement settings, it may not be detailed enough for some sequence pairs.

More Details -- Refine Mismatched Areas: Propagates the local color corrections of the matched areas to the mismatched areas using a more aggressive technique that takes longer to process and requires more memory. Note, however, that with some sequence pairs the faster refinement setting may work better; this is particularly true when the initial color differences are way off. ‘More’ is not always better!
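The manual does not spell out the refinement algorithms, but as a rough mental model you can picture the propagation as a diffusion of the correction field from matched pixels into the Mismatched Areas. The sketch below is only that mental model (with wrap-around edges as a simplification), not either of the shipping methods.

```python
import numpy as np

def propagate_correction(correction, mismatch_mask, iterations=200):
    # correction: per-pixel color correction computed in well-matched areas, (H, W, 3)
    # mismatch_mask: True where correspondences are unreliable (Mismatched Areas)
    filled = correction.copy()
    for _ in range(iterations):
        # Replace mismatched pixels with the average of their four neighbors,
        # diffusing the surrounding corrections into the mismatched regions.
        neighbors = (np.roll(filled, 1, axis=0) + np.roll(filled, -1, axis=0) +
                     np.roll(filled, 1, axis=1) + np.roll(filled, -1, axis=1)) / 4.0
        filled[mismatch_mask] = neighbors[mismatch_mask]
    return filled
```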

So here’s an example of a stereo pair.

Left and right eye views.

Note that the image on the left has less blue sky between the tree and yellow part of the train.

Closeup of lower right of left and right views.

When we performed a global color transformation to make one view match the other, it did not do a good enough job. If we were to do only the optical flow transfer, it would produce the following image:

Note the tree details are not kept accurate because the pixel-to-pixel correspondence is bad at the
tree edges (because different parts of the tree are visible in each frame).

Using a map of Mismatched Areas (creation of the map is described in the next section), we are
able to override the pixel-to-pixel correspondences at those pixels and propagate the changes
made in the good areas to the Mismatched Areas, keeping the detail of the target image in the
Mismatched Areas, but propagating the color corrections used on the rest of the image.

Image on the left is a map of Mismatched Areas, shown in red. The image on the right is the final color corrected image, using one of the two local modes that use refinement into Mismatched Areas. Note that in the Mismatched Areas the tree details of the target image are preserved properly, while those pixels are color corrected to be consistent with the rest of the color corrected image.

Note that we did not try to get all the pixels of the tree in the Mismatched Areas map, just the
ones that had bad transfer of details from the source image due to unreliable pixel-to-pixel
correspondences.

Optimization Note: The More Details -- Refine Mismatched Areas setting can use a lot of memory when processing
floating point images.

Mismatch Threshold %
When you select a mode with local area refinement, the Mismatch Threshold % slider becomes
active.

The Mismatch Threshold % setting controls the Mismatched Area calculation.

Here’s what happens to determine Mismatched Areas:


1) Global color correction is applied internally to the target images.

2) Pixel correspondences are determined automatically between the source and globally color
corrected target image.

3) For each pixel, the value of the target pixel and the value of its corresponding source pixel are subtracted from each other.

4) If the difference in any of the R, G or B channels is greater than the Mismatch Threshold %, then the pixel is marked as part of the Mismatched Area. For example, when working on 8-bit images, a value of 1% for Mismatch Threshold % means that a difference of 2.55 or more (1% of 255) in the R, G or B channels marks the pixel as Mismatched (see the sketch below).

A value of 100% essentially marks all pixels as good (because no difference can be bigger than the value range of the pixels), which is essentially the same as doing a straight optical flow transfer… except for the parts at the edges of the frame that are not visible in the other image, which are still treated as Mismatched Areas. A value of 0% marks all pixels as part of the Mismatched Areas, which is the same as turning off all local matching, leaving you with just global matching. So you don’t want Mismatch Threshold % set too low (e.g. 0.5% might restore too much of the original detail after the global transfer), nor too high (which would mark corresponding pixels in the source and target as matching when they do not).
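Here is a minimal sketch of the Mismatched Area test described in steps 1-4 above, assuming float images normalized to 0..1 and a source image already warped onto the target grid; this is an illustration only, not the plug-in's exact code.

```python
import numpy as np

def mismatched_areas(corrected_target, warped_source, threshold_percent):
    # A pixel is marked Mismatched when the difference in any of its R, G or B
    # channels exceeds threshold_percent of the value range (1.0 here, so 1%
    # corresponds to 2.55 on an 8-bit 0..255 scale).
    diff = np.abs(corrected_target - warped_source)
    return (diff > threshold_percent / 100.0).any(axis=-1)

# Example (variable names are placeholders): the 8% mask shown later would be
# produced like this:
# mask_8 = mismatched_areas(corrected_target, warped_source, 8.0)
```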

The value of Mismatch Threshold % that produces the best results will vary per shot. You may even need to animate this value over a shot to get the best result, but we suggest that you do not vary Mismatch Threshold % dramatically across the duration of a shot, although we have not seen many temporal instability problems.

So the next question is this: we can now control which pixels are part of the Mismatched Areas, but how do we visualize them? The display of Mismatched Areas for the refinement modes of Local Match Mode is controlled by the Local Mode Function menu, described next.

We will talk about how to display the Mismatched Areas in the next section, but here are 3
Mismatched Area pixel displays using the example above, with 4%, 8% and 15% settings for
Mismatch Threshold %.

Mismatch Threshold % of 4% (left), 8% (center) and 15% (right). We pick 8% because it represents the pixels that are showing the most detail problems when performing local color matching. And 8% in this case does not include pixels that were not causing us problems with the local color transfer. Note that we don’t need to get the whole tree, but just the edges of the tree, because that’s where we are seeing the most problems.

Local Mode Function


The Local Mode Function menu controls the display when using one of the Local Match Mode settings other than Off. In particular, Local Mode Function provides several visualization display modes meant to help you figure out what is happening with the automatic pixel correspondence calculation, and the shape and size of the Mismatched Areas controlled by the Mismatch Threshold % setting.

Normal: This is the setting you will generally use to render out your RE:Match Stereo color corrected images. It uses the Global Mode for global matching, applies local matching using the Local Match Mode with its corresponding Local Block Radius setting, and then further refines the local matching by pushing corrections from pixels with good correspondences into the Mismatched Areas (if using one of the Local Match Modes that use refinement) as determined by the Mismatch Threshold %.

No Edge Fill: This is a special mode to help you visualize pixels that do not have correspondences in the other image, which often happens at the edges of the frame. Pixels in the target image that do not have a correspondence in the source image will be rendered with zero alpha. Most often, edge pixels have no corresponding pixels in the source image because of the camera / lens system alignment. Of course, if you are compositing the result of RE:Match Stereo over something else, be aware that the image underneath will show through the zero-alpha pixels created for target pixels at the edges that have no correspondence in the source image.

Maintain Original Alpha: This mode restores the original alpha. During the default process RE:Match Stereo might introduce pixels whose alpha differs from the original target image. Maintain Original Alpha is useful if you have an image with a border of zero-alpha pixels; without this setting you can end up with erroneous pixels outside the actual image data, by the nature of the process. Note: it may be useful to transfer pixels from the source image to areas that do not have matches in the target image; select this mode if you instead wish to maintain the original alpha of the target image.

The following two modes help you visualize the Mismatched Areas. As described previously,
Mismatched Areas are determined by the Mismatch Threshold % slider.

Display Mismatch (to Red Ch): Displays Mismatched Area pixels using a red overlay. Note that
Mismatched Areas are where the refinement will occur for the Local Match Mode settings that
use refinement. Note that even though pixels are shown either mismatched (red overlay) or
matched (no red), internally the plug-in uses a bit larger area to help blend away visible seams at
the edge of a Mismatched Area. Note that pixels with no correspondence from the target image
to the source image will always be marked as part of the Mismatched Areas regardless of the
value of Mismatch Threshold %.

Display Match Only (To Alpha): This setting uses the same pixel classification as the previous setting (matched and mismatched), but instead of displaying Mismatched Area pixels with a red overlay, it sets the alpha channel of those pixels to zero (fully transparent). Of course, if you are compositing the result over something else, be aware that pixels in the image underneath will be seen through the zero-alpha pixels being created.

RECAP

Conceptually you have 3 levels of controls with RE:Match Stereo. First, global image correction
is applied. Then there is an intermediate step which does local correction at each pixel based on
automatic pixel correspondences in combination with local tiles of Local Block Radius size.
Then there is a refinement process that will allow propagation of the local correction process
from “good areas” into the areas that are considered Mismatched Areas, which often brings back
fine detail that can be lost, or improperly processed by the local matching process.
Exclusion Areas
There may be times when you have an artifact that you want to remove from the target image,
like a water drop on the lens, and you want to replace the details of the area with what is in the
source image (in this case, no water drop). Or you might have a lens flare in the source image
that is either not present in the target image, or is a different shape. In this case you might want
to have the lens flare in the target image be the shape of the corresponding flare from the source
image (where correspondences are determined by the automatic pixel correspondence method).

For these types of artifacts or details, you really want to turn off the local refinement process for
the pixels that represent these artifacts, even though they are properly regarded as Mismatched
Areas. By forcing the areas of these artifacts to be considered “matched,” the details in the
source image will be carried over to the corresponding locations in the target image (and are
appropriately color matched to be consistent).

For these details and artifacts that you want carried over into the target image, it is necessary to tell RE:Match Stereo: “for the Mismatched Areas calculated using Mismatch Threshold %, please exclude these pixels from the Mismatched Areas”. These pixels are specified using a matte created by you, which we call an Exclusion Matte.

The Exclusion Matte is usually a garbage matte, because it is often not necessary to specify the
Exclusion Matte with a tight boundary of the areas where you want details “copied over” from
the source image. We use the term “copied over” loosely, because the process includes copying
details from the source image in a way that’s consistent with the results being produced in the
target image, and not a straight copy from the source image.

What follows is an example of an Exclusion Matte in use. In this case we are trying to make the
image on the right match the image on the left.

Goal: match the image on the right to the image on the left.

Global matching by itself did not give a satisfactory result. So we decide to use one of the Local
Match Modes. Because of the many water droplets, there are mismatches in the pixel-to-pixel
correspondences between the two images. So we decide to use one of the refinement options for
Local Match Mode that takes mismatches into account, and set Mismatch Threshold %
appropriately to include the areas where we are having problems.

Mismatched Areas are determined (left). Result (right) using a refinement option of Local Match Mode, with Mismatched Areas.

The plug-in properly marks the lens flare as a detail that should not be carried over to the target,
because it’s “too different” from what’s in the target image. However, we’ve decided we’d really
like to see the lens flare in the image on the right. So we create a loose matte for that area.

Exclusion Matte (left). For demonstration purposes we show it over the image (right).

By excluding the pixels of the Exclusion Matte from the Mismatched Areas, we achieve the
following map for Mismatched Areas:

New Mismatched Areas exclude pixels of Exclusion Matte


The result: the original left image (on left), and the result of the original right image with proper color correction, and with lens
flare transferred over from left image to the proper corresponding place in the target.

Use Exclusion Matte:

Turning on Use Exclusion Matte activates the use of the Exclusion Matte. Note that this
checkbox is animatable, so it is possible to animate it on and off, and only provide Exclusion
Mattes for frames that require additional help (for example a flare that lasts 5 frames).
Alternatively you can provide empty alpha matte channels for these frames.

Exclusion Matte:

This allows you to specify which clip, layer or sequence is used for the Exclusion Matte.

Exclusion Channel:

Which channel from that image to use as the Exclusion Matte.

Reinclude Threshold:

This operates just like the Mismatch Threshold % parameter, but adds pixels of the Exclusion Matte back into the Mismatched Areas. Reinclude Threshold allows you to bring pixels from the Exclusion Matte back into the Mismatched Areas using a different threshold percentage (presumably higher) than the one used for the rest of the image outside the Exclusion Matte, giving you more control over how the Exclusion Matte modifies the pixels included in the Mismatched Areas.

Note that this slider only affects the color match refinement, not the disparity (displacement) estimation calculated internally.
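Putting the Exclusion Matte and Reinclude Threshold together, here is a hedged sketch of how the final Mismatched Areas map might be assembled. This reflects our reading of the controls described above, not the shipping implementation, and the names are placeholders.

```python
import numpy as np

def apply_exclusion(mismatch_mask, diff, exclusion_matte, reinclude_percent):
    # mismatch_mask: boolean map produced by the Mismatch Threshold % test
    # diff: per-pixel |corrected target - warped source|, shape (H, W, 3), 0..1
    # exclusion_matte: boolean matte of pixels to force out of the Mismatched Areas
    # reinclude_percent: like Mismatch Threshold %, but applied only inside the matte
    mask = mismatch_mask & ~exclusion_matte                      # exclude matte pixels
    readd = exclusion_matte & (diff > reinclude_percent / 100.0).any(axis=-1)
    return mask | readd                                          # re-include some of them
```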

Exclusion Matte Mode:

The Exclusion Matte can actually be used for two purposes: 1) to help the disparity estimation (that is, the optical flow used internally to establish pixel correspondence) and 2) to help in refining the color matching in selected areas.
Note that you can use the Exclusion Matte to tell the plug-in not to include certain pixels in the disparity calculation (the part that establishes pixel correspondence). This is often useful when there is an artifact, like a drop of water on the lens, or a lens flare, that gets in the way of establishing good pixel correspondences between the two images.

You can also use the Exclusion Matte to affect just the local match refinement process, not the pixel correspondence calculation. If the Exclusion Matte is only meant to affect the local refinement, you can use a fat garbage matte and refine it using the Reinclude Threshold slider. Just make sure that you fully cover the region you are trying to affect (e.g. for a lens smear, make the matte larger than the smear).

For Disparity Estimation Only: A practical example would be a big finger shadow on the edge of one of the images resulting in a large chunk of black pixels. Defining an Exclusion Matte that excludes these pixels from the disparity calculation can help the alignment of the rest of the image.

For Color Match Refinement Only: In this case the Exclusion Matte will only be used for local color match refinement and the disparity calculation will ignore the Exclusion Matte.

Both (Disparity + Refinement): The same matte will be used for both.

Please see the case studies for RE:Match Stereo below; we also have some tutorials at help.revisionfx.com.
Using RE:Match Stereo

Here we will provide some concrete examples.

Case 1: Clamped Regions


Below we have a frame pair. The two main things that stand out are that the sky is a different shade of blue and that the front of the locomotive on the left is clamped in the red channel (it is somewhat overexposed).

Because the image on the left is the one with incorrect color we will consider that side the
sequence to fix. Below is the setup we used here.

Global matching did not work by itself, and the Local Match Mode that performs no refinement into Mismatched Areas created problems with the tree in the lower right corner, because the pixel-to-pixel correspondences are incorrect there (differing parts of the tree are visible in each view). So we set the Mismatch Threshold % so that we get the following map of Mismatched Areas (shown in red).
Mismatched Areas displayed

When we use one of the Local Match Modes that use refinement into Mismatched Areas, we
then correct the image.

On left, Local Match Mode with no refinement into Mismatched Areas.


On right, better results using a Local Match Mode with refinement into Mismatched Areas.

What we had to adjust here is as follows: First we set the global correction mode so that the sky
in both images is closer in color. Because the pixel-to-pixel correspondences are not perfect
everywhere, we need to use a Local Match Mode that takes this into account by using one of the
refinement modes that pushes changes into Mismatched Areas. To determine the Mismatched
Areas, we have to set the Mismatch Threshold % value appropriately to include the areas where
mismatches are occurring, but not include too many extraneous pixels where we see no problems
due to the pixel tracking process.
Case 2: Iridescent Paint and Reflections
In this next example we have a problem with the color of the car in the two views, and with reflections on the hood of the car that are too different as well.

What is going on here is that the paint reflects color differently based on the angle of the light
reflection and the camera. It’s a neat effect in reality, but it’s something that can cause issues in
viewing stereo pairs. Because the car is prominent in the scene, a global color correction to make
the two cars match in color will create a difference in color of the objects of the background (that
aren’t originally different in color). So we need to continue processing using one of the Local
Match Modes.

However, when we use local matching, the pixel-to-pixel correspondences have an issue because
of the size and shape of the reflections in the hood (and a few other places). In this case we want
the Mismatch Threshold % to select pixels where there are details in the car that are different,
particularly the size and shape of the reflections of the buildings in the hood of the car.
Here the building reflection on the hood is also part of the challenge. We needed to set the parameter so that the main mass of the hood is not flagged but the fine details of the reflections are.

Case 3: On-Lens Smears


Now let’s look at a more advanced problem. What happened in this case is that we shot the scene outside and it started to rain. The drops were wiped off, but it turned out that a smear still shows when playing back (and this kind of on-lens issue is extremely bad, and disorienting, for stereo viewing).

Images in this sequence provided by © Ami Sun, Karen Marcelo - Survival Research Labs
“Obviously” there is a water drop on the left, at the top (it’s shaped like a triangle):

So what to do? In this case we don’t really have a color matching issue per se, but need to
replace part of a picture with texture from the other artifact-free image. That’s when the
Exclusion Matte becomes essential.

So notice here that the Exclusion Matte controls have been activated. Below (shown at a different frame time) is what we needed to cover as exclusion areas for this pair.
In the picture above, the Exclusion Matte is shown in red. These are the pixels that we wish to
replace with details from the source image. In this case the matte does not need to be animated,
because the problem is with the lens itself, so the problem remains stationary throughout the
sequence.

And now the results (below). Our lens smears are gone and these shots can move from the trash
bin to something that can be useful.
Also in this case (shown below) notice that the footage was not only interlaced but really noisy as well. We actually used our tool DE:Noise before this step. When the process involves replacing texture, it can be a good idea to clean up the image first.
Original frame before DE:Noising.
Case 4: Matching Specular Highlights
Let’s now shift to a case where we have multiple issues at once.

We want to make the image on the right match the image on the left

What is going on here? Look at the sky in the two images: do you see that the blue is different? What else? In the right image, there was something on the lens that created a black cutout in the top-left corner. The specular highlights on the windshield are totally different shapes. Another thing that might be hard to see at this size: look at the shadow area in front of the car. On the left there are 2 rays in the reflection from the car; on the right that reflection has only one ray and is, in fact, almost totally missing.

To make it clear: here are the things to look for, circled in red:

It turns out that our real eyes and vision cope well when we encounter anomalies of reflections between our two eyes in the real world. However, projected stereo imagery works better when light reflects the same way within both views, otherwise stereo fusion may not be performed well by your eyes. In reality the specular highlights might indeed be totally different as a matter of optical physics, but we often desire consistency of these types of artifacts for better viewing of stereo imagery.

Well, first we use a Global Mode that gets a match between the two images as close as possible,
then we use one of the Local Match Mode options that uses refinement in Mismatched Areas.
Note that in the areas of the reflection on the hood and in the shadow, and in the upper left corner
that has black, we’d like to “pick up” the details from the source image so that the reflections are
the same shape and size (but in the proper place), and that the black area gets more or less
replaced by details from the source.

We can do that by using the Exclusion Matte shown here:

Then we can see the result of the correction process:

On the left is an original image. On the right is the corrected version of the right image, with the black corner replaced, sky
corrected, and reflection on the car and in the shadow appropriately matching the left image version.

Note that if we wanted to use the size and shape of the reflections from the right image and pass
them on to the left image we could have done the matching in two passes: the first pass to match
the right to the left, replacing the black corner using an Exclusion Matte that singled that area
out.

Color match the right image to match the left side, using a matte so that pixels in the upper left corner of the right image are replaced by the details from the left image. Exclusion matte shown in red.

Then we could use another pass of RE:Match Stereo to match the images in the other direction,
and would use an Exclusion Matte for the reflections that we wish to retain from the image on
the right.

Now we match in the other direction. Exclusion matte shown in red.

RE:Match is a trademark of RE:Vision Effects, Inc. Copyright 1999-2017 RE:Vision Effects, Inc.
