Once you've checked out the basics in SynthEyes, explore our revolutionary
new Synthia™ English-language instructible assistant, via the IA button at top right
and the Help/Synthia PDF manual. Command SynthEyes to perform tasks that would
otherwise require programming, or just use its voice recognition and response to save
time. Perfect for those strange one-off fixits.
Unless you are using the demo version, you will need to follow the registration
and authorization procedure described towards the end of this document and in the
online tutorial.
IMPORTANT: to minimize the chances of data loss, please read the sections
on auto-save and on file name versioning, then configure the preferences to
correspond to your desired working style!
To help provide the best user experience, SynthEyes has a Customer Care
center with automatic updates, messages from the factory, feature suggestions, forum,
and more. Be sure to take advantage of these capabilities, available from SynthEyes’s
help menu.
Be sure to check out the many video tutorials on the web site. We know many of
you are visual learners, and the subjects covered in this manual are inherently 3-D, so a
quick look at the tutorials, and playing along in the program as you read, can make the
text much more accessible.
buttons do"). If you see something and don't know what it does, or want to
find out if there's a mouse operation to do what you want, the Reference
section is where to look. The User Manual does not contain all the
information in the Reference manual; a significant amount of material
appears only there.
Look around. You will not know to look for information about something in
the manual unless you know what SynthEyes can do and what is in the
manual. So even if you start out by reading only the portions needed to
get started, it is a good idea to skim briefly through the table of contents
(bookmarks) and even the manual itself from time to time so you know what is
in there. Side benefit: you won't be making as many requests for features
that SynthEyes already has!
It's a PDF. This may be the perfect time to justify an eBook reader to your
boss/significant other. You can save the manual and read it at your
leisure.
Additional Manuals
This is the main SynthEyes User Manual. There are additional specialized
manuals as well. All are available from the Help menu within SynthEyes, or in the folder
associated with the application, ie \Program Files\Andersson Technologies
LLC\SynthEyes, /Applications/SynthEyes, or /opt/SynthEyes.
SynthEyes User Manual. Covers basic operation, automatic and supervised
tracking, solving, coordinate systems, exporting, and more, plus reference
material. You're reading it now.
Planar Tracking Manual. Covers 2- and 3-D Planar Tracking and specialized
exports for planar tracking.
Geometric Hierarchy Tracking Manual. A more advanced manual showing
how to set up hierarchies of object tracks.
Camera Calibration Manual. Shows how to calibrate cameras using a
variety of techniques, such as calibration grids and vignetting.
Phase Reference Manual. Describes the solver phase nodal system in more
detail, and covers the operation of each of the types of phases.
Synthia User Manual. Synthia is SynthEyes's advanced instructible
assistant; Synthia can save you a lot of time and trouble doing odd jobs.
Sizzle Scripting Language Manual. Describes the Sizzle programming
language used for import, export, and tool scripts within SynthEyes. It's a
simple language that should be easy to pick up for anyone with a little
experience with scripting languages.
SyPy Python Reference Manual. Describes the SyPy module for scripting
SynthEyes from within any Python interpreter: a standalone interpreter, a
development IDE, a windowing environment, or a 3rd-party animation application.
Contents
Quick Start: Automatic Tracking
Quick Start: Supervised Tracking
Quick Start: 3-D Planar Tracking
Quick Start: Stabilization
Shooting Requirements for 3-D Effects
Types of Shots: Gallery of Horrors
Basic Operation
Opening the Shot
Automatic Tracking
Supervised Tracking
Fine-Tuning the Trackers
Checking the Trackers
Nodal Tripod-Mode Shots
Lenses and Distortion
Running the 3-D Solver
3-D Review
Cleaning Up Trackers Quickly
Setting Up a Coordinate System
Post-Solve Tracking
Advanced Solving Using Phases (Pro, not Intro)
Perspective Window
Exporting to Your Animation Package
Realistic Compositing for 3-D
Building Meshes
Texture Extraction
Optimizing for Real-Time Playback
Troubleshooting
Combining Automatic and Supervised Tracking
Stabilization
Rotoscoping and Alpha-Channel Mattes
Object Tracking
Note: Screenshots in this manual and online tutorials are usually from
Windows; the macOS and Linux interfaces are slightly different in appearance
due to different fonts and window close buttons, but not function. The user
interface colors and other minor details in the manual and tutorials may be
from older SynthEyes or operating system versions or have different
preferences settings. (There are many user-configurable elements.) And the
layout changes based on the size of the SynthEyes window—if full screen,
based on your monitor resolution. So just go with the flow and look around if
need be!
Tip: The Max RAM Cache GB preference controls how much RAM will be
used to cache the shot for fastest access. See Opening the Shot for more
information.
You can reset the frame rate from the 24 fps default for sequences to the NTSC
rate by hitting the NTSC button , though this is not critical. The aspect ratio,
1.333 (ie 4:3), is correct for this shot.
On the top room selector,
So that you can learn match-moving better, answer No for now. You can adjust
this later via the "Autoplace after solving" preference in the Solver section of Edit/Edit
Preferences. You can also change whether an automatic placement is generated for a
specific scene via the checkbox on the Summary panel.
A series of message boxes will pop up showing the job being processed. Wait for
it to finish. This is where your computer’s speed pays off. On a 2010 Mac Pro, the shot
takes about a second to process in total, including portions before this dialog. A 2011
MacBook Air takes about 3 seconds.
Once you see Finished solving, hit OK to close this final dialog box. SynthEyes
will switch to a five-viewport configuration that continues to show the solver results.
(Experienced users can disable switching with a preferences setting).
Tip: don't worry if it looks like some controls are cut off the bottom of the
control panel at left. You're not missing anything. Those are secondary copies
of the controls found on the Lens panel. On a big screen they're all visible.
You'll see many trackers: in the camera view, a green diamond shows the 2-D
location in the image on the current frame, and a small yellow x (tracker point) shows
its location in 3-D space. In the 3-D views, the green x's show 3-D tracker locations.
You can zoom in on any of the views, including the camera view(s), using the
middle mouse scroll and middle-mouse pan to see more detail. (You can also right-drag
for a smooth zoom.)
If you are trying to use a laptop trackpad, or are having other problems using the
middle-mouse button, please see the section on Middle Mouse Button Problems. Note
that tracking is an intensive high-precision task; trackpads are designed only for basic
browsing and word processing, so an external mouse is highly recommended for all but
the most trivial 3-D tracking.
The status line will show the zoom ratio in the camera view, or the world-units
size of any 3-D viewport. You can Control-HOME to re-center all four viewports. See the
Window Viewport Reference for more such information.
The error curve mini-view at bottom right shows color-coded tracking error over
the length of the shot, and an overall value.
In the main viewports, look at the Left view in the lower left quadrant. The green
marks show the 3-D location of features that SynthEyes located. In the Left view, they
fall on a diagonal running top-left to lower-right. [If the points are flat on the ground
plane, you previously enabled the automatic placement tool; you can still continue with
this tutorial.] The trackers are located relative to the initial default camera position: the
images contain NOTHING to say that the trackers should be located anywhere in
particular in the scene. You can move the camera and trackers all around in 3D, and
everything will still line up perfectly in the camera view (and SynthEyes gives you a
manual alignment method to do exactly this).
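This invariance is easy to check with a little standalone pinhole-camera math (plain Python, not SynthEyes' SyPy API; all poses and points below are made up): applying the same rigid transform to both the camera and the trackers leaves the projected 2-D positions unchanged.

```python
import math

def mat_vec(M, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def rot_z(a):
    """Rotation about the Z axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def project(R, t, p):
    """Pinhole projection of world point p for a camera at position t
    with world-to-camera rotation R."""
    x, y, z = mat_vec(R, [p[i] - t[i] for i in range(3)])
    return (x / z, y / z)

# An arbitrary camera pose and one tracker point.
R = rot_z(0.3)
t = [0.0, 0.0, -10.0]
p = [1.0, 2.0, 5.0]

# Apply the SAME rigid transform (Rw, tw) to tracker and camera.
Rw, tw = rot_z(1.1), [4.0, -2.0, 3.0]
p2 = [mat_vec(Rw, p)[i] + tw[i] for i in range(3)]
t2 = [mat_vec(Rw, t)[i] + tw[i] for i in range(3)]
R2 = mat_mul(R, transpose(Rw))

before = project(R, t, p)
after = project(R2, t2, p2)
print(before, after)  # identical up to rounding
```

However the scene is moved in 3-D, the 2-D image positions stay put, which is exactly why the coordinate alignment is yours to choose.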
Since most of these points are on the ground in the scene, we’d like them to fall
on the ground plane of the animation environment. SynthEyes provides tools to let you
eyeball it into place, or automatically place it, but there’s a more accurate way…
Begin by clicking the *3 button at top right of the coordinate system panel.
Next, click on the tracker labeled 1 (above) in the viewport. On the control panel, the
tracker will automatically change from Unconstrained to Origin.
In this example, we will use trackers (1 and 2) aligned left to right. The coordinate
system mini-wizard (*3 button) handles points aligned left to right or front to back. (By
default it is LR; you can click again to get FB.)
Click the tracker labeled 2, causing it to change to Lock Point. The X field above
it will change to 20.
Select the tracker labeled 3, slightly right of center. A dialog box will appear that
we'll discuss in a moment. The tracker's settings will have already changed from
Unconstrained to On XY Plane (ie the ground plane).
Why are we doing all this? The choice of trackers to use, the overall size
(determined by the 20 value above), and the choice of axes are arbitrary: they're up to
you, to make your subsequent effects easier. See Setting Up the Coordinate System for more
details on why and how to set up a coordinate system. Note that SynthEyes’ scene
settings and preferences allow you to change how the axes are oriented to match other
programs such as Maya or Lightwave: ie a Z-up or Y-up mode. This manual’s examples
are in Z-Up mode unless otherwise noted; the corresponding choices for one of the Y-
Up modes should be fairly evident.
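For reference, converting coordinates between a Z-up and a Y-up convention is a simple axis swap. This standalone sketch assumes one common mapping (X right, Z toward the viewer in the Y-up frame); check your target package's documentation for its exact axes.

```python
def zup_to_yup(p):
    """Convert a Z-up point (X right, Y forward, Z up) to a
    Y-up frame (X right, Y up, Z toward the viewer).
    One common convention; verify against your package."""
    x, y, z = p
    return (x, z, -y)

def yup_to_zup(p):
    """Inverse of zup_to_yup."""
    x, y, z = p
    return (x, -z, y)

print(zup_to_yup((1, 2, 3)))  # (1, 3, -2)
```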
After you click the third tracker you will be prompted (“Apply coordinate system?”)
to determine whether the scene should be re-solved to apply your new settings. Select
Yes. SynthEyes changes the solving mode (on the Solver panel) from Automatic to
Refine, so that it will update the match-move, rather than recalculating from scratch.
SynthEyes recalculates the tracker and camera positions in a flash; click OK on the
solving console that pops up.
Afterwards, the 3 trackers will be flat on the ground plane (XY plane) and the
camera path adjusted to match, as shown:
You could have selected any three points to define the coordinate system this
way, as long as they aren’t in a straight line or all bunched together. The points you
select should be based on how you want the scene to line up in your animation
package.
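The arithmetic behind the three-tracker method can be sketched as standalone Python (illustrative only, not SynthEyes code): the first tracker becomes the origin, the second fixes the +X direction and the scale (the 20-unit value), and the third pins down the ground plane.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def norm(a):
    n = math.sqrt(dot(a, a))
    return [c / n for c in a]

def align(p1, p2, p3, x_dist=20.0):
    """Build a transform taking solved tracker positions into a Z-up
    coordinate system: p1 -> origin, p2 -> (x_dist, 0, 0), and p3 ->
    somewhere on the XY (ground) plane."""
    x_axis = norm(sub(p2, p1))
    z_axis = norm(cross(x_axis, sub(p3, p1)))   # plane normal = up
    y_axis = cross(z_axis, x_axis)
    scale = x_dist / math.sqrt(dot(sub(p2, p1), sub(p2, p1)))
    def apply(p):
        d = sub(p, p1)
        return [scale * dot(x_axis, d),
                scale * dot(y_axis, d),
                scale * dot(z_axis, d)]
    return apply

# Arbitrary "solved" positions for the three chosen trackers.
f = align([1, 1, 1], [3, 2, 1], [2, 4, 1])
print(f([1, 1, 1]))     # ~[0, 0, 0]: the origin tracker
print(f([3, 2, 1]))     # ~[20, 0, 0]: the lock point
print(f([2, 4, 1])[2])  # ~0: third tracker lands on the ground plane
```

The same transform applied to every tracker and the camera path re-seats the whole solve without changing what you see in the camera view.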
Switch to the 3-D tab . Select the magic wand tool on the panel. Change
the mesh type drop-down at left of the wand to create a Pyramid instead of a Box.
Zoom in the Top viewport window so the tracker points are spread out.
In the Top viewport, drag out the base of a rectangular pyramid. Then click again
and drag to set its height. Use the move , rotate , and scale tools to shape
it into a small pyramid located in the vacant field. Click on the color swatch
under the wand, and select a sandy pyramid color. Click somewhere empty in the
viewport to unselect the pyramid (bright red causes lag in LCDs).
On the View menu, turn off Show Trackers and Tracker Appearance/Show 3-D
Points, and switch to the camera viewport. You can do that by changing the Layout to Camera.
Hit Play . If it plays too quickly, select View/Normal Speed. Note that there will
appear to be some jitter because drawing is not anti-aliased. It won’t be present when
you render in your 3-D application. SynthEyes is not intended to be a rendering or
modeling system; it operates in conjunction with a separate 3-D animation application.
(You can create motion-blurred anti-aliased preview movies from the Perspective
window.)
Hit Stop . Rewind to the beginning of the shot (say with Shift-A).
By far, the most common cause of “sliding” of an inserted object is that the object
has not been placed at the right vertical position (height) over the imagery. Sliding is a
user error, not a software error. You should compare the location of your insert to that of
other nearby trackers, in 3-D, adding a tracker at key locations if necessary. You will
also think you have sliding if you place a flat object onto a surface that is not truly flat.
Normally we would place the pyramid more carefully.
Tip: it can be difficult to assess an insert when the shot itself is unstable. To
help, click to select a tracker near your insert. Hit the 5 key to turn on "Pan To
Follow." The camera view will center on this tracker as you play. Zoom the
view to see even better. Click 5 again to turn this mode off.
Click on … at upper right and change the saved file type to Quicktime Movie or
MP4. Enter a file name for the output movie, typically in a temporary scratch location.
Click on Compression Settings and select something appropriate. Click OK to close the
compression settings. Back on the Preview Movie Settings, turn off Show Grid, and hit
Start. The preview movie will be produced and played back in QuickTime Player or
Windows Media Player.
You can export to your animation package at this time, from the File/Export
menu item. Select the exporter for your animation package from the (long) menu list.
SynthEyes will prompt for the export location and file name; by default a file with the
same name as the currently-open file (flyover in this case), but with an appropriate file
extension, such as .fbx for a Filmbox file. See the link above (to the section on
exporting) for more application-specific details; generally the defaults will be a good
start.
You can have SynthEyes produce multiple exports in one operation using the
multi-export configuration dialog.
Note: the SynthEyes Demo version only exports the first six frames and last
frame of the sequence. You can only learn SynthEyes from within SynthEyes.
Use Preview Movie to see the entire shot within SynthEyes, including your
own imported models.
You can save your tracked scene now (or at any earlier time) using File/Save. If
you have auto-save turned on, you will be prompted at the first completion of the save
interval to specify a file name.
This completes this initial example, which is the quickest, though not necessarily
the best, way to go. You’ll notice that SynthEyes presents many additional views,
controls, and displays for detecting and removing tracking glitches, navigating in 3-D,
handling temporarily obscured trackers, moving objects and multiple shots, etc.
In particular, after auto-tracking and before exporting, you should always check
up on the trackers, especially using Clean Up Trackers and the graph editor, to correct
any glitches in tracking (which can result in little glitches in the camera path), and to
eliminate any trackers that are not stable. For example, in the example flyover, the truck
that is moving behind the trees might be tracked, and it should be deleted and the
solution refined (quickly recomputed).
SynthEyes also offers an automated tool for setting up a coordinate system: the
Place button on the Summary panel. If you click it now, it will replace the coordinate
system you have already set up, without affecting the apparent size and placement of
the pyramid (the pyramid can be left untouched if you turn off Whole affects meshes on
the viewport's right-click menu). Although it is quick and easy, the Place button does not
offer the same accuracy as the three-trackers method (which itself is quite quick). The
Place button can be run automatically from the AUTO track and solve button, if the
corresponding preference is turned on.
See Realistic Compositing for 3-D for information about how to make things you
add to the scene look like they belong there.
Tip: Be sure to watch the online tutorial “Supervised Tracking Master Class”
(https://www.ssontech.com/tutembed/SuperMaster.html) which is an
extended presentation of many details of supervised tracking.
Switch to the Trackers room and click Create ( ). It will bring up the
tracking control panel.
Tip: You can create a tracker at any time by holding down the ‘C’ key and left-
clicking in the camera view.
Begin creating trackers at the locations in the image below, by putting the cursor
over the location, pushing and holding the left mouse button, and adjusting the tracker
position while the mouse button is down, looking at the tracker “insides” window on the
control panel to put the “point” of the feature at the center. Look for distinctive white or
black spots in the indicated locations.
Important: Normally you should always specifically configure the tracker size,
search area size, and tracking prediction mode on the Track menu. See the
Supervised Tracking section for details.
You'll also see a tracker mini-view pop up on the camera view. You can control
when and where that pops up via preferences.
Tip: the name of each supervised tracker will be shown in the camera,
perspective, and 3D viewports, by default. You can have none of the names
shown, or the names shown for all trackers, using the controls on the View
menu.
After creating the first tracker, click the green swatch under the tracker mini-
view window and change the color to a bright yellow to be more visible. Or, to do this
after creating trackers, control-A to select them all, then click the swatch.
Tip: there are two layouts for the tracking control panel, a small one
recommended for laptops and a larger one recommended for high-resolution
displays, selected by the Wider tracker-view panel checkbox in the
preferences. If your display is in between, take your pick!
Once the eleven trackers are placed, you can turn off the creation tool by clicking
it again. Type control-A (command-A on Mac) to select all the trackers. On the tracker
control panel, find . In this spinner, called “Key Every,” push and drag
the value from zero to 20. This says you wish to automatically re-key the tracker every
20 frames to accommodate changes in the pattern.
Tip: you should have at least six trackers visible on each frame of the shot,
with substantial amounts of overlap if they do not last through the shot. For
good results, keep a dozen or so visible at all times, spread out throughout
the image, not bunched together. If the shot moves around a lot, you may
need many more trackers to maintain satisfactory coverage.
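One way to sanity-check the coverage rule is to tabulate how many trackers are visible on each frame; the minimum over the shot should stay at or above your target. A standalone sketch with hypothetical frame ranges (not SynthEyes data):

```python
def min_coverage(ranges, length):
    """ranges: list of (first, last) valid frames per tracker,
    inclusive. Returns the smallest number of trackers visible on
    any single frame of a shot `length` frames long."""
    counts = [0] * length
    for first, last in ranges:
        for frame in range(first, min(last, length - 1) + 1):
            counts[frame] += 1
    return min(counts)

# Hypothetical 10-frame shot with staggered tracker lifetimes.
ranges = [(0, 9), (0, 5), (3, 9), (0, 9), (2, 7), (0, 9), (4, 9)]
print(min_coverage(ranges, 10))  # 4: below the six-tracker guideline
```

A result below six would flag frames where more trackers (or longer-lived ones) are needed.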
Hit the Play button , and SynthEyes will track through the entire shot. If you
select an individual tracker, you'll be able to see a tracking figure of merit at the bottom
right in the error curve mini-view—this is only available here before the tracker is solved
(from the graph editor later). The values don't matter, only the shape.
On this example, the trackers should stay on their features throughout the entire
shot without further intervention. You can scrub the time bar back and forth to verify
that. You will notice that one tracker has gone off-screen and been shut down
automatically. (Advanced feature hint: when the image has black edges, you can adjust
the Region-of-interest on the image preprocessing panel to save storage and ensure
that the trackers turn off when they reach the out-of-bounds portion.) If necessary, you
can reposition a tracker on any frame, setting a key and teaching the tracker to follow
the image from that location subsequently.
After tracking, with all the trackers still selected (or hit Control/Command-A), click
the Lock ( ) button to lock them, so they will not re-track (or get messed up) as you
play around.
Now you will align the coordinate system. This is the same as for automatic
tracking, except performed before any solving. See Setting Up the Coordinate System
for more details on why and how to set up a coordinate system. Switch to the
Coordinates room using the toolbar. You might as well switch the Layout to Camera
since the trackers aren't solved yet, and there is nothing to see in the 3D views.
This is a similar guide picture to that from auto-tracking, though the trackers are
in different locations. Click the *3 button, then click on tracker #1. Click the *3 button,
now reading LR, to change it to FB. Click tracker #2. Click tracker #3. You'll get a
popup saying that the coordinate system will be applied once the scene is solved; click
OK.
Now switch to the Solver room. Hit the Go! button. A display panel will pop up,
and after a fraction of a second, it should say Finished solving. Hit OK to close the
popup. You could add some objects from the 3-D panel at this time, as in the automatic
tracking example.
You can add some additional trackers now to increase accuracy. Use (or
shift-F) to go to the end of the shot, and change to backward tracking by clicking the big
on the playbar. It will change to the backwards direction . Click to the Trackers
room, and turn on the Create ( ) button.
Tip: When you ‘play’ the scene, SynthEyes updates the tracking data only for
trackers that are configured for the same tracking direction as the playback
itself. See the directional indicator under the tracker mini-view to see the
direction for the selected tracker(s).
Create additional trackers spread through the scene, for example only on white
spots. Switch their tracker type from a match tracker to a white-spot tracker ,
using the type selection flyout button on the tracker control panel. (Note that the Key-
every spinner does not affect spot-type trackers.)
You'll see a red X in the tracker mini-view that shows the optimum location for the
spot tracker; center on it and it will snap into place. You should adjust the Size spinners
of the tracker as needed, so that the white spot fills approximately 2/3 of the field of
view—otherwise the spot you are interested in will be affected by surrounding features.
Re-center afterwards.
Hit Play to track them out. The tracker on the rock pile may get off-track in the
middle since there are many similar rocks. You could correct it by dragging and re-
tracking, but it will be easiest for this one to keep it as a larger match-type tracker.
Scrub through the shot to verify they have stayed on track, then control-A to select them
all, and turn on the lock .
Switch to the Solver control panel, change the top drop-down box, the solving
mode, from Automatic to Refine, and hit Go! again, then OK.
Go to the 3-D Panel, click on the create wand , change the object type
from Box (or Pyramid) to Earthling, then drag in the Top view to place an earthling to
menace this setting. Click a second time to set its size. In the following example, the
tracker on the concrete pad was used to adjust the height of the Earthling statue—using
trackers as reference is generally required to prevent sliding, unless they were used to
set up the coordinate system. You can use pan-to-follow mode (hit the 5 key to turn it on
or off) to zoom in on the tracker (and nearby feet) to monitor their positioning as you
scrub. The statue was also rotated to face appropriately.
Tip: if you don't see the Planar room, you have upgraded and have your own
room preferences. Right-click in the room area and select Reset All to
Factory. For more info see "I Don't See the Planar Room" in the planar
tracking manual.
On the planar panel, click the creation wand at top left. Then click 3 times
around the corners of the window group at top left center, starting at top-left, then
proceeding to top right and bottom right; the bottom left corner will be last, in a moment.
(Right-click to cancel any time in the middle of this process.)
As you go to create the fourth point, at lower left, you'll notice that much more
interesting things start to happen: a pyramidal display appears, and the fourth corner no
longer follows the cursor and seems to have a mind of its own!
The pyramid is literally that, a 3-D pyramid sitting on top of the rectangle you are
creating in 3-D. As you see, many of the corner locations are impossible or non-useful,
and the pyramid will flicker around wildly to different locations:
Assuming you've placed the first three points reasonably, when you place the
fourth properly, you'll see something that makes sense:
Tip: If your initial corners aren't well placed, you may not be able to place the
fourth where it should go. Hold down ALT/command while moving the fourth;
SynthEyes will adjust the first three slightly for you.
You'll also notice that the tracker mini-view is showing the interior of the planar
tracker, compressed into a nice little box. The 3-D Aspect value gives the aspect ratio of
the planar tracker in the 3-D environment (not the image!).
Now that you've placed the four corners, you can adjust them as needed, either
by moving them in the camera view, or by dragging the corners inside the tracker mini-
view. You can drag the whole tracker, or move the center of the tracker mini-view.
You can re-aim the tracker by dragging the apex, or change the FOV or aspect
ratio with the spinners. There's a lot of functionality waiting for you in the Planar
Tracking manual.
For this quick-start, place the cursor over one of the four corners, hold down shift,
and scale the tracker up to encompass some more of the surrounding window structure.
Release shift as you drag to allow the axes to scale independently. Then drag the whole
thing a little to center it up. (Note that as you scaled or moved the planar tracker, the
FOV and 3-D aspect don't change. The mouse operations occur in the 3-D environment;
they are not 2-D effects.)
With our 3-D planar tracker positioned, it's ready to go, so hit the Play button
at center bottom of the display, or immediately above the channel selector icon ,
above the tracker mini-view. SynthEyes will track it through the shot. The additional
rectangle that appears is the search region.
We now have a 3-D track. Click the lock icon to prevent redundant
calculations and inadvertent changes.
You can scrub the shot in the camera view. You can see the stabilized pattern in
the tracker mini-view, with some motion due to the non-planar covers above the
building's windows, which SynthEyes has ignored.
Change the Layout to Quad, and scrub some more. You'll see the red rectangle
moving around—that is your 3-D plane moving in space (relative to the camera).
Change the Layout to Perspective and scrub; there's your 3-D tracker moving.
Click Lock on the mouse bar at top of the Perspective view.
Wait! Now there are two rectangles! One is the 2-D rectangle, which is located
correctly in the image. The other is the 3-D rectangle, which does not match because the
SynthEyes camera Camera01 (and thus the perspective view) is not using the Field of View
required by the planar tracker.
Again, the planar tracker requires a specific field of view, from its Own FOV
spinner. Any 3-D camera view must have exactly that same FOV, if the 2-D and 3-D
rectangles are to be able to match up.
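Field of view and focal length are tied together by the film-back width, which is why even a small FOV mismatch visibly shifts where 3-D geometry lands in the frame. The relation is standard pinhole math (not SynthEyes-specific code; the 36 mm back width is just an assumed example):

```python
import math

def fov_to_focal(fov_deg, back_width_mm=36.0):
    """Horizontal FOV (degrees) to focal length (mm) for a given
    film-back width. 36 mm is an assumed full-frame example."""
    return (back_width_mm / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

def focal_to_fov(focal_mm, back_width_mm=36.0):
    """Inverse: focal length (mm) back to horizontal FOV (degrees)."""
    return math.degrees(2.0 * math.atan((back_width_mm / 2.0) / focal_mm))

print(round(fov_to_focal(90.0), 2))  # 18.0 mm focal for a 90-degree FOV
print(round(focal_to_fov(18.0), 2))  # 90.0 degrees
```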
Hold down the control key and click the Planar Options tool gear . More
about that in a moment, but for now the Planar Tracking script bar will pop up (also
available normally from the Scripts menu).
Click the Save FOV as Known button (to run that script). Presto! Now there is
only a single rectangle visible. Where did the 3-D rectangle go? It's right there now,
exactly lined up with the 2-D rectangle.
The script copied the Own FOV into the Camera's Known field of view track. You
can bounce back and forth between the Lens and Planar panels to verify that. With the
right FOV in place, both rectangles match up.
Now that the FOVs match, we're ready for a 3D export. Clicking the After Effects
3-D button will launch the exporter (or via the File/Export menu). You can enter "tower"
as the file name, and then the control panel for the After Effects exporter will appear. It's
a sophisticated tool in its own right; see the section After Effects 3-D Javascript
Procedure for details. With the default settings (set your AE version!), the scene will be
exported and launch automatically in After Effects.
That will give you a 3-D composition in After Effects with a 3-D camera and a
moving Placeholder layer that is exactly your planar tracker; you can drop new imagery
or comps into it. Further details are outside the scope of this quick-start.
You can also click AE Corner Pin on the Planar Tracking script bar (or on the
File/Export menu). With this export, you will get a 2-D composition inside After Effects,
which may be simpler if you're more familiar with AE's 2-D environment and only need
to do a simple effect. (This export can export both 2-D and 3-D planar trackers.)
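Under the hood, a 2-D corner pin is a perspective warp determined entirely by the four corner positions. Here is a minimal standalone sketch of the standard unit-square-to-quad projective mapping (generic math, not the actual After Effects or SynthEyes implementation; the corner values are made up):

```python
def corner_pin(c0, c1, c2, c3):
    """Return warp(u, v) mapping the unit square to the quad whose
    corners are c0=(0,0)->, c1=(1,0)->, c2=(1,1)->, c3=(0,1)->.
    Standard projective-mapping construction."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = c0, c1, c2, c3
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    if sx == 0 and sy == 0:            # purely affine case
        g = h = 0.0
    else:
        dx1, dx2 = x1 - x2, x3 - x2
        dy1, dy2 = y1 - y2, y3 - y2
        det = dx1 * dy2 - dy1 * dx2
        g = (sx * dy2 - sy * dx2) / det
        h = (dx1 * sy - dy1 * sx) / det
    a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
    d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0
    def warp(u, v):
        w = g * u + h * v + 1.0
        return ((a * u + b * v + c) / w, (d * u + e * v + f) / w)
    return warp

# Pin the unit square onto a hypothetical tracked quad.
warp = corner_pin((10, 10), (200, 30), (190, 120), (15, 100))
print(warp(0, 0))  # lands on the first corner
print(warp(1, 1))  # lands on the third corner
```

Sampling warp(u, v) over the unit square warps the pinned layer into the tracked quadrilateral, which is what the 2-D corner-pin export drives frame by frame.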
Of course, SynthEyes supports many more applications. Instead of special
exports for each application, click the Export Prep button on the script bar (this is the
script Planar Export Preparation).
Export Prep can set up the scene as a moving camera with a single stationary 3-
D planar tracker, or as a stationary camera with multiple moving 3-D planar trackers. Or
you can run it twice for both.
For the quick-start, select Animate Camera and leave the Plane Orientation at
Front. Click OK. Change the Layout to Quad Perspective and then scrub through the
shot. As you see, the 3-D planar tracker is now stationary, glued to the front view. The
camera is now moving. The planar tracker has been augmented with an identically-
sized and -placed mesh object. (In moving-object mode, there are additional SynthEyes
moving objects, with the mesh parented to the moving object.)
This is a much better configuration for a gravity- and velocity-dependent particle
effect, such as an animation of smoke pouring from the window. You can move the
scene around using the Whole button on the 3-D panel if desired.
This 3-D planar tracker scene, as augmented by the Export Prep script, can be
exported through a wide variety of SynthEyes's existing exporters, such as Cinema 4D,
Filmbox, etc.
Of course, this quick-start has been just that, touching on some of the
fundamentals. The planar tracking feature set is very deep. Here are some of the issues
that require more attention, and that are covered in the planar tracking manual:
Occlusion, where actors, vehicles, etc pass between the camera and the
planar surface being tracked.
Variable lighting conditions
Multiple 3-D trackers in the same scene (all the FOVs must match, since
the camera can only have one at a time)
You can see additional controls on the Planar Options panel, found underneath
the main planar panel on high-resolution displays, or by clicking the gear icon (without
holding control).
The feature set includes quite a few different planar tracker modes, both 2-D
modes and various 3-D modes, including for zoom cameras. You can also supply your
own on-set measurements for the rectangle dimensions (notice the 3-D Size spinner?)
and known camera FOV.
Keep in mind that 3-D planar trackers are just one more tool in the toolbox, not a
panacea or replacement for a solid 3-D track. Planar trackers use much smaller
amounts of information from the scene, and consequently have inherently less
accuracy. But for a lot of quick effects, they can be a great tool!
Again, for full details see the separate Planar Tracking Manual, which can be
found via the Help menu of SynthEyes.
On the toolbar, click on Create/edit pins, FOV is fixed, Width is fixed, and Depth
is fixed, to configure the pinning operation. ("FOV is fixed" will become "FOV changes"
etc).
Holding down Control (to snap to vertices), drag each of the six main corners of
the box into position on the image. Right-drag to zoom in and out and middle-pan as
needed to do this accurately. You can and should reposition the pins once created, but
don't hold down control when doing so—that will delete them. See the tooltip of the
Create/edit pins button for details.
Note that there's a white top 2"/5 cm high at the top back of the truck; your box
should match the top of that, so that it lines up with the top of the side. You can note the
top of the truck by looking at how it occludes the other truck in the background.
After right-clicking Pan 2D to reset the zoom, here's the aligned result.
The box is now aligned on the first frame, and we're ready to track it. Click the
red x on Pinning to close it, then click the GeoH Toolbar button.
On the GeoH toolbar, click GeoH Surface Lasso. Then shift-click the box. A new
moving object will be created, ready for GeoH tracking, and the GeoH panel will light up.
You can close the GeoHTrack toolbar.
To configure which joints should be tracked, click off the X, Y, Z, Pan, Tilt, and
Roll buttons, which changes them from locked to unlocked. All six joints are unlocked
and will be computed as a result of the tracking process.
We're now ready to track. Click the Play button on the GeoH panel or the main
toolbar, and the tracker will track through the shot. On this demo shot the result is quick
and easy.
You can look at the path using the Quad Perspective view, or switch to the
Graphs room (click the word, not the icon).
This shot is processed more easily and more accurately with GeoH tracking than with
planar tracking because of the specific fixed relationship between the side and back of
the truck. If you track either of them individually with a planar track, the track can jitter
unchecked in rotation or depth. With GeoH tracking, the other side provides a cross-
check, if you like, that constrains what either side can do. It is that cross-checking that
makes GeoH tracking powerful.
While here the reference mesh is a simple box, GeoH tracking works with
arbitrary-shaped meshes (ie typically imported from your modeling software),
performing efficiently even with tens or hundreds of thousands of facets.
The Geometric Hierarchy Tracking manual goes into much more detail, including
limitations of GeoH tracking, how to examine and refine the tracking, and how to
configure hierarchical setups.
Click the Full Automatic button on the summary panel to track and
solve the shot. If we wanted, we could track without solving, and stick with 2-D tracks,
but we’ll use the more stable and useful 3-D results here.
Select the Shot/Image Preparation menu item (or hit the P key).
In the image prep viewport, drag a lasso around the half-dozen trackers in the
field near the parking lot at left (see the Lasso controls on the Edit menu for rectangular
lassos). We could stabilize using all the trackers, but for illustration we’ll stabilize this
particular group, which would be typical if we were adding a building into the field.
Important: it is a common novice mistake to set the filter frequency too low
for a given shaky shot in hopes of magically making it super-smooth. When a
shot contains major bumps, much or all of the source image can go off
screen. Start with a higher cut frequency, and reduce it gradually so you can
see the effect.
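The trade-off in this warning can be seen numerically. The sketch below is hypothetical Python, not SynthEyes code; the filter, frame rate, and track values are invented for illustration. It low-pass filters a shaky translation track at two cut frequencies and computes the zoom needed to hide the resulting black bands:

```python
# Illustrative sketch only (not SynthEyes code): how the stabilizer's cut
# frequency trades smoothness against how far the image must be shifted.
import math

def low_pass(track, cut_hz, fps=24.0):
    """Smooth a 1-D translation track with a moving average whose
    window width approximates the given cut frequency."""
    if cut_hz <= 0:
        raise ValueError("cut frequency must be positive")
    window = max(1, int(round(fps / cut_hz)))  # lower cut => wider window
    half = window // 2
    out = []
    for i in range(len(track)):
        lo, hi = max(0, i - half), min(len(track), i + half + 1)
        out.append(sum(track[lo:hi]) / (hi - lo))
    return out

def required_zoom(track, smoothed):
    """Zoom factor needed so the shifted image still fills the frame.
    Offsets are fractions of the image width."""
    worst = max(abs(a - b) for a, b in zip(track, smoothed))
    return 1.0 / (1.0 - 2.0 * worst) if worst < 0.5 else float("inf")

# A shaky pan: slow deliberate drift plus high-frequency jitter.
track = [0.1 * f / 48 + 0.02 * math.sin(f * 2.0) for f in range(48)]

gentle = low_pass(track, cut_hz=6.0)   # removes only the fast jitter
heavy  = low_pass(track, cut_hz=0.5)   # tries to flatten the drift too

# The heavier filter leaves larger residual offsets, so it needs more zoom.
assert required_zoom(track, heavy) > required_zoom(track, gentle)
```

Because the heavier filter also fights the shot's deliberate drift, it leaves larger residual offsets and demands a bigger zoom, which is exactly why you should start with a higher cut frequency and reduce it gradually.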
The image prep window is showing the stabilized output, and large black bands
are present at the bottom and left of the image, because the image has been shifted (in
a 3-D way) so that it will be stable. To eliminate the bands, we must effectively zoom in
a bit, expanding the pixels.
Hit the Auto-Scale button and that is done, expanding by almost 30%, and
eliminating the black bars. This expansion is what reduces image quality, and it should
always be minimized to the extent possible.
Use the frame number spinner in the playbar at the bottom center of the image
preparation dialog to scrub through the shot. The shot is stabilized around the purple
“point of interest” at left center.
You can see some remaining rotation. You may not always want to make a shot
completely stone solid. A little motion gives it some life. In this case, merely attenuating
the jitter frequency becomes ineffective because the shot is not that long.
To better show what we’re going to do next, click the Final button at right, turning
it to Padded mode. Increase the Margin spinner, below it, to 0.125. Instead of showing
the final image, we’re showing where the final image (the red outline) is coming from
within the original image. Scrub through the shot a little, then go to the end (frame 178).
Now, change the Rotation mode to Peg also. Instead of low-pass-filtering the
rotation, we have locked the original rotation in place for the length of the shot. But
now, by the end of the shot the red rectangle has gone well off the original imagery. If
you temporarily click Padded to get back to the Final image, there are two large black
missing portions.
Hit Auto-Scale again, which shrinks the red source rectangle, expanding the
pixels further. Select the Adjust tab of the image preparation window, and look at the
Delta Zoom value. Each pixel is now about 160% of its original size, reducing image
quality. Click Undo to get back to the 129% value we had before. Unthinkingly
increasing the zoom factor is not good for images.
If you scrub through the shot a little (in Padded mode) you’ll see that the image-
used region is being forced to rotate to compensate for the helicopter’s path, orbiting the
building site.
For a nice solution, go to the end of the shot, turn on the make-key button at
lower right, then adjust the Delta Rot (rotation) spinner to rotate the red rectangle back
to horizontal as shown.
Scrub through the shot, and you’ll see that the red rectangle stays completely
within the source image, which is good: there won’t be any missing parts. In fact, you
can Auto-scale again and drop the zoom to about 27%.
Click Padded to switch back to the Final display mode, and scrub through to
verify the shot again. Note that the black and white dashed box is the boundary of the
original image in Final mode.
There's some slight blurring introduced by resampling the image. You can
compare different methods for that: click to the Rez tab, and switch the Interpolation
method back and forth from Bi-Linear to 2-Lanczos. You can see the effect of this
especially in the parking lot.
Tip: the interpolation method gives you a trade-off between a sharper image
and more artifacts, especially if the image is noisy. Bi-linear produces a softer
image with fewer artifacts, Mitchell-Netravali is a little sharper, and then
comes Lanczos-2, and the sharpest, Lanczos-3.
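The sharpness-versus-artifacts trade-off comes from the shape of the resampling kernels. Here is a small sketch using the standard textbook formulas for the bilinear and Lanczos kernels; SynthEyes's actual implementation is assumed, not shown:

```python
# Standard resampling kernels, for illustration only.
import math

def sinc(x):
    """Normalized sinc: sin(pi x) / (pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, a=2):
    """Lanczos kernel with support 'a' (a=2 for Lanczos-2)."""
    return sinc(x) * sinc(x / a) if abs(x) < a else 0.0

def bilinear(x):
    """Triangle (bilinear) kernel."""
    return max(0.0, 1.0 - abs(x))

# Bilinear weights are never negative: a soft but artifact-free blend.
assert all(bilinear(i / 10) >= 0 for i in range(-20, 21))
# Lanczos-2 has negative lobes (e.g. around |x| = 1.5), which sharpen
# edges but can ring or overshoot, especially on noisy images.
assert lanczos(1.5) < 0
```

The negative lobes are what make the Lanczos family sharper than bilinear, and also what produces the extra artifacts on noisy footage.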
To play back at speed, hit OK on the Image Prep dialog. You will probably receive
a message about some (unstabilized) frames that need to be flushed from the cache; hit
OK.
You’ll notice that the trackers are no longer in the “right” places: they are in the
right place for the original images, not the stabilized images. We’ll later see the button
for this, but for now, right-click in the camera view and turn off View/Show trackers and
View/Show 3-D Points.
Hit the main SynthEyes play button, and you will see a very nicely stabilized
version of the shot.
By adding the hand-animated “directorial” component of the stabilization, we
were able to achieve a very nice result, without requiring an excessive amount of zoom.
[By intentionally moving the point of interest, the required zoom can be reduced
further to under 15%.]
If you look carefully at the shot, you will notice some occasional strangeness
where things seem to go out of focus temporarily. This is the motion blur due to the
camera’s motion during shooting.
Doubtless you would now like to save the sequence out for later compositing with
final effects (or maybe a stabilized shot is all you needed). Hit P to bring the image prep
dialog back up, and select the Output tab. Click the Save Sequence button.
Click the … button to select the output file type and name. Note that for image
sequences, you should include the number of zeroes and starting frame number that
you want in the first image sequence file name: seq001 or seq0000 for example. After
setting any compression options, hit Start, and the sequence will be saved.
There are a number of things which have happened behind the scenes during
this quick start, where SynthEyes has taken advantage of the 3-D solve’s field of view
and many trackers to produce better results than traditional stabilizing software.
SynthEyes has plenty of additional controls affording you directorial control, and
the ability to combine some workflow operations that normally would be separate,
improving final image quality in the process. These are described later in the
Stabilization section of the manual.
Tip: Do not check the Zoom checkbox for all your shots, “just in case.” Zoom
processing is noisier and less robust.
Click the Run Auto-tracker button to generate trackers, but not solve yet.
Scrub through the beginning of the shot, and you will see a few trackers on the
moving tree branches at left. Lasso-select them (see the Lasso controls on the Edit
menu), then hit the Delete key.
Click Solve.
After solving, hit shift-C or Track/Clean Up Trackers. Click Fix to delete a few
high-error trackers.
To update a tripod-type shot, we must use the Refine Tripod mode on the Solver
panel. Change the mode from Tripod to Refine Tripod. Hit Go!
Look in the Top and Left views and notice how all of the trackers are located a
fixed distance away from the camera.
SynthEyes must do that because in a tripod shot, there is no perspective
available to estimate the distance. You can easily insert 3-D objects and get them to
stick, but aligning them will be more difficult. You can use SynthEyes’s single-frame
alignment capability to help do that.
For illustration now, go to the 3-D panel and use the create tool to create a
cylinder, box, or earthling in the top view. No matter where you create it, it will stick if
you scrub through the shot. You can reposition it using the other 3-D tools, move,
rotate, and scale, however you like. You can change the number of segments in
the primitive meshes with the # button and spinners on the 3-D panel, immediately after
creation or later.
Once you finish playing, delete all the meshes you have created.
If you have let the shot play at normal playback speed, you’ve probably noticed
that the camera work is not the best.
Hit the P key to bring up the image preprocessor. Use the frame spinner at
bottom to go to the end of the shot.
Lasso-select the visible trackers, those in and immediately surrounding the text
area.
Now, on the Stabilize tab, change the Translation and Rotation stabilize modes
to Filter. As you do this, SynthEyes records the selected trackers as the source of the
stabilization data. If you did this first, then remembered to select some particular
trackers, or later want to change the trackers used, you can hit Get Tracks to reload the
stabilization tracking data.
Decrease the Cut Freq(Hz) spinner to 1.0 Hz.
Click Auto-Scale. If you click to the Adjust tab, you will see that it is less than a
5% zoom.
Go to the Rez tab, experiment with the Interpolation if you like; the
default 2-Lanczos generally works well at a reasonable speed.
Hit OK to close the Image Preprocessor.
Switch to the Camera View.
Type J and control-J to turn off tracker display.
Select Normal Speed on the View menu.
Use the output tab on the image preprocessor to write the sequence
back out if you wish.
If you want to insert an object into the stabilized shot, you need to update the
trackers and then the camera solution. On the Image Preprocessor’s Output tab, click
Apply to Trkers once. Close the image preprocessor, then go to the Solver panel,
make sure the solver is in Refine Tripod mode, and click Go!
Low Perspective (Object) shot. Like Nearly-Nodal, but when an object is being
tracked. In bad situations, Inverted Perspective can result. Addressed with
Known lenses and Model-based tracking.
Mixed tripod. A shot with both translating and tripod sections: the camera moves down
a dolly track, reaches the end, then pans around for a while, for example. Use
Hold Mode features.
Low-information tripod shot. Typically a nearly-stationary hand-held shot. The
camera moves (reorients) somewhat. There's insufficient information to
determine field of view; use a Known lens.
Traveling shot. The camera moves forward continuously for long distances, for
example on the front of a car. If the motion is very straight, estimating field of
view will be difficult (think of the original Star Trek opening). Possible cumulative
errors may require external (GPS) survey data to resolve.
Survey shot. The shot is a collection of digital stills, rather than film/video. Typically
used to measure out a set from a wide range of vantage points. Special survey
shot features simplify handling these.
Zoom shot. The camera zoom was changed during the shot. Turn on zoom processing
on the Summary or Solver panel.
Hidden Zoom. Watch out: sometimes shots will have hidden zooms, for example,
intentionally to compensate when a dolly runs out of track, or unintentionally, as a
result of focus breathing. Turn on zoom processing on the Summary or Solver
panel.
Green-screen shot. The background is a big green (or blue) backdrop. It must have
trackable features on it, and out in the foreground! Beware situations where the
camera moves in to focus on an actor... leaving no markers in sight. The green-
screen panel can help auto-track these quickly.
Stereo shot. You're given left-eye and right-eye cameras to track. Some people think
stereo should be less than twice as hard, but really it's more like three times as
hard, since you must match each camera to the world, plus match the two
cameras to each other.
Occluded shot. Typically, actors or cars are moving around in front of the background.
The camera must not include any trackers on moving objects like this (if they are
to be tracked, they will be tracked separately as moving objects). Delete
unwanted trackers or use the roto-masking or alpha facilities to handle these.
Windy shot. The shot contains grass or trees with branches blowing in the wind, or
shots with moving water. Such features are not reliable and must not be tracked.
Use Zero-weighted trackers to approximately locate them, roto- or alpha-
masking, or in an emergency, many short-lived trackers.
Flickery shot. The shot contains rapid changes in overall light level, typically from
explosions or emergency lights, possibly from candlelight or torches. Handle
using high-pass filtering in the image preprocessor.
Noise Amplifier. All the features being tracked are much further away than the object
being inserted. No matter what, physics says the insert will be unstable: there's
no information to say what happens up close. Filtering will be required. (An
advanced technique is to put a tracker on an up-close inserted object, retrack it,
manually adjust that track, then refine the overall solve.)
Fisheye shot. The shot features visible lens distortion (of any kind). These require
substantial additional workflow. If at all possible, shoot and calibrate using lens
grid autocalibration.
Perfect Push-In. The camera moves forward in a straight line exactly along the optic
axis—the direction it is looking. Most likely to occur on vehicle or dolly shots.
There is no information available in these shots to estimate field of view. You
will need to supply a value manually, from an on-set measurement or a best
guess.
Planar Shot. All the trackable features lie on, or nearly on, a single plane in 3-D,
typically a flat floor or back wall (especially for green-screen shots). That's a
problem, because there is no 3-D information in that tracker data; the trackers are
basically all redundant: this is actually the underlying mathematics used for planar four-
corner pinning. A supplied field of view is helpful. Be sure to track any off-plane
features, ie using supervised tracking.
Rolling-shutter shot. A shot from a CMOS camera that contains enough rolling shutter
distortion that it must be handled carefully. Rolling shutter is present in all CMOS
shots and must frequently be addressed. In very noticeable forms, you'll see
vertical lines made diagonal, or objects squished or stretched vertically.
Jello shot. A rolling-shutter shot where significant high-frequency vibration is present
(say more than 1/10th the frame rate of the shot). Jello shots are unusable.
Note that though we've given you some jumping-off links to look at, working
through the manual is going to be necessary. You need to learn the fundamentals that
underlie all the shot types; they are not repeated everywhere.
Basic Operation
Before describing the match-moving process in more detail, here is an overview
of the elements of the user interface, beginning with an annotated image. Details on
each element can be found in the reference sections.
Coordinate Systems
SynthEyes can operate in any of several different coordinate system alignments,
such as Z-up, Y-up, or Y-up-left-handed (Lightwave). The coordinate axis setting is
controlled from Edit/Scene Settings; the default setting is controlled from Edit/Edit
Preferences.
The viewports show the directions of each coordinate axis, X in red, Y in green, Z
in blue. One axis is out of the plane of the screen, and is labeled as t (towards)
or a (away). For example, in the Top view in Z-up mode, the Z axis is labeled Zt.
SynthEyes automatically adjusts the scene and user interface when you change
the coordinate system setting. If a point is at X/Y/Z = 0,0,10 in Z-up mode, then if you
change to Y up mode, the point will be at 0,10,0. Effectively, SynthEyes preserves the
view from each direction: Top, Front, Left, etc, so that the view from each direction
never changes as you change the coordinate system setting. The axes shift, and with
them the coordinates of the points and cameras.
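As a sketch of what such a conversion involves, here is one common right-handed Z-up/Y-up axis swap, chosen to match the (0, 0, 10) to (0, 10, 0) example above; SynthEyes's internal convention is assumed and may differ:

```python
# Hypothetical illustration of a Z-up <-> Y-up axis swap, not SynthEyes code.
def z_up_to_y_up(p):
    """Convert a point from Z-up to right-handed Y-up coordinates.
    The up value moves from Z to Y; the old Y axis becomes -Z."""
    x, y, z = p
    return (x, z, -y)

def y_up_to_z_up(p):
    """Inverse conversion, from Y-up back to Z-up."""
    x, y, z = p
    return (x, -z, y)

# Matches the manual's example: (0, 0, 10) in Z-up is (0, 10, 0) in Y-up.
assert z_up_to_y_up((0, 0, 10)) == (0, 10, 0)
# Round-tripping returns the original point.
assert y_up_to_z_up(z_up_to_y_up((3, 4, 5))) == (3, 4, 5)
```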
Consequently, you can change the scene coordinate axis setting whenever you
like, and some exporters do it temporarily to match the target application.
Rotation Angles
SynthEyes uses pan, tilt, and roll angles to describe the orientation of cameras
and objects. With an upright camera looking at the Front view, the pan, tilt, roll angles,
are 0,0,0. Because 10 degrees and 370 degrees refer to the same orientation, rotation
angles have some subtleties.
You can hand-animate the rotation tracks of cameras and moving objects using
unlimited rotation angles, which prevents crazy movements—such as a 358 degree
motion from +179 to -179 instead of the intended 2 degrees.
The 3D solving process and much of the math associated with tracking produce
principal values, ie -180 to +180 (generally the math doesn't use rotation angles at all!).
At the completion of solving, the sequence of rotations is processed to create better-
behaved unlimited rotation angles, which you can edit. But note that it can be possible
to make changes that result in a different set of angles. For example if you take a
tracked shot with a big pan and extend the shot earlier into a prior pan, the new initial
pan will start at 0, which can cause the original section to be 360... These situations
should be rare.
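The post-solve processing described above is essentially angle unwrapping. A minimal sketch of the idea, hypothetical and not the actual SynthEyes algorithm:

```python
# Hypothetical sketch of angle unwrapping, for illustration only.
def unwrap(angles_deg):
    """Convert principal-value angles (-180..+180) into a continuous
    'unlimited' track by choosing, at each frame, the representation
    closest to the previous frame's angle."""
    out = [angles_deg[0]]
    for a in angles_deg[1:]:
        prev = out[-1]
        # Shift by multiples of 360 until within 180 degrees of prev.
        while a - prev > 180:
            a -= 360
        while a - prev < -180:
            a += 360
        out.append(a)
    return out

# A pan crossing the +/-180 seam: the principal values jump by ~358
# degrees from +179 to -179...
raw = [170, 175, 179, -179, -175]
# ...but the unwrapped track moves smoothly through 181, 185.
assert unwrap(raw) == [170, 175, 179, 181, 185]
```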
Applying various mathematical operations to the track can also result in different
unlimited values, especially reorienting the entire path using Whole mode. (This also
necessarily forces frames that have keys on individual axes to be keyed on all axes
instead).
Menus
When you see something like Shot/Edit Shot in this manual, it is referring to the
Edit Shot menu item within the Shot section of the main menu.
SynthEyes also has right-click menus that appear when you click the right mouse
button within a viewport. The menu that appears will depend on the viewport you click
in.
The menus also show the keyboard equivalent of each menu item, if one is
defined.
Key Mapping
SynthEyes offers keyboard accelerators for menu entries, various control
buttons, and Sizzle scripts, as described in the keyboard manager reference section.
Keyboard accelerators for menu operations are accessed with control-whatever
on Windows and Linux, and command-whatever on macOS. So you'll see something
like control/command-A meaning control-A on Windows/Linux, command-A on Mac.
Similarly, anything referring to the ALT key on Windows/Linux refers again to the
command key on the Mac. ALT-left means hold down the ALT key and click the left
mouse button; on a Mac use command with the left mouse button. Note that the Mac's
Opt key is not used in SynthEyes; command is used instead.
You can change the keyboard accelerators from the keyboard manager, initiated
with Edit/Edit Keyboard Map or by changing the keybd14.ini file in your File/User Data
Folder.
Note that the tracker-related commands will work only from within the camera
view, or when one is visible, so that you do not inadvertently corrupt a tracker.
On Windows, you can also use Windows’s ALT-whatever acceleration to access
the menu bar, such as ALT-F-X to exit.
The active tracker host camera or object is separate from the usual idea of
selection. Just because an object is the active host does not mean that it or its trackers
are selected (for example, to be deleted by the Delete key).
Customer Care Buttons
Three buttons at right of the toolbar perform Customer Care Center functions
such as messages and upgrades:
Msg for messages about new SynthEyes releases, scripts, tutorials, etc.
D/L to show whether there is a new release ready for download, and to
initiate that download, and
Sug to open the web browser to a page that allows you to make feature
suggestions (be sure to see whether they already exist!).
These features require that you have supplied active support login information,
which arrives as part of the SynthEyes permanent license, or directly via the Help/Set
Update Info menu item, that you have "Check for Updates" set to Daily or On Startup,
and that your system not be blocking SynthEyes from internet access via a firewall.
All the way to the right on the main toolbar you can also find the IA button, which
launches the Synthia Instructible Assistant. See the Synthia PDF on the Help menu.
Undo/Redo
SynthEyes includes full undo and redo support via the Undo/Redo buttons and
via Edit/Undo and Edit/Redo menu items.
Control/command-Z immediately undoes the most recent operation; Control-Y or
Command-shift-Z immediately redoes the most recent operation. You can always see
the name of the most recent operation from the tooltip of the undo or redo button.
Right-click Undo/Redo History
Right-clicking the Undo or Redo buttons brings up a menu showing available
undo and redo operations. If you click on the third item down, for example, the three
most-recent operations will be undone.
NOTE: If you find yourself having to think too much about left or right clicking,
you can make the undo and redo buttons always bring up the list of
operations, by turning on the Show undo menu, don't undo yet preference in
the user interface section. If you do that, you can still get instant action using
the keyboard shortcut (read on).
Past the first ten (preference setting) elements of the undo or redo menus,
successive items with the same name are condensed to a single item, for example
"Select tracker (*3)" represents 3 consecutive tracker selections. As those items reach
the first ten items of the undo or redo menu, each item will be shown individually.
There are a number of controls for how many undo operations are possible,
based on the number of operations, but also on the amount of memory required to store
the operations (some operations such as the Image Preprocessor's "Apply to trackers"
can require substantial amounts of memory on long complex shots). Preferences in the
Undo/Redo area control this.
Multi-way Redo Branching
We've probably all had the experience with various applications of temporarily
undo-ing a few times to see what we'd done previously, then inadvertently making a
small change which wipes out the things we were planning to redo. Wouldn't it be better
if we could somehow avoid that?
And taking it a step further, wouldn't it be great if we could try things a few
different ways, be able to switch among them, and then be able to continue on with the
most satisfactory approach?
With multi-way redo branching in SynthEyes, you can! This is a really cool
concept in SynthEyes, and we'll run through some of the details here. Conveniently, it
hides behind the scenes and pops up only when it's relevant. As you experiment with it,
you'll probably better understand the details here. Because the multi-way redo capability
is explicitly designed to preserve information, it's pretty harmless to experiment with it.
Note: the first time that multi-way branching could be used, ie you make a
change when there are possible redo items, you'll be asked whether you want
to enable multi-way branching. You can change this choice at any time in the
Undo section of the preferences.
Warning: As cool as this capability is for being able to try different things,
note that like normal Undo/Redo functionality, the multiway redo information
is not preserved when you save a file and then reopen it. So you shouldn't
rely on it for things you want to save.
To understand and use multi-way redo, you should be familiar with the right-click
Undo/Redo history feature, which you'll use to see what the undo and redo history is like, and
what is happening with the branches. The history menus show the undo items created
each time you change a control in SynthEyes. The multi-way branch feature
manipulates those histories!
We'll use a simple non-useful but easy example here to describe how multiway
branching works. We'll use the checkboxes on the Solver panel, but in reality you could
be using any of the controls in SynthEyes. And instead of just a single operation (such
as Slow but sure), you can have an entire sequence of operations.
Consider this sequence of clicks on the Solver panel, after opening SynthEyes:
1. Slow but sure
2. Undo
3. Constrain
4. Undo
With multi-way branch disabled, you can only Redo, and the Constrain checkbox
would become checked again. With multi-way branch enabled, the Undo right-click list
contains not only Constrain, but below it "(2-way branch)". You have the option to do a
second undo and then a redo (more on that later), and the following dialog appears:
Each box shows an undoable or redoable action. The line marked "You are here"
shows the current state of SynthEyes, what you see in the viewports and if you save
the file. To the left of it are all the items in the Undo list, to the right are all the items in
the Redo list. Clicking Redo will execute box R1 (moving You are here right one box).
Each circle represents an undo item that is a multi-way branch. Clicking Undo (moving
You are here left), then Redo, or selecting (3-way branch) in the right-click Undo list, will
execute 3WB, resulting in a dialog box with four options, one to go to the end of each of
the 3 branches (Earlier Branch #1, Earlier Branch #2, and the current one), plus an
option to stay at the beginning of the current branch, ie right back to You are here. If you
make a change from You are here, the current REDO LIST will be made into a new
Earlier Branch #3, and the REDO LIST will be empty. New actions that you take will
appear at the top of the UNDO LIST between 3WB and You are here. If you undo them,
they'll move onto the REDO list.
Notice that Earlier Branch #1 itself contains a multiway branch, 2WB. If you
select Earlier Branch #1 from 3WB, you'll get a second popup dialog asking you
whether you'd like to take Earlier Branch #1A or #1B. If you check the "To leaf without
questions" checkbox when you select Branch #1 from 3WB, that second 2WB dialog
(and any others) won't be shown: the main (most recent) branch will be taken on any
subsequent multi-way branches encountered. In this case, Earlier Branch #1B will be
taken.
Similarly, if you select an item in the Redo list, any intervening multi-way
branches will not result in a Select Branch dialog. In our example, selecting R5 on the
right-click-Redo list will not result in a popup from the xWB multi-way branch.
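Conceptually, branch creation works like the tiny sketch below (a hypothetical illustration only, not SynthEyes internals), using the Slow but sure/Constrain click sequence from above:

```python
# Hypothetical sketch of a branching undo history, not SynthEyes internals.
class BranchingHistory:
    def __init__(self):
        self.undo_list = []   # actions behind "You are here"
        self.redo_list = []   # actions ahead of "You are here"
        self.branches = []    # earlier redo lists preserved as branches

    def do(self, action):
        if self.redo_list:
            # Instead of wiping the redo list, keep it as a branch.
            self.branches.append(self.redo_list)
            self.redo_list = []
        self.undo_list.append(action)

    def undo(self):
        self.redo_list.insert(0, self.undo_list.pop())

    def redo(self):
        self.undo_list.append(self.redo_list.pop(0))

h = BranchingHistory()
h.do("Slow but sure")
h.undo()
h.do("Constrain")
h.undo()
# With branching enabled, "Slow but sure" survives as an earlier branch
# even though a different change was made (and undone) after it.
assert h.branches == [["Slow but sure"]]
assert h.redo_list == ["Constrain"]
```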
Here are some additional features and details.
If the branch contains a single redo item, that item name is shown as the name of
the branch. If there are multiple items, the first name is shown, plus the number of
additional items.
You can hover the mouse over name of a branch to see a larger list of redo
operations that comprise the branch. (There are two preferences in the Undo section
that control the maximum number of operations that are shown. They default to
showing the first seven and last four.)
TIP: Move the mouse from outside the list directly onto one of the list names!
Once the tooltip opens with the list of redo item names, that's the one you get;
it won't change even if you move the mouse around after that.
You can double-click a branch name to bring up a dialog that allows you to
create your own name for the branch, so you can name one "Easy Way" and one "Hard
Way", for example.
Branches that require over 1 MB of storage will show the amount as a suffix to
the branch name, similar to the right-click Undo and Redo lists.
You create new branches by making any change when the Redo list is not
empty. The branch will be consolidated with any existing multi-way branch at the top of
the Undo or Redo list. In the example, a new branch would be consolidated into 3WB
either from the You are here location, or from the location before it (ie by Undoing
once).
You can prune (delete) your branches by selecting a branch from the list, then
pressing the Delete key. This can be worthwhile if large autotracks, solves, or Apply To
Trackers operations are no longer needed (which can be a GB or more).
Within modal dialogs (such as the image preprocessor), you can use multi-way
branching, but those branches are removed and only the current branch remains once
you close the dialog box. (Undo and redo buttons in modal dialogs typically don't have
full right-click undo and redo history menus.)
Control Panels
At any time, one control panel, such as the Summary panel, Tracker panel, or
Solver panel, is displayed in the control panel area along the left side of the user
interface. Each control panel features a variety of different buttons, checkboxes, etc
regarding some particular area of functionality.
In the graphic above, the Trackers room has the Tracker control panel. Normally
it brings up a single camera view; we changed it for the graphic above. You can use any
viewport configuration with a given room, just by selecting that view.
Use the room bar across the top of the user interface to select which control
panel is displayed. SynthEyes uses rooms and control panels as a way to organize all
the many individual controls. Each control panel corresponds to a particular task, and
while that control panel is open, the mouse actions in the viewports, and the keyboard
accelerator keys, adapt to help accomplish a particular task. The rooms are arranged so
that you can often start at the left and work to the right.
For some rooms, there is a secondary control panel under the first for
convenience. Don't be alarmed if the secondary control panel appears to be cut off on
small monitors! You can always open that panel's own room directly instead.
Viewport Layouts
A layout consists of one or more viewports (camera view, perspective view, top
view, etc), shown simultaneously in a particular arrangement. Select a layout with the
drop-down list on the main toolbar.
A tab at top right of each pane switches to a single full-size version of that
pane, or back again if it is already full size.
You can adjust the relative sizing of each pane by dragging the gutters between
panes.
You can change any pane of a layout to a different type by clicking the tab
just above the upper left corner of the pane, creating a custom layout. You can keep on
changing panes and that will continue to affect the same custom layout. To create a
new custom layout, switch to a different existing non-custom layout and begin
customizing it.
To name your custom layouts and create different pane arrangements, use the
viewport layout manager (on the Window menu). Your layouts are stored in the
SynthEyes file; you can also set up your own default configurations (preferences) using
the layout manager.
Some viewports have several flavors, for example Camera, LCamera,
Perspective, Perspective B, SimulTrack, RSimulTrack, Top, and Right. These flavors
are different settings for the same underlying view type; for example, LCamera and
RCamera are both camera views, but one initializes to look at the left eye of a stereo
shot and the other at the right. Each flavor preserves various settings when the overall layout changes, so
that when you switch from a Quad view to a Top view, the Top view will continue to look
the same. For this reason, if you have a layout with two perspective views, you should
use Perspective and Perspective B. If you used two Perspective B's, then changed
layout and changed back again, both Perspective B's would have the same configuration:
one of the two originals, chosen at random. For good layouts for stereo shots, use an
appropriate combination of LCamera, RCamera, LSimulTrack, and RSimulTrack rather
than the plain Camera or SimulTrack.
You have quite a lot of control over the details of this in SynthEyes:
If you click the text of a room name, both the panel and view configuration will
change, ...but...
If you click the icon of a room, or hold down SHIFT, just the panel will change. (Icon-
specific behavior enabled by the No layout change upon icon click preference.)
If you change the view configuration for a room, SynthEyes remembers that change
so that you will get the new configuration if you go to a different room then come
back, ....but...
If you hold down the SHIFT key when you select the new view configuration,
SynthEyes will not remember the change.
You can use the room manager (right-click a room and select Edit Room) to change,
add, or delete rooms.
You can use the room manager to add completely new rooms, for example for
stereo or to have different pre-existing view configurations for the same panel.
If you don't want SynthEyes to ever remember your manual viewport changes, use
the No room change upon layout change preference. Use the room manager to
make changes.
If you want a very simple approach: with the No layout change upon room change
preference, rooms select only the panel, and never change the viewport.
The rooms are stored in each .sni file, and you can save a configuration as your
preferences via right-click/Save All as Preferences.
Via a preference, you can control whether the room bar shows text only, icons
only, or the default—both text and icon as shown.
Floating Panels
Programs such as Photoshop and Illustrator have many different palettes that
can appear and disappear individually. If you are more familiar with this style, or it is
more convenient for a particular task, you can select the Window/Many Floating
Panels option, and have any number of panels open at once. Keep in mind that only
one panel is still primarily in charge, and there may be unwanted interactions between
panels in some combinations. The primary panel is still marked in hot yellow, while the
other panels are a cooler blue.
You can also keep the current panel floating, instead of locked into the base
window, with the Window/Float One Panel menu item. Floating panels are preserved
and restored across runs and files.
items from a list, and editable selectors, where you can not only
select something, but change its name. Note that selectors are sometimes called
comboboxes or dropdowns.
As you can see, selectors and editable selectors have the triangle on the right,
denoting the drop-down menu that allows you to select the item. Depending on a
preference, the list might appear below the selector (Windows style) or be placed so that
the currently selected item sits directly over the selector (macOS style). On Linux, the
default preference selects the macOS style.
Editable selectors may be identified by the vertical line at their right interior,
denoting that the selector is partitioned into two parts. To change the name of
something, double-click the text itself, e.g. "Camera01".
When a list is dropped down, the highlight follows the mouse position without
changing the selection, as long as the mouse is inside the dropdown list. To use
keyboard keys (up arrow, down arrow, home, end) or the mouse scroll wheel, move the
cursor outside the dropped-down list, in order to prevent a conflict between the mouse
position and key or scroll operation.
When a button is connected to an animated track, such as a tracker enable,
right-clicking the button will delete a key, shift-click will truncate keys from the current
frame on, and control-click will delete all keys.
Spinners
Spinners are the numeric fields with small arrows at each end. You can
drag rightwards/upwards or leftwards/downwards from within the spinner to rapidly
adjust the value, or click the respective arrow to change it a little at a time.
If you click on the number displayed in a spinner, you can type in a new value. As
you enter a new value for a spinner, the keystrokes do not take effect immediately, as
that tends to make things fly around inconveniently. When you have finished entering a
value, hit the Enter key to commit it to SynthEyes.
Spinners show keyed frames with a red underline. You can remove a key or
reset a spinner to a default or initial value by right-clicking it. If you shift-right-click a key,
all following keys are truncated. If you control-right-click a key, all keys are removed
from the track.
Viewports
The main display area can show a single viewport, such as a Top or Camera
View, or several independent viewports simultaneously as part of a layout, such as
Quad. Viewports grab keyboard and mouse focus immediately when the mouse enters
them (except if you are entering text, see below), so you can move the mouse into a
view and start clicking or mouse-wheel-zooming or using accelerator keys immediately.
Many viewports use the middle-mouse button for panning, and the scroll wheel or
right-mouse-drag to zoom. A 3-button+scroll mouse is highly recommended for effective
SynthEyes use, as trackpads are not designed for this! For short-term use without a
middle-mouse button, e.g. with a trackpad, see the section on Middle Mouse Button
Problems. Note that the ESC key can be used to terminate mouse operations in lieu
of right-click.
You can change methods depending on what you are doing at any particular time
using the settings on the View menu and the corresponding Preferences (for new
scenes).
Play Bar
The play bar appears at the bottom of the SynthEyes window for compatibility
with other apps, though for efficiency it can be moved to the top by a setting on the
preferences panel. A duplicate appears at the top of the Tracker control panel as well.
The play bar features play, stop, frame forward, and other controls, as well as the
frame number display. The play button changes to show the current playback direction.
Frames are numbered from 0 unless you adjust the preferences.
Tip: SynthEyes normally plays back "as fast as possible" one frame at a time,
which is most productive for tracking. If you want to play back at the nominal
frame rate, adjust the settings on the View Menu (temporarily).
Time Bar
The time bar shows the frame numbers, a current frame marker (blue vertical
line), key markers (black triangles at bottom of the timebar), enable status (solid
horizontal blue line at bottom of timebar), and either cache status (pink if the frame is not
currently in-cache) or a tracker-count status showing whether there are enough trackers. Not shown are
playback range markers, which are small green or red triangles at the top of the timebar
on the beginning and end of the range. Also not shown are darker portions of the
timebar for frames past the beginning or end of the shot.
The timebar background is selected from the View/Timebar background
submenu, to be either the Show cache status or Show tracker count status. A
preference controls the initial setting.
Floating Views
Floating viewports can be created for the camera, graph editor, hierarchy,
perspective, and/or SimulTrack views with the corresponding Window/Floating... menu items.
You can move floating windows to a second or third monitor, and they will be restored
when you close and reopen SynthEyes. While by default for convenience only a single
floating instance of each type can be created (and toggled in and out of existence by the
menu item), you can create multiple floating windows of each of these types by
changing the Only one floating ... preferences in the User Interface section.
Tooltips
Tooltips are helpful little boxes of text that pop up when you put the mouse over
an item for a little while. There are tooltips for the controls, to help explain their function,
and tooltips in the viewports to identify tracker and object names. If you aren't sure what
a control does, check for a tooltip!
The tooltip of a tracker has a background color that shows whether it is an
automatically-generated tracker (lead gray), or supervised tracker (gold).
Status Line
Some mouse operations display current position information on the status line at
the bottom of the overall SynthEyes window, depending on what window the mouse is
in, and whether it is dragging. For example, zooming in the camera view shows a
relative zoom percentage, while zooming in a 3-D viewport shows the viewport’s width
and height in 3-D units.
Color Scheme
You can change virtually all of the colors in the user interface individually, if you
like. For example, you can change the default tracker color from green to blue, if you
are constantly handling green-screen shots. See Keeping Track of the Trackers for
more information.
Click-on/Click-off Mode
Tracking can involve substantial sustained effort by your hands and wrists, so
proper ergonomics are important to your workstation setup, and you should take regular
breaks.
As another potential aid, SynthEyes offers an experimental click-on/click-off
mode, which replaces the usual dragging of items around with a click-on/move/click-off
approach. In this mode, you do not have to hold the mouse buttons down so much,
especially as you move, so there should be less strain (though we cannot offer a
medical opinion on this; use at your own risk and discretion).
Note: we're not sure how often, if at all, click-on/click-off is used, and don't
test it regularly. If you are using it and have trouble, contact us.
You can set the click-on/click-off mode as a preference, and can switch it on and
off whenever convenient from the Window menu.
Click-on/click-off mode affects only the camera view, tracker mini-view, 3-D
viewports, perspective window, and spinners, and affects only the left and middle
mouse buttons, never the right. This captures the common needs, without requiring an
excess of clicking in other scenarios.
Scripts
SynthEyes has a scripting language, Sizzle, and uses Sizzle scripts to implement
exporters, some importers, and tool functions. While many scripts are supplied with
SynthEyes, you can change them as you see fit, or write new ones to interface to your
studio workflow.
You can find the importers on the File/Importers menu, exporters on the
File/Exporters menu, and tool scripts on the main Script menu.
The File menu presents the last few exporters for quick access, and an Export
Again option (with keyboard accelerator) for the fastest access of all. Similarly, the Script menu
presents the last few scripts, with a keyboard accelerator for the last one. You can
adjust the number of most-recent exports and scripts kept from the Save/Export section
of the preferences.
On your machine, scripts are stored in two places: a central folder for all
SynthEyes users, and a personal folder for your own. Two menu items at the top of the
Script menu will quickly open either folder.
SynthEyes mirrors the folder structure to produce a matching sub-menu
structure. You can create your own “My Scripts” folder in your personal area and place
all your own scripts in that area, to be able to quickly find your scripts and distinguish
them from the standard system scripts.
Similarly, a studio might have an “Our Shared Scripts” folder in the shared
SynthEyes scripts folder. The entire scripts folder can be moved to a shared drive using
the Scripts folder preference.
Scripts in the user's area are listed with an asterisk (*) as a prefix. If a user's
script has the same name as a system script, thus replacing it, the user's script will be
used, and it is prefixed with three asterisks (***) to note this situation.
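The lookup rule can be sketched as follows. This is an illustration of the precedence described above, not SynthEyes's actual implementation, and the function and folder names are hypothetical:

```python
from pathlib import Path

def resolve_script(name, system_dir, user_dir):
    """Illustrative sketch of the override rule: a user script with the
    same name as a system script replaces it (shown with a *** prefix);
    other user scripts are shown with a single * prefix; plain system
    scripts get no prefix."""
    sys_path = Path(system_dir) / name
    usr_path = Path(user_dir) / name
    if usr_path.exists():
        prefix = "***" if sys_path.exists() else "*"
        return usr_path, prefix
    if sys_path.exists():
        return sys_path, ""       # unmodified system script, no prefix
    return None, None             # script not installed at all
```

So an exporter present in both folders would run from the user folder and appear as "***Name" in the menus.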
Script Tidy Up
We release new or updated scripts for SynthEyes fairly often in between the
overall SynthEyes releases, and customers can install those in different locations.
Customers can also produce their own new or modified versions of scripts. And
sometimes we may reorganize the script folders to eliminate long menus and make it
easier to find scripts.
As a result, the system and user script folders must be monitored for possible
conflicts; that's the role of the Tidy Up Scripts tool, which runs every time SynthEyes
starts, and in response to the File/Tidy Up Scripts menu item.
Tidy Up Scripts looks at all the scripts, and compares them to its list of what
should be where. Each file found falls into one of the following categories:
Distribution. These are files that are part of the current standard SynthEyes distribution
and are located in the proper place in the file structure. Since they are correct,
you don't see them listed by or affected by Tidy Up Scripts.
Novel. These are user-supplied files that are not part of the SynthEyes distribution, and
may be located in the system or user script folders. Scripts in this category will
later have a single asterisk (*) before their name in the SynthEyes Script, Import,
and Export menus. These scripts are listed in Tidy Up Scripts only if you check
the box "Include novel (user-supplied) files in both system and user areas."
Overrides. These are files located in the user script area that match in name and sub-
folder placement a file in the SynthEyes distribution. These are typically
modified, updated, or possibly old files, and in that case they will override and
replace the action of the built-in SynthEyes script. That could be good or bad,
depending on the circumstances. Scripts in this category will later have three
asterisks (***) before their name in the SynthEyes Script, Import, and Export
menus.
Mis-placed. These are files that match up to files that are part of the SynthEyes
distribution, but they are located somewhere other than where they should be,
and they aren't in place to be a proper override. In most cases these files should
be deleted! Mis-placed scripts will be invisible and ignored by SynthEyes.
As you can see, these categories rank from good to bad. The Tidy Up Scripts
tool will pop up automatically when SynthEyes starts only if there are Mis-placed scripts:
those are the worst kind and they should always be fixed. Tidy Up Scripts doesn't try to
second-guess what's happened: it won't take action (such as deleting scripts) on its
own; it lets you decide.
Novel user scripts are generally harmless: they are scripts that you've
created. They might be installed in the system area so everyone can access them, or in
your user area. In some cases a script might be obsoleted and removed from the
SynthEyes distribution, in which case your manually-installed version of it might be
flagged as a novel user script. (If it had been installed by the SynthEyes installer, the
system will generally remove it automatically.)
Tidy Up Scripts presents you with information to help decide what you need to do
to tidy up your scripts. You can get some additional information by double-clicking any
of the lines in the system or user script areas.
Tidy Up Scripts can delete scripts for you at your command, either by deleting
specific categories of scripts, or by deleting individual scripts after selecting them and
pressing the keyboard Delete key. If you need to do something more subtle, such as
renaming or moving a script, you'll need to do that manually using your system's file
explorer.
Note that SynthEyes may be unable to delete files in the system folder area,
since system administrative privileges may be required. In that case, again you (or your
administrator) will need to delete them using your system's file explorer after escalating
to a sufficient privilege level.
The Tidy Up Scripts dialog will continue to appear each time you start
SynthEyes, if there are mis-placed scripts. That is intentionally annoying: a clear signal
that we think it's very important for you to investigate and correct the situation.
Important: Script bars are stored and deployed as small text files. When you
create a new script bar, you must store it in either one of the system-wide
script folders or your user script folder. To see the proper locations, use the
Script/User script folder or Script/System script folder menu items to open the
File Explorer or Finder to the corresponding folder.
The Camera and Perspective views have their own kind of floating toolbars as
well, though they are limited to existing built-in operations and there is no manager for
them (they are controlled by the user-changeable XML files camtool14.xml and
pertool14.xml). You can save your placements, and whether toolbars are open or not,
using the Save as Defaults item on the respective right-click/Toolbars menu.
Notes
You can place rectangular notes on your SynthEyes camera and perspective
views, to mark places that still need tracking, need improvement, or whatever. They are
particularly useful for tracking supervisors to communicate with individual trackers.
To create a note, click the Edit/Add Note menu item. The Notes Editor will appear
with information on the new note:
one or more lines of text
a checkbox for whether the note is shown or hidden
a background color swatch for the note. The background color for new
notes is a default color, which is set from the preferences.
the camera (view) that the note is shown on. Can also be set to All.
the beginning and ending frame numbers over which the note is shown. By
default, the range begins at the current frame and lasts for a duration set by a
preference. Set the preference to zero to have notes default to the entire
shot.
Whether the note is pinned to the shot image and moves as it pans and
zooms, or is stationary at a specific location in the camera or perspective
view, regardless of panning and zooming.
You can create new notes or delete the current note from the notes editor as
well. You can create any number of notes on any camera, spread through the shot. Use
the go back and go forward buttons to step through the notes.
New notes are located at top left of the (active) camera view. You can drag them
to any desired location, positioning specifically the marked corner. (Which corner is
marked will vary dynamically as you zoom, pan, etc, such that the marked corner stays
put and the text stays visible.)
To edit a note, double-click it.
To look for notes, click Window/Notes editor.
To show/hide all notes, click View/Show notes.
Notes are not "selectable"
undo records are the settings we're talking about here. Many are controlled by
menu settings.
Note: Because settings data don't constitute "changes" (do not create undo
records), changing settings or window placements will not trigger the "Scene
changed, save file?" dialog. So if you save a file, change the window
placements, then exit, the changed window placements will be lost without
warning. Click Save if you want to preserve them. (For the same reason,
desirably, moving windows won't trigger an auto-save or -increment.)
There is a potential hazard from the automatic restoration of settings: a file that
you open may contain settings that you have forgotten or even are unaware of. For this
reason, there are settings in the File Open area of the preferences that give you control
over whether or not the settings are reloaded (see below). If you have a file that is
behaving unexpectedly and that you suspect may have settings you don't understand,
you can save it, set the preferences to not load settings, then reopen it. Alternatively
you can override the current scene with your own preferences.
You can also save the current set of window placements and settings as an
additional type of preferences: a favorite set of window configurations and settings. This
can be dangerous, however, as it might obscure the effect of some preferences. If
you've stored a scene's settings as preferences and want to change them, you should
start SynthEyes, make the change, and then re-save the settings to preferences (don't
try to do this from inside some scene you're working on).
Settings Preferences
Placements and Settings. Selector. This selector controls whether or not the
placement and settings data is used when a SynthEyes file is opened, with the
options listed next.
Never load. (Option.) The information is never used.
From preferences only. (Option.) When a file is opened, the placements and
settings are used that you have set, not those from the file.
Only from my machine. (Option.) Placements and settings from the file are used
only if they were generated from this same machine; otherwise the
placements and settings from the preferences are used, if any.
Always load. (Option.) The placements and settings in the file are always used.
Load only placements. Checkbox. When checked, only the window placements will be
loaded, not the detailed settings.
Set Scene as Preferences. Button. The placements and settings information is stored
as part of your preferences, and will be used for new SynthEyes scenes.
Placements as Preferences. Button. Only the placements information, not the settings,
will be stored in your preferences for application to new scenes.
Remove these preferences. Button. Any stored settings and placements are removed
from your preferences.
Set Scene TO Preferences. Button. Overrides the placements and settings of the
current scene by replacing them with those that you've stored as preferences, if
any. This can be used to reset a rogue file, for example.
Auto-Save
SynthEyes can auto-save the .sni file every few minutes if desired, under control
of the auto-save preferences (in the Save/Export section). You will be asked whether
you want to use auto-save the first time a save is necessary. (You can have the
filename incremented automatically as well.)
IMPORTANT: to minimize the chances of data loss, please read this section
and the following section on file name versioning, then configure the
preferences to correspond to your desired working style!
If you keep auto-save off, then the file will be saved whenever you File/Save.
If auto-save is turned on, then the Minutes per auto-save setting applies. Once
the specified time has been reached, SynthEyes saves the file except under the
following conditions:
there has been no change in the scene since the last save;
a modal dialog is displayed (the main window is disabled);
an operation is currently in progress, such as dragging a spinner or
dragging a tracker around in a viewport;
the mouse is captured, even if it does not affect the scene file, for
example, dragging the current time marker in the time bar;
the shot is being played back (to avoid disturbing the frame rate);
if a text field is being edited, such as a spinner's text value.
If the save must be deferred, it will be retried every six seconds. If you are very
busy or leave SynthEyes playing continuously, auto-save will be blocked. Status
messages are displayed in the status bar at the beginning and completion of an auto-
save.
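The gating logic above can be summarized in a small sketch. The field names are illustrative, not SynthEyes internals:

```python
def autosave_allowed(state):
    """Sketch of the conditions under which a pending auto-save is
    deferred (and retried later), per the list above."""
    blocked = (
        not state["scene_changed"]         # nothing to save
        or state["modal_dialog_open"]      # main window is disabled
        or state["operation_in_progress"]  # e.g. dragging a spinner/tracker
        or state["mouse_captured"]         # e.g. dragging the time marker
        or state["playing_back"]           # avoid disturbing the frame rate
        or state["editing_text"]           # e.g. typing into a spinner
    )
    return not blocked
```

If any condition blocks the save, the check simply runs again a few seconds later, which is why continuous playback can hold off auto-saves indefinitely.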
The "If no filename" preference determines the action taken if the file has not
been previously saved by the time the auto-save period is reached, with the choices:
Don't save, Ask, or Save as untitled. If Don't save is selected, no auto-save occurs until
you define a file name, putting your work at risk in the event of any problem. You can
select Save as untitled, in which case the file is saved as untitled.sni in your
File/User Data Folder. (The prior version of that file becomes untitled.bac.)
If you select Ask, when the time to auto-save comes without a filename, the
save-file selection dialog will pop up for you to specify a file name. That is only
somewhat optional: if you cancel the file-selection dialog, you will be re-prompted for a
file name each time the auto-save interval expires, until you successfully enter a file
name.
Before the .sni file is auto-saved, the prior version of the .sni file will be renamed
to be a ".sni.bac" file. SynthEyes produces .sni.bac files only for auto-saves, not for
regular File/Save. In addition to preserving an older version of your work, the .bac file
serves as a backup in case the auto-save itself causes a crash (for example, if it runs
out of memory).
Tip: Auto-tracked files can be rather large, tens or hundreds of megabytes. They can
take a while to save, especially if the .sni file compression preference is also
turned on. To reduce the save time, be sure to use Tracker Cleanup or Clear
All Blips to reduce the blip storage (which takes up the bulk of the space)
once you no longer need to peel additional auto-trackers or use the Add More
Trackers tool. See also the Mesh De-Duplication features. If you frequently
have large auto-tracked files and cannot clear blips and the auto-save is
taking too long, you should probably keep auto-save off.
Warning: some network storage systems are buggy, and do not correctly
process the file rename (from foo.sni to foo.sni.bac) that immediately
precedes an auto-save. They block the save, causing it to fail with an error
message. If you encounter file-save errors during auto-save, turn off auto-
save.
When auto-save is on and you have saved at least once, SynthEyes will always
immediately save (over-write) changes to the previous file when you do a File/New,
Open, or Close, rather than asking permission, since you have already given permission
to save it by specifying auto-save. This makes for faster and simpler operation.
Using auto-save will require you to be a little more careful to make sure you do
not inadvertently over-write files you want to preserve, as described below.
When auto-save is off, you are prompted if you want to save the current .sni file
when you do a File/New, Open, or Close. You can save the file or discard it at that time
if you do not want to save changes.
But, especially if you have auto-increment and auto-save both on, and especially
with auto-tracked scenes, you can use a LOT of disk space rapidly!
To reduce that, you can
have the filename increment only every several minutes, and/or
Re-Finding Files
When you open an existing SynthEyes .sni file, SynthEyes attempts to verify that
all the files referenced by it still exist: the shot, alpha images, texture files, preview files,
etc.
If a file doesn't exist, SynthEyes looks around to find it. If the filename has
leading folders in common with the .sni file path when the .sni was saved, SynthEyes
will use the current sni path to look for the file now. For example, /A/B/C/foo.sni had
/A/B/D/texture.jpg, now it is opened as /E/F/G/H/foo.sni, so SynthEyes will look for
/E/F/G/D/texture.jpg.
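The path-rebasing heuristic in this example can be sketched in Python. This is a simplified illustration, not SynthEyes's actual code, and the function name is hypothetical:

```python
from pathlib import PurePosixPath

def refind(old_sni, old_ref, new_sni):
    """Sketch of the re-find heuristic: if the referenced file shared
    leading folders with the .sni file when it was saved, rebase its
    unique tail onto the .sni file's new location."""
    old_sni_dir = PurePosixPath(old_sni).parent   # e.g. /A/B/C
    old_ref = PurePosixPath(old_ref)              # e.g. /A/B/D/texture.jpg
    new_sni_dir = PurePosixPath(new_sni).parent   # e.g. /E/F/G/H
    # Count the leading path components the two old paths have in common.
    common = 0
    for a, b in zip(old_sni_dir.parts, old_ref.parts):
        if a != b:
            break
        common += 1
    if common == 0:
        return None                               # nothing in common; give up
    # Strip from the new .sni folder as many levels as the old .sni folder
    # had below the common prefix, then append the reference's unique tail.
    extra = len(old_sni_dir.parts) - common
    base = new_sni_dir
    for _ in range(extra):
        base = base.parent
    return base.joinpath(*old_ref.parts[common:])
```

Running this on the example paths reproduces the lookup described above: /E/F/G/D/texture.jpg.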
If it isn't found that way, SynthEyes will just look in the same folder as the .sni file,
in case the files have been consolidated.
If it still hasn't been found, SynthEyes will ask you to find it—if you have the
preference in the FILE OPEN section turned on. Once you've re-found a file, SynthEyes
will check that folder for any other missing files also.
The situation is a little more complex for files that get written, such as export file
names and preview movie output, because the file may not have been written at all, and
you may not have bothered to copy the file to the new location since it's a derived
output.
So for output files, if they aren't found, but the path matching processing above
was successful, SynthEyes will keep the new relative name without further ado.
Otherwise, it checks a preference to determine whether to simply clear the file name (so
you'll have to set it when you get around to using it), put it into the same folder as the
.sni file (make sure that following this plan will never overwrite something useful), or ask
you for a new file name immediately.
Mesh De-Duplication
When you are using large meshes, typically reference models or lidar data, and
have many versions of the SNI file, a lot of disk space can be required.
Tip: The amount of space required for a given mesh can be shown by
selecting it, then running the Mesh Information tool script.
To reduce the total SNI file sizes, you can use the mesh de-duplication feature,
which stores mesh data separately from the SNI file.
The de-duplication mode is set globally by a preference, and per-scene on the
Edit/Edit Scene Settings panel. By default, when a scene is created, the scene will just
use the preference setting, even if the preference is later changed.
You can use the scene settings panel to set the de-duplication mode specifically
for a given file. (The most common reason for that is to set the mode to Never before
saving a SNI file that will be sent to someone else, more on that later.)
Mesh de-duplication files may be located in a "Mesh Area," which is a folder
within your File/User Data Folder. The location of this folder is a preference which can
be changed from the Folders portion of the Preferences panel.
De-Duplication Modes
There are quite a few de-duplication modes, to encompass a variety of
scenarios. We recommend sticking to the "Imports..." modes in most cases where the
large meshes are created external to SynthEyes and imported to it. Generally you
should not change imported meshes inside SynthEyes if you are using "Imports..."
modes.
With all of these modes, you can "follow along" and learn more by looking at the
SBM files created in the respective folders. It is a transparent process. You can open
those SBM files directly from SynthEyes.
Never. No de-duplication is performed. This is simplest, and best for sending files to
others.
Imports. De-duplication is performed for all imported meshes, and no others. The
originally-imported mesh file is re-read each time the file is opened, which can be
slow. This can be useful if the mesh is changed frequently. Important: any
changes to the mesh within SynthEyes will be lost if you save then reopen the
file. (Vertex and facet selection data is cleared when a file is read).
Imports with SBM. De-duplication is performed for all imported meshes, and no others.
An SBM file is written to (and later read from) the same folder and with the same
name as the original mesh file, which makes it easy to identify and keep track of.
The SBM will contain any modifications made to the mesh; for this reason the
mesh is not automatically reread from the original if the original changes.
Imports with local SBM. De-duplication is performed for all imported meshes, and no
others. An SBM file is written to (and later read from) the Mesh Area with the
same name as the mesh's original filename. This is useful when the original
mesh is located on a remote networked drive, so you can use a fast local disk
instead. The SBM will contain any modifications made to the mesh; for this
reason the mesh is not automatically reread from the original if the original
changes.
Scene, Single. De-duplication is performed for any mesh that exceeds the size limit on
the preferences panel. The SBM file will be placed with the SNI file. See the
following section on Single vs Versioned Files.
Scene, Versioned. De-duplication is performed for any mesh that exceeds the size limit
on the preferences panel. The SBM file will be placed with the SNI file. See the
following section on Single vs Versioned Files.
Mesh Area, Single. De-duplication is performed for any mesh that exceeds the size
limit on the preferences panel. The SBM file will be placed in the Mesh Area. See
the following section on Single vs Versioned Files.
Mesh Area, Versioned. De-duplication is performed for any mesh that exceeds the size
limit on the preferences panel. The SBM file will be placed in the Mesh Area. See
the following section on Single vs Versioned Files.
Single Files vs Versioned Files
The Scene and Mesh Area modes can be used whether a mesh was imported or
created within SynthEyes. Meshes are de-duplicated, or not, based on the size of the
mesh. Since there is no original imported mesh, the mesh is entirely contained in the
SBM file; if the SBM file is deleted, lost, or missing, the mesh is gone—the SBM file is
not a "cache".
The Scene modes store SBM files in the folder with the SNI scene. The Mesh
Area modes store them in the Mesh Area folder described earlier.
The Scene and Mesh Area modes come in both Versioned and Single variants.
Here's the problem that is being solved:
Suppose you have a SNI file with a large mesh that you edit from time to time,
and you are keeping different versions of the file, possibly using an auto-incrementing
auto-save. You have v05 of the file, and just wrote v08. Now you want to go back to
v05—but the mesh was different at that time.
Unless the mesh de-duplication has kept separate versions of the SBM file,
going back to v05 accurately will be impossible. The Versioned modes do keep multiple
versions, producing a new SBM when the mesh has changed.
For example, suppose the mesh changed between v02 and v03, and v06 and
v07. The de-duplication code will write new SBMs associated with v03 and v07 of the
file, and none for v01, 02, 04, 05, 06, or 08. When you go back to v05, you will be using
the SBM written for v03. If you re-open v08, you will be using the SBM from v07.
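The versioned lookup described above can be sketched as a small rule: given the scene-file version being opened and the versions at which SBM files were written, use the most recent SBM at or before that version. (This is an illustrative sketch of the rule, not SynthEyes's actual code; the names are hypothetical.)

```python
def sbm_for_version(sni_version, sbm_versions):
    """Pick the SBM a given SNI version uses: the most recent SBM
    written at or before that version (hypothetical sketch)."""
    candidates = [v for v in sbm_versions if v <= sni_version]
    return max(candidates) if candidates else None

# Mesh changed between v02/v03 and between v06/v07, so SBMs exist
# for v03 and v07 only:
sbms = [3, 7]
print(sbm_for_version(5, sbms))  # v05 falls back to the v03 SBM -> 3
print(sbm_for_version(8, sbms))  # v08 uses the v07 SBM -> 7
```

This also shows why culling is tricky: deleting the v03 SNI does not make its SBM unused, since v04 through v06 still resolve to it.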
While SynthEyes can automatically limit the number of auto-incremented SNI
files, it does not automatically cull the SBM files—that's a bit tricky! In the example
above, the SBM from v03 is used for v06, so even though you may delete the v03 SNI
file, you may need to keep its SBM files for quite a while. SynthEyes can't keep SBM
files for each SNI—that's what we're avoiding in the first place!
So at some point you may need to clean up the SBM files manually. You can
determine what SBM files a given SNI file uses by looking at the mesh file list in its
File/File Info.
The Single modes avoid this complexity by not doing versioning: they keep only
the single, most-recent, version of the mesh. That will cut down the number of versions
of SBM files you have, and the thought process required to clean them up. But, if you go
back to an earlier version of the file, you'll still get the most-recent version of the mesh.
That could be a huge problem, or none at all, depending on what you're doing. So pick
wisely!
Forcing a Re-Read
You can force an imported mesh to be re-read (for example, if its originator
changes it) by selecting it, then using File/Import/Reload. It will also be reread if the
SBM file is deleted, but that approach is more error-prone. (Be sure not to delete an
SBM that has no original mesh file, or which has been changed later!)
Sending SNI Files to Others
If you send a SNI file to someone else and it contains de-duplicated meshes, the
meshes will be missing: the recipient needs to have the SBM files (or original meshes).
While it is possible to identify the necessary SBM files from the File/File Info
listing, we recommend not attempting to package those files: the recipient would have
to place them at the same full path locations, which is unlikely to work and generally
unreliable.
Accordingly, to prepare a SNI file to send to someone, it is recommended to
open the Edit/Edit Scene Settings panel and set the de-duplication mode to Never, then
save the file. The resulting SNI file will then contain all the meshes. (You might turn on
the Compress .sni files preference before saving, as well.)
Important: A language translation file may require that your operating system be
set to a specific primary language in order to display translated text appropriately. Be
alert to notes to that effect from the creator of the language file, and make sure your
system is set to the same language.
To begin using a new translation file, open the preferences (Edit/Edit
Preferences) and select the language in the UI Language field on the right side of the
preferences panel. Click OK to save the change and close the preferences panel.
Restart SynthEyes to begin using the new translation file. See the later section
on Tracker etc Names for additional usage information.
Set the UI Language preference back to the empty entry and restart to return to
an unmodified interface.
Creating a New Language Translation XML
Starting a new language translation is easy. Click File/Make new language
template. Enter the name of your language in English (French, German, etc), then
select a filename in which to store it. By default it will go in your user script folder, which
is essentially a requirement during the initial stages of development.
Open the newly-created file in your text or XML editor. All the translatable elements
are listed, over 1600 at present. All the menu items are listed in alphabetic order, then
all the dialogs are listed in alphabetic order, with each editable control in alphabetic
order within each dialog.
Each translatable element has from and to elements. Do not change the from
field! Add your translation to the to field, which is initially empty. Please see the next
section, on Character Sets, for some more details. Also, do not change the id or class
fields.
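As a hypothetical illustration of such an entry (the actual tag and attribute names in your generated template may differ; only the structure described above is assumed), a translated element might look like:

```xml
<!-- Illustrative only: fill in the "to" element; leave "from",
     "id", and "class" untouched. -->
<item id="1042" class="menu">
  <from>Open Recent</from>
  <to>Ouvrir récent</to>
</item>
```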
Keep in mind that the space available for each control is fixed and limited, so
translated text will have to be kept very short and succinct, or abbreviated. If necessary,
requiring a smaller UI font size preference or a specific UI font name preference is an
option. (Menu translations do not have to worry about limited space.)
You do not have to translate every element—you can start with as few or as
many as you want.
Note that some text is dynamically generated by SynthEyes, and therefore
cannot be permanently changed by a translation. For example, the Undo and Redo
menu items are changed dynamically based on what will be undone or redone. It is
possible that there may be restrictions on translating other fields as well.
We strongly recommend that you do not change the order of entries. As long
as everyone uses the same alphabetic ordering, it is easy for you or us to compare and
merge files translated by different users, to produce a composite result. Perhaps you
might translate some menus while someone else translates some dialogs; we can put
them together into a more complete file.
Character Sets
SynthEyes on Windows and macOS uses the 8-bit ISO-8859-1 Latin-1 character
set (not Unicode). European languages should be able to stay with the default character
set. (SynthEyes on Linux uses UTF-8, since that is the only choice.)
To use other character sets, you can switch the operating system language
settings (code page). Any code page must have the usual ASCII character set in the
lower 128 positions, regardless of any shift codes, with language-specific characters in
the upper 128 positions 0x80 - 0xFF. For example, EUC-JP takes this approach.
When you open and save the translation XML file from your text editor, you'll
need to make sure that it uses the right character coding.
You can note any required settings in the notes attribute of the top-level
CTPLanguage XML node (second line of the XML file).
Tracker etc Names
SynthEyes makes little use of user-entered text fields, mainly for tracker, mesh,
object, and other entity names. The tracker and object names can be exported to many
other programs, which may not support alternate character sets at all, or may support
them in other ways.
Accordingly, we strongly recommend that you use only the standard ASCII
characters A-Z, a-z, and 0-9 in tracker etc names, with a leading alphabetic character
(don't start a name with a number!), and no space characters. That is recommended
practice even for plain English usage.
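One way to pre-check names against this recommendation, sketched in Python (the pattern simply encodes the rule above; it is not part of SynthEyes):

```python
import re

# Leading ASCII letter, then only ASCII letters and digits,
# no spaces (the naming rule recommended above).
SAFE_NAME = re.compile(r'^[A-Za-z][A-Za-z0-9]*$')

for name in ("Tracker01", "1stTracker", "front door"):
    status = "ok" if SAFE_NAME.match(name) else "rename suggested"
    print(name, "->", status)
```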
Non-English characters in tracker names may display correctly in some places
(with the appropriate character set in place), but will not display correctly in any
OpenGL window. OpenGL windows include the perspective view, graph editor, and
SimulTrack views, plus the camera and 3D Views on macOS and Linux. (Camera and
3D Views can be set to OpenGL or non-OpenGL via preference.)
While we would prefer to use UTF-8 Unicode throughout, at present there are
two complications:
Providing consistent conversion of UTF-8 into the 16-bit Unicode
approach that Windows prefers (very simple for macOS).
Being able to draw any Unicode font in OpenGL. Tricky!
Since SynthEyes for Linux does use UTF-8, text will not be interpreted
consistently across platforms if non-ASCII characters are used. Note that non-ASCII
characters won't be displayed properly in OpenGL windows on Linux.
Over time we'll hopefully be able to address all these issues.
Image Sequences
SynthEyes will normally produce an IFL (image file list) file for each file
sequence, and write it into the same folder as the images. The IFL serves as a reliable
placeholder for the entire sequence and saves time re-opening the sequence, especially
on networks, because SynthEyes does not have to re-check the entire sequence. If the
IFL file conflicts with your image-management system, or you frequently open the same
image sequence from different machines, producing a different file name for the images
from each computer, you can turn off the Write .IFL files for sequences preference.
IFL files are required for survey shots, however.
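An IFL file is, in essence, a plain-text list of the frame files, one per line. A minimal sketch of building such a list (the file names here are hypothetical, and the exact details SynthEyes writes may differ):

```python
# Build the text of an image file list for frames 1..4 of a
# hypothetical sequence "shot_0001.png" ... "shot_0004.png".
frames = [f"shot_{n:04d}.png" for n in range(1, 5)]
ifl_text = "\n".join(frames) + "\n"
print(ifl_text, end="")
```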
Note: For maximum reliability, you should always open the first image in a
sequence. Although SynthEyes can open any image, that image defines the
beginning of the shot. Other software, such as After Effects, may interpret the
shot separately. Use the Start/End controls on the Shot setup panel or the
time bar to control what portion of the shot you track (this gives the most
flexibility for editorial changes as well).
The "Read 1f at a time" preference (on the Shot menu and in the preferences)
controls whether SynthEyes tries to read multiple images from a sequence or movie
simultaneously, or only one at a time.
Tip: Try turning Read 1f at a time off if your images are on a local disk! This
can increase shot-loading rate by several times, especially on RAID disks.
WARNING: you may encounter substantial delays the first time you open a
"movie" file using Quicktime on the Mac or Windows Media Foundation on
Windows, as the file must be indexed to locate the time stamp of each frame.
This index is written with a ".atimes", ".btimes", or ".times" extension so that
subsequent opens occur rapidly. (If you turn off the Write frame index files
preference, no times file is written, and each re-opening will take as long as
the first.)
Much like still cameras can produce "RAW" images that require special software
to read them and produce standard PNGs or JPEGs, some cameras produce
specialized image formats that must be converted to standard formats for further use.
(See the section on reading RED files.)
The operating system software loaded on your particular machine plays a large
part in determining what movies can be read. If you have a particular hardware device
that produces movie files, such as a camera or disk recorder, be sure to install the
software that came with that device, so that its files can be read. Or, you may have to
look online for the appropriate codec for footage supplied by a customer. Older legacy
codecs are being dropped by the operating system vendors.
Highly compressed files produced by cameras are not well-suited for post-
production, as reading a single frame often requires decompressing many others first.
SynthEyes often reads frames out of order, either at your request or during multi-
threaded processing, so reading inter-frame compressed files can take a very long time,
especially if there are few keyframes. A file with few keyframes can make scrubbing
painfully slow. If a movie file format must be used, choose a compression format such
as ProRes, which is specifically intended for post-production.
The Max RAM Cache GB preference in the Image Input section is worth
checking. It doesn't affect tracking or solving accuracy, only performance: how many
frames will fit in RAM. With enough RAM, this will be the whole shot. You may
want to decrease this value if there are other large apps running, or increase it a bit if
there are none. (The SynthEyes Intro version is limited to 1.25 GB, about 200 HD
frames.)
Note that the Image Preprocessing button brings up another panel with additional
possibilities; we’ll discuss those after the basic open-shot dialog.
Frame rate: Usually 23.976, 24, 25 or 29.97 frames per second. NTSC is used in the
US & Japan, PAL in Europe. SynthEyes does not care whether you use an exact
or approximate value, but it may be crucial for downstream applications,
especially when the shot is a 'movie' file, rather than a sequence.
Interlacing: No for film or progressive-scan DV. Yes to stay with 25/30 fps, skipping
every other field. Minimizes the amount of tracking required, with some loss of
ability to track rapid jitter. Use Yes, But for the same thing, but to keep only the
other (odd) field. Use Starting Odd or Starting Even for interlaced video,
depending on the correct first field. Guessing is fine: once you have finished
opening the shot, step through a few frames. If they go two steps forward, one
back, select the Shot/Edit Shot menu item and correct the setting.
Use Yes or None for source video compressed with a non-field-savvy codec such
as JPEG sequences.
Channel Depths: Process and Store. 8-bit/16-bit/Float. Radio buttons. You can
select different depths for intermediate processing in the image processor, and
storage in the RAM cache. The selection marked with an asterisk is the default,
based on the source imagery.
+Alpha. Button. Click this to add a separate alpha image sequence to the shot. Click
the button, cancel the file selector, then answer Yes to remove the alpha
sequence. Button is blue when a sequence has been explicitly selected. Note
that sequences can also be attached implicitly, if their names match the
Separate-alpha suffix preference.
Keep Alpha: when checked, SynthEyes will keep the alpha channel when opening files,
even if there does not appear to be a use for it at present (i.e. for rotoscoping).
Alpha data can be in the RGB files, i.e. RGBA, or in separate alpha-channel files;
see Separate Alpha Channels. Turn on when you want to feed images through
the image preprocessor for lens distortion or stabilization and then write them,
and want the alpha channel to be processed and written also.
Apply Preset: Click to drop down a list of different film formats; selecting one of them
will set the frame rate, image aspect, back plate width, squeeze factor, interlace
setting, rolling shutter, and indirectly, most of the other aspect and image size
parameters. You can make, change, and delete your own local set of presets
using the Save As and Delete entries at the end of the preset list.
Image Aspect: overall image width divided by height. Equals 1.333 for standard
definition video, 1.777 for HDTV, 2.35 or other values for film. Click Square Pix to
base it on the image width divided by image height, assuming the pixels are
square (most of the time these days). Note: this is the aspect ratio of the input to
the image preprocessor.
Pixel Aspect: width to height ratio of each pixel in the overall image. (The pixel aspect
is for the final image, not the skinnier width of the pixel on an anamorphic
negative.)
Back Plate Width/Height: Sets the width of the “film” or sensor of the virtual camera,
which determines the interpretation of the focal length. You must know this if you
want to use focal lengths. SynthEyes uses field of view, so this value does not
affect the solve. Note that the real values of focal length and back plate width are
always slightly different than the “book values” for a given camera. Use Back
Plate Units to change the desired display units.
Rolling Shutter Enable/Fraction. Checkbox and spinner. Enables and configures
rolling-shutter compensation during solving for the tracker data of the camera
and any objects attached to this shot. CMOS cameras are subject to rolling
shutter; it causes intrinsic image artifacts.
Image Preprocessing: brings up the image preprocessing (preparation) dialog,
allowing various image-level adjustments to make tracking easier (usually more
so for the human than the machine). Includes color, gamma, etc, but also
memory-saving options such as single-channel and region-of-interest processing.
This dialog also accesses SynthEyes’ image stabilization features.
Memory Status: shows the image resolution, image size in RAM in megabytes, shot
length in frames, and an estimated total amount of memory required for the
sequence compared to the total still available on the machine. Note that the last
number is only a rough current estimate that will change depending on what else
you are doing on the machine. The memory required per frame is for the first
frame, so this can be very inaccurate if you have an animated region-of-interest
that changes size in the Image Preprocessing system. The final aspect ratio
coming out of the image preprocessor is also shown here; it reflects resampling,
padding, and cropping performed by the preprocessor.
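The Back Plate Width entry above matters only for interpreting focal lengths: SynthEyes solves in terms of field of view, which relates to back plate width and focal length by the standard pinhole-camera formula. A sketch (SynthEyes's internal conventions are not shown here):

```python
import math

def horizontal_fov_deg(back_plate_width, focal_length):
    """Field of view from back plate width and focal length,
    both in the same units (standard pinhole relation)."""
    return math.degrees(2 * math.atan(back_plate_width / (2 * focal_length)))

# A 35 mm lens on a 36 mm-wide back plate:
print(round(horizontal_fov_deg(36.0, 35.0), 1))  # about 54.4 degrees
```

This is why a slightly wrong back plate width does not break the solve; it only shifts the reported focal length.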
After Loading
After you hit OK to load the shot, the image prefetch system begins to bring it into
your processor’s RAM for quick access. You can use the playbar and timebar to play
and scrub through the shot.
Note: image prefetch puts a severe load on your processor by design—it rushes
to load everything as fast as possible, taking advantage of high-throughput devices
such as RAID disks. However, if the footage is located on a low-bandwidth remote
drive, prefetch may cause your machine to be temporarily unresponsive as the
operating system tries to acquire the data. If you need to avoid this, turn on the “Read 1f
at a time” option on the Shot menu. It is a sticky preference. If that does not help
enough, turn off prefetch on the Shot menu, or turn off the prefetch preference so that
prefetch is disabled automatically at each startup.
You can use the Image Preprocessing stage to help fit the imagery into RAM, as
will be described shortly.
Even if the shot does not fit in RAM, you can get RAM playback of portions of the
shot using the little green and red playback markers in the timebar: you can drag them
to the portion you want to loop.
Sometimes you will want to open an entire shot, but track and solve only a
portion of it. You can shift-drag the start or end of the shot in the timebar (you may want
to middle-drag the whole timebar left or right first to see the boundary).
Select the proper coordinate system type (for MAX, Maya, Lightwave, etc) at this
time. Adjust the scene setting (Edit/Edit Scene Settings), or the preference setting, if
desired.
If you have only a single shot in a scene, this may do what you want. If there are
multiple shots for stereo, witness cameras, etc, you will have to exercise caution to
avoid confusing yourself. If you think about it, you'll see that if the left and right eyes of a
stereo shot have different image frame numbers, according to this scheme, then it's not
at all clear what frame the time bar refers to. Same with witness cameras. This scheme
shows the numbering corresponding to the active camera, while still aligning all shots at
the same starting point.
If you have Match frame#'s turned on, it affects frame number display, but not
exports. The compatibility of other software with very large frame numbers will vary.
There are several technical problems that they cause. You may be able to easily modify
some exports to accommodate large frame numbers, but this is not a standard option.
In summary: starting at frame zero is a good idea! We like to teach what we feel
is the best approach.
The shot settings dialog will re-appear, so you can adjust or correct settings such as the
aspect ratio.
When activated as part of Change Shot Images, the shot settings dialog also
features a Time-Shift Animation setting. If you have tracked a shot, but suddenly the
director wants to extend the shot with additional frames at the beginning, or remove
some of them, use the Change Shot Images selection, re-select the new version of the
shot (with the additional or missing images), and set the Time-Shift Animation setting to
the number of frames added or removed (positive for added, negative for removed).
When you click OK, this will time-shift all the tracking data, splines, object paths, etc
later or earlier in the shot by that amount. You can extend the trackers or add additional
ones, and re-solve the shot.
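Conceptually, Time-Shift Animation just offsets every animated key by the given number of frames. A minimal sketch, using a hypothetical keyframe data structure rather than SynthEyes internals:

```python
def time_shift(keys, shift):
    """Move every keyframe later (shift > 0, frames added at the
    head) or earlier (shift < 0, frames removed)."""
    return {frame + shift: value for frame, value in keys.items()}

tracker_path = {0: (0.10, 0.20), 12: (0.35, 0.22)}
# Director added 8 frames at the start of the shot:
print(time_shift(tracker_path, 8))  # keys become 8 and 20
```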
Time-shifting is a fairly complex operation not to be taken lightly, as it involves
the creation or destruction of information. Some caution and scrutiny should be given to
shifted shots, and some cleanup or fine-tuning of animation may be required in the
vicinity of the beginning of the shot.
If frames from the beginning of the shot are no longer needed, it may be easiest
and best to leave them in place, but change the shot start value by shift-dragging it in
the time bar.
As you modify the image preprocessing controls, you can use the frame spinner
and assorted buttons to move through the shot to verify that the settings are appropriate
throughout it. Fetching and preprocessing the images can take a while, especially with
film-resolution images. You can control whether or not the image updates as you
change the frame# spinner, using the control button on the right hand side of the image
preprocessor.
The image preprocessing engine affects the shots as they are read from disk,
before they are stored in RAM for tracking and playback. The preprocessing engine can
change the image resolution, aspect ratio, and overall geometry.
Accordingly, you must take care if you change the image format: if you
change the image geometry, you may need to use the Apply to Trackers button on the
Output tab, or you will have to delete the trackers and do them over, since their
positions will no longer match the image currently being supplied by the preprocessing
engine.
The image preprocessor allows you to create presets within a scene, so that you
can use one preset for the entire scene, and a separate preset for a small region around
a moving object, for example. Presets can be configured to affect or not affect various
groups of controls.
Image Adjustments
As mentioned, the image adjustments allow you to fix up the image a bit to make
it easier for you and SynthEyes to see the features to be tracked. The preprocessor’s
image adjustments encompass 3-D LUTs, saturation and hue, level adjustments, and
channel selection and/or bit depth.
Rez Tab.
You can change the processing and storage formats or reduce image resolution
here to save memory. Floating point format provides the most accuracy, but takes much
more time and space. Float processing with Half or 16-bit storage is a reasonable
alternative much of the time. Most tracking activities use 16 bit format internally; you
may wish to use 8 or 16 bit while tracking for speed and to maximize storage, then
switch to float/float or float/half when you render undistorted or re-distorted images, if
you have high-dynamic-range half or float input.
It may be worthwhile to use only one of the R, G, or B channels for tracking, or
perhaps the basic luminance, as obtained using the Channel setting. (The Alpha
channel can also be selected, mainly for a quick check of the alpha channel.)
If you think selecting a single channel might be a good idea, be sure to check
them all. If you are tracking small colored trackers, especially on video, you will find they
often aren’t very colorful. Rather than trying to increase the saturation, use a different
channel. For example, with small green markers for face tracking, the red channel is
probably the best choice. The blue channel is usually substantially noisier than red or
green.
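For reference, "basic luminance" is typically a weighted blend of the channels. The exact weights SynthEyes uses are an assumption here, but a common (Rec. 601) weighting looks like:

```python
def luma(r, g, b):
    """Rec. 601 luminance weighting (assumed; SynthEyes's exact
    weights may differ). Inputs and output in the 0..1 range."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Mostly driven by green, with blue contributing very little:
print(round(luma(0.2, 0.9, 0.3), 3))
```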
Levels Tab.
SynthEyes reads files “as is” by design; in particular, Cineon files are not
automatically gamma-corrected for display. That permits files to be “passed through”
with the highest accuracy, and also allows you to select the proper image and display
calibration if you like.
The level adjustments are the “simple way”: they map the specified Low level to
blackest black out (luma=0), and the specified High level to whitest white (luma=1), so
that you can select a portion of the dynamic range to examine. The Mid level is mapped
to 50% gray (luma=0.5) by performing a gamma-type adjustment; the gamma value is
displayed and can be modified. Be a bit careful that, in the interest of making the image
look good on your monitor, you don’t compress the dynamic range into the upper end of
brightness, which reduces the actual contrast available for tracking.
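A sketch of that mapping (the exact math is an assumption; the gamma value SynthEyes displays may use a different convention):

```python
import math

def apply_levels(x, low, mid, high):
    """Map Low -> 0, High -> 1, and Mid -> 0.5 via a gamma-type
    adjustment, clipping values outside the selected range."""
    t = min(max((x - low) / (high - low), 0.0), 1.0)
    gamma = math.log(0.5) / math.log((mid - low) / (high - low))
    return t ** gamma

# With Low=0.0, Mid=0.25, High=1.0 the implied gamma is 0.5:
print(apply_levels(0.25, 0.0, 0.25, 1.0))  # Mid maps to 0.5
```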
The level adjustments can be animated to adjust over the course of the shot, see
the section on animated shot setup below.
The hue adjustment can be used to tweak the color before the channel selection;
by making yellows red, you can have a virtual yellow channel, for example.
The exposure control here does affect the processed images, if you write them
back to disk. That is different than the F-P Range Control setting on the Shot Setup
panel. See the section on using floating-point images.
Note that you can change the image adjustments in this section without having to
re-track or adjust the trackers, since the overall image geometry does not change.
Note: Having a large number of color map presets will increase SynthEyes's
startup time. One-time-use color maps should be stored with the source
imagery.
SynthEyes can save the current Level Adjustment, Hue, Saturation, and
Exposure settings in the form of a 1D or 3D color map (as needed). This allows you to
create color-only presets independent of a particular shot. The LUT resolution is set by
a preference in the FILE EXPORT section (1D LUTs are 8x the setting, which is for 3D
LUTs).
With that exception, SynthEyes will not build LUT tables for you; it is not a color
correction tool. You will need to obtain the tables from other sources, such as the film
scanning house. There are some tool scripts for generating or manipulating color maps
in the Script/Lens menu item. There is a script for combining LUTs: if you have a film
LUT and a LUT for your own monitor, for example, you can combine them using the
script, since you can only apply one at a time. There is also a tool for creating color
maps for Cineon files from the white- and black-point values.
You can find additional tools for manipulating and converting LUTs online,
including at digitalpraxis.net, which offers a tool for ‘ripping’ a LUT from a before image
and a desired-after image. That permits you to adjust a sample image in your favorite
color-correction app, then burn what you did to it into a 3D LUT SynthEyes can use.
(Their tools are commercial software, not freeware; we have no relationship with them
and cannot vouch for the tools in any fashion, merely cite them as a potential example.)
Floating-Point Images
SynthEyes can handle floating-point images from EXR, TIFF, and DPX image
formats. Floating-point images offer the greatest accuracy and dynamic range, at the
expense of substantially greater memory requirement and processing time. The 64-bit
SynthEyes version is recommended for handling floating-point images due to their large
size. DPX images will offer the highest performance.
Floating point images may use 32-bit floats, or the 16-bit “half” format. The half
format does not have as much dynamic range, but it is almost always enough for
practical work even using High-Dynamic-Range images. The good news is that Half-
floats are half the size, only 16 bits. The bad news is that it takes a substantial amount
of time to translate between the half format and an 8 bit, 16-bit, or float format you can
track or display.
Accordingly, SynthEyes offers separate bit-depth selections for processing and
for storage. If you need the extended range of a float (or 16-bit int) format, you can use
that for any processing (especially gamma correction and 3-D LUTs), to reduce
banding, then select a smaller storage format, Half, 16-bit, or 8-bit. But keep in mind
that additional processing time will be required.
Though a floating-point image—float or half—provides accuracy and dynamic
range, to track or display it, it must be converted to a standard 8-bit or 16-bit form, albeit
temporarily. To understand the necessary controls, here are a few details on how that is
done (industry-wide).
Eight and sixteen bit (unsigned) integers are normally considered to range from 0
to 255 or 65535. But to convert back and forth, the numbers are considered to range
from 0 to 1.0 (in steps of 1/255), or 0 to 1.0 (in steps of 1/65535).
Correspondingly, the most-used values of the floating-point numbers range from
0 to 1.0 as well. With all the numbers ranging from 0 to 1, it is easy to convert back and
forth.
But, the floating point values do not necessarily have to range solely between 0
and 1. With plenty of dynamic range in the original image, there may be highlights that
may be much larger, or details in the shadow that are much lower. The 0 to 1 range is
the only portion that will be converted to or from 8- or 16-bit.
The F.-P. Range Adjustment (F.-P. for floating-point) on the Shot setup dialog
allows you to convert a larger or smaller range of floating-point numbers into the 0 to 1
range where they can be inter-converted. The effect of this control is to brighten or
darken the displayed image, but it affects only the display and tracking—not the values
themselves.
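The conversion described above, including the range control, can be sketched as follows (an illustration of the convention, not SynthEyes code):

```python
def float_to_8bit(f, fp_range=1.0):
    """Scale by the F.-P. Range Adjustment, clip to 0..1, then
    quantize to 0..255 (illustrative sketch)."""
    t = min(max(f / fp_range, 0.0), 1.0)
    return int(t * 255 + 0.5)

def eight_bit_to_float(i, fp_range=1.0):
    """Inverse mapping back to floating point."""
    return (i / 255.0) * fp_range

print(float_to_8bit(0.5))        # 128
print(float_to_8bit(2.0))        # clipped to 255
print(float_to_8bit(2.0, 4.0))   # range 4: displays as mid-gray, 128
```

Note that only the display path is affected; the stored floating-point values are untouched.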
You can adjust the F.P. Range Adjustment, and it will not affect the floating-
point images later written back to disk after lens distortion or stabilization.
This is quite different than the Exposure control on the Levels tab. The Exposure
control changes the actual floating-point values that will be written back to disk later.
The two controls serve different purposes, though the end result may appear the same
at first glance.
Tip: You may be able to open movie files in SynthEyes that you cannot open
in your downstream post-production software. If that is the case, use the
image preprocessor to write an image sequence version of the shot, then use
Shot/Change Shot Images to switch SynthEyes to use that version also. It will
then export the sequence name for the downstream software, rather than the
hard-to-read original.
Background
SynthEyes recognizes the following file extensions as possibly-readable movie
files (these lists are subject to change at any time):
macOS: .mov .mpg .mpeg .mp4 .avi .r3d .ts .dv .3g2 .3gp .3gp2 .3gpp .m2v .m4v .mp4v
.wmv .asf .divx .mts .m2t .m2ts
Windows: .avi .mov .mpg .mpeg .mp4 .r3d .mxf .wmv .3g2 .3gp .3gp2 .3gpp .m2v .m4v
.mp4v .asf .ts .dv .divx .mts .m2t .m2ts
Linux: .r3d
Just because an extension is on this list does not mean that SynthEyes can
read it. These extensions are here in the hope that your system contains a codec that
can read such files.
SynthEyes itself reads only RED R3D files; all others are read by your operating
system on SynthEyes's behalf, either directly or using additional specialized software
called codecs.
Files such as .avi and .mov are container formats that specify only how data is
wrapped, not the format of the image data. So H.264 data can be contained in a .avi or
in a .mov. While your operating system may be able to unwrap the data from an AVI or
MOV, if it does not contain the appropriate codec, it will not be able to uncompress the
images.
Some codecs are supplied by the operating system, while others are available in
the software supporting particular cameras or storage systems, or on the internet for
free or for purchase. If SynthEyes cannot read a particular movie file, you should
determine the codec involved and establish where it comes from (the Quicktime player's
Movie Inspector window can help with this).
Movie-Reading on Mac OS X
In macOS, there is only a single movie-file reading subsystem, Quicktime. Apple
does not supply a 64-bit version of Quicktime; instead, it provides a set of stubs that
translate 64-bit requests from 64-bit SynthEyes to 32-bit Quicktime.
For information on formats supported by macOS, see
"http://support.apple.com/kb/HT3775 : Media formats supported by QuickTime Player."
For information on some additional formats that you may be able to read using
additional third party software, see "http://support.apple.com/kb/HT3526 : Adding
additional media format support to QuickTime."
You may be able to locate additional Quicktime codecs as well.
Note that though Apple lists the AVI file format, there are few codecs available for
reading AVIs using Quicktime (most AVI codecs are Windows-only), and that due to the
details of the 64-bit to 32-bit translation, only a predefined list of codecs can be used to
output Quicktimes.
Movie-Reading on Windows
Windows has maintained compatibility with quite old codec software by providing
a succession of movie-reading systems, resulting in a rather complex situation.
HEVC/H.265 on Windows 10
Microsoft has removed some HEVC/H.265 codecs from Windows; if you are not
able to open HEVC/H.265 files, you probably need to get one of the following from the
Windows Store:
HEVC Video Extensions from Device Manufacturer (Free), or
HEVC Video Extensions ($0.99)
This is apparently tied into licensing arrangements between Microsoft and the
three different licensing entities claiming HEVC/H.265, based on your history of
installing various Windows updates, reinstalling Windows, whether you have a blu-ray
player, etc.
Reader Subsystems
Windows provides an extensive set of subsystems, from oldest (1990s) to
newest:
the original Video for Windows (VfW, AVI) subsystem,
the Quicktime for Windows subsystem (supplied by Apple),
the DirectShow subsystem, and
the Windows Media Foundation (WMF) subsystem. (Windows 7+ only!)
(There's another subsystem for DRM-protected content that SynthEyes does not
support, as well)
Each subsystem has its own strengths and weaknesses. The VfW subsystem is
limited to files of at most 2 GB but opens files quickly. The WMF subsystem is the most
advanced, 64-bit with support for the latest formats including those from cell phones—
but Windows 7 or 10 is required to use WMF.
Each of the subsystems supports a different set of codecs, corresponding to its
age. So the AVI subsystem supports old codecs, but not the latest, while the WMF
subsystem supports only recent codecs.
For information on media formats supported in WMF, see
"http://msdn.microsoft.com/en-us/library/windows/desktop/dd757927%28v=vs.85%29.aspx : Supported
Media Formats in Media Foundation." Note that WMF reads some common macOS
.MOV formats.
For information on DirectShow, see "http://msdn.microsoft.com/en-
us/library/windows/desktop/dd407173%28v=vs.85%29.aspx : Supported Formats in DirectShow."
64-Bit SynthEyes
SynthEyes is a 64-bit-only application on Windows. Many of the older codecs are
available only in 32-bit form; there are few 64-bit VfW AVI-writing codecs on Windows
(Microsoft RLE, Video 1, Intel IYUV).
To access the 32-bit codecs, SynthEyes contains a subsystem that sends movie-
reading requests from 64-bit SynthEyes to a 32-bit server that is part of the SynthEyes
installation.
Subsystem Ordering
A single .AVI or .MOV file might be opened by any of the four movie-reading
subsystems. Further, each might be opened in 64-bit mode or in 32-bit mode, for a total
of eight combinations!
Depending on the movie, all eight combinations might be able to open it, or none!
Each subsystem that can open a movie may result in different performance.
To bring order to this madness, the SynthEyes Image Input preferences allow
you to control the order in which SynthEyes tries to open a movie file: the 1st Reader
(subsystem), the 2nd Reader (subsystem), etc. The first subsystem able to open the file
wins.
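Conceptually, the first-reader-wins ordering is a simple fallback chain. Here is an illustrative Python sketch of that pattern; the subsystem names and open functions are hypothetical stand-ins, not SynthEyes APIs:

```python
def open_movie(path, readers):
    """Try each movie-reading subsystem in the configured preference
    order; the first one able to open the file wins."""
    for name, try_open in readers:
        movie = try_open(path)
        if movie is not None:
            return name, movie
    raise IOError("no subsystem could open " + path)

# Hypothetical ordering, mirroring the 1st/2nd Reader preferences:
# readers = [("WMF", wmf_open), ("DirectShow", dshow_open),
#            ("VfW", vfw_open), ("QuickTime", qt_open)]
```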
When "Via 32-bit" is encountered, the file is passed to the 32-bit server, which
attempts to open it using exactly the same order (excluding itself). So you can try
opening a file in the 32-bit environment first if you want, or last (the default).
The default ordering should be useful in most circumstances. Changes to the
preferences take effect when previously unseen files are opened. To see new settings
on an already-open file, close and reopen SynthEyes.
If you have a movie file with very few keyframes, so that seeks are slow, you
might consider experimenting with this setting.
Disk Cache
In addition to in-RAM caching, SynthEyes can cache shots onto a hard drive for
potentially faster access in certain circumstances. This feature is available only in
SynthEyes Pro, and makes sense only in the 64-bit version.
The disk cache can save time when the original footage is located on a remote
disk drive, or is encoded using a codec that takes a substantial amount of time to
decode (such as RED). The disk cache stores the entire shot's data in a single large flat
file ("BAFF" file) within a folder that you can locate on a fast local disk.
Warning: The Disk Cache does not preserve the shot. If metadata is
required, either turn off the disk cache, or save the metadata in tandem with
the BAFF file. See Shot Metadata below.
The disk cache stores a (decoded) version of the entire original shot, regardless
of shot begin/end settings, before any effects caused by the image preprocessor. The
disk cache stores the images as you work, whenever they are read by SynthEyes. If you
want to load the entire shot, click Play to run through it.
You can monitor the disk cache load percentage in the Memory Status area of
the Shot Setup panel. Unlike the RAM cache, the disk cache is still there after you close
and reopen SynthEyes.
To estimate the size of a disk cache file, multiply the image width times the
image height times the number of frames in the shot times THREE. If the "Cache only 8-
Bit versions" preference is off, then multiply by two if you require 16 bit/pixel processing
or "half" OpenEXR files, or multiply by four if the shot has floating-point values (Don't
multiply at all if the preference is on). An alpha channel will add an additional byte per
pixel per frame.
For example, 1000 RED images at 4096x2304 will require about 28 GB, which
will fit in RAM only on machines with 40 GB or more. With the disk cache, the
decoded version will persist from run to run, instead of requiring decoding each time.
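The sizing rule above can be written as a small calculation. This Python sketch is illustrative only; the parameter names are ours, not SynthEyes settings:

```python
def estimate_baff_bytes(width, height, frames, depth="8bit",
                        alpha=False, cache_only_8bit=False):
    """Estimate disk-cache (BAFF) file size per the rules above:
    width x height x frames x 3 bytes, x2 for 16-bit or "half",
    x4 for floating point, plus one byte/pixel/frame for alpha."""
    size = width * height * frames * 3
    if not cache_only_8bit:
        if depth in ("16bit", "half"):
            size *= 2
        elif depth == "float":
            size *= 4
    if alpha:
        size += width * height * frames
    return size

# 1000 RED frames at 4096x2304 -> about 28 GB, as in the example.
```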
The entire disk cache file appears simultaneously in the address space of
SynthEyes, which is one of the reasons SynthEyes is a 64-bit application!
Shot Naming
SynthEyes uses the file name of the shot, with a .baff extension, as the name of
its file in the disk cache folder. For example, Shot15.mov becomes Shot15.baff. For an
image sequence, the name of the image-file-list (IFL) file is used—which is the name of
the first image in the sequence by default.
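The naming rule is a simple extension replacement; a one-line illustration in Python (our own helper, not a SynthEyes function):

```python
from pathlib import Path

def baff_name(shot_filename):
    # The disk-cache file uses the shot's file name with a .baff extension.
    return Path(shot_filename).with_suffix(".baff").name
```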
SynthEyes keeps track of the last-modified date of the original file, so that if you
change the original footage, the disk cache will be flushed and rebuilt, and it stays
synchronized with the footage.
Tip: Suppose an image sequence (with its .IFL file) has already loaded into
the disk cache, and you want to create another SNI file based on the same
image sequence without causing the disk cache to reload. Be sure to (re-
)open the .IFL file directly, rather than the first image in the sequence;
opening the first image would cause the IFL to be re-written, in turn
invalidating the disk cache and forcing it to reload.
You may need to pay some attention to make sure that different shots don't have
the same BAFF file name. If they do, either one shot will not be cached (if they are both
open simultaneously), or each time you open one shot, the cache for the other will be
replaced. If you won't be working on the other shot any more, that is fine. But if you want
both caches to be persistent, you need to name them separately.
This is especially relevant for stereo work: you should name each eye's images
separately. If you have a LeftEye and RightEye folder with Shot37.mp4 in each, they will
conflict. Name them Shot37L.mp4 and Shot37R.mp4, for example. This will probably
help avoid mistakes as well.
Placing the Disk Cache Folder
The disk cache consists of files that reside in a folder whose location you must
specify. Disk caching is off by default, as the best location depends on the details of
your particular machine's configuration, and disk caching may or may not be a good
idea on your machine.
Carefully consider the following factors to determine the possible location of the
disk cache folder:
DO use SSD or RAID drives
unless you commonly work with compressed "movie" files (such as RED
or MP4) that take a long time to decode per frame, do NOT use
conventional rotating hard drives
DO use disk drives internal to your machine, ie connected via SATA or
equivalent.
DO use disk drives connected via Thunderbolt
do NOT use disk drives attached via USB
do NOT use disk drives attached via a network (Ethernet or SAN)
DO create a new folder just for the disk cache (it will contain many large
files)
DO use D:/DiskCache or /Volumes/FASTRAID/DiskCache for example
do NOT use D: and do NOT use /Volumes/FASTRAID etc
do NOT use a folder on a partition formatted as FAT-32 (the maximum
allowable file size is too small)
DO ensure you have full read/write permissions for the disk cache folder
do NOT allow the disk cache folder to be included in your regular backups
(This folder will be very large and frequently changing. If you back it up,
your backup size will explode. If the file is lost, it is easily regenerated from
the original shot.)
DO be aware that the large files written for disk caching may result in life-
limiting "wear" on SSD drives, as determined by how many different shots
you open per day and their sizes etc.
we recommend NOT using your main system SSD as a disk cache. Using
a secondary SSD drive will balance the performance and life of the drives.
To turn disk caching on, open the SynthEyes preferences (Edit/Edit
Preferences), and look at the Folder Presets group box, at the bottom right of the
preferences dialog. Change the drop-down selector to Disk Cache, then click Set and
select the desired folder to use (see below).
To turn disk caching off, use the same process as to set the Disk Cache folder,
but click Clear instead. This does not delete the folder or any file within it.
Disk Cache Preferences
Aside from the folder location described above, which controls not only the
location but also whether disk caching is enabled at all, there are some additional
controls that determine which shots are cached and how much space is allocated to
caches. These may be found in the Image Input subsection of the preferences panel.
The spinner values are in units of gigabytes (GB), ie 1024*1024*1024 bytes.
Min disk-cache shot Shots must require a cache at least this big to be cached. There's
no point caching a shot that fits easily in RAM.
Max disk-cache shot Shots must be less than this size to be cached. Defaults to 50 GB.
This is just a reference value; if you are working with large 4K shots
you may need to increase it.
Maximum disk-cache size The maximum total size of all the files in the disk cache. Old
disk cache files will be deleted to stay under this maximum size.
Defaults to 250 GB. You will almost certainly need to set this to
correspond better to the size of the disk containing the disk cache.
If you are dedicating an SSD or hard drive as a disk cache, allow 5-
10 GB of margin.
Cache only 8-Bit versions When set, all shots will be disk-cached at 8 bits per channel
bit depth, regardless of their original depth, to reduce disk cache
space and improve speed. When this checkbox is off, shots are
cached at the bit depth of the original images. See the following
section for more information.
Note that disk cache files are created at the required final size. You cannot
determine how much of the cache has been loaded by looking at the (BAFF) file in your
operating system... see the Shot/Edit Shot panel's Memory Status area.
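The "old disk cache files will be deleted to stay under the maximum" behavior resembles a least-recently-used eviction. Here is an illustrative sketch of such a policy (our own code; SynthEyes's actual implementation may differ):

```python
import os

def evict_old_caches(folder, max_bytes):
    # Sketch of the eviction policy described above: delete the oldest
    # BAFF files until the folder's total size fits under the maximum.
    files = [os.path.join(folder, f) for f in os.listdir(folder)
             if f.endswith(".baff")]
    files.sort(key=os.path.getmtime)          # oldest first
    total = sum(os.path.getsize(f) for f in files)
    for f in files:
        if total <= max_bytes:
            break
        total -= os.path.getsize(f)
        os.remove(f)
```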
Native vs 8-Bit Caching
Normally, when a shot is read, the image bit depth (8-bit, 16-bit, floating) is
determined by the "Process Depth" setting on the Shot Setup panel (which is also found
on the Rez tab of the image preprocessor). That turns out to be problematic for disk
caching, where shots can be open from multiple places and settings can change without
warning.
The disk caching system uses one of two different approaches, controlled by a
preference (Cache only 8-Bit versions):
Always caching a shot in its native bit depth
Always caching every shot at 8-bit depth
Note that the setting does not matter if all your shots are 8-bit, for example, a
typical AVI, MOV, Targa, or JPEG!
Using the native setting (default) preserves the full original content of the files for
sure, and is highly recommended for applications such as stabilization or lens
undistortion, where modified images will be output.
Using the 8-bit-only setting will cause the images to be converted to 8-bit, then
stored in the disk cache. This will improve interactive performance and minimize
the size of the disk caches. Do not use it for stabilization or shot undistortion, however.
Fine point: if you have half-float or floating-point images and 8-bit depth, the
Range Adjust control affects how the images are converted to 8-bit format.
Normally, floating-point values range from 0..1 and the range control can be
left at zero. If you adjust the range setting, it will affect all images as they are
read and placed in the disk cache. If you change the value after reading some
but not all images, you can wind up with a mixture in the disk cache, which is
not good. If that occurs, you should flush the cache using Script/Flush Shot's
Caches.
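As a rough illustration of a float-to-8-bit conversion: the exact Range Adjust semantics are internal to SynthEyes, so modeling it as an exposure-style scale in stops is purely our assumption:

```python
def float_to_8bit(v, range_adjust_stops=0.0):
    # ASSUMPTION: Range Adjust modeled as an exposure-style scale in
    # stops; SynthEyes's actual mapping may differ.
    v = v * (2.0 ** range_adjust_stops)
    v = min(max(v, 0.0), 1.0)    # clamp to the nominal 0..1 range
    return int(v * 255.0 + 0.5)  # round to the nearest 8-bit code
```

The key point the Fine Point above makes: if this mapping changes partway through filling the cache, frames converted before and after will disagree, hence the need to flush.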
When you change the preference, all currently-open shots will be converted (and
flushed in the process). Other disk caches are unaffected until the shot is reopened.
Filling the Disk Cache
A disk cache will fill automatically as you work within SynthEyes, storing each
frame you use.
If you want to fill the cache all at once, perhaps before working on a shot at all,
you should open SynthEyes, open the shot, and click Play to play through the entire
shot. If you are using only a portion of the shot's entire frame, then only frames within
that range will be valid in the cache. If you want the entire shot to be valid in the cache
(for example, to work disconnected when you might change the frame range), then you
should load the disk cache before changing the beginning and ending frames of the
shot.
The Shot/Enable Prefetch and Shot/Read 1f at a time settings may affect how
fast frames read, and thus how long it takes to fill the cache. The best settings will
depend on your particular machine and the exact details of the source imagery (file type
and codec), so you may want to experiment a little. You can monitor the playback frame
rate on the SynthEyes status line to help.
When a shot is played, reading and decoding the original imagery and storing it
in the disk cache, a large data flow is produced.
If your machine has RAM available, some of the disk cache will be stored
temporarily by the operating system in that RAM, which saves time versus writing it to
disk immediately. The operating system will move some of the data to the disk as it
needs to use that RAM instead.
Once all available RAM has been used, additional frames can be decoded only
as fast as they can be stored to disk. For example, with a 30 MB/sec non-SSD disk and
6 MB/frame, frames will be read at about 5 frames/second.
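The sustained fill rate is just write throughput divided by frame size, as this trivial sketch shows:

```python
def cache_fill_fps(write_mb_per_sec, frame_mb):
    # Sustained cache-fill rate once frames must go straight to disk.
    return write_mb_per_sec / frame_mb

# 30 MB/sec disk, 6 MB/frame -> about 5 frames/second, as above.
```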
SSD drives are recommended for disk caches because they are much faster
than hard drives. An Intel 520 SSD is specified to write at up to 520 MB/sec (6 Gbit/sec
SATA), which would permit writes 10x faster, in the 50 frame/sec range. At that speed,
the image decode time will dominate the playback rate. (SSDs similarly read much
faster than hard drives, which is useful once the shot is in the disk cache.)
When you close SynthEyes, any unsaved data will immediately be queued to be
written to disk. That can be tens of GBs depending on size of the file and the amount of
RAM on your machine. Since a standard non-SSD disk drive has only a few tens of
MBs/sec of write speed, saving the remaining data can take ten minutes or more. Your
operating system does that automatically in the background, so that it will have little
impact on other things you are doing.
Warning: if your electric power fails or you force your machine to shut off
before the BAFF file has been completely written, then any un-saved data will
be lost. The BAFF file will be unusable, since it will claim to have many valid
frames, but some will not have the complete image stored. Although this is
inconvenient, it is easily rectified by clicking Script/Flush Shot's Caches.
pre-processed images are always converted into this desired format for storage, right at
the end of pre-processing. This permits you to use floating point format for processing,
but convert to 8-bit for storage, for example. The RAM cache is shared among all open
shots.
Tracking and Display. The trackers and viewport display code request images from
the RAM cache, based on what you want SynthEyes to do. If you scrub through only
frames 20-30, then only frames 20-30 will be fetched and potentially reside in the RAM
or disk cache: image reading is based on "pull" not "push."
Operating System Free RAM Pool. (Not an official part of the processing pipeline, but
working off to the side.) Modern computers often have many GBs of RAM, even though
they are running fairly small applications such as web browsers. This means that often
there is a large pool of unused free RAM—unused in the sense that it is not part of any
particular application or process on your machine.
The operating system uses the free RAM as a cache for your disk. Once you read a file,
it stays in RAM as well as in disk, so that if you need it again, the operating system
already has it. And when you are writing to disk, the operating system will put the data
in RAM, then tell the application that the write is done, even though it is merely queued,
and may not be written for some time. So this pool of unused RAM is quite helpful. The
operating system may keep the disk cache's BAFF files in this unused RAM, if there is
enough available, which makes your machine quicker. If it needs the RAM, the
operating system writes the data to disk, if that is needed. If the data on disk is already
correct (the file is being read), then the operating system can take the RAM back at any
time. All of this happens automatically without your interaction.
Final exam question. If you change the Saturation value in the image preprocessor,
what caches are affected? Answer: only the RAM cache. Any affected images stored in
the RAM cache will be invalidated (dropped), and the image preprocessor will have to
regenerate them as they are needed. Since the disk cache stores unprocessed images,
it is unaffected, and the RAM cache will be supplied with images from the disk cache (if
present). SynthEyes will not have to go back to the movie reader (and maybe image
reader) to reread the original images.
RAM Cache Size with Disk Caching
The RAM cache and disk cache both use copious amounts of RAM, though in
different ways. The size of the RAM cache is controlled by the Max RAM Cache GB
preference in the Image Input section. The size of the disk cache is determined by the
size of the shot, but the amount of the disk cache in RAM is controlled adaptively by the
operating system, depending on the requirements of SynthEyes and other running
applications.
If you have the image preprocessor performing extensive processing (especially
un- or re-distortion or stabilization), and the shot can fit into RAM, be sure to keep the
RAM cache large enough to cache the shot, if possible, so that you avoid repeatedly
pre-processing the shot. This is no different from the case without the disk cache. Though the
disk cache helps quickly access the original images, it is important to use the RAM
cache to minimize repeated preprocessing.
When the image preprocessor is not being used (ie all controls are at their
default settings), then the image supplied by the disk cache will be the same one saved
in the RAM cache. Though the image is listed at its normal size on the shot setup panel,
in fact it does not take any extra RAM, because the image is already stored in the file on
disk. When the image is "stored" in the RAM cache, it occupies less than a hundred
bytes regardless of the image resolution.
Using Disk Cache Files Directly
The disk cache consists of large flat files, one for each shot, each with a BAFF
file extension. BAFF is a custom file type that SynthEyes can open and read directly as
well: essentially it is another kind of movie file, like an AVI or MOV or MP4.
Once a shot has been completely loaded into the BAFF disk cache file, the BAFF
file can be opened directly by SynthEyes. For example, you might want to create a
BAFF file from a shot on a network, then work on the shot from a different physical
location, disconnected from the original network.
To do that:
Make sure the shot is 100% loaded into disk cache by playing through the
entire shot, then checking the Memory Status area of Shot/Edit Shot.
Make sure it is at 100.0%.
Close SynthEyes
You MUST move the BAFF file to a different folder, because BAFF files in
the disk cache folder are subject to deletion at any time (to limit the size of
the cache, or clear it).
Reopen SynthEyes, and then the SNI file referencing the original shot.
Do a Shot/Change Shot Images, and select the BAFF file
The entire shot will be available immediately, subject to the time required
for your disk to read the section you need.
The BAFF file format is a new, simple flat file type specifically designed for very
fast operation using advanced virtual-memory techniques. SynthEyes never explicitly
reads it at all: the entire file is mapped directly from disk into the application's address
space, and SynthEyes can pull data from it as if it were in RAM.
The operating system reads data for the BAFF file on SynthEyes's behalf, using
unused system RAM as a buffer to speed operation.
Interested developers can inquire about the BAFF format; it should be
straightforward to add high-performance BAFF image readers to other applications.
Warning: RED periodically updates their cameras and software with new
features and data formats that can make them unreadable by software built
with earlier versions of the RED SDK. SynthEyes currently uses RED SDK
7.0.8, which includes 8K support; check the release history for your version
for its RED SDK version.
RED Files can be read by three different methods, ordered fastest to slowest:
RED Rocket hardware board
GPU-accelerated processing using your video card
Main CPU processing
They can't all read all types of file, however. Most of the time, something
reasonable should happen automatically without your involvement, if you adjust
Shot/Enable prefetch appropriately. We'll discuss optimizing your configuration for
each case below.
RED Rocket Decoding
If one or more RED Rocket boards are present, they will be used by default.
We're only assuming that the Rocket is fastest, though. If you have a big powerful video
card, you can try using the GPU instead to make sure. See the following section to see
how to enable the use of Rocket.
For RED Rocket decoding, you should have "Shot/Enable Prefetch" turned on.
RED Rocket does not support decoding Dragon material. (RED Rocket-X might?)
RED GPU Video Card Decoding
If you have a more recent video card with 2GB or more of video RAM, the RED
GPU reader can provide much faster RED file decoding times, as much as ten times
faster or more!
Note: The RED GPU decoder does not support decoding HDRx or
ColorVersion1 files. Opening these files with the GPU reader active will cause
the software reader to be successfully used instead, but you will want to turn
"Shot/Read 1f at a time" back off to maintain interactive performance while
reading.
The real test of whether your video card is usable or not is to actually try it; we
highly recommend that approach (yes, that's you, demo users).
When opening RED shots using the GPU, you should have "Shot/Enable Prefetch"
turned on to start with.
You can see whether your video card is being used the first time you open a
RED shot after starting SynthEyes. When the shot setup dialog appears, the status line
will provide information on whether CUDA or OpenCL is being used; if nothing
appears, software decoding will be used.
While SynthEyes can read RED R3D files, there is no assurance that
downstream compositing or animation applications can do the same. You may need to
convert the R3D movie file to an image sequence in order to export files from
SynthEyes that downstream applications can read.
Hopefully the DNG standard and SDK will provide standardized processing in the
future.
File Sequences
Cinema DNG sequences are a little different from SynthEyes's normal image
sequences, so we list them here:
The filenames shall include a sequencing field that is at the same position and of the
same length for all filenames.
The sequencing field shall contain characters 0 through 9 only. A file whose filename
contains other characters in the location of the sequencing field (including sign,
period, comma, or space) shall not be part of this sequence.
The sequencing field shall be a run of at least four decimal characters in the
filename. When more than one such run exists in the filename, the sequencing field
shall be the run closest to the end of the filename.
Omitted intermediate numbers shall indicate corresponding missing frames.
For example, in the filename bridge_0812.1136.day13.dng, "1136" is the
sequencing field.
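The sequencing-field rules above can be expressed compactly; an illustrative Python sketch:

```python
import re

def dng_sequencing_field(filename):
    """Per the Cinema DNG rules above: the sequencing field is a run of
    at least four decimal digits; when several exist, use the run
    closest to the end of the filename."""
    runs = re.findall(r"[0-9]{4,}", filename)
    return runs[-1] if runs else None
```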
By contrast, a normal SynthEyes sequence field can be fixed length or steadily
increase, is always located at the end of the file name, and any missing frame ends the
sequence. If the usual SynthEyes rules are desired for DNG sequences, turn off the
Cinema DNG rules preference in the Image Input section, then open the file to produce
a normal IFL file. The preference can then be turned back on.
Tip: To facilitate later editorial changes and compatibility with other software,
you should always open the first file in a DNG sequence.
DNG files can be opened as 8-bit integer, 16-bit integer, or 32-bit floating point
files. The 32-bit format is the most accurate.
Performance
Reading DNG files and processing them into usable images is much more
time-consuming than reading simple TIFF or PNG files.
To improve performance, keep the Shot/Enable Prefetch menu item on, and the
Shot/Read 1f at a time menu item off. This will enable all the cores in your processor to
work simultaneously to read frames.
You can also use SynthEyes's local disk caching capabilities, so that the DNG
files are only read and converted once, then kept locally behind the scenes, preferably
on a fast RAID or SSD disk.
Shot Metadata
SynthEyes can retrieve metadata about shots (more specifically about the
individual frames in the shot) such as lens focal length, focus distance, iris, ISO, etc,
depending on the information available and extracted by the image-format-specific
reader. Whether metadata is available thus depends on the image format, what camera
produced it, and what applications have processed it since.
Current image/movie formats that SynthEyes can read metadata from: DNG,
EXR, JPEG, RED, and TIFF.
Warning: The Disk Cache does not preserve the metadata from the source
shot by itself. If metadata is required, either turn off the disk cache, or use the
Metadata/Export All Frames script to save the metadata in tandem with the
BAFF file.
There are a limited number of things that can be done with the metadata:
Take fixed or animated focal length and plate size data from a zooming
shot, and drive that into SynthEyes's seed FOV track via the
"Metadata/Retrieve focal length and plane" script (which is a Tool script,
not an importer!).
Look at it for clues to what happened on set with the "Metadata/Export
Single Frame" exporter.
Export it to a text file with "Metadata/Export All Frames", for example to
use metadata from a RED file that has been converted to an image
sequence.
Access the predefined metadata with Synthia, for quick looks or setup
gimmicks.
Access it via your own Sizzle workflow-integration scripts.
Note that the metadata is specific to each individual frame, ie it is animated.
Generally setup information should be available, and the same, for each frame, though
that may depend on the source.
Here are the predefined metadata items, which are created from other metadata
items. The names shown are literal and must be used exactly (without the quotes)
within a metadata file, Sizzle, or Python. Missing values will be zero or the null string, or
use MetaPresent to test. Note that the importers will typically define many more format-
specific items; exif_* items are frequently present.
"exposure" The shutter exposure time in seconds.
"fnumber" Lens f-number in f-stops (may be a t-stop for RED, see the RED-
specific metadata)
"focal" The lens focal length, in mm. WARNING: useless unless you also
really know the back plate width! Even still, not very accurate!
"focus" The lens focus distance, in mm.
"iso" The ISO setting of the camera: 100, 200, 400, etc.
"plateHeight" The height of the plate (sensor) corresponding to the actual image,
in mm.
"plateWidth" The width of the plate (sensor) corresponding to the actual image, in
mm.
"shootDate" Shooting date YYYY-MM-DD (local time)
"shootTime" Shooting time HH:MM:SS (local time)
"shutter" Exposure as a shutter angle
"timecode" Timecode HH:MM:SS:FF
"timestamp" Timestamp of the frame within the file, in seconds. Can be used to
diagnose dropped frames during filming, for example.
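For example, per-frame timestamps can reveal dropped frames by looking for gaps larger than the nominal frame interval. This is an illustrative sketch of that diagnosis, not a SynthEyes feature:

```python
def dropped_frames(timestamps, fps, tol=0.25):
    # Flag frames whose gap from the previous timestamp exceeds the
    # nominal frame interval by more than tol (a fraction of a frame).
    interval = 1.0 / fps
    return [i for i in range(1, len(timestamps))
            if timestamps[i] - timestamps[i - 1] > interval * (1.0 + tol)]
```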
You can easily access any additional metadata items that may appear in future
camera firmware. If you use the Metadata/Export Single Frame exporter, you'll see the
tag name that you can use in Sizzle or Synthia. For example, you can access RED's 8-
character reel ID from Sizzle with shot.metastr.red_reel_id_8_character. To teach
Synthia to retrieve it, use
define an object attribute "reel id" values are a string accessed
readonly by `shot.metastr.red_reel_id_8_character`.
Then you can ask for camera 1's reel id, for example. Use metanum and "a
number" for numbers.
Retrieving Focal Length from Metadata
The (Scripts/) "Metadata/Retrieve focal length and plane" script looks at the
available metadata and drives it into the seed field-of-view track. This script is the
primary current use of metadata. The script handles both fixed and zooming shots.
Different controls will appear in the script depending on what metadata is available.
It is crucial to have an accurate back plate width number as well, as otherwise
the focal length means nothing. Sometimes the back plate width information is available
from the metadata; if not you must acquire it from the camera specifications or from
other calibration shots. In at least some cases, more metadata may be available from
still images acquired as "raw" and then saved as other image types.
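The underlying relationship is the standard pinhole-camera formula: the horizontal field of view depends on both the focal length and the plate width, which is why focal length alone "means nothing." Illustrative Python:

```python
import math

def horizontal_fov_deg(focal_mm, plate_width_mm):
    # Standard pinhole relation: FOV = 2 * atan(plate / (2 * focal)).
    return math.degrees(2.0 * math.atan(plate_width_mm / (2.0 * focal_mm)))
```

For instance, an 18 mm lens on a 36 mm-wide plate gives a 90-degree horizontal field of view; the same lens on a narrower plate gives a narrower one.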
In the event that no better information is available, the script may use the
exif_focal_length_in_35mm tag, which is the equivalent focal length, as if the camera
was a 35-mm still camera. This tag is dependent on the camera manufacturer doing the
math right. Unfortunately a technical issue in the EXIF data format requires that the
number be a whole (integer) number, limiting its accuracy.
The script writes to the "seed" track, ie the initial suggested value for the lens
field of view. Check the "Set lens mode to Known" checkbox (or set the mode yourself)
to have the lens field of view used exactly for the solve. Otherwise it may be used as a
starting point, or just for comparison.
If the scene has already been solved, you'll be asked if you want to clear the
solved camera FOV track. If it is cleared, then the newly-created metadata-based FOV
is visible directly and immediately. If not, it won't be visible unless the scene is re-
solved, the solution is cleared, "View/Show seed path" is selected, or you look at it
directly in the graph editor.
Writing Metadata
At the current time, only the TIFF image writer can write metadata into the
images it produces, ie via Save Sequence in the image preprocessor. It writes the EXIF
data produced by the DNG, JPEG, or TIFF image readers. Note that the data is not
currently modified to account for the potential effect of changes made by the image
preprocessor, such as resampling, padding, etc.
For bulk storage of metadata, use the Metadata/Export all frames script.
Note that if the preference is cleared (empty), then SynthEyes won't look for any
implicit alpha channel files at all.
Separate Alpha Channel File Types
Alpha channel files are themselves black-and-white files: their own alpha channel
is ignored and should be omitted. Ideally they are monochrome images.
Since SynthEyes is intended for processing RGB imagery, support for reading
gray-scale formats is very limited (currently PNG or TIFF). You can write an alpha
channel as RGB if necessary (many apps won't give you a monochrome output option).
SynthEyes will use the green channel of an RGB image for alpha.
Alpha channels have a lot of redundancy, so you want to make sure that the
format does a good job compressing the alpha channel, without introducing artifacts.
Run-length or ZIP compression are good choices (not JPEG!). Due to compression, an RGB
image will probably not be much larger than a gray-scale image.
For most usage, PNG is the recommended alpha-file format.
Rolling Shutter
Rolling shutter is an imaging problem produced in many popular "CMOS"
cameras. This includes company-specific proprietary variations such as 3MOS,
HyperMOS, etc., as well as other cameras, such as RED, that use different sensor names.
It occurs just as much in expensive cameras as cheap ones. Only cameras that
specifically state "global shutter" in their specifications, or "CCD" cameras (which are
rarely seen these days) do not suffer from this problem.
The rolling shutter problem arises because there is no consistent shutter period
for the entire image. The top lines of the image are "taken" over a physically different
stretch of time than the bottom lines. The lines at the top are taken earlier, the lines at
the bottom much later.
Consider a 30 fps 1080p camera. The top line is read out then reset; it begins
accumulating light and charge for the next 1/30th of a second. As soon as the first line
has been read out and reset, the second line is read out and reset, and it then begins
accumulating light for 1/30th second. That continues for every line in the chip.
Generally, for a 30 fps camera, it will take about 1/30th of a second to read out, one
after another, all 1080 lines.
That means that by the time the bottom, 1080th, line is read out, almost a full
1/30th of a second has gone by. It will be accumulating light for a period of 1/30th of a
second with virtually no overlap with the 1/30th of a second that the top line is integrated
over! The top line of the next frame will be read out and begin integrating in just an
instant. So the last line is closer to the next frame than the first!
This wreaks havoc on the geometry of the image. Depending on how the
camera is moving during the 1/30th of a second, and what it is looking at, an alarming
number of geometric distortions will be introduced.
To the extent that the camera is panning left or right, the image will be skewed
horizontally: a vertical pole will become slanted. To the extent that the camera is tilting
vertically, images will be squashed or stretched vertically. If the camera is pushing in or
pulling back, keystone distortions will result.
Consider a shot from a camera panning to follow a truck. The background suffers
rolling shutter distortion, but the truck (stationary in the image) does not! Each moving
object is affected differently, depending on its motion across the image.
If the camera is vibrating, the image turns completely to jello. (We rather
famously lost an expensive helicopter shoot to this effect.)
In short, rolling shutter, and CMOS cameras, are pretty much a disaster for visual
effects. If at all possible, we recommend using cameras with a global shutter for visual
effects shots, despite their higher cost.
Unfortunately, there's no way to eliminate this problem. You can reduce it, work
around it, but not eliminate it. Contrary to the claims of some, a short shutter time does
not reduce rolling shutter. If you think about the explanation above, you'll see why. A
short shutter time that reduces blur will make it harder to hide mistakes, as well.
Improperly shot footage with CMOS cameras will be objectionable even to lay
human observers, because it does not correspond to our perception of the world—
except in cartoons!
For professional shooters, the usual tactic for CMOS is to make sure that the
camera motion is slow in all directions, so that there is comparatively little motion from
frame to frame (wasn't it supposed to be "moving pictures"?). And CMOS cameras
should always be shot from hard mounts such as dollies, cranes, etc.
Moving a CMOS camera slowly does not eliminate the rolling-shutter problem. It
may reduce the geometric distortion sufficiently that you cannot see it. However, in
match-moving we typically match shots down to the sub-pixel level, so something you
can't see may still be a ten pixel error! That's bad.
There are several possible approaches to deal with that:
1. try to shoot to keep the rolling-shutter low, and try to cheat the tracks and
inserts to allow for the inevitable errors,
2. use a third-party software tool to try to correct the footage, or
3. compensate for the rolling shutter in the solve, producing a solve for an
'ideal' camera.
The first choice can work out for small amounts of distortion and modest tracking
requirements. The solver will adjust the solve to best match the distorted data, which
winds up allowing inserts to appear at a correspondingly distorted location. Long shots
that loop back on themselves and other shots with high self-consistency requirements
will be substantial problems.
The second choice, a third-party tool, can be OK for simple images too. But
keep in mind that rolling shutter causes an unrecoverable loss of data in the image.
Any repair can be only a best guess at the missing data, and for that reason you will
commonly see artifacts around edges, and residual distortion.
The third choice, producing a solve for an ideal camera, is the approach we
make available in SynthEyes, using the rolling-shutter controls on the Shot Setup panel,
and the Advanced Lens Controls.
When you turn on rolling shutter compensation, the solver will correct the tracker
data (not the images) based on the motion of each tracker, so that the tracker's position
is the position it had at the vertical center line of the image.
To do that, you must turn on rolling shutter compensation and supply a single
number, which is the portion of the frame time required to read out the image data. An
online tutorial shows how to measure rolling shutter.
Alternatively, the solver can compute rolling shutter from suitable shots,
when enabled from the Advanced Lens Controls panel. To be suitable, a shot must
have sufficiently rapid motion, and well-enough distributed trackers, for the rolling
shutter to be computed.
The rolling shutter value is the ratio of the readout time to the frame time, for
example, a 30fps camera with a 27 msec readout has a rolling shutter fraction of 0.81
(=27/33.33). Camera manufacturers are moving to reduce the readout time to reduce
the rolling shutter problem, so a 5 msec readout at 30 fps would be a rolling shutter
factor of 0.15 (quite modest). Some measured values can be found on the web site.
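The fraction is simple arithmetic; a minimal sketch in Python (not SynthEyes code) reproduces the examples above:

```python
def rolling_shutter_fraction(readout_ms, fps):
    # Ratio of sensor readout time to total frame time.
    frame_ms = 1000.0 / fps
    return readout_ms / frame_ms

print(round(rolling_shutter_fraction(27.0, 30.0), 2))  # 0.81, as in the text
print(round(rolling_shutter_fraction(5.0, 30.0), 2))   # 0.15
```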
Gory detail: on footage shot on mirror stereo rigs, the footage from the
camera that is mirrored vertically will have rolling shutter in the reverse
direction, so the rolling shutter value should be set to the negative of the
usual value.
Eliminating the rolling shutter distortion in this fashion can provide very
substantial improvements in the quality of the solve.
In order to composite CGI generated for the ideal camera with the original rolling-
shutter-based footage, you must render the footage with a rolling shutter also. At
present, that capability is not widely available (Lightwave and Renderman are among
the exceptions), but we expect it to become more common in the future. It is essentially
a modified form of motion blur.
At present, you can likely simulate the effect by rendering at a multiple of the
frame rate, then combining the subframes in varying amounts depending on the vertical
position in the image.
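A rough sketch of that idea in plain Python (a hypothetical helper, not a SynthEyes or renderer API; it picks the nearest subframe per row, where a production version would blend adjacent subframes):

```python
def simulate_rolling_shutter(subframes, fraction):
    """Assemble one rolling-shutter frame from temporally ordered subframes.

    subframes: list of N frames (each a list of H rows), rendered at N times
    the base frame rate. fraction is the rolling-shutter fraction
    (readout time / frame time). Each output row is taken from the subframe
    closest to that row's readout time.
    """
    n = len(subframes)
    h = len(subframes[0])
    out = []
    for y in range(h):
        # Readout time of row y, as a fraction of the frame time (0..fraction)
        t = (y / (h - 1)) * fraction if h > 1 else 0.0
        idx = min(int(round(t * (n - 1))), n - 1)
        out.append(subframes[idx][y])
    return out
```

With fraction set to zero, every row comes from the first subframe, reproducing a global shutter.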
Rolling shutter compensation also makes it more difficult to assess the quality of
tracking within SynthEyes, as the predicted 3-D position of a solved tracker is based on
the ideal camera, with no rolling shutter. So there will be apparent errors where there
are none.
Rolling Shutter Compensator
The Rolling Shutter Compensator tool script (in the Projection Screen section)
offers a gag that does some simple overall compensation for the effect of rolling shutter.
It is mainly offered as illustration of the effects of Rolling Shutter, rather than a regular
production tool to remove it (which is not possible in general).
Procedure
1. Solve the shot, using or computing a rolling-shutter value.
2. Run the Lens Workflow script if distortion is present.
3. Run this script, to generate a rolling shutter compensation projection
screen.
4. Modified footage can be viewed ONLY in the perspective view.
5. Turn OFF right-click/View/View Image in the perspective view.
6. Ensure the projection screen is visible, potentially by adjusting Far Clip on
right-click/View/Perspective View Settings.
7. Parts of the image will be missing, due to the rolling shutter compensation
(as with lens distortion), so you must reduce the field of view (starting at
the first frame) enough that all pixels are 'covered' throughout the entire
shot.
8. You can export the entire assembly, including the animated projection
screen to applications that support vertex caches (for example, via the
FBX exporter).
9. In downstream apps, make sure that the projection screen is visible, not
the original background image. Delete any redundant screen introduced
by the exporter.
10. As an alternative to steps 7-9, you can use RENDER in the perspective view to
write 'corrected' images, then delete the screen and use Shot/Change Shot
Images to switch to the corrected images.
Notes
This script is an experiment demonstrating the effect of rolling shutter, and
why it is problematic, not a way to remove it in general.
The shot MUST BE SOLVED before running the script, or it won't do
anything useful. It compensates for the distortion of the background (only)
as the camera moves.
Plausibly self-consistent only for tripod-type shots.
Note that the mesh distorts vertically as well as horizontally, even
rotationally. Rolling shutter distortion occurs for ANY direction or amount
of motion (ie not just pan)!
Rolling shutter compensation is possible only for relatively low-frequency
motions. Higher-frequency vibrations inherently can't be compensated;
they result in the dreaded jello shot.
Minimizing Grain
The grain in film images or speckle noise in digital cameras can perturb tracking
somewhat. Use the Blur setting on the Filtering tab of the image preparation panel to
slightly filter the image, minimizing the grain. This tactic can be effective for
compression artifacts as well. (Use the blur setting, rather than matching blurs on luma
and chroma.)
There is also a Noise Reduce spinner, which controls a somewhat slower
algorithm for noise reduction with less actual blur. It is intended to help tracking, rather
than for producing ultra-clean final images. It avoids some operations in typical noise
reduction algorithms that can shift the position of features in the image.
SynthEyes can stabilize the images, re-size them, or correct for lens distortion.
As it does that, it interpolates between the existing pixels. There are several
interpolation modes available. You can produce a sharper image when you are re-
sampling using the more advanced modes, but you increase the grain as you do so.
You can increase the hi-pass image contrast using the Levels settings, for
example low=0.25, high=0.75.
You can use a small blur for grain/compression in conjunction with the high-pass
filtering. It will also reduce any slight banding if you have used the Levels to expand the
range.
Memory Reduction
It is much faster to track, and check tracking, when the shot is entirely in the PC’s
RAM memory, as fetching each image from disk, and possibly decompressing it, takes
an appreciable amount of time. This is especially true for film-resolution images, which
take up more of the RAM, and take longer to load from disk.
SynthEyes offers several ways to control RAM consumption, ranging from blunt
to scalpel-sharp.
The most important control is the Max RAM Cache GB preference in the Image
Input section. It controls how many frames of the shot are stored in your computer's
RAM. This can be much lower than the length of the shot. If you are auto-tracking, keep
at least two frames per processor (four if your processors are hyper-threaded, ie two per
usable thread). If you are seeing swapping on a 64-bit license, reducing the Max RAM
Cache GB should be your first move, and can generally be your last. (If your system
says it is running out of memory, check for plenty of free disk space on your main
system disk!)
A reduced RAM cache will mean that your computer will have to go out to fetch
images from disk more often. That can be painful for some movie codecs. If you want
fast interactive performance, but are willing to give up some other things in order to fit
your shot into RAM, read on.
If your source images have 16 bit data, you can elect to reduce them to 8 bit for
storage, by unchecking the 16-bit checkbox and reducing memory by a factor of two. Of
course, this doesn’t help if the image is already 8 bit.
If you have a 2K or 4K resolution film image, you might be able to track at a lower
resolution. The DeRez control allows you to select ½ or ¼ image resolution.
If you reduce resolution by ½, the storage required drops to ¼ the previous level, and a
reduction by ¼ reduces the storage to 1/16th the prior amount, since the resolution
reduction affects both horizontal and vertical directions. Note that by reducing the
incoming image resolution, your tracks will have a higher noise level which may be
unacceptable; this is your decision.
If you can track using only a single channel, such as R, G, or luma, you obtain an
easy factor of 3 reduction in storage required.
The most precise storage reduction tool is the Region Of Interest (ROI), which
preserves only a moving portion of the image that you specify, and makes the rest
black. The black portion does not require any RAM storage, so if the ROI is only 1/8th
the width and height of the image, a reduction to 1/64th of the storage is obtained.
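A rough estimate of how these options combine (illustrative Python only; SynthEyes's actual internal storage details may differ):

```python
def frame_bytes(width, height, channels=3, bytes_per_sample=2,
                derez=1.0, roi_fraction=1.0):
    # DeRez scales both axes; the ROI keeps only a fraction of the pixels.
    return width * height * derez * derez * channels * bytes_per_sample * roi_fraction

full = frame_bytes(4096, 3112)                      # 4K scan, 16-bit RGB
lean = frame_bytes(4096, 3112, channels=1,          # one channel, 8-bit,
                   bytes_per_sample=1, derez=0.5)   # half resolution
print(full / lean)  # 24.0: a 24x reduction in RAM per frame
```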
Tip: If you need ROI these days, likely you should just get some more
memory! It was originally intended for processing film images on 32-bit
machines. This feature is subject to future removal! It may be useful for
processing padded 360VR images, ie 360x120 to 360x180.
The region of interest can be used with object-type shots, such as tracking a face
or head, a chestplate, a car driving by, etc, where the interesting part is comparatively
small. The ROI can also be used in supervised tracking, where the ROI can be set up
for a region of trackers; once that region is tracked, a different ROI can be configured
for the next group. A time savings can be achieved even though the next group will
require an image sequence reload. (See the section on presets, below, to be able to
save such configurations.)
The ROI is controlled by dragging it with the left mouse button in the Image
Preprocessing dialog’s viewport. Dragging the size-control box at the lower right of the
ROI will change the ROI's size.
The next section describes animating the preprocessing level and ROI.
It can also be helpful to adjust the ROI controls when doing supervised tracking
of shots that contain a non-image border as an artifact of tracking. This extra border can
defeat the mechanism that turns off supervised trackers when they reach the edge of
the frame, because they run out of image to track before reaching the actual edge.
Once the ROI has been decreased to exclude the image border, the trackers will shut
off when they go outside the usable image.
As with the image adjustments, changing the memory controls does not require
any re-tracking, since the image geometry does not change.
To animate the controls, turn on the Make Keys checkbox at lower right of
the image prep dialog. Changes to the animated controls will now create keys at the
current frame, causing the spinners to light up with a red outline on keyframes. You can
delete a keyframe by right-clicking a spinner.
If you turn off Make Keys after creating multiple keys, subsequent changes will
affect only the keyframe at the start of the shot (frame zero), and not subsequent keys,
which will rarely be useful.
You can navigate within the shot using the next frame and previous frame
buttons, the next/previous key buttons, or the rewind and to-end buttons.
Disabling Prefetch
SynthEyes reads your images into RAM using a sophisticated multithreaded
prefetch engine, which runs autonomously much of the time when nothing else is going
on. If you have a smaller machine or are maybe trying to run some renders in the
background, you can turn off the Shot/Enable prefetch setting on the main menu.
Get Going! You don’t have to wait for prefetch to finish after you open a shot. It
doesn’t need courtesy. You can plough ahead with what you want to do; the prefetcher
is designed to work quietly in the background.
Image Centering
The camera’s optic axis is the point about which the image expands or contracts
as objects move closer or further away. Lens distortion is also centered about this point.
By convention of SynthEyes and most animation and compositing software, this point
must fall at the exact center of the image.
Usually, the exact optic center location in the image does not greatly affect the 3-
D solving results, and for this reason, the optic center location is notoriously difficult to
determine from tracking data without a laboratory-grade camera and lens calibration.
Assuming that the optic axis falls in the center is good enough.
There are two primary exceptions: when an image has been cropped off-center,
or when the shot contains a lot of camera roll. If the camera rolls a lot, it would be wise
to make sure the optic axis is centered.
Images can be cropped off-center during the first stages of the editorial process
(when a 4:3 image is cropped to a usable 16:9 window), or if a film camera is used that
places the optic axis allowing for a sound channel, and there is none, or vice versa
(none is allowed for, but there is one).
Image stabilization or pan/scan-type operations can also destroy image
centering, which is why SynthEyes provides the tools to perform them itself, so they can
be done correctly.
Of course, shots will arrive that have been seriously cropped already. For this
reason, the image preprocessing stage allows images to be padded up to their original
size, putting the optic axis back at the correct location. Note that the shot must be
padded to correct it, rather than cropping the image even more! It will be important to
identify the degree of earlier cropping, to enable it to be corrected.
The Fix Cropping (Pad) controls have two sets of three spinners, three each for
horizontal and for vertical. Both directions operate the same way.
Suppose you have a film scan such that the original image, with the optic axis
centered, was 33 mm wide, but the left 3 mm were a sound track that has been
cropped. You would enter 3 mm into the Left Crop spinner, 30 mm into the Width Used
spinner, and 0 mm into the Right Crop spinner. The image will be padded back up to
compensate for the imagery lost during cropping.
The Width Used spinner is actually only a calculation convenience; if you later
reentered the image preprocessing dialog you would see that the Left Crop was 0.1 and
the Width Used 1.0, ie that 10% of the final width was cropped from the left.
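The arithmetic can be sketched with a small hypothetical helper (plain Python, not part of SynthEyes), using the 33 mm example above:

```python
def pad_for_crop(left_crop, width_used, right_crop, used_pixels):
    # Crop amounts are expressed as fractions of the width actually used,
    # then converted to pixels of padding to restore the original framing.
    left_pad = round(left_crop / width_used * used_pixels)    # 3/30 = 0.1
    right_pad = round(right_crop / width_used * used_pixels)
    return left_pad, used_pixels + left_pad + right_pad

# 3 mm sound track cropped from a 33 mm original (30 mm used), 1920 px scan:
print(pad_for_crop(3.0, 30.0, 0.0, 1920))  # (192, 2112)
```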
The Fix Cropping (Pad) controls change the image aspect ratio (see below) and
image resolution values on the Open Shot dialog, since the image now includes the
padded regions. The padding region will not use extra RAM, however.
It is often simpler to fix the image centering in a way that does not change the
image aspect ratio, so that you can stay with the official original aspect ratio throughout
your workflow. For example, if the original image is 16:9 HD, it is easiest to stay with
16:9 throughout, rather than having the ratio change to 1.927 due to a particular
camera’s decentering. The Maintain original aspect checkbox will permit you to
update the image center coordinates, automatically creating new padding values that
keep the aspect ratio the same.
Tip: Save Sequence can also include a simple render of the meshes in your
scene, for previewing with still-distorted footage. It is also useful for rendering
quick inserts in 360 VR footage, since the perspective view doesn't use 360
VR views. Normally, use the perspective view for better renders with motion
blur and control over antialiasing.
Use the Save Sequence button on the Image Preparation dialog’s Output tab to
save the processed sequence. If the source material is 16 bit, you can save the results
as 16 bit or 8 bit. You can also elect whether or not to save an alpha channel, if present.
If the source has an alpha channel, but you are not given the option to save it, open the
Edit Shot dialog and turn on the Keep Alpha checkbox.
Note that images are saved at 16 bit only if the Depth and Store Depth settings
on the Shot Setup panel are set to that depth! By default, they are set to 8 bit
for speedy tracking, which will make your imagery noisy!
Output file formats include ASF, AVI, BMP, Cineon, DPX, JPEG, OpenEXR,
PNG, Quicktime, SGI, Targa, TIFF, and WMV. Details of supported number of bits per
channel, compression formats, and alpha availability vary with format and platform.
Those settings will be available for later reuse if the same file extension is selected.
If you have stabilized the footage, you will want to use this stabilized footage
subsequently.
However, if you have only removed distortion, you have an additional option that
maximizes image quality and minimizes the amount of changes made to the original
footage: you can take your rendered effects and run them back through the image
preprocessor (or maybe your compositing package) to re-introduce the distortion and
cropping specified in the image preprocessing panel, using the Apply It checkbox.
This redistorted footage can then be composited with the original footage,
preserving the match.
The complexity of this workflow is an excellent argument for using high-quality
lenses and avoiding excessively wide fields of view (short focal lengths).
You can also use the Save Sequence dialog to render an alpha mask version of
the roto-spline information and/or green-screen keys.
Note: The tracker cleanup stage above is labeled for experts because if the
initial solve doesn't go well—for example you have unmasked actors walking
around—the tracker cleanup will make the situation worse, not better. If you're
not certain the track and solve will go well, you're better off examining them
yourself and making any necessary changes before running the cleanup. If
you expect it to go well and it doesn't, you can undo the different stages of
AUTO individually.
The overall 2-D automatic tracking process is controlled from the Features Panel.
Typically, blips are computed for the entire shot length with the Blips all frames
button. They can be (re)computed for a particular range by adjusting the playback
range, and computing blips over just that range. Or, the blips may be computed for a
single frame, to see what blips result before tracking all the frames, or when changing
blip parameters.
As the blips are calculated, they are linked to form paths from frame to frame to
frame.
Finally, complete automatic tracking by clicking Peel All, which will select the
best blip paths and create trackers for them. Only the blip paths of these trackers will be
used for the final camera/object solution.
You can tweak the automatic tracking process using the controls on the
Advanced Features panel, a floating dialog launched from the Feature control panel.
You can delete bad automatically-generated trackers the same as you would a
supervised tracker; convert specific blip paths to trackers; or add additional supervised
trackers. See Combining Automatic and Supervised Tracking for more information on
this subject.
If you wish to completely redo the automated tracking process, first click the
Delete Leaden button to remove all automatic trackers (ie with lead-gray tooltip
backgrounds), and the Clear all blips button. After changes to the Roto splines, you
may also need to click Link Frames—in most cases you will be prompted for that.
Note that the calculated blips can require tens of megabytes of disk space to
store. After blips have been calculated and converted to trackers, you may wish to clear
them to minimize storage space. The Clean Up Trackers dialog encourages this. (There
is also a preferences option to compress SynthEyes scene files, though this takes some
additional time when opening or saving files.)
corner location are frequently not particularly reliable, being subject to various blurs and
noise that have the effect of minimizing a corner, or shifting it from its true location.
Consequently the SynthEyes corner detector looks for suitable line segments at
a distance from the corner, and intersects them to produce the location of the corner. So
the accuracy derives from many smooth pixels further from the corner, rather than the
few unreliable pixels located at it. Even still, corner features are inherently less accurate
than spot features, which are based on a whole region of pixels. Since a tracker's 3-D
position is based on many frames of data, generally an excellent position can still be
obtained.
There will typically be fewer corners than spots located by an autotrack. For
modeling, you may want to increase the number of corners. The Add Many Trackers
panel can be set to selectively add only corner features. Or, the Features Control panel
can be set to make only prospective trails of corner blips visible and eligible to be
promoted to trackers (at your instruction).
You can adjust the corner detector's parameters from the Advanced dialog on the
Features panel. To better understand the process, you can use the edge or corner view
types, which will produce rather colorful displays in the main camera view.
The tooltips contain simple descriptions of the parameters. If you are using very
high or low resolution images, you might consider changing the various pixel-based
numbers such as edge width, minimum length, and intersect distance. In low-contrast
situations you might experiment with Edge Threshold and Contrast. Be sure to
experiment first, before doing an autotrack, so that you don't have to worry about
trackers you've already worked on.
Motion Profiles
SynthEyes offers a motion profile setting that allows a trade-off between
processing speed and the range of image motions (per frame) that can be
accommodated. If the image is changing little per frame, there is no point searching all
over the image for each feature. Additionally, a larger search area increases the
potential for a false match to a similar portion of the image.
The motion profile may be set from the summary or feature panels. Presently,
three primary settings are available:
Normal Motion. A wider search, taking longer.
Crash Pan. Use for rapidly panning shots, such as tripod shots. Not only a broader
search, but allows for shorter-lived trackers that spin rapidly across the image.
Low Detail. Use for green-screen shots where much of the image has very little
trackable detail.
There are several other modes from earlier SynthEyes versions which may be useful on
occasion.
Green-Screen Shots
Although SynthEyes is perfectly capable of tracking shots with no artificial
tracking marks, you may need to track blue- or green-screen shots, where the
monochromatic background must be replaced with a virtual set. The plain background is
often so clean that it has no trackable features at all. To prevent that, green-screen
shots requiring 3-D tracking must be shot with tracking marks added onto the screen.
Often, such marks take the form of an X or + made of electrical or gaffing tape.
However, a dot or small square is actually more useful to SynthEyes over a wide range
of angles. With a little searching, you can often locate tape that is a somewhat different
hue or brightness than the background: just different enough to be trackable, but
sufficiently similar that it does not interfere with keying the background.
You can tell SynthEyes to look for trackers only within the green- or blue-screen
region (or any other color, for that matter). By doing this, you will avoid having to tell
SynthEyes specifically how to avoid tracking the actors.
You can launch the green-screen control dialog from the Summary control panel,
using the Green Screen button.
When this dialog is active, the main camera view will show all keyed (trackable) green-
screen areas, with the selected areas set to the inverse of the key color, making them
easy to see. [You can also see this view from the Feature panel’s Advanced Feature
Control dialog by selecting B/G Screen as the Camera View Type.]
Upon opening this dialog, SynthEyes will analyze the current image to detect the
most-common hue. You may want to scrub through the shot for a frame with a lot of
color before opening the dialog. Or, use the Scrub Frame control at lower right, and hit
the Auto button (next to the Average Key Color swatch) as needed.
After the Hue is set, you may need to adjust the Brightness and Chrominance so
that the entire keyed region is covered. Scrub through the shot a little to verify the
settings will be satisfactory for the entire shot.
The radius and coverage values should usually be satisfactory. The radius
reflects the minimum distance from a feature to the edge of the green-screen (or actor),
in pixels. The coverage is the amount of the area within the radius that must match the
keyed color. If you are trying to match solid non-key disks that go as close as possible
to an actor, you might want to reduce the radius and coverage, for example.
You should use the Low Detail motion hint setting at the top of the Summary
panel when tracking green-screen shots (it normally reads Normal). SynthEyes’s
normal analysis looks for the motion of details in the imagery, but if most of the
image is a featureless screen, that process can break down. With Low Detail selected,
SynthEyes uses an alternate approach. SynthEyes will configure the motion setting
automatically the first time you open the greenscreen panel, as it turns on the green-
screen enable. See also a technique for altering the auto-tracker parameters to help
green-screen shots.
The green-screen settings will be applied when the auto-track runs. Note that it is
undesirable to have all of the trackers on a distant flat back wall. You need to have
some trackers out in front to develop perspective. You might achieve this with tracking
marks on the floor or (stationary) props, or by rigidly hanging trackable items from the
ceiling or light stands. In these cases, you will want to use supervised tracking for these
additional non-keyed trackers.
Since the trackers default to a green color, if you are handling actual green-
screen shots (rather than blue), you will probably want to change the tracker default
color, or change the color of the trackers manually. See Keeping Track of the Trackers
for more information.
After green-screen tracking, you will often have several individual trackers for a
given tracking mark, due to frequent occlusions by the actors. As well as being
inconvenient, it does not give SynthEyes as much information as it would if they were
combined. You can use the Coalesce Nearby Trackers dialog to join them together; be
sure to see the Overall Strategy subsection.
You can work in the perspective view with a matted version of the green-screen shot, ie with the keyed portions of the shot transparent, leaving only the un-keyed
actors and set visible in the 3-D perspective view. You can use this to place the actors
at their appropriate location in the 3-D world as an aid to creating your composite.
This will happen automatically if the dynamic projection screen is active in the
perspective view (on by default; see the Perspective Projection Screen Adjust script).
The dynamic projection screen holds the imagery within the perspective view, and is not
selectable or visible in other viewports.
For "real" geometry that is visible in other viewports (notably, other perspective
view), the Projection Screen Creator script creates mesh geometry in the scene that is
textured with the current shot imagery. As with the built-in screen, you can tell the
creator to Matte out the screen color.
Note: whether you use the dynamic projection screen or the creator script,
you can use an alpha channel on your shot instead of SynthEyes's keyer.
When doing this, be sure to turn on Keep Alpha when opening the shot.
With a suitable tracker, use the Camera to Tracker Distance script to determine
the distance to the screen: use the value in parentheses, ie along the camera axis.
You can write the green-screen key as an alpha-channel or RGB image using the
image preprocessor. Any roto-splines will be factored in as well. With a little setup, you
can use the roto-splines as garbage mattes, and use small roto dots to repair the output
matte to cover up tracking marks.
Important: do not do a Clear All Blips or Clean Up Trackers (with Clear Blips
enabled) before attempting to use either method of converting blip trails to
trackers. If the blips are cleared, there will be no raw auto-track data to create
trackers from.
If you wish to have a tracker at a particular location to help achieve an effect, you
could create a supervised tracker, but a quicker alternative can be to convert an existing
blip trail into a tracker—in SynthEyes-speak, this is Peeling a trail.
To see this, open the flyover shot and auto-track it again. Switch to the Feature
panel and scrub into the middle of the shot. You’ll see many little squares (the blips) and
red and blue lines representing the past and future paths (the trails).
You can turn on the Peel button, then click on a blip, converting it to a full tracker.
Repeat as necessary.
Use the controls at the bottom of the Feature panel to show only the longest
potential trails—here we will show only those that are 100 frames or longer. If you have
corner detection on, you can select only corners as well.
Alternatively, you can use the Add Many Trackers dialog to do just that in an
intelligent fashion—after an initial shot solution has been obtained.
The Add Many Trackers dialog searches the autotracking data for additional trails
that can be converted to trackers. It allows you to specify your requirements for the
trackers to be added, such as a minimum length, maximum error, or coverage of a
certain range of frames.
And, especially for mesh building, it allows you to use a previous camera-view
lasso to indicate an area in which new trackers should be selectively added. The new
trackers are chosen so that they are evenly distributed over the lassoed area to the
extent possible. (If there are few or no significant blips in an area, nothing can be added
there).
SynthEyes also provides default colors for trackers of different types. Normally,
the default color is green. Separate default colors for supervised, automatic, and zero-weighted trackers can be set from the Preferences panel. You can change the defaults
at any time, and every tracker will be updated automatically—except those for which
you have specifically set a color.
You can assign the color by clicking the swatch on the Tracker panel, or by
double-clicking the miniature swatch at the left of the tracker name in the graph editor.
If you have already created the trackers, lasso-select the group, and shift-click to
add to it (see the Lasso controls on the Edit menu for rectangular lassos). Then click the
color swatch on the Tracker panel to set the color. In the graph editor panel, if you have
several selected, double-click the swatch to cause the color of all the trackers in the
group to be set. Right-clicking the track panel swatch will set the color back to the
default.
If you are creating a sequence of supervised trackers, once you set a color, the
same color will be used for each succeeding new tracker, until you select an existing
tracker with a different color, or right-click the swatch to get back to the default.
You will almost certainly want to change the defaults, or set the colors manually,
if you are handling green-screen shots.
You will see the tracker colors in the camera view, perspective view, and 3-D
viewports, as well as the miniature swatch in the graph editor.
If you have set up a group of trackers with a shared color, you can select the
entire group easily: select any tracker in the group, then click the Edit/Select same
color menu item or control-click the swatch in the graph editor.
Each tracker has two possible colors: its primary color, and a secondary color.
The secondary color is used for each tracker when the View menu's Use alternate
color menu item is checked. The second color is usually set up by the Set Color by
RMS Color script; having done that you can switch back and forth between color
selections using the menu item.
To aid visibility, you can select the Thicker trackers option on the preferences
panel. This is particularly relevant for high-resolution displays, where the pixel pitch may
be quite small. The Thicker trackers option will turn on by default for monitors over 1300
pixels horizontal resolution.
Note that there are some additional rules that may occasionally override the color
and width settings, with the aim of improving clarity and reducing clutter.
Skip-Frame Track
The Features panel contains a skip-frame checkbox that causes a particular
frame to be ignored for automatic tracking and solving. Check it if the frame is subject to
a short-duration extreme motion blur (camera bump), an explosion or strobe light, or if
an actor suddenly blocks the camera.
The skip-frames checkbox must be applied to each individual frame to be
skipped. You should not skip more than 2-3 frames in a row, or too many frames
overall, or you may make it more difficult to determine a camera solution, or at least
create a temporary slide.
You should set up the skip-frames track before autotracking. There is some
support for changing the skipped frames after blipping and before linking, but this is not
recommended; you may have to rerun the auto-tracking step.
Tip: You can create a tracker at any time by holding down the ‘C’ key and left-
clicking in the camera view. Or, right-click in the camera view and select the
Create Trackers item. In either case you will be switched to the Tracker
control panel.
Left-click on the center of your feature, and while the button is down, position the
tracker accurately using the view window on the command panel. The gain and
brightness spinners located next to the tracker mini-view can make shadowed or blown-
out features more visible (they do not affect tracking directly). Adjust the tracker size
and aspect ratio to enclose the feature and a little of the region around it, using either
the spinner or inner handle.
Adjust the Search size spinner or outer handle based on how uncertainly the
tracker moves each frame. This is a matter of experience. A smooth shot permits a
small search size even if the tracker accelerates to a higher rate.
Create any number of trackers before tracking them through the shot. You might
create and track a batch of 3-6 at a time in a simple shot, or only one at a time on shots
requiring more supervision.
Tip: later you'll see how to tell if you have enough trackers using the
colored background in the Graph Editor, or in the timebar—if View/Timebar
background/Tracker count is turned on.
To track them through the shot, hit the Play or frame forward button or
use the mouse scroll wheel inside the tracker mini-view (scrubbing the time bar does
not cause tracking by design). Watch the trackers as you move through the shot. If any
get off track, back up a frame or two, and drag them in the image back to the right
location. The Play button will stop automatically if a tracker misbehaves, leaving it already selected for easy correction.
Supervised trackers have many controls; they are there for a reason. You should
be sure to adjust the controls to each specific shot. Usually when people have problems
with supervised tracking, it is because they have not configured them at all!
Important! In addition to the more obvious tracker size and search size
settings, you should always be sure to select the proper tracker prediction
mode on the Track menu.
Types of Trackers
SynthEyes supports the following types of trackers, as controlled by a dropdown
button on the tracker control panel:
pattern-match trackers,
white and black spot trackers,
symmetry trackers, and
planar trackers.
Pattern matching trackers are the most commonly used type for supervised
tracking: they allow any feature that you can see to be tracked, since you position the
tracker directly. SynthEyes searches subsequent images for the same image found
within the tracker's interior (specified by its size and aspect). Pattern match trackers can
be thrown off by scenes with rapid overall illumination changes, especially explosions
and strobe lighting. For such scenes, set up the image preprocessor to perform high-
pass filtering.
Spot trackers are produced by auto-tracking, though you may use them as well.
As the name suggests, they look for the center of a white or black spot, positioning the
center of the spot exactly at the center of the tracker. If you have a suitable feature, they
can be tracked easily through the shot without drift. You'll need to adjust the size of the
tracker properly through the shot: if the tracker is too big, surrounding imagery will
influence the tracker position; if it is too small, the spot will jump around between small
local maxima within the tracker. Note that typically there are many small local spots
(local maxima) that might be selected—SynthEyes uses a preliminary low-resolution
pattern match to identify the right spot from frame to frame. As with straight pattern
match trackers, this preliminary match can be thrown off by adverse conditions in the
shot, hence your supervision.
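The "center of a spot" idea can be sketched as an intensity-weighted centroid (illustrative only; SynthEyes's spot tracker also performs the preliminary pattern match described above):

```python
def spot_center(patch):
    """Intensity-weighted centroid of a white spot in a 2-D
    grayscale patch; returns sub-pixel (x, y) coordinates."""
    total = sx = sy = 0.0
    for y, row in enumerate(patch):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return sx / total, sy / total

patch = [[0, 0, 0, 0],
         [0, 2, 2, 0],
         [0, 2, 2, 0],
         [0, 0, 0, 0]]
print(spot_center(patch))   # (1.5, 1.5): centered between the bright pixels
```

Note how surrounding non-zero pixels would pull the centroid off the spot, which is why an oversized tracker lets nearby imagery influence the position.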
Symmetry trackers look for locations where the interior of the tracker is
symmetric—it looks the same when it is reversed top to bottom and left to right
simultaneously. This encompasses spots as well as other more complex patterns with
shapes similar to X's, H's, and often (weaker) nearly-symmetric features such as C's
and U's.
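The symmetry criterion, comparing the patch against itself reversed top-to-bottom and left-to-right simultaneously (a 180-degree rotation), could be scored like this (a sketch, not the actual implementation):

```python
def symmetry_error(patch):
    """Sum of squared differences between a patch and its
    180-degree rotation; 0 means perfectly symmetric."""
    h, w = len(patch), len(patch[0])
    return sum((patch[y][x] - patch[h - 1 - y][w - 1 - x]) ** 2
               for y in range(h) for x in range(w))

x_shape = [[1, 0, 1],
           [0, 1, 0],
           [1, 0, 1]]
print(symmetry_error(x_shape))   # 0: an 'X' is 180-degree symmetric
```

Spots, X's, and H's score perfectly; a C or U is only approximately symmetric, hence a weaker (but still usable) response.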
Planar trackers are substantially different, because they track a whole
rectangular (in 3-D) region, not a single point feature. They are pattern-match trackers
on steroids, if you like. For clarity, planar trackers are described in the separate 3-D
Planar Tracking Manual (available from the SynthEyes Help menu).
When dragging a spot or symmetry tracker in the camera view, tracker control
panel mini-view, or SimulTrack view, the tracker position will automatically snap exactly
to nearby peak locations, so that key locations can be set precisely to match
subsequent tracked frames. In the tracker mini-view and SimulTrack view, an X marker
will appear to show the nearest potential tracker location. If required, you can suppress
snapping by holding down the ALT/Command key while dragging.
The spot trackers created by autotracking are tracked only to the nominal shot
resolution, while spot trackers that are supervised are tracked at a higher resolution
controlled by the high-resolution setting at bottom of the Track menu (typically 4x
resolution). So you'll see some small off-centering in the X tracker location when
viewing autotrackers with the tracker mini-view or SimulTrack. A simple re-track of auto-
trackers will not eliminate those differences, because the positions are all keyed directly.
To re-run the autotrackers at the higher resolution, use a Fine-Tune with a Key Spacing
of one (ie without converting to pattern matches).
Channel Selection
Trackers can be set to look at only the red, green, blue, or luminance channels of
the input imagery. (By default, all three channels are used.) Selecting a single channel
may be useful in specific situations, such as when one channel has very high noise; you
should use RGB in most circumstances until proven otherwise.
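To make the channel choices concrete, here is a sketch of what each selection feeds the matcher (the Rec. 709 luma weights here are an assumption; SynthEyes's exact luminance formula is not documented in this manual):

```python
def to_channel(r, g, b, channel="rgb"):
    """Reduce a pixel to the channel a tracker inspects.
    Rec. 709 luma weights are assumed for luminance."""
    if channel == "red":
        return r
    if channel == "green":
        return g
    if channel == "blue":
        return b
    if channel == "luminance":
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (r, g, b)  # default: use all three channels
```

For example, a pure-blue pixel contributes very little luminance, so a tracker on the luminance channel would see a noisy blue channel largely suppressed.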
You can set the channel for each specific tracker, using the channel selection on
the tracker control panel, or for all trackers by adjusting the channel selection on the
Rez tab of the image preprocessor. The channel selector initially shows RGB, which is the default setting.
Note that while an Alpha selection is shown on the tracker control panel's
channel flyout, it is usable only by planar trackers. If you want to track the alpha channel
with non-planar trackers for some reason, you'll need to set that using the image
preprocessor.
Important! SynthEyes can find the pattern only if you have configured the
tracker appropriately: the search area must be large enough, and the prediction mode
suitable, so that the pattern is still inside the search area. You can animate the tracker
size and search size as you progress through the shot.
If a tracker goes off course, you can fix it several ways: by dragging it in the
camera view, by holding down the Z key and clicking and dragging in the camera view,
by dragging in the small tracker mini-view, or by using the arrow keys on the number
pad. (Memo to lefties: use the apostrophe/double-quote key ‘/” instead of Z.)
You can lock SynthEyes in this "Z-Drop" mode using the Track/Lock Z-Drop on
menu item. In the zdrop-lock mode, a single selected tracker will be moved to the
mouse location immediately when the button is depressed. In zdrop-lock mode, if you
click over a mesh, it will be ignored. You can click a different tracker to select it, or use
other usual left-mouse functionality, without issue. The status line will show ZDROP-
LOCK when the mouse is in the camera view.
If a tracker gets occluded or goes off the edge, you should turn it off (its stoplight-like enable control) for a few frames. (See also the Hand Animation and Offset Tracking
techniques). When you turn it back on, SynthEyes will try to seamlessly reacquire the
tracker pattern. If so, no intervention is required. If it has been too long since the tracker
was last seen, SynthEyes will not look for it. You must reposition it manually by
dragging in the tracker mini-view or with Z-Drop. You can adjust the number of frames
until SynthEyes stops looking with the "Stay Alive" preference.
You can keep an eye on a tracker or a few trackers by turning on the Pan to
Follow item on the Track menu (keyboard: 5 key), and zooming in a bit on the tracker,
so you can see the surrounding context. When Pan To Follow is turned on, dragging the
tracker drags the image instead, so that the tracker remains centered. See the
SimulTrack view for monitoring multiple trackers simultaneously.
Also, the number-pad-5 key centers the selected tracker whenever you click it.
Tip: Normal tracker position keys appear as black triangles on the time bar.
When there is a key on a secondary channel, such as tracker size or search
size, and not on the tracker position, a gray tracker key is shown instead.
First, suppose you are tracking with a vertical search size of 0.03. At frame 30,
the shot is bouncier, and you increase the vertical search size to 0.04 so that the pattern
continues to be found.
Later, you go back and have SynthEyes play through the shot again, re-tracking
the tracker. Frame 29 was originally tracked with a search size of 0.03, but now it will be
tracked with a search size close to 0.04, say 0.0395, substantially larger. Depending on
the situation, there is a chance that the tracker will no longer be placed at the same,
presumably correct, location as it was originally. The same is true of the earlier frames,
to correspondingly smaller chances.
For a 100% reproducible effect, the search size keys could be interpolated using
staircase interpolation. But that doesn't really correspond to the underlying situation,
and requires some bizarre and incomplete changes if you later change the tracking
direction. If you'd tracked backwards, you'd probably have set the sizes the other way,
resulting in utterly different sizes in the middle frames. With linear interpolation, the
results are the same in both directions.
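The difference between the two interpolation schemes for search-size keys (a 0.03 key followed by a 0.04 key at frame 30, as above) can be shown with a small sketch (illustrative only):

```python
def interp_linear(keys, frame):
    """Linear interpolation between sorted (frame, value) keys."""
    keys = sorted(keys)
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
    return keys[0][1] if frame < keys[0][0] else keys[-1][1]

def interp_staircase(keys, frame):
    """Hold the previous key's value until the next key."""
    keys = sorted(keys)
    value = keys[0][1]
    for f, v in keys:
        if f <= frame:
            value = v
    return value

keys = [(0, 0.03), (30, 0.04)]
print(interp_linear(keys, 29))     # ~0.0397: nearly the frame-30 value
print(interp_staircase(keys, 29))  # 0.03: exactly reproducible on re-track
```

The staircase version is 100% reproducible, but as the text notes, it does not reflect the underlying situation and behaves oddly if the tracking direction changes; linear interpolation gives the same values in both directions.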
The same effect occurs when you change the tracker size and aspect ratio: due
to the linear interpolation, changes are effectively retroactive and may cause problems
to surface in previously-tracked frames.
But especially in the case of tracker size and aspect, the linear interpolation may result in more accurate tracking data if the earlier frames are re-tracked.
Depending on the tracker settings, in particular if you use Smooth after Keying,
the affected earlier frames may automatically be re-tracked.
Linear interpolation may sometimes cause a change upon re-tracking, but it
makes more sense in general!
Tracker Size/Aspect Details
When SynthEyes goes to track a specific frame, it interpolates the size/aspect
keys to determine the values on that frame. That same size is used to determine the
reference pattern from the reference frame also, ie the same number of pixels are compared between the two frames. The images are not rescaled; this is the key difference between normal trackers and planar trackers. Using the same subimage size is simpler, faster, and avoids resampling errors.
If the feature being tracked is dramatically different sizes between the two
images (search and reference), tracking will be less accurate or even fail. This could
occur in a long push-in or pull-back shot.
Accordingly, if you are animating the tracker size, you should be sure to set new
tracker position keys regularly, for example using Key Every. That way the difference in
scale will never be that large.
If this is an issue, you should use a planar tracker.
The SimulTrack view shows each frame of a tracker with a position key, and
allows you to adjust the position key location: essentially the SimulTrack is an entire
collection of tracker mini-views, each corresponding to a different frame.
To take advantage of this, you can set up a tracker for smooth keying, as in the
prior section, open a SimulTrack view (either floating or as part of a viewport
configuration), and track the tracker.
You'll then see all automatically-generated keyframes simultaneously, and you
can adjust any of them directly in the SimulTrack view, without having to change frames
if you do not want to. Make sure Track/Smooth after keying is on, and the adjacent
frames will automatically be updated to reflect the changed key.
After you have an initial solve for the shot, you have an exciting option
available to you: you can have SynthEyes generate the entire set of auto-keys
automatically, acting as if the tracker is a zero-weighted tracker. Set the first position
key at the beginning of the lifetime of the tracker. Then click to a much later frame
where the tracked feature is still visible, and set another key using Z-Drop (hold down
the 'z' key and click in the camera view).
The two keys enable SynthEyes to estimate the tracker's 3-D location, then
generate a position key at each appropriate frame (determined by its key-automatically
setting on the Tracker Control Panel). All of those automatically-generated keys will pop
up in the SimulTrack view. The locations will be approximate, based on the accuracy of
your keys and the existing solve.
Then, use the SimulTrack view to tweak the positioning of each of the generated
key locations, which will trigger the Smooth after keying functionality. You can use the
strobe functionality (click on the 'S') to check each key location for consistency with its
neighbors—go ahead and adjust it even while strobing! After each key has been
adjusted, you'll have an accurate supervised track for the feature.
Tip: SimulTrack shows the tracked image feature of an offset tracker, plus an offset marker for the final location, when the tracker is unlocked.
Note that using SimulTrack is one potential workflow, not a required workflow. On
a simple shot, allowing supervised trackers to track through an entire shot by
themselves may be faster. You can then still use the SimulTrack view to monitor the
results. We provide the tools; you decide the best way to use them.
Reference Crosshairs
You can enable the display of reference crosshairs with View/Show Reference
Crosshairs on the main or camera's right-click menu (default keyboard accelerator: shift-
+). The crosshairs can be handy for features on corners, or where comparison to other
nearby features is desired. The horizontal and vertical crosshairs can be adjusted
independently to any desired orientation and length (the horizontal and vertical
nomenclature refers only to the initial position).
The crosshairs can and typically must be animated to be useful, to match the
desired feature over the length of the shot. To manipulate the keys, see the graph
editor.
Cliffhanger Trackers
As a tracker approaches the edge of the image, it will be shut off automatically.
This is undesirable if the tracker comes close to the edge, but then moves around
without actually going off the edge. Simply turning the tracker back on is ineffective,
because it is immediately turned back off, and even setting a key position is only
temporary, as the tracker will typically be turned off again the next frame.
To handle this situation, turn on the Cliff (cliffhanger) button on the tracker control
panel, or on the camera view right-click menu in the Attributes section. This will disable
the automatic shutoff at the image edge.
Finishing a Track
When you are finished with one or more trackers, select them, then click the
Lock button . This locks them in place so that they won’t be re-tracked while you
track additional trackers.
Hand-Animating a Tracker
Hand animation can be a useful technique to create approximate tracking data
when an object is occluded and the camera or object motion is very smooth, ie the
camera is on a crane or dolly. It is not useful when there is a lot of vibration from a
hand-held camera; in that case use Offset Tracking. Hand-animation is typically used
when there are few available trackable features, ie for object tracking. If there are many
trackers, there's little incentive to go to the trouble.
Hand-animation uses the By Hand button on the tracker control panel. Suppose
you're tracking a corner of a building and a pole passes in front of the corner. On the
first occluded frame, turn on By Hand (instead of turning off Enable).
Hint: Follow what By Hand does by using the Camera & Graphs view. Open it
to your selected tracker with the U Pos, V Pos, Enable, and Hand Animated
curves displayed.
Warning: When you disable and later re-enable a tracker, you are saying you
don't know what happens in between. That's safe. When you use By Hand,
you are claiming you know what happened. If you aren't close to right, you will
make the solution worse. Therefore, if you have many trackers, it makes
sense to Enable and Disable. Hand-animation makes sense when there are
few trackable features and every one counts.
The By Hand button is an animated track, so you can have multiple separate
hand-animated regions during the shot, for example, each time a telephone pole goes
by. And you can adjust the keys at any time. (It also works fine for forwards or
backwards trackers.)
To better understand what it is doing, here's a brief explanation. When you
change By Hand or set a tracker position key, a spline interpolation routine runs. It looks
to see if the frame where a change was made is in, or immediately adjacent to, a
sequence of frames where By Hand is on. If so, it acquires all the tracker keys in that
region, plus the tracker position immediately before and after the sequence of frames.
Those keys are then interpolated and those positions stored on every frame that is not a
key.
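As a rough sketch of that process (using simple linear interpolation for brevity; SynthEyes splines the keys):

```python
def fill_by_hand(keys, start, end):
    """Given position keys as {frame: (u, v)}, fill every frame in
    [start, end] by interpolating between the surrounding keys.
    Assumes keys exist immediately before `start` and after `end`,
    as the By Hand mechanism requires."""
    frames = sorted(keys)
    filled = dict(keys)
    for f in range(start, end + 1):
        if f in keys:
            continue  # existing keys are left untouched
        f0 = max(k for k in frames if k < f)
        f1 = min(k for k in frames if k > f)
        t = (f - f0) / (f1 - f0)
        (u0, v0), (u1, v1) = keys[f0], keys[f1]
        filled[f] = (u0 + t * (u1 - u0), v0 + t * (v1 - v0))
    return filled

# Keys before, inside, and after a By Hand region on frames 10..14:
keys = {9: (0.0, 0.0), 12: (0.3, 0.3), 15: (0.6, 0.6)}
track = fill_by_hand(keys, 10, 14)
print(track[11])   # approximately (0.2, 0.2)
```

The positions immediately before and after the By Hand region anchor the interpolation, which is why the real implementation gathers them along with the keys inside the region.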
If you originally used Enable and re-enabled for a temporary occlusion, you can
later go back and change it to use By Hand. Just go to the first occluded frame (which
will be disabled), and turn on By Hand. SynthEyes will re-enable the tracker for the
disabled section, and animate By Hand to be on for exactly that previously-disabled
section. Magic!
There is a more advanced case worth pointing out where By Hand may appear
not to be working, but is being safe. That's when you have a nicely-tracked tracker with
a dodgy section in the middle, and you want to replace the dodgy part with a hand-
animated part.
In that case, when you turn on By Hand, note that initially it will be on for the
entire rest of the shot. If SynthEyes were to blindly do the spline interpolation described
above, it would overwrite the entire rest of the track (except for any keys). That wouldn't
be a permanent disaster, since you could just have it Play through the rest of the shot
(or Undo), but it would be inconvenient.
To avoid that, when the By Hand region extends to the end of the shot,
SynthEyes stops the splining process at the next following tracker key. That protects the
rest of the already-tracked frames, limiting the potential damage (often to just the right
spot).
If you need to control what frames get replaced precisely before turning on By
Hand, either set a tracker position key at the frame you will turn off By Hand, or animate
the Enable on and off for the right section, so that the usual simple case applies. In any
case, once you've adjusted the By Hand region, you can just play through some frames
after the end of the region to restore them if they had been replaced.
Note that if you try to animate a tracker completely from scratch using By Hand,
it won't interpolate past the second tracker key and you may think something is
broken, but that is just the protective mechanism in action. Turn By Hand off on the very
last frame of the shot and put a final position key there and splining will be active for the
entire duration.
Offset Tracking
In offset tracking, you track one feature in order to track another. Offset tracking
utilizes additional controls on the tracker control panel to handle three situations:
the feature being tracked is temporarily obscured, but a nearby feature is
not,
overlaying a correcting animation on top of an existing tracker that has a
slow drift, and
when an additional feature is to be tracked that is very close to an existing
completed tracker.
When handling obscured features, hand-animated tracking can be simpler for
shots where the camera is mounted on a crane or dolly. Offset tracking is valuable for
shots from hand-held cameras, since the already-tracked vibration carries over
automatically to the offset tracker.
In the cases above, you'll use the Tracker control panel's offset channels,
which offset the final tracker position from the position being visually tracked. The offset
is applied only if the Offset button is turned on. You can animate both the offset enable
button and the offset channels, though usually you'll create the offset values by
dragging in the camera view, rather than adjusting the spinner values.
When the offset is enabled, you'll see a small crosshair at the final location, and
the usual tracking graphics at the location being visually tracked as long as the
tracker is not locked. If the tracker is locked, then the tracker graphics are the standard
locked-tracker graphics, at the final location.
Similarly, the tracker mini-view and SimulTrack views show the location being
visually tracked if the tracker is not locked, and show the image of the final location if
the tracker is locked.
Important: Offset tracking is always less accurate than tracking real image
features: you're making data up, hopefully artfully, so it's only an
approximation to the right position. Temporarily disabling the tracker may be a
better alternative. Offset tracking is valuable when very few trackable features
are available, or when unaddressed lens distortion or other tracking issues
are causing pops when trackers appear or disappear (see also the Transition
Frms. spinner on the Solver panel).
5. Track until you reach frame 21, stopping on frame 21, when the sign and light are both visible again.
6. Adjust the position of the offset marker to re-position it exactly on the sign.
7. Turn off the Offset button. The tracker snaps back onto the sign.
8. Continue tracking the sign.
The process looks more complicated than it is in practice. Due to changes in the
camera viewing angle, the required offset (from light to sign) won't be the same at
the beginning and end of the offset. The point of the steps above is to set exact keys on
the offset channel at both ends of the offset section; the offsets interpolate linearly in
between.
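The offset mechanics can be summarized in a few lines (a sketch, not the actual implementation; it combines the offset channels described earlier with the linear interpolation between offset keys):

```python
def final_position(tracked_uv, offset_keys, frame, offset_on):
    """Final tracker position = visually tracked position plus the
    (linearly interpolated) offset, when the Offset button is on.
    offset_keys maps frame -> (du, dv)."""
    if not offset_on:
        return tracked_uv
    frames = sorted(offset_keys)
    # Clamp outside the keyed range, interpolate linearly inside it.
    if frame <= frames[0]:
        ou, ov = offset_keys[frames[0]]
    elif frame >= frames[-1]:
        ou, ov = offset_keys[frames[-1]]
    else:
        f0 = max(f for f in frames if f <= frame)
        f1 = min(f for f in frames if f > frame)
        t = (frame - f0) / (f1 - f0)
        (u0, v0), (u1, v1) = offset_keys[f0], offset_keys[f1]
        ou, ov = u0 + t * (u1 - u0), v0 + t * (v1 - v0)
    u, v = tracked_uv
    return (u + ou, v + ov)

# Offset keyed at both ends of the occluded section (frames 10 and 20):
keys = {10: (0.05, 0.0), 20: (0.01, 0.0)}
print(final_position((0.5, 0.5), keys, 15, True))   # approximately (0.53, 0.5)
```

Setting exact keys at both ends of the offset section, as in the steps above, pins the interpolation so the in-between offsets stay plausible.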
You can change to a different reference pattern at any time during offset tracking,
simply by shift-dragging the tracker to a new feature. (This sets a new key on the offset
channels at the prior frame, in addition to the current frame, to make the offsets behave
properly; see the graph editor for details.)
You can also use offset tracking when you are tracking backwards (from large
frame numbers to small frame numbers); the procedure is the same, though you track in
the other direction.
Using an Offset to Compensate for Drift
The scenario here is that you did a quick and dirty track, largely without any
interior tracker position key frames, and you'd like to animate up an offset to
compensate for the drift.
Note: These features are intended for use after the underlying tracker has
been completed. Attempts to use these features for intermingled tracking and
offsetting probably won't go well: use one of the other workflows.
To overlay an offset on a completed (but not locked) tracker, go to the start of the
tracker and turn on the Offset button. If necessary, adjust the offset position of the
tracker (usually since you started there, it's OK). Go to the end of the shot and adjust
the offset position of the tracker. Scrub through the shot, adding additional keys to the
offset position until it tracks the correct location throughout.
To make this process a little easier, you can turn on the E (edit offset) button on
the tracker panel. When on, it changes the following functionality:
the camera view shows only the (somewhat larger) offset marker,
dragging the mini-tracker view moves the offset,
the Nudge keys move the tracker offsets, instead of the tracker pattern, and
the time bar shows only keys on the offset trackers.
These changes facilitate a keyboard-based workflow for adjusting the offsets,
using the nudge keys and the A and F keys to switch rapidly across the entire shot to
verify that the tracker is consistently at the same location.
Click the (New Offset Tracker) button on the Tracker control panel to make (clone) a copy of the
selected tracker. If the tracker already has an offset track, you will be asked if you want
to remove it, which you should: it will be baked into the tracker's path.
Rewind to the beginning of the tracker, then drag the offset cursor to the desired
feature to be tracked. Work through the shot, periodically adjusting the position of the
offset cursor to stay accurately on the desired point to be tracked.
Important! Don't change the underlying tracker, only the offset marker.
Tip: To monitor the position of the offset marker carefully, press the '5' key to
turn on pan-to-follow in the camera view. Zoom into the camera view.
Scrub through the shot to monitor the position of the offset marker. You can stop
adding additional keys when the offset position is suitably accurate throughout the entire
length of the shot (underlying tracker).
If the offset tracker winds up in the wrong 3-D location compared to the original
tracker, it is because you have not set up the offset channel accurately! The relative 3-D
location is ENTIRELY determined by what offset keys you set. An offset tracker does
not contain as much information as a normal tracker.
Offset tracking is easier for simple camera motions, such as a left to right dolly,
even if it bounces quite a bit, versus a complex move that will require many different
keys in the offset to get right.
Combining Trackers
You might discover that you have two or more trackers tracking the same feature
in different parts of the shot, or that are extremely close together, that you would like to
consolidate into a single tracker.
Select both trackers, using a lasso-select or by shift-selecting them in the camera
view or graph editor (see the Lasso controls on the Edit menu for rectangular lassos).
Then select the Track/Combine trackers menu item, or press Shift-7 (ampersand, &). All
selected trackers will be combined, preserving associated constraint information.
If several of the trackers being combined are valid on the same frame, their 2-D
positions are averaged. Any data flagged as suspect is ignored, unless it is the only
data available. Similarly, the solved 3-D positions are averaged. There is a small
amount of intelligence to maintain the name and configuration of the most-developed
tracker.
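The averaging rule described above can be sketched roughly as follows. This is illustrative Python with a hypothetical data layout, not the actual SynthEyes implementation.

```python
def combine_positions(samples):
    """Average per-frame 2-D positions from several trackers.

    samples: list of (u, v, suspect) tuples, all valid on the same frame.
    Data flagged as suspect is ignored, unless it is the only data available.
    """
    good = [(u, v) for u, v, suspect in samples if not suspect]
    use = good if good else [(u, v) for u, v, _ in samples]
    n = len(use)
    return (sum(u for u, _ in use) / n, sum(v for _, v in use) / n)

# The suspect third sample is excluded from the average.
combine_positions([(1.0, 1.0, False), (3.0, 3.0, False), (9.0, 9.0, True)])
```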
When you combine trackers that have offsets, the offset channel data is lost: it is
baked into the combined tracker position.
Note: the camera view’s lasso-select will select only trackers enabled on the
current frame, not the 3-D point of a tracker that is disabled on the present frame. This
is by design for the usual case when editing trackers. Control-lasso to lasso both the 2-
D trackers and the 3-D points, or shift-click to select 3-D points.
Zero-Weighted Frames
If you have a few frames where you have eyeballed tracker positions, perhaps
due to extreme motion blur or defocusing, you have the option to tell SynthEyes to solve
for the camera/object position on those frames, but not affect the 3D tracker
positions.
To do that, open the Solver Locking panel (its title line is Hard and Soft Lock
Controls), and animate the Zero-weighted frame checkbox at its top to indicate those
particular frames.
Don't use the zero-weighted frame channel so much that all of a tracker's frames
are zero-weighted. If that happens, the tracker will be insoluble and the solve will fail.
Trackers need to have a sufficient amount of perspective shift over their lifespan. If a
tracker is valid on twenty frames, which would be fine by itself, but all except the first
two are zero-weighted, that tracker cannot be solved accurately because there is too
little shift between the two usable frames.
Once you have created some animation on this channel, it will be visible in the
Graph Editor, up at the top for each affected camera and object.
Pan To Follow
While tracking, it can be convenient to engage the automatic Pan To Follow
mode on the Track menu, which centers the selected tracker(s) in the camera view, so
you can zoom in to see some local context, without having to constantly adjust the
viewport positioning.
When pan-to-follow is turned on and you start to drag a tracker, the image will
be moved instead, so that the tracker can remain centered. This may be surprising at
first.
Once you complete a tracker, you can scrub through the shot and see the tracker
crisply centered as the surroundings move around a bit. This is the best way to review
the stability of a track.
Pan to Follow applies to both eyes of a stereo setup simultaneously, ie turning on
Pan To Follow will do so for both eyes, based on the selected trackers or the cross-
selected trackers for the other eye.
Shift-5 turns on 3-D pan-to-follow, which follows the solved 3D location of a
tracker, and also can follow a moving object or mesh.
Skip-Frame Track
If a few frames are untrackable due to a rapid camera motion, explosion, strobe,
or actor blocking the camera, you can engage the Skip Frame checkbox on the feature
panel to cause the frame to be skipped. You should only skip a few frames in a row, and
not too many overall.
The Skip Frames track does not affect supervised tracking, but it does affect
solving, causing all trackers to be ignored on the skipped frames. After solving, the
camera will have a spline-interpolated motion on the resulting unsolved frames.
If you have a mixture of supervised and automatic tracking, see the section on
the Skip-Frame track in Automated Tracking as changing the track after automated
tracking can have adverse effects.
Guide Trackers
Guide Trackers are supervised trackers, added before automated tracking. Pre-
existing trackers are automatically used by the automated tracking system to re-register
frames as they move. With this guidance, the automated tracking system can
accommodate more, or crazier, motions than it would normally expect.
Unless the overall feature motion is very slow, you should always add multiple
guide trackers distributed throughout the image, so that at any location in the image, the
closest guide tracker has a similar motion. [The main exception: if you have a jittery
hand-held shot where, if it was stabilized, the image features actually move rather
slowly, you can use only a single guide tracker.]
Note: guide trackers are rarely necessary, and are processed differently than
in previous versions of SynthEyes.
You can also use the Combine trackers item on the Track Menu to combine a
supervised tracker with an automatically-generated one, if they are tracking the same
feature.
The Track/Fine-tune Trackers menu item re-tracks supervised trackers, to
improve accuracy on some imagery.
Trick: if you set the Key Spacing parameter to 1, ie a key on every frame,
SynthEyes will leave the trackers as spot trackers and re-set all the position
keys as if it was a supervised spot tracker, ie at the higher (interpolated)
image resolution controlled by the Track menu.
Controlling Fine-Tuning
When you fine-tune, SynthEyes will modify each auto-tracker so that there is only
one key every 8 frames (by default), then run the supervised tracker at all the
intermediate frames.
There are several options you can control when starting fine-tuning:
The spacing between keys
The size of the trackers
The aspect ratio of the trackers (usually square, 1.0)
The horizontal and vertical search sizes
The shot’s current supervised-tracker filter interpolation mode.
Whether all auto trackers will be tuned, or those that are currently selected
(whether they are automatic, or a previously-unlocked automatic tracker,
which would not otherwise be processed).
Usage Suggestions
The fine-tuning process is not necessary on all shots. The automatic tracker
produces excellent results, and does some supervision of its own. Fine-tuning may
produce results that are indistinguishable from the original, or even a little worse! Shots
with a slow camera motion may deserve special attention.
You can do a quick test by selecting and fine-tuning a single tracker, then
comparing its track (using tracker trails) before and after fine-tuning using Undo and
Redo. (See the online tutorial.) If the fine-tuning is beneficial, then fine-tune the
remaining trackers.
After fine-tuning, be sure to check the tracker graphs in the graph editor and look
for isolated spikes. Occasional spikes are typical when a tracker is in a region with a lot
of repeating fine detail, such as a picket fence.
Keep in mind that though fine-tuning can help give you a very smooth track, often
there are other factors at play as well, especially film grain, compression artifacts, or
interlacing.
The SimulTrack view can also be helpful for checking the trackers: select some or
all of the trackers and scrub or play through the shot carefully. If you are working on a
stereo project, open two SimulTrack windows simultaneously, and turn on the Stereo
Spouses item on one of them, to be able to see both sides of matching stereo trackers
simultaneously.
The Find Erratic Trackers tool produces informative textual output that can be
used to adjust its settings; that output is also discussed in the reference section.
This tool is statistical in nature, so it may flag trackers that are actually fine (or of
course miss trackers that have different problems). To make the detected trackers
easier for you to find and review, the tool optionally sets the erratic trackers to a color
you have selected (magenta by default). Turn on View/Group by Color to have them
grouped together on the graph editor, for example.
If you run the tool repeatedly, the Find Erratic Trackers tool changes any trackers
that have your Bad Tracker Color, but are no longer detected as bad, back to the default
uncolored state, to correctly indicate that they are no longer considered to be bad. This
means that there can be some confusion if you change the Bad Tracker Color in
between runs of the tool. You may want to select all the trackers and reset their color on
the Tracker panel.
If you have your own color scheme for trackers, this tool can overwrite it. To
avoid that, you can tell the tool not to change the colors by clearing the color selection
field on the tool. More subtly, you can turn on View/Tracker Appearance/Use alternate
colors so that this tool overwrites the alternate color, and when you're done with the
tool, turn Use alternate colors back off to make your original colors visible again.
You can run this tool before solving (as intended), but you can also run the tool
after solving if you forgot to run it earlier. That is also useful if you want to see how well
the tool does, by putting the graph editor into sort-by-error mode and seeing where the
selected trackers fall in the list. Conceptually they'd be all at the top, but that's not
necessarily the case.
Here are a few limitations of the tool:
It needs many trackers to work with, and you may not have enough if you are doing
supervised tracking. You may be able to change the kernel settings to
accommodate fewer trackers in borderline situations.
It doesn't look in the interior lifetime of the tracker for problems; it only looks for long-
term problems.
It relies on there being minimal lens distortion and rolling shutter. If you have known
lens distortion, remove it from the tracker data before running this tool.
360VR shots will need many more trackers visible on each frame than regular shots,
because erratic trackers can only be detected in a comparatively narrow field of
view.
Experience shows that real 360VR camera data typically has too much rolling
shutter and camera field boundary mismatch (ie between the fields of a 2- or 6-
camera set) to effectively find erratic trackers.
The indication of 360VR problems is a bad kernel histogram, where the count
values decrease steadily as the number of kernels increases, rather than having an
empty region in the middle. As a result, the number of trackers flagged will be very high!
Note that the Find Erratic Trackers tool currently uses a script to provide its user
interface; the script can be found on the Scripts menu.
After you open the graph editor, make sure it is in the tracks view, if you've
been playing earlier. If the shot is supervised tracking, click the sort order
button to change from sort-alphabetic to sort-by-time. Select squish mode, with no
keys shown, and with the tracker-count background
visible (it starts out visible). The graph editor on one example shot looks like this:
Each bar corresponds to one of the trackers; Tracker4 is selected and thicker.
The color-coded background indicates that the number of trackers is problematic at left
in the yellow area, OK in the middle, and “safe” on the right.
Tip: you can get the same colored background for the timebar, to indicate
whether you have enough trackers or not even when the graph editor is
closed. Change View/Timebar background to Tracker count. The mode at
startup is controlled by a preference in the User Interface area.
Warning: if you have many trackers and/or frames, the colored background in
the graph editor or timebar can take a while to compute, reducing
performance. You can turn it off in such cases.
You can configure the “safe” level on the preferences. Above this limit (default
12), the background will be white (gray for the dark UI setting), but below the safe limit,
the background will be the safe color (configurable as a standard preference), which is
typically a light shade of green: the number of trackers is OK, but not high enough to hit
your desired safe limit.
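The color logic can be summarized in a small sketch. The safe limit defaults to 12 as described above; the lower "OK" threshold here is an assumption for illustration, as is the function itself.

```python
def count_color(n, safe_limit=12, ok_limit=6):
    """Pick a background color from the per-frame tracker count.

    safe_limit matches the preference described in the manual (default 12);
    ok_limit is a hypothetical lower threshold for the problematic zone.
    """
    if n >= safe_limit:
        return "white"       # at or above the configured safe level
    if n >= ok_limit:
        return "safe-green"  # OK, but below the desired safe limit
    return "yellow"          # problematic: too few trackers
```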
This squished view gives an excellent quick look at how trackers are distributed
throughout the shot. The color coding varies for tripod-mode shots and for shots
with hold regions. Zero-weighted trackers do not count.
Hint: When the graph editor is in graph mode , you can look at a direct
graph of the number of valid trackers on each frame by turning on the
#Normal channel of the Active Trackers node.
If there are unavoidably too few trackers on some frames, you can use the Skip
Frames track on the Features Panel to proceed.
The graph editor is divided into three main areas: a hierarchy area at top left, a
canvas area at top right, and a tool area at the bottom. You can change the width of the
hierarchy area by sliding the gutter on its right. You can partially or completely close the
tool area with the toolbox at left. A minimal view is particularly handy when the graph
editor is embedded in a viewport layout.
In the hierarchy area, you can select trackers by clicking their line. You can
control-click to toggle selections, or shift-drag to select a range. The scrollbar at left
scrolls the hierarchy area.
You can also select trackers in the canvas area in squish mode, using the same
mouse operations as in the hierarchy area.
The icons next to the tracker name provide quick control over the tracker
visibility, color, lock status, and enable.
Warning: you cannot change the enable, or much else, of a tracker while it is
locked!
The small green swatch shows the display color of a tracker or mesh. Double-
clicking brings up the color selection dialog so you can change the display color. You
can shift-click a color, and add all trackers of that color to the current selection, control-
click the swatch of an unselected tracker to select only trackers of that color, or control-
click the swatch on a selected tracker to unselect the trackers of that color.
Jumping ahead, the graph editor hierarchy also shows any coordinate-system
lock settings for each tracker:
x, y, and z for the respective axis constraints;
l (lower-case L) when there is a linked tracker on the same object;
i for a linked tracker on a different object (an indirect link);
d for a distance constraint;
0 for a zero-weighted tracker;
p for a pegged tracker;
F for a tracker you specified to be far;
f for a tracker not requested to be far, but solved as far for cause.
To begin, open the graph editor and select the graphs mode . Selecting a
tracker, or exposing its contents, causes its graphs to appear.
Note: the Number Zone is typically displayed in between the hierarchy portion
on the left and the graphs on the right. It shows the current value of each
individual channel, but isn't shown here for clarity.
In this example, a tracker suddenly started jumping along fence posts, from pole
to pole on three consecutive frames. The red curve is the horizontal U velocity, the
green is the vertical V velocity, and the purple curve is the tracker figure-of-merit (for
supervised trackers). You can see the channels listed under Tracker15 at left. The
green circles show which channels are shown; zoom, pan, and color controls are
adjacent. Double-clicking will turn on or off all the related channels.
There are a variety of different curves available, not only for the trackers but for
other node types within SynthEyes.
The graph editor is a multi-curve editor: any number of completely different
kinds of curves can be displayed simultaneously. There is no single set of coordinate
values in the vertical direction because the zoom and pan can be different for each kind
of channel. To determine the numeric value at any particular point on a curve, put the
mouse over it and the tooltip will pop up with the set of values.
The graph editor displays curves for each node that is exposed (its channels are
displayed: Enable, U. Vel, V. Vel, etc., above).
The graph editor also displays curves for all selected nodes (trackers, cameras,
or moving objects) as long as the Draw Curves for Selected Nodes button is
turned on. This gives you quite a bit of quick control over what is drawn, and enables
you to compare a single tracker or camera’s curves to any other tracker as you run
through them all, for example.
You zoom a channel by dragging the small zoom icon . The zoom setting is
shared between all channels with the same type. For example, the U and V velocity
channels are the same type, as are the X, Y, and Z position channels of the camera.
But the U velocity and U position are different types. If you click on the small Zoom icon,
the other zoom icons of the same type will flash.
The zoom setting is also shared between nodes of the same type: zooming or
panning on one tracker affects the other trackers too. All related channels will zoom
also, so that the channels remain comparable to one another. This saves time and
helps prevent some incorrect thought patterns.
The pan setting is also shared between nodes, but not between channels: the
U velocity and V velocity can be separated out. When you pan, you’ll see a horizontal
line that is the “zero level” of the channel. It will snap slightly to horizontal grid lines,
making it easier to make several different curves line up to the same location. You can
later check on the zero level by tapping the zoom or pan icons.
There are two kinds of auto-zooms, activated by double-clicking the zoom or pan
icons. The zoom double-click auto-zooms, but makes all channels of the same type have
the same zero level. The pan double-click auto-zooms, but pans the channels
individually. As a result, the zoom double-click keeps the data more organized and
easier to follow, but the pan double-click allows for a higher zoom factor, because the
zero levels can be different.
For example, consider zooming an X position that runs 0 to 1, and a Y position
that runs 10 to 12.
If we pan double-click, the X curve will run full-screen from 0 to 2, and Y will run
full-screen from 10 to 12. Note that X is not 0 to 1, because it must have the same zoom
factor as Y. X will only occupy the bottom half of the screen.
If we zoom double-click, X will run from 0 to 12 full screen, and Y will run from 0
to 12 full screen. The range and zero locations of both curves will be the same, and
we’ll be better-able to see the relationship between the two curves. But if we want to
see details, the pan-double-click is a better choice.
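The two behaviors in the X/Y example can be reproduced with a small sketch. This is illustrative only (real curves may include negative values, which this simplification ignores, and the function is hypothetical, not SynthEyes code).

```python
def auto_zoom_ranges(ranges, mode):
    """Illustrative recreation of the two auto-zoom behaviors.

    ranges: dict of channel name -> (min, max) data range.
    'zoom' double-click: shared zoom factor AND shared zero level.
    'pan'  double-click: shared zoom factor, per-channel pan.
    """
    # Shared zoom factor: the largest span among the related channels.
    span = max(hi - lo for lo, hi in ranges.values())
    out = {}
    for name, (lo, hi) in ranges.items():
        if mode == "pan":
            out[name] = (lo, lo + span)  # pan each channel separately
        else:  # "zoom": common zero level for all channels
            top = max(h for _, h in ranges.values())
            out[name] = (0.0, top)
    return out

# X runs 0-1, Y runs 10-12, as in the example above.
auto_zoom_ranges({"X": (0.0, 1.0), "Y": (10.0, 12.0)}, "pan")
# -> X shown over 0-2 (bottom half used), Y shown over 10-12
```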
There is no option to have X run 0 to 1 and Y run 10 to 12, by design.
Both zoom and pan settings can be reset by right-clicking on the respective
icons.
In this example, two trackers have been supervised-tracked with a Key Every
setting of 20 frames (but starting at different frames). The tracker Figure of Merit (FOM)
curve measures the amount of difference between the tracker’s reference pattern and
what is found in the image. You see it drop down to zero each time there is a key,
because then the reference and image are the same.
One tracker has a small FOM value that stays mostly constant. The other tracker
has a much larger FOM, and in part of the shot it is much larger. In a supervised shot,
the reason for that should be investigated.
You can use this curve to help decide how often to place a key automatically.
The 20 frame value shown above is plenty for those features. If you see the following,
you should reduce the spacing between keys.
You’ll also be able to see the effect of the Key Smooth setting: the key smoothing
will flatten out a steadily increasing curve into a gently rounded hump, which will reduce
spikes in the final camera path.
Velocity Spikes
Here’s an example of a velocity curve from the graph editor:
At frame 217, the tracker jumped about 3 pixels right, to a very similar feature. At
frame 218, it jumped back, resulting in the distinctive sawtooth pattern the U velocity
curve exhibits. If left as-is, this spike will result in a small glitch in the camera path on
frame 217.
You can repair it using the Tracker control panel in the main user interface
by going to frame 217. Jiggle back and forth a few frames with the S and D keys to see
what’s happening, then unlock the tracker and drop down a new key or two. Step
around to re-track the surrounding frames with the new keys (or rewind and play
through the entire sequence, which is most reliable).
DeGlitch Mode
You can also repair the glitch by switching to the Deglitch mode of the graph
editor, then clicking on the first (positive) peak of the U velocity at frame 217. SynthEyes
will compute a new tracker location that is the average of the prior and following
locations. For most shots, this will eliminate the spike.
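Conceptually, the deglitch computation is just a neighbor average, as this sketch shows (positions indexed by frame; names are hypothetical, and this is not the actual implementation):

```python
def deglitch(positions, frame):
    """Replace a spiked 2-D position with the average of the positions
    on the prior and following frames."""
    (u0, v0), (u1, v1) = positions[frame - 1], positions[frame + 1]
    return ((u0 + u1) / 2, (v0 + v1) / 2)

# Frame 217 spiked right; its repaired position is midway between 216 and 218.
deglitch({216: (4.0, 2.0), 217: (9.0, 2.0), 218: (6.0, 2.0)}, 217)  # -> (5.0, 2.0)
```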
If you see a velocity spike in one direction only, it will be more difficult to correct:
it means that the tracker has jumped to a nearby feature, and not come back. You will
have to put it back in its correct location and then play (track) through the rest of the
shot.
The deglitch tool can also chop off the first or last frame of a tracker, which can
be affected when an object moves in front, or a feature is moving offscreen. Even if the
last two or three frames are bad, you can click a few times and quickly chop them off.
Finding Spikes Before Solving
Learn to recognize these velocity spikes directly. There are double spikes when a
tracker jumps off course and returns, single spikes when it jumps off course to a similar
feature and stays there, large sawtooth areas where it is chattering between near-
identical features (or needs a new position key for reference), or big takeoff ramps
where it gets lost and heads off into featureless territory.
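In data terms, the double spike is a large velocity step immediately followed by a comparable step in the opposite direction. A rough detector, purely for illustration (the threshold and names are assumptions, not SynthEyes internals):

```python
def find_double_spikes(u, threshold=2.0):
    """Flag frames where a tracker's U position jumps one way and snaps
    back on the next frame: the distinctive sawtooth of a jump-and-return.

    u: per-frame U positions; threshold: minimum step size, in the same units.
    """
    vel = [b - a for a, b in zip(u, u[1:])]  # frame-to-frame velocity
    spikes = []
    for i in range(len(vel) - 1):
        # Big step, then a big step the other way (opposite signs).
        if (abs(vel[i]) > threshold and abs(vel[i + 1]) > threshold
                and vel[i] * vel[i + 1] < 0):
            spikes.append(i + 1)  # index of the outlying position
    return spikes

find_double_spikes([0.0, 0.0, 0.0, 3.0, 0.0, 0.0])  # -> [3]
```

A single spike (jump without return) or a steady ramp would not trip this test, which is why those cases need the manual repair described above.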
To help find these issues, the graph editor features the Isolate mode . Left-
click it to turn it on, then right-click it to select all the trackers (it does not have to be on
for right-clicking to work).
With all the trackers selected, you will usually see a common pattern for most of
the trackers, plus a few spots where things stick out. If you click the mouse over the
spikes that stick out, that tracker will be selected for further investigation. You can push
the left button and keep it down and move around investigating different curves, before
releasing it to select a particular one. It can be quicker to delete extra automatic
trackers, rather than repairing them.
After repairing each tracker, you can right-click the isolate button again, and look
for more. With two monitors, you can put the graph editor on one, and the camera view
on another. With only one monitor, it may be easiest to operate the graph editor from
the Camera & Graphs viewport configuration. Once you are done, do a refine-mode
solving cycle.
Hint: You can stay in Deglitch mode , and temporarily isolate by holding
down the control key. This gives a quick workflow for finding and repairing glitches.
solving mode. You'll be asked if you want to convert the trackers from Far back to
normal trackers, and you should click Yes.
IMPORTANT. On shots with marginal motion, set Tripod Fuzz to zero on the
Advanced Solver Settings (there's a preference if most of your shots are like
this). This will give you the best chance of determining a valid field of view.
Instead, you should use Known lens mode, using your best available estimate of
the lens field of view as determined from other shots in the project. When there is no
information from which to compute the field of view, the error can be lower using any
reasonable field of view you supply than one you ask SynthEyes to compute.
Note that Zoom shots can have a trickier version of this problem: there can be a
pronounced zoom sequence that has a clear relative zoom shift, but lacks the overall
pan to determine any absolute value. In this case you can set up a field of view
constraint (using the Solver Locking panel) to your best guess of the field of view during
that portion of the shot, typically the portion when the zoom is wide. Then you'll get the
matching computed zoom for the rest.
Far Trackers
In a tripod shot, no distance (range) information can be determined, so all tripod-
shot trackers are automatically marked as “Far” once they are solved, meaning that they
are directions in space (like a directional light), not a point in space (which corresponds
to an omni light). For the purposes of display in the 3D viewports and perspective view,
Far trackers are located at a fixed distance (the world size) from the camera, forming
the surface of a sphere if there are many.
Far trackers can also be generated from normal 3D solves, if a point is
determined to be far from the camera. This is typical for trackers on the horizon or
clouds.
Far trackers in normal 3D solves "move along with" the camera. As the camera
moves, the far tracker apparently moves with it. That's necessary so that it is always in
the same direction. (Far trackers do not rotate with the camera.) If you see some points
that are moving along with the camera, they are Far trackers (or you have set up a
"moving object" in the shot to which the tracker is attached).
You can have zero-weighted far trackers, which are solved separately after the
main solve, and without affecting the main solve, just like there are regular zero-
weighted trackers.
Subtle Tip: What actually matters is how many are valid on both one frame
and the next. If ten trackers are valid on frames 1-10, and a different ten from
frames 11-20, then we know nothing about what happened between frames
10 and 11. If we have one tracker valid from frames 1-10, and a different one
from frames 10-20, that is roughly equivalent here to one valid from 1-20.
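The pairwise overlap described in the tip can be counted with a short sketch (illustrative Python; the representation of tracker lifetimes as inclusive ranges is an assumption):

```python
def frame_pair_overlap(trackers, n_frames):
    """Count how many trackers are valid on BOTH frame f and frame f+1.

    trackers: list of (start, end) inclusive valid ranges.
    It is this pairwise overlap, not the raw per-frame count, that ties
    consecutive frames together.
    """
    return [sum(1 for s, e in trackers if s <= f and e >= f + 1)
            for f in range(n_frames - 1)]

# Ten trackers on frames 0-9 and ten more on 10-19: nothing connects
# frame 9 to frame 10, even though every frame has ten valid trackers.
tens = [(0, 9)] * 10 + [(10, 19)] * 10
frame_pair_overlap(tens, 20)[9]  # -> 0
```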
You may have a few trackers in some parts of the shot, and only one tracker in
other parts of the shot... or only one valid tracker at any time in the shot. (On big pans,
you may have several trackers but only one at any given time, which still counts as just
one.)
Notice that if only a single tracker is valid on a range of frames, we can't
determine anything at all about the roll angle of the camera. The camera could be
spinning wildly around a line between the camera and the point being tracked. But
it's probably not. So even though mathematically it's an unsolvable indeterminate
problem, we typically want to do something practical that makes sense! We'll get to that
in a moment.
Don't Clone Trackers!
There was an undocumented, unsupported, and unsound workflow that used to
be floating around for these shots. It involved cloning the one tracker that you do have,
in order to defeat SynthEyes's error checking.
This posed obvious operational difficulties of having to determine exactly what
trackers to clone, and making sure that the clones got deleted and recreated if the
original tracker was changed.
But more importantly, it could result in glitches in the roll angle at the beginning
and end of the single-tracker sections. On any particular shot, they might be small
enough to ignore, but they are intrinsic to the setup, as follows.
Consider a shot with three sections: an initial section with multiple trackers, a
middle section with only one, and a final section with multiple trackers. At some point,
all 3 sections have valid camera and tracker data. Whatever the initial roll angle of the
middle section, it will not change, because there is only one tracker, and therefore no
way to tell the roll to change. The first and third sections, however, will continue to
optimize their roll angles, based on the several trackers valid in each section. As those
roll angles move away from that of the middle section, the glitch develops: that
difference is the glitch.
So inherently, when there isn't enough information, glitches will result.
Proper Single-Tracker Workflow
If lack of information causes the glitch, the cure is clearly to add information.
Specifically, constrain the roll angle. SynthEyes offers a special workflow to make this
fast, easy, and mathematically sound.
When SynthEyes detects you solving tripod shots with sections with only one
valid tracker, it does the following:
Clones the necessary tracker(s) temporarily, behind the scenes,
Adds a temporary, behind the scenes, Roll=0 lock, and
Introducing “Holds”
Instead, we need to tell SynthEyes “the camera is not translating” during the
second section of the shot. We call this a “hold,” and there is a Hold button for this on
the Summary and Solver control panels. By animating the Hold button, you can tell
SynthEyes which range(s) of frames that the camera is panning but not translating.
SynthEyes calculates a single XYZ camera position for each section of frames where
the hold button is continuously on—though it continues to calculate separate pan, tilt,
and roll (and optionally zoom) for each frame. (Note: you do not have to set up a hold
region if the camera comes to a stop and pans, but only a little, so that most of the
trackers are visible both before and after the hold region. That can still be handled as a
regular shot.)
The operations and discussions that follow rely heavily on the graph editor's tracks
view. You need to adjust the Far Overlap value based on how much the camera is moving before and
after the hold region. If it is moving rapidly, keep Far Overlap at zero.
If the Combine checkbox is off, it makes a new tracker for each hold region; if
Combine is on, the same Far tracker covers all hold regions. For typical situations, we
recommend keeping Combine off.
The Clone to Far mode will “cover the holes” in coverage. The original trackers
will continue to appear active throughout the hold region. If you find this confusing, you
can run a Truncate operation from the Preparation tool: it will turn off the trackers
during the hold region. However, this will make it more difficult if you later decide to
change the hold region.
The Hold Preparation tool can also change trackers to Far with Make Far (all of
them, though usually you should tell it to do only some selected ones). It will change
them to Far, and shut them down outside the hold region (past the specified overlap).
The Clone to Far operation creates many new trackers. If you already have
plenty, you may wish to use the Convert Some option. It will convert a specified
percentage of the trackers to Far (tightening up their range), and leave the rest
untouched. This will often give you adequate coverage at little cost, though Clone is
safer.
Usage Hints
You should play with the Hold Preparation tool a bit, setting up a few fake hold
regions, so you can see what the different modes do. The Undo button on the Hold
Preparation Tool is there for a reason! It will be easier to see what is happening if you
select a single tracker and switch to Selected mode, instead of changing all the
trackers.
After running the Hold Preparation operation (Apply button), you may want to
switch to the Sort by Time option in the graph editor.
If you need to change the hold region late in your workflow, it is helpful if the
entire tracking data is still available. If you have run a Truncate, the tracking data for the
interior of the hold regions will be gone and have to be re-tracked. For that reason, the
Truncate operation should be used sparingly, perhaps only when first learning.
If you have done some tracker preparation, then other things, then need to redo
the preparation, use the Select By Type item on the Script menu to select the Far
trackers, then delete them. Make sure not to delete any Far trackers you have created
specially.
If you look back to the initial description of the hold feature, you will see that the
camera motion during a time of “Far” trackers is arbitrary… it could be to Mars and
back. We introduced the hold only as a useful and practical interpretation of what likely
happened during that time.
Sometimes, you will discover that this assumption was wrong, that during that big
pan, the camera was moving. It might be a bump, or a shift, etc. After you have solved
the shot with Holds, you can sequentially convert the holds to camera locks, hand-
animating whatever motion you believed took place during the hold. You should do this
late in the tracking process, because it requires you to lock in particular coordinates
during each motion. The key difference between holds and locks is this context: a hold
says that the camera was stationary at some coordinates still to be determined, while
the lock will force you to declare exactly which coordinates those are.
You may also need to use camera or tracker locks if you have exact knowledge
of the relationship between different sections of the path. For example, if the camera
traveled down a track, spun 90 degrees, then raised directly vertically, the motion down
the track and vertically are unlikely to be exactly perpendicular. You can use the locks to
achieve the desired result, though the details will vary with the situation.
The Hold Tracker Preparation Tool presents plenty of options, and it is important
to know what the whole issue is about. But in practice the setup tool is a snap to use
and can be run automatically without your intervention if you set up the hold region(s)
before auto-tracking. You can also adjust the Hold Tracker Preparation tool settings at
that time, before tracking. The settings are saved in the file for batch processing or later
examination.
Important: A focal length value is useless 99% of the time — unless you also
know the plate width of the image (typically in millimeters, to the hundredth of
a millimeter). Unfortunately, this value is rarely available at all, let alone at a
sufficient degree of accuracy. It takes a careful calibration of the camera and
lens to get an accurate value. Sometimes an estimate can be better than
nothing; read on.
SynthEyes uses the field of view value (FOV) internally, which does not depend
on plate size. It provides a focal length only for illustrative purposes. Set the (back plate)
film width using the Shot Settings dialog. Do not obsess over the exact values for focal
length, because finding the exact back plate width is like trying to find the 25” on an old
25” television set. It’s not going to happen. Lens manufacturers will tell you that the
rated value is only intended to be a guideline, something like +/-5%.
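The relationship between focal length, field of view, and plate width is the standard pinhole relation. A minimal sketch (the function names are ours, not SynthEyes commands):

```python
import math

def fov_from_focal(focal_mm, plate_width_mm):
    """Horizontal field of view (degrees) from focal length and back plate width."""
    return math.degrees(2.0 * math.atan(plate_width_mm / (2.0 * focal_mm)))

def focal_from_fov(fov_deg, plate_width_mm):
    """Focal length (mm) from horizontal field of view and back plate width."""
    return plate_width_mm / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# A 24 mm lens on a (nominal) 36 mm-wide plate covers about 73.7 degrees.
```

Because FOV is what the solver actually uses, a +/-5% error in the assumed plate width simply becomes a +/-5% error in the reported focal length, with no effect on the solve itself.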
Fixed, Unknown: the camera did not zoom during the shot (even if it is a zoom lens).
Fixed, with Estimate: the camera did not zoom during the shot, and you have a good estimate of the camera field of view, or of both the focal length and plate width.
Zooming, Unknown: the camera did zoom.
Known: the camera field of view, fixed or zooming, has been previously determined (more on this later).
If you are unsure if the camera zoomed or not, try the fixed-lens setting first, and
switch to zoom only if warranted. Generally, if you solve a zoom shot with the fixed-lens
setting, you will be able to see the zoom’s effect on the camera path: the camera will
suddenly push back or in when it seems unlikely that the real camera made that motion.
Sometimes, this may be your only clue that the lens zoomed a little bit.
Important: Never use “Known” mode solely because someone wrote down
the lens setting during shooting. Like the turn-signal of an oncoming car, it is
only a guess, not something you can count on. Do not set a Known focal
length unless it is truly necessary.
You may have the scribbled lens focal length from on-set production. If you also
know the plate size, you can use the Fixed, with Estimate setting to speed up the
beginning of solving a bit, and sometimes to help prevent spurious incorrect solutions if
the tracking data is marginal. The mode is also useful when you are solving several
shots in a row that have the same lens setting: you can use the field of view value
without worrying about plate size. In either case, you should rewind to the beginning of
the shot and either reset any existing solution, or select View/Show Seed Path, then set
the lens field of view or focal length to the correct estimated value. SynthEyes will
compute a more accurate value during solving.
It can be worthwhile to use an estimated lens setting as a known lens setting
when the shot has very little perspective to begin with, as it will be difficult to determine
the exact lens setting. This is especially true of object-mode tracking when the objects
are small. The Known lens mode lets you animate the field of view to accommodate a
known, zooming lens, though this will be rare. For the more common case where the
lens value is fixed, be sure to rewind to the beginning of the shot, so that your lens FOV
key applies to the entire shot.
When a zoom occurs only for a portion of a shot, you may wish to use the Filter
Lens F.O.V. script to flatten out the field of view during the non-zooming portions, then
lock it. This eliminates zoom/translation coupling that causes noisier camera paths for
zoom shots. See the online tutorial for more details. You can also set up animated filter
controls using the post-solve filtering to selectively filter more during the stationary non-
zooming portion.
Note: When cubic and/or quartic values are present, meshes will take longer
to display in the camera view. There is currently no workflow to accommodate
or export the advanced eccentricity, squeeze, and zooming distortion values.
Tip: The camera view shows 3D positions and meshes distorted by the solver
distortion outputs to match the original footage (at least, what is coming out of
the image preprocessor). In contrast, the perspective view undistorts the
original footage with the solver distortion outputs, so that the image matches
the true linear perspective view.
You should always use the minimum distortion complexity possible, ie you should
never turn on more complex distortion calculations than necessary, as doing so will
destabilize the solve and produce deceptive over-fitting which shows lower errors, but
actually increases errors when inserting 3D elements (see Using Zero-Weighted
Trackers as Canaries).
Note: While the solver can determine some additional parameters, they are
developmental in SynthEyes 1806 and not supported by any lens workflow.
You can compute them, but not export them or even output footage matching
the results.
If the image is distorted, you can adjust the lens panel’s Lens Distortion spinner
until the lines do match; add several lines if possible. Create lines near the four edges of
the image, but stay away from the corners, where there is more complex distortion. You
will also see a lens distortion grid for reference (controlled by an item on the View
menu).
WARNING: You must have a large number of trackers in the scene and
significant camera motion. Without these, distortion cannot be distinguished
from the effect of camera motion, and the resulting solution will be
theoretically correct, but practically meaningless.
Usually you should solve the shot without calculating distortion (perhaps just a
guess), then switch to Refine mode and turn on Calculate Distortion. When calculating
distortion, significantly more trackers will be necessary to distinguish between distortion,
zoom, and camera/object motion.
You can turn on the quadratic, cubic, and quartic terms together, or work your way
from quadratic to cubic to quartic. The cubic parameter will usually have the opposite
sign of the main distortion (ie one is positive, the other negative), and the quartic in
turn the opposite sign of the cubic.
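To illustrate how these terms interact, here is the generic radial polynomial form; the exact normalization SynthEyes uses internally may differ, and the helper name is ours:

```python
def distort_radius(r, k2=0.0, k3=0.0, k4=0.0):
    """Distorted radius from undistorted radius r (normalized image units),
    with quadratic (k2), cubic (k3), and quartic (k4) coefficients.
    Generic polynomial form, not SynthEyes's exact internal normalization."""
    return r * (1.0 + k2 * r**2 + k3 * r**3 + k4 * r**4)

# Typical alternating signs: the cubic partially cancels the quadratic's
# growth toward the edges, and the quartic corrects the cubic in turn.
example = distort_radius(1.0, k2=0.08, k3=-0.04, k4=0.01)
```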
SynthEyes's Advanced Lens Controls panel (accessed from the more button on
the Lens panel) controls the computation of the cubic and quartic distortion coefficients
(and rolling shutter) as part of the solve (as well as other parameters for anamorphic
lenses and zooming distortion). These parameters typically are more challenging to
push through an entire lens distortion workflow.
WARNING: You must have a large number of trackers in the scene and
significant camera motion. Without these, distortion cannot be distinguished
from the effect of camera motion, and the resulting solution will be
theoretically correct, but practically meaningless, or fail outright.
Note: When cubic and quartic terms or other advanced lens parameters are
non-zero on the solver panel, meshes will take longer to display in the
camera view.
As an anamorphic lens focuses, the field of view appears to change (zooms), and the image shrinks or expands vertically. The
relationship between the two is generally unknown and depends on the particular
combination of lenses involved as well as the portion of the lens aperture occupied by
the sensor.
Additionally, distortion can be present in the prime lens and separately in the
converter. There can be additional unpleasantness such as axis off-centering and axis
misalignment.
In general, determining how the combined lens behaves would require very
careful laboratory measurements, which are never available. Even careful grid-based
calibration is challenging in practice (and likely to lock in systematic errors instead).
Yet you, our bold SynthEyes user, have been handed a shot with anamorphic
breathing, and need to do something with it. Sometimes that shot consists of as little as
a stationary camera exhibiting a focus pull, which offers far too little information to
extract anywhere near all the numbers required to describe the overall lens combination
that we've mentioned above.
To address that, SynthEyes greatly simplifies its model of what lens misbehavior
the lens might exhibit, to the point that we can actually get useful information and
results.
Specifically, for anamorphic lenses the solver can use or even compute three
additional numbers, as found on the Advanced Lens Control panel:
Distortion Eccentricity,
Anamorphic Squash Slope, and
Squash Reference FOV.
While normally distortion is radially symmetric (assuming you've got the lens
center right)—those lenses are really round—the distortion of an anamorphic lens can
be a combination of distortion in the primary and in the converter, eg a mix of distortion
that is radially symmetric in the real-world optical image plane, and radially symmetric in
the squashed sensor image plane. As a result, the net combined distortion is not radially
symmetric. The eccentricity value reflects a single averaged result (that does not
change with breathing).
The two squash values handle breathing. The idea is to tie the zoom and vertical
scaling together, using a simple linear relationship appropriate to relatively small
changes in zoom and vertical scaling. As the image zooms in and out horizontally, the
image zooms vertically slightly differently.
SynthEyes computes the (horizontal) animated field of view, which determines
the change in vertical scaling (by formula) from the squash slope and squash reference
FOV. If the squash slope is 1.0, that's a normal zoom with no squashing. The reference
FOV is the field of view at which the stated anamorphic ratio and final image aspect
ratio is correct! For example, you may have a shot with a field of view that varies from
70 to 75 degrees, with a stated aspect ratio of 2.369 (16:9 with 4:3 converter). Maybe at
74 degrees that is actually correct, so 74 would be the squash reference FOV. (At 70 or
75 the aspect ratio isn't correct because the anamorphic ratio isn't exactly 1.333 at
those fields of view/focal distances.)
By adjusting the squash slope, and maybe reference FOV, SynthEyes aims to
reproduce the focus breathing of the overall camera lens combination.
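SynthEyes's internal formula is not published; the sketch below only illustrates the idea of a linear tie between horizontal zoom and vertical scaling, with the scale exactly 1.0 at the reference FOV. The function and its exact form are our assumptions:

```python
import math

def vertical_scale(fov_deg, squash_slope, ref_fov_deg):
    """Illustrative linear tie between horizontal zoom and vertical scaling
    (NOT SynthEyes's actual internal formula). At the reference FOV the
    scale is exactly 1.0, so the stated anamorphic ratio holds there; a
    slope of 1.0 reduces to a plain zoom with no extra squashing."""
    h_zoom = (math.tan(math.radians(ref_fov_deg) / 2.0)
              / math.tan(math.radians(fov_deg) / 2.0))
    return 1.0 + squash_slope * (h_zoom - 1.0)
```

In the 70 to 75 degree example above, such a model passes through exactly 1.0 at the 74-degree reference FOV and drifts slightly away from it at the ends of the range.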
To handle these shots, you'll need to know how to use the solver's Advanced
Lens Controls panel and probably the Solver Locking panel.
Tip: If this is your first day, or maybe week, with SynthEyes, you need to learn
quite a bit more before tackling this. If the directions below seem impossibly
vague, that's definitely the case. You have to learn to crawl before you run;
anamorphics are an advanced topic.
The .lni lens information files contain a table of values mapping from the “correct”
radius of any image point to the distorted radius. These tables can be generated by
small scripts, including a default fish-eye lens generator (which has already been run to
produce the two default fisheye lens files), and a polynomial generator, which accepts
coefficients from Zeiss for their Ultra-Prime lens series.
These distortion maps can be either relative, where the distortion is independent
of the physical image size, or absolute, where the distortion is described in terms of
millimeters. The relative files are more useful for camcorders with built-in lenses, the
absolute files more useful for cameras with removable prime lenses.
The absolute files require an accurate back-plate-width in order to use the table
at all. Do not expect the lens calibration table to supply the value, because the lens (ie a
detachable prime lens) can be used with many different cameras!
For assembled camcorders, typically with relative files, the lens file can supply
optional nominal back-plate width and field of view values, displayed immediately below
the lens selection drop-down on the image preprocessor’s lens tab. You can apply
those values as you see fit.
When you select or open a preset, some additional information about the setup
may be loaded into the image preprocessor as well, especially the required padding
values.
If you change an lni file (by re-writing it with a script, for example), you should hit
the Reload button on the Lens tab, while that lens file is selected. If you add new files,
or update several, use “Find New Scripts” on the main File menu.
If you use a grid-type correction, you will likely fix the images, but not the imaging geometry,
and the entire match-move will come out wrong. When you fix the centering, the
distortion will go away properly without the need for an asymmetric grid-based
correction—and the match-move will come out right in the end. SynthEyes fixes de-
centering by padding the image, as described in the section on that topic.
You might think or hear about grid-based distortion correction, to rubber-sheet
morph the different parts of the image individually into their correct places. This seems a
simple approach to the problem, and it is, sort of! But typical industry practices with
multiresolution grids are insufficient to do accurate lens correction. They have omissions
in the interior, and typically have less and less information as you go to the boundary of
the image, exactly when you need MORE information, not less. And many industry grid
images don't extend to the edge of the image. As a result it's common for grid images to
"burn in" systematic errors in the grid itself.
A model-based approach is preferable, ie one using centering and higher-order
terms. Versions after SynthEyes 1806 will likely include even more complex lens
models, to handle anamorphic lenses better and more easily.
MOVED! There's now much more information and new tools in the Camera
Calibration Manual, found from the Help Menu in SynthEyes. The existing
material is still a good overview and usable, so it is retained here for
reference.
If you are able to shoot a specific lens calibration grid immediately before or after
principal photography, SynthEyes offers a tool to generate accurate lens presets
automatically.
We have a special 36"x27" lens calibration grid that will make this easy.
This is a consequence of trying to image a sphere with a flat image plane. You
may have to switch to the one-pass approach instead.
After the lens distortion analysis is complete, you should save the SNI file—it
contains the results, which include not only the lens settings (on the image pre-
processor's lens tab), the newly-created lens preset (if requested), but also cropping
values on the Cropping tab, a zoom value on the Adjust tab, and new resolution values
on the Output tab.
The easiest way to use your newly-created lens calibration SNI file is to do a
Shot/Change Shot Images to the actual shot you need to undistort, then use
Output/Save Sequence to write the undistorted sequence to disk. You can then do a
File/New, open the newly-created undistorted sequence, and track away.
If you need to, you can look inside the LNI file to see a set of the extracted
parameters again, including the Cropping and scale settings, as well as the RMS error
value.
Tip: You may be able to perform the image distortion in your compositing
application, to avoid round-tripping the images through SynthEyes, if the
exporter to your application has the necessary support, or if you use Lens
Distortion maps or static or animated projection screens.
Note: If animated distortion is present, the Lens Workflow is doing much more
work to achieve the same ends.
The Lens Workflow script performs the following actions for you: transfers any
calculated distortion from the lens panel to the image preprocessor, turns off the
distortion calculation for future solves, changes the scale adjustment on the image prep
Adjust tab to remove black pixels, selects Lanczos interpolation, updates the tracker
locations (ie Apply to Trkers on the Output tab), adjusts the field of view, and adjusts the
back plate width (so focal length will be unchanged).
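That final adjustment follows from f = w / (2 tan(FOV/2)): if the solved FOV changes but the plate width is rescaled in proportion to tan(FOV/2), the reported focal length stays the same. This is illustrative arithmetic, not a SynthEyes API:

```python
import math

def adjusted_plate_width(old_width_mm, old_fov_deg, new_fov_deg):
    """Scale the back plate width so that the focal length implied by
    f = w / (2 tan(FOV/2)) is unchanged when the FOV changes."""
    return old_width_mm * (math.tan(math.radians(new_fov_deg) / 2.0)
                           / math.tan(math.radians(old_fov_deg) / 2.0))
```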
When you do the Change Shot Images with the “Switch to saved footage” option,
SynthEyes resets the image preprocessor to do nothing: if the lens distortion and other
corrections have already been applied to the modified images, you do not want to
perform them a second time once you switch to the already-corrected images.
The "Clear related controls" setting on the Lens Workflow panel clears the lens
distortion, scale, and lens preset on the Lens tab, and the Crop settings on the Cropping
tab (both tabs on the image preprocessor).
Delivering Distorted Imagery
In this workflow option, you create and track undistorted imagery, generate CG
effects, re-distort the effects, then composite the distorted version back to the original
imagery.
Determine lens distortion via calibration, checklines, or solving with
Calculate Distortion turned on.
Save then Save As to create a new version of the file. (recommended).
Click “Lens Workflow” on the Summary panel (or start the Lens/Lens
Workflow script).
In most cases, ensure that the "Use input's bit depth" checkbox is turned
on: if the input is 16 bit or floating point, you will likely want the same bit
depth on output. If you have specifically set the bit depths on the shot
setup panel, turn this off. (If the bit depth changes, the RAM cache will
flush and reload automatically.)
Select whether you want a margin value in pixels, or percentage. A margin
value in pixels will result in an image that exactly contains the undistorted
image, with that number of pixels around the edge. The image resolution
will vary each time you run the script, based on the computed distortion
values. Alternatively, select a percentage margin, in which case the image
will always be that much larger, regardless of the computed distortion.
This approach lets you keep working with a fixed "round" resolution
throughout your toolchain.
Some codecs, such as Cineform, require that the image width be a
multiple of 16 pixels; otherwise they will crash SynthEyes.
Tip: You may be able to perform the image undistortion and redistortion in
your compositing application, to avoid round-tripping the images through
SynthEyes, if the exporter to your application has the necessary support, or if
you use Lens Distortion maps or static or animated projection screens.
You can use the regular Undo while SynthEyes is still open. However, if you
need to revisit a file where the lens workflow has been run, and no pre-workflow version
is available, you'll need to use the "Undo earlier run" option on the Lens Workflow script
itself.
Unlike a regular undo, which literally restores earlier versions of various pieces of
data, the undo on the lens workflow is computational in nature: it computes what the
scene must have been before the lens workflow was run.
Accordingly, it's important that you not manually change the lens settings before
running the lens workflow script's undo: it must have access to the original settings in
order to successfully undo the original undistortion operation.
The Undo algorithm is intended for use with normal setups, with padding-
corrected centering and solver-calculated distortion. Unusual setups may require
operator assistance. The most important data, the 2D tracker data, should always be
correctly restored, as long as there were no untimely image preprocessor changes.
Fortunately there is now a way to overcome this. Even better, the details are
generally handled automatically by compatible SynthEyes exporters. We provide more
details here so that you can understand what is being done, and be able to use this
technique in combination with other software as well.
Note: The After Effects exporter uses a custom SynthEyes-supplied lens un-
distortion and re-distortion plugin to handle quadratic, cubic, and quartic
distortion and does not use image distortion maps; see the section on
distortion in After Effects.
If you need to, you can create these image distortion maps not only in
SynthEyes but in other software by distorting a reference image; the distorted image
has enough information to allow the distortion to be reproduced elsewhere.
This scheme is an excellent candidate for wide use throughout the visual effects
industry, and would save everyone quite a bit of time and aggravation.
The main drawback is that these images can require significant time to compute,
and require quite a bit of storage if animated distortion is present. (You must not
compress them!!!)
The Reference Image
The reference image has two independent gradients: a horizontal gradient in the
red channel from left to right, and a vertical gradient in the green channel from bottom
to top.
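A minimal sketch of those gradients, using pixel-center coordinates so that (consistent with the convention described later) the lower-left pixel of an HD image gets red=0.5/1920 and green=0.5/1080; the helper name is ours:

```python
def make_reference_map(width, height):
    """Build the reference gradients as (red, green) per pixel, with row 0
    at the bottom: red ramps left-to-right, green ramps bottom-to-top,
    using pixel-center coordinates (x + 0.5) / width etc."""
    rows = []
    for y in range(height):          # y = 0 is the bottom row
        row = []
        for x in range(width):
            red = (x + 0.5) / width
            green = (y + 0.5) / height
            row.append((red, green))
        rows.append(row)
    return rows
```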
When the image is distorted, you get something like this (with a box around it to
make the boundary apparent):
Software can use this image to reproduce the same distortion on other images.
Each pixel in the output image is selected from the input image by using an X
coordinate determined from the red channel, and a Y coordinate determined from the
green channel. The ability to do this can often be found in existing compositing nodes.
Note: red=0.00 is the left edge of column 0 and red=1.00 is the right edge
of column w-1; similarly green=0.00 is the bottom edge of row 0, and
green=1.00 is the top of row h-1. To pull from the
lower-left pixel of an HD image, red=0.5/1920, green=0.5/1080. This definition
is required for consistency and to permit the use of a map at one resolution
with other images. In some apps the red and green channels must be slightly
adjusted; Fusion can be used for a quick undistort/redistort test of the
difference this makes.
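The per-pixel pull can be sketched as follows. The helper is ours, uses nearest-neighbor sampling for brevity (real tools interpolate), and represents images as plain nested lists with row 0 at the bottom:

```python
def apply_distortion_map(src, dist_map, width, height):
    """Warp src (rows of pixel values, row 0 at bottom) using a map of
    (red, green) normalized coordinates per output pixel. Each output
    pixel is pulled from the source location the map names; coordinates
    outside 0..1 produce None (an unmapped pixel)."""
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            red, green = dist_map[y][x]
            # Convert normalized coordinates back to integer pixel indices;
            # with pixel-center maps, red = (sx + 0.5)/width, so int() works.
            sx = int(red * width)
            sy = int(green * height)
            if 0 <= sx < width and 0 <= sy < height:
                row.append(src[sy][sx])
            else:
                row.append(None)
        out.append(row)
    return out
```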
You can make your own reference image if you like using Photoshop... or simply
by opening a shot in SynthEyes, not setting up any lens distortion, and writing the
distortion map.
Bit Depth, Color Space, and Resolution
Some care must be taken when creating image distortion maps (or references).
Eight-bit and half-float image formats must not be used, as they are insufficiently
precise to specify an exact pixel location. Sixteen-bit images are OK, but full floating-
point images are best. (SynthEyes will only permit you to generate 16-bit or floating
images, eliminating incompatible formats.)
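A quick back-of-the-envelope check for a 1920-pixel-wide image shows why (rounded numbers, illustrative only):

```python
def worst_error_px(code_values, width):
    """Worst-case placement error, in pixels, when an integer-coded channel
    with `code_values` distinct levels addresses `width` columns: half of
    one quantization step."""
    return 0.5 * width / code_values

print(worst_error_px(256, 1920))     # 8-bit:  3.75 px -- unusable
print(worst_error_px(65536, 1920))   # 16-bit: ~0.015 px -- fine
# Half-float steps just below red = 1.0 are about 2**-11 of full range:
print(0.5 * (2 ** -11) * 1920)       # ~0.47 px -- still far too coarse
```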
Likewise, it is crucial to avoid any color-space processing when image maps
are saved, and when the image map is read. (This may be impossible in Photoshop.)
Any color-space processing will create hidden distortions that may be very difficult to
detect! Be sure to select the "linear" option.
In the distorted image above, note that the output image is no longer full frame
(for illustration). The corrected image may also have tails in the corner. Generally the
output image (and map) should be larger than the original undistorted image, so that
there is no loss of effective resolution in the distortion and undistortion process. The
SynthEyes Lens Workflow script manages this.
Unmapped Pixels and Blue vs Alpha
In the example above, some pixels in the output image do not correspond to
pixels in the original undistorted image—there is nowhere to pull those pixels from.
There are several possible values to place into such pixels.
White, no alpha (R=G=B=1, no A) This is a good choice, because no valid
pixel can be white—the blue channel is zero for all valid pixels. The image
is smaller, since no alpha channel must be stored. The alpha channel can
be recreated for compositing, if needed, as 1-b.
Black, with alpha. (R=G=B=A=0) Alpha is 1.0 for all used pixels. Alpha is
available for compositing, but the image is larger due to the alpha.
Software must pay attention to alpha, since black pixels could be valid or
invalid depending on alpha. This option is pre-multiplied alpha: PMA.
White, with alpha. (R=G=B=1, A=0) Combination of the above, where
used and unused pixels are easy to distinguish. Requires alpha storage.
These images are non-pre-multiplied alpha (nonPMA), so compositing
packages may need to know that.
Unclipped. OpenEXR only! Any of the above, but with no clipping of out-
of-range values. See below.
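For instance, recreating alpha under the first convention is a one-liner (hypothetical helper, channels as 0.0 to 1.0 floats):

```python
def recreate_alpha(r, g, b):
    """'White, no alpha' convention: every valid map pixel has blue == 0 and
    unused pixels are pure white (blue == 1), so alpha is simply 1 - blue."""
    return 1.0 - b
```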
SynthEyes has preferences for each allowable image-map format (DPX, EXR,
PNG, SGI, TIF) to configure which of the options above will be used for that specific
image format.
The unclipped option is available only for OpenEXR images; it allows the red
and green channels to go below zero or above one. (Blue is always used as an
out-of-range marker.) With this option, compositing software that has pixels available in
overscan/margin areas can bring those pixels into the final result.
Inverse Maps
In a two-pass lens workflow, we need two image maps: the map for un-distorting
the original footage, and a map for re-distorting CG effects to match the original footage.
While it is possible to utilize the un-distortion map for re-distorting images, it is
much more complex and time-consuming—and less accurate and reliable—than the
very simple process to apply the map in the intended fashion.
Accordingly, SynthEyes always produces both un-distortion and redistortion
maps. It does that directly from the underlying mathematics to maximize speed and
accuracy. (It is possible to access an inversion routine, for use with maps from third
parties, from within a Sizzle script, see the Sizzle manual.)
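When only a third-party forward map is available, one standard way to invert it per radius is a numeric search. A bisection sketch, assuming the distortion is monotonically increasing over the search range (our helper, not the Sizzle routine itself):

```python
def invert_distortion(distort, r_d, lo=0.0, hi=4.0, tol=1e-10):
    """Find the undistorted radius r_u with distort(r_u) == r_d by
    bisection, assuming distort is monotonically increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if distort(mid) < r_d:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```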
Writing Image Distortion Maps
Once you've determined the lens distortion inside SynthEyes, you can create the
forward and inverse image distortion maps using the Shot/Write Distortion Maps menu
item.
You'll be prompted for the location and name of the (un-distortion) image map to
be written. The re-distortion image is always written also; the file name is the same, but
with "Redistort" tacked onto the name. (This suffix has a preference setting.)
You'll have to select the file type to be written, either DPX, OpenEXR, PNG, SGI,
or TIFF, all of which support either the 16-bit or floating-point format required for image
maps.
There is a preference for each file type in the Save/Export section. The
preference selects 16-bit or floating images (if both are supported), and whether to use
RGB only, RGB + non-pre-multiplied alpha, or RGB + pre-multiplied alpha.
There is also a preference that allows the outer boundary of the map to be
extended slightly to minimize potential artifacts around the edge of the resulting image.
Writing Image Distortion Sequences
If your distortion settings include zooming distortion or zoom with an anamorphic
squash setting, the distortion map is different on every frame. You can write the entire
sequence using Shot/Write Undistortion Sequence and Shot/Write Redistortion
Sequence. (Movie files are not suitable for this purpose.)
The sequences are largely subject to the same preferences as described in the
previous section, for Shot/Write Distortion Maps.
However, the output file names are separate so you can set the name for each
sequence directly (ie without Redistort being tacked on somewhere).
A frame is written for every frame of the shot, from Start Frame to End Frame
from the shot setup panel.
You should supply the literal name of the first frame that is to be written,
including whatever frame number you desire, possibly including leading zeroes. That
might be img1.png, img000.tif, img001000.png, img097.exr etc depending on the first
frame of the shot and your numbering convention. SynthEyes doesn't attempt to read
your mind or reverse-engineer any other settings: the file name you give is where the
first image output goes.
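The numbering convention can be sketched as follows; the helper is hypothetical (SynthEyes performs the equivalent internally), incrementing the trailing digit run and preserving its zero-padding:

```python
import re

def frame_names(first_name, count):
    """Generate `count` file names starting from the literal first-frame
    name: the trailing digit run before the extension is incremented with
    its zero-padding preserved (e.g. img097.exr, img098.exr, ...)."""
    m = re.match(r"^(.*?)(\d+)(\.[^.]+)$", first_name)
    if not m:
        raise ValueError("no frame number found in " + first_name)
    stem, digits, ext = m.groups()
    start = int(digits)
    pad = len(digits)
    return [f"{stem}{start + i:0{pad}d}{ext}" for i in range(count)]
```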
Exporters and Future Work
The Fusion 7 composition export uses the image distortion map system. You
should run the Lens Workflow script (accessed from the Summary panel) to set up for a
full lens distortion pipe to be built in Fusion. The full un-distortion/re-distortion pipe is
always built, just delete the re-distortion portion if you are using the single-pass
workflow.
At present, an experimental Nuke export is available, but it has issues dealing
with the different-size images produced after un-distortion. We'll need some Nuke users
to help straighten that out.
The image distortion map describes the distortion on a particular frame. Zooming
distortion and anamorphic focus breathing require animated undistortion and
redistortion image map sequences, which can be written via the Shot menu. However,
the capability to do this is not currently integrated into any exporters. This might be
considered for Fusion, After Effects, and/or Nuke if there is enough interest.
Note that SynthEyes can read image distortion maps produced by other software
(even manually) and use them to drive its own image preprocessing system, in lieu of
the usual distortion coefficients etc. This is configured from the Lens tab of the image
preprocessor. However, there's no support for animated distortion maps, largely
because of the complexity of efficiently managing the reading and discarding of the
maps in the heavily multi-threaded image preprocessor.
A projection screen is essentially a rectangular plane. By deforming the mesh, we can remove distortion from the original
imagery, as seen by a simple viewing camera.
SynthEyes generates these projection screens automatically as part of various
exporters, or explicitly if you run File/Export/Projection Screens/Lens Distorted Screen
as A/W Object or Scripts/Projection Screens/Projection Screen Creator. These are
static projection screens.
To accommodate animated distortion and focus breathing, you need an animated
vertex cache, which can be produced by File/Export/Projection Screens/Projection
Screen Vertex Cache.
Alternatively you can create the screen and vertex cache in one go as part of
Scripts/Projection Screens/Stabilization Rig Creator.
Lens De-Centering
What we would call a lens these days—whether it is a zoom lens or prime lens,
or fisheye lens—typically consists of 7-11 individual optical lens elements. Each of
those elements has been precisely manufactured and aligned, and by its very nature
each is round in order to work properly. Then they are stacked up in a tube,
which is again very round (along with gears and other mechanisms) to form the kind of
lens we buy in the store.
The important part of this explanation is that a lens is very round and symmetric
and has a single well-defined center right down the middle. You can picture a laser
beam right down the exact center of each individual lens of the overall lens, shooting in
the front and out the back towards the sensor chip or film.
With an ideal camera, the center beam of the lens falls exactly in the middle of
the sensor chip or film. When that happens, parallel lines converge at infinity at the
exact center of the image, and as objects get farther away, they gravitate towards the
center of the image.
While that seems obvious, in fact it is rarely true. If you center something at the
exact center of the image, then zoom in, you’ll find that the center goes sliding off to a
different location!
This is a result of lens de-centering. In a video camera, de-centering results
when the sensor chip is slightly off-center. That can be a result of the manufacturer’s
design, but also because the sensor chip can be assembled in slightly different
positions within its mounting holes and sockets. In a film camera, the centering (and
image plate size) are determined solely by the film scanner! So the details of the
scanning process are important (and should be kept repeatable).
De-centering errors create systematic errors in the match-move when left
uncorrected. The errors will result in geometric distortion, or sliding. Most rendering
packages cannot render images with a matching de-centering, guaranteeing problems.
And as in the zooming example earlier, a de-centered lens can result in footage that
doesn’t look right.
It is fairly easy to determine the position of the lens center using a zoom lens.
See the de-centering tutorial on the web site. Even if you will use a prime lens to shoot,
you can use a zoom lens to locate the lens center, since the lenses are repeatable, and
the error is determined by the sensor/film scan.
NOTE: The camera calibration system can determine the lens center from
lens grids when there is sufficient distortion. See the manual for that and for
another method based on intentionally vignetting the camera.
Once the center location is determined, the image preprocessor can restore
proper centering. It does that by padding the sides of the image to produce a new
larger—but centered—image. That larger image is then subject to lens distortion
correction and possibly stabilization, then saved to disk. The CG renders will match this
centered footage. At the end, the padding will be removed.
This means that your renders will be a little larger, but there does not have to be
anything in the padded portions, so they should not add much time. Higher quality input
that minimizes de-centering will reduce costs.
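The padding geometry can be sketched with a back-of-envelope calculation. This is illustrative only, not SynthEyes's exact algorithm; the function name and the measured center values are hypothetical:

```python
def padded_size(width, height, cx, cy):
    """Smallest image that contains the original and puts the measured
    optical center (cx, cy) at its exact center, by padding the sides."""
    new_width = 2 * max(cx, width - cx)
    new_height = 2 * max(cy, height - cy)
    return new_width, new_height

# e.g. a 1920x1080 image whose optical center measured 40 px right of center:
print(padded_size(1920, 1080, 1000, 540))  # (2000, 1080)
```

The farther the measured center is from the geometric center, the more padding is needed, which is why higher-quality input reduces the render overhead.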
As a more advanced tactic, the image preprocessor can be used to resample the
image and eliminate the padding, but this can only be done after initial tracking, when a
field of view has been determined, and it is a slightly skewed resample that will degrade
image quality slightly (the image preprocessor’s Lanczos resampling can help minimize
that).
You can save the corrected sequence away and use it for subsequent tracking
and effects generation.
This capability will let you and your client look good, even if they never realize the
amount of trouble their shot plan and marginal lens caused.
Solving Modes
Switch to the Solve control panel. The solver panel affects the camera or moving
object currently listed as the Active Tracker Host. Select the solver mode as follows:
Auto: the normal automatic 3-D mode for a moving camera, or a moving object.
Refine: after a successful Auto solution, use this to rapidly update the solution
after making minor changes to the trackers or coordinate system settings.
Tripod: camera was on a tripod; track pan/tilt/roll(/zoom) only.
Refine Tripod: same as Refine, but for Tripod-mode tracking.
From Seed Points: use six or more known 3-D tracker positions per frame to
begin solving (typically, when most trackers have existing coordinates from a 3-D
scan or architectural plan). You can use Place mode in the perspective view to
put seed points on the surface of an imported mesh (or vertex of a lidar mesh).
Turn on the Seed button on the coordinate system panel for such trackers. You
will often make them locks as well.
From Path: when the camera path has previously been tracked, estimated, or
imported from a motion-controlled camera. The seed position, and orientation,
and field of view of the camera must be approximately correct. (You should still
set up a coordinate system for the resulting solve, using tracker positions or the
seed path!)
Indirect: to estimate based on trackers linked to another shot, for example, a
narrow-angle DV shot linked to wide-angle digital camera stills. See Multi-shot
tracking.
Individual: when the trackers are all individual objects buzzing around, used for
motion and facial capture with multiple cameras.
Disabled: when the camera is stationary, and an object viewed through it will be
tracked.
The solving mode mostly controls how the solving process is started: what data
is considered to be valid, and what is not. The solving process then proceeds pretty
much the same way after that, subject to whatever constraints have been set up.
The solving mode tells SynthEyes where to concentrate its efforts in determining a
suitable solution.
Lens Settings
Before solving, you can tell SynthEyes to compute various lens distortion
parameters or not. Generally these should be computed as a Refine pass after an initial
solve has been obtained, unless the distortion is severe, in which case it may be
needed to start with. See the Lenses and Distortion section for more information,
including the Cubic and Quartic Distortion Correction section.
Be aware that at present the Quadratic Distortion term is the only one that the
camera view is able to compensate for, in order to display 3D objects at their correct
(distorted) locations. This is true of many computational algorithms as well. Generally,
quadratic distortion should always be the starting point.
The perspective view is able to include the effect of all the distortion parameters
(except rolling shutter), as it (un)distorts the image to match its linear view.
Normally after computing distortion coefficients, you will run the Lens Workflow
script to update the images and tracking data—again, see the Lenses and Distortion
section for details.
World Size
Adjust the World Size on the solver panel to a value comparable to the overall
size of the 3-D set being tracked, including the position of the camera. The exact value
isn’t important. If you are shooting in a room 20’ across, with trackers widely dispersed
in it, use 20’. But if you are only shooting items on a desktop from a few feet away, you
might drop down to 10’.
Important: the world size does not control the size of the scene, that is the
job of the coordinate system setup.
There is a checkbox between the text "World Size" and the spinner itself. When
the checkbox is checked, adjusting the spinner sets the world size for all objects; when
it is off, the spinner affects only the current tracker host object. The checkbox is
checked by default as that is most common and useful. When the world sizes are not all
the same, the spinner value is underlined in red, ie marked as a key, for information.
(The world size cannot be animated.)
Get Your Inner Geek On: SynthEyes divides all coordinate values by the world
size internally as it works. With the default world size of 100, a coordinate value of 64
will be internally processed as 0.64. Why bother? If SynthEyes needs to square the
value, 64 becomes 4096, while 0.64 becomes 0.4096. If we need to add one unit, we
get either 4097 or 0.4196. The world size can be too big, as well as too small. A world
size of 10000 would turn 64 into 0.0064, and squaring that would be a tiny 0.00004096.
By normalizing all the values by a reasonable world size, SynthEyes ensures that all the
values it is working on stay near 1.0, maintaining the accuracy of its calculations
(computers use only approximate arithmetic). And yes, SynthEyes does a LOT of
calculations.
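The normalization just described can be sketched numerically (this is illustrative arithmetic, not SynthEyes code):

```python
world_size = 100.0
coord = 64.0

normalized = coord / world_size      # 64 -> 0.64
squared = normalized ** 2            # 0.4096 rather than 4096
one_unit = 1.0 / world_size          # one scene unit, normalized
print(squared + one_unit)            # ~0.4196, matching the example above
```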
Choose your coordinate system to keep the entire scene near the origin, as
measured in multiples of the world size. If all your trackers will be 1000 world-sizes from
the origin (for example, near [1000000,0,0] with a world size of 1000), accuracy might
be affected. The Shift Constraints tool can help move them all if needed.
Note: There are some requirements that the world size must match between
cameras on a stereo rig, and between camera and moving objects especially
when path locks are present. SynthEyes will enforce these during the solve.
As you see, the world size does not affect the calculation directly at all. Yet a
poorly chosen world size can sabotage a solution. If you have a marginal solve,
sometimes changing the world size a little can produce a different solution, maybe even
the right one.
The world size also is used to control the size of some things in the 3-D views
and during export: we might set the size of an object representing a tracker to be 2% of
the world size, for example.
Go!
You’re ready, set, so hit Go! on the Solver panel. SynthEyes will pop up a
monitor window and begin calculating. Note that if you have multiple cameras and
objects tracked, they will all be solved simultaneously, taking inter-object links into
account. If you want to solve only one at a time, disable the others.
The calculation time will depend on the number of trackers and frames, the
amount of error in the trackers, the amount of perspective in the shot, the number of
confoundingly wrong trackers, the phase of the moon, etc. For a 100-frame shot with
120 trackers, a 2-second time might be typical. With hundreds or thousands of trackers
and frames, some minutes may be required, depending on processor speed. Shots with
several thousand frames can be solved, though it may take some hours.
It is not possible to predict a specific number of iterations or time required for
solving a scene ahead of time, so the progress bar on the solving monitor window
reflects the fraction of the frames and trackers that are currently included in the tentative
solution it is working on. SynthEyes can be very busy even though the progress bar is
not changing, and the progress bar can be at 100% with the job still not done —
though it will be once the current round of iterations completes.
During Solving
If you are solving a lengthier shot where trackers come and go, and where there
may be some tracking issues, you can monitor the quality of the solving from the
messages displayed.
As it solves, SynthEyes is continually adjusting its tentative solution to become
better and better (“iterating”). As it iterates, SynthEyes displays the field of view and
total error on the main (longest) shot. You can monitor this information to determine if
success is likely, or if you should stop the iterations and look for problems.
If the scene starts to solve with a very large error, especially for 360VR shots,
you may need to adjust the Begin/End frames, see Special 360VR Solving
Considerations.
SynthEyes will also display the range of frames it is adding to the solution as it
goes along. This is invaluable when you are working on longer shots: if you see the
error suddenly increase when a range of frames is added, you can stop the solve and
check the tracking in that range of frames, then resume.
You can monitor the field of view to see if it is comparable to what you think it
should be — either an eyeballed guess, or if you have some data from an on-set
supervisor. If it does not seem good to start, you can turn on Slow but sure and try
again.
Also, you can watch for a common situation where the field of view starts to
decrease more and more until it gets down to one or two degrees. This can happen if
there are some very distant trackers which should be labeled Far or if there are trackers
on moving features, such as a highlight, actor, or automobile.
If the error suddenly increases, this usually indicates that the solver has just
begun solving a new range of frames that is problematic.
Your processor utilization is another source of information. When the tracking
data is ambiguous, usually only on long shots, you will see the message “Warning: not a
crisp solution, using safer algorithm” appear in the solving window. When this happens,
the processor utilization on multi-core machines will drop, because the secondary
algorithm is necessarily single-threaded. If you haven’t already, you should check for
trackers that should be “far” or for moving trackers.
After Solving
Though having a solution might seem to be the end of the process, in fact, it’s
only the … middle. Here’s a quick preview of things to do after solving, which will be
discussed in more detail in further sections.
Check the overall errors
Check unsolved and converted-to-far trackers (which are selected after
solving, as long as the preference is set)
Look for spikes in tracker errors and the camera or object path
Examine the 3-D tracker positioning to ensure it corresponds to the
cinematic reality.
Add, modify, and delete trackers to improve the solution.
Add or modify the coordinate system alignment
Add and track additional moving objects in the shot
Insert 3-D primitives into the scene for checking or later use
Determine position or direction of lights
Convert computed tracker positions into meshes
Export to your animation or compositing package.
Once you have an initial camera solution, you can approximately solve additional
trackers as you track them, using Zero-Weighted Trackers (ZWTs).
RMS Errors
The solver control panel displays the root-mean-square (RMS) error for the
selected camera or object, which is how many pixels, on average, each tracker is from
where it should be in the image. [In more detail, the RMS average is computed by
taking a bunch of error values, squaring them, dividing by the number of error values to
get the average of their squares, then taking the square root of that average. It’s the
usual way for measuring how big errors are, when the error can be both positive and
negative. A regular average might come out to zero even if there was a lot of error!]
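As a sketch, the RMS calculation just described looks like this (illustrative; not SynthEyes code):

```python
import math

def rms(errors):
    """Root-mean-square of per-frame tracker errors, in pixels:
    square, average, then take the square root."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Errors that would average to zero, yet clearly aren't "no error":
errors = [0.5, -0.5, 0.5, -0.5]
print(sum(errors) / len(errors))  # plain average: 0.0
print(rms(errors))                # RMS: 0.5
```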
The RMS error should be under 1 pixel, preferably under 0.5 for well-tracked
features. Note that during solving, the popup will show an RMS error that can be larger,
because it contains contributions from any constraints that have errors. Also, the error
during solving is for ALL of the cameras and objects combined; it is converted from
internal format to human-readable pixel error using the width of the longest shot being
solved for. The field of view of that shot is also displayed during solving.
There is an RMS error number for each tracker displayed on the coordinate
system and tracker panels. The tracker panel also displays the per-frame error, which is
the number being averaged.
The viability of processing long shots depends on the quality of the tracking data
being presented. If the tracking data is good, SynthEyes will proceed through the shot at
a rate determined by the number of frames and trackers, so it's mainly a question of
how long it takes. (You'll want a fast processor and lots of RAM: 32 GB is a starting
point at present.)
You can accelerate the process by having SynthEyes initially solve only every
second or third frame using the Decimate control on the Advanced Solving panel. You
could choose a larger value, but it is a trade-off based on how long trackers last—a
tracker must be valid on a reasonable number of decimated frames to be able to
contribute, so if the decimation value is too high, trackers will only be valid on one or
two frames and the solve will fail. You can use larger values if the shot is very slow
moving and trackers last a long time.
After you get an initial solve using decimation, you can set the decimation back to
one (ie none, using every frame), set the solving mode to Refine, and have SynthEyes
solve for the intervening frames as well.
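A quick back-of-envelope check (the lifetime figure here is made up for illustration) shows why tracker lifetime limits how high the Decimate value can go:

```python
tracker_lifetime = 40   # frames a typical tracker stays valid (assumed)

for decimate in (1, 2, 3, 10, 25):
    # approximate decimated frames on which this tracker can contribute
    usable = tracker_lifetime // decimate
    print(decimate, usable)
# With Decimate at 25, a 40-frame tracker lands on only one decimated
# frame -- too few to contribute, so the solve would fail.
```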
If the tracking data contains notably bad trackers, ie trackers on actors, moving
vehicles, spurious lens flares, etc, the solve can be thrown completely off, causing the
RMS hpix error to start increasing to very large values, and the solver to encounter
numerical problems (like dividing by zero, but different) that make progress impossible.
Long shots are virtually always done with automatic tracking, not supervised, and the
automatic tracker can put trackers on problematic features.
For that reason, automatic tracks always need to be cleaned up manually, but
that can be difficult on very long shots, where even playing through the shot at full
playback speed requires significant time, let alone examining it slowly enough to visually
detect problematic trackers.
The Find Erratic Trackers tool can help with this, but its accuracy is subject to
the quality of the imagery. Unfortunately 360VR footage containing rolling shutter and
calibration problems is especially unsuited to automatically detecting problematic
trackers.
As a result of all this, long solves always run the risk of developing problems
during the solve. You don't want to have to repeat a multi-hour solve from scratch
several times. The challenge is to identify problems immediately, so you, the tracking
artist, can identify and correct them, and then be able to resume the solve.
SynthEyes gives you the tools to do that via the Advanced Solver Settings panel
('more' next to Go! on the solver panel), which we'll discuss here. You also have the
ability to adjust some solver parameters to optimize the rate at which your solve can be
processed, and minimize the chances the solve will encounter serious problems. We'll
start with that first.
On long shots, we recommend setting If looks far to Make ZWT, because as the
camera moves forward, trackers that initially looked far away may become very nearby.
They must not transition to far, and they can be trouble while they are initially far away,
so it's best to disable them (assuming you have many trackers per frame, as you
should). The Make ZWT option not only disables them, so they don't cause trouble now,
but makes them zero-weighted trackers (ZWTs), preventing them from later reviving as
zombie trackers and destroying subsequent Refine solves. Though some trackers that
are initially disabled can actually later be successfully revived, it is better to make them
ZWTs and examine and correct or delete them on a case-by-case basis, with no chance
of their becoming destructive zombies.
As the solver is working through the shot, it is alternating between adding more
frames to the beginning or end of the set of solved frames (ie camera or moving object
position or orientation), and adding more trackers to the solve. After adding trackers or
frames, it then iterates to find the best solution, before adding more frames or trackers.
(Simplified for this discussion.)
When frames or trackers are added to the solve, they arrive with initial estimated
positions. By its nature, an estimate will ultimately be shown to be wrong, ie contain
some error. Therefore when you add frames or trackers to the solve, you are adding a
lump of error. If that lump is too large compared to the solve, the solve will fall apart.
To limit the amount of error added to the solve at a time, you can reduce the Max
frames/pass and Max points/pass setting. Reducing either or both settings will reduce
the error added, typically reducing the iteration count (more speed), but increasing the
number of passes required to add all the frames or trackers (less speed). So it's a trade-
off. For noisy 360VR shots, reducing the Max frames/pass value is most beneficial.
When trackers or frames are added, the solver then iterates to find the best
solution, ie that with the least error. Each iteration takes the current solution and makes
it a little better, until there's nowhere better to go (it "converges"). Most of the time that
happens fairly quickly, a few tens of iterations. Sometimes that doesn't happen, and the
solver keeps making tiny corrections. There's a limit (Max iterations/pass) on how many
iterations will be performed before we decide that's dumb.
When the iteration count limit is hit, we need to decide what to do. If the Stop on
iterations checkbox is checked, the solver just stops so you can look at what's
happening immediately. Alternatively, we can just plow on, adding more trackers or
frames, and hope that what we have is good enough, and that the additional data will
help decide what's happening. Plowing on usually works.
The Max iterations/pass limit gives you control over when this happens. On big
long solves, it can be worthwhile to lower the limit down to 80, say, so that we just move
on to the next part of the solve without doing a lot more slow, senseless, iterations.
When a bad tracker or two does appear, typically what you'll see is that the
overall error once it has converged suddenly increases. (Errors are much higher while
iterating). You might be running along with an error around one pixel, then some frames
or trackers are added, and the best error starts going up into the five or ten pixel range
or higher. (It's not necessarily immediate.)
You can cause the solve to stop when this happens by setting the Stop if error
exceeds value. The default value, 15, is intentionally high; values down near 3 might be
more appropriate. Once stopped, you can repair and restart (more discussion below).
Bad trackers can also result in numerical issues in the solver ("Warning: solution
not crisp"). As with the iteration count limit, you can decide what you want to do when
this happens, either Stop, press ahead as if it converged (Advance and Hope), or
Proceed Slowly, which uses a different algorithm that skirts the numerical issues but
is inherently and unavoidably much slower.
If you're trying to run a long solve in the background overnight, say, you might try
the Advance and Hope approach; otherwise Stop is recommended. The Proceed Slowly
option is probably reasonable only for smaller solves, maybe a few thousand frames
and maybe a thousand trackers. It is used for the first ten seconds of every solve
though!
Once a long solve has stopped without completing, due to an iteration limit, hpix
error limit, or trouble mode setting, you then need to repair the situation and restart the
solve.
To identify the problem area, begin by looking at the solver output. Each pass,
you'll see the range of frames that have been recently added to the solve (possibly at
both beginning and end). You'll want to look for bad trackers in the general area of the
most recent frame range (or the range where the error started to degrade substantially).
You'll typically want to adjust the playback range (little green and red triangular
markers in the time bar) to the general problem area, and let the shot prefetch fill in all
the frames in that area, so you can scrub and play through it efficiently and hunt down
the bad tracker(s). The time bar's right-click menu, especially Set playback range to
visible, is helpful in doing this.
The most-recently-added trackers are automatically selected when the solve
stops before completing. (There's a preference to disable that.) There are many
features to help you find bad trackers (most in the camera view):
the camera view's right-click/View/Show tracker radar,
turning off image display via right-click/View/Show Image,
the View/Show 3-D only if visible,
Tip: If the solver's Constrain checkbox is turned on for long shots, avoid
Distance constraints between trackers from much different parts of the shot,
as that may degrade performance. Use *3 constraints or Place mode (many
locked points) instead.
To give you some idea of what we're talking about, there are three variables for
each tracker location, and six variables for each camera position. For a 300 frame
autotracked shot with 120 trackers, that's 2160 variables. (There are more details for the
lens, object tracking, solver locking, etc., this is just for a quick understanding.) Then
there are 2 equations for every frame that a tracker is valid. If every tracker is valid on
2/3 of the frames, that's 48000 equations, about 100 million coefficients. That's a lot of
equations and variables for a very average solve. SynthEyes uses advanced
techniques to solve it very rapidly.
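The arithmetic above can be reproduced directly:

```python
frames, trackers = 300, 120

variables = 6 * frames + 3 * trackers    # 6 per camera position, 3 per tracker
valid_frames = frames * 2 // 3           # each tracker valid on 2/3 of frames
equations = 2 * trackers * valid_frames  # 2 equations per valid tracker-frame
coefficients = equations * variables

print(variables)     # 2160
print(equations)     # 48000
print(coefficients)  # 103,680,000 -- "about 100 million"
```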
While users typically believe that computers always produce exactly the same
100% accurate result, reality is more complex. Repeatedly re-solving or refining a
scene will produce slightly different results! This is not a bug, and is actually a useful
feature.
Here are some simplified explanations of the factors, which affect many kinds of
advanced software, not just the SynthEyes solver. While in practice this
isn't usually something you need to worry about (see the last "When it helps" section)
we include it for the edification of our more discerning customers.
Macroscopic Differences
One reason that re-solving or refining scenes can sometimes produce apparently
large changes in error is that some trackers may be automatically changed to Far or
dropped from solving completely, resulting in an "apples to oranges" comparison. You
can use the "Never convert to Far" checkbox in high-noise situations when trackers are
certainly nearby, but can't force all trackers to be solved, because the ones that aren't
are typically behind the camera, at least during the initial portion of the solving. Those
trackers may get solved by subsequent Refine solves, once the camera path is better
known.
Computer arithmetic has limited precision
While you trust that your accounting software always adds everything up to the
penny perfectly accurately and repeatably, that's really because you don't have fantastic
numbers of pennies, or need to worry about tiny fractions of a penny.
In reality, computers only maintain about 17 digits of accuracy, so 1 + 1.0E-18 is
still 1 exactly, not 1.000000000000000001: information has been lost. For a single
addition of a tiny number, this doesn't sound like a problem.
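You can confirm this in any language that uses double-precision floats; in Python, for instance:

```python
# The tiny addend is lost entirely: 17 digits of precision isn't enough
# to represent 1 plus one quintillionth.
print(1.0 + 1e-18 == 1.0)   # True

# Even "simple" decimal fractions aren't exact in binary floating point:
print(0.1 + 0.2 == 0.3)     # False
```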
However, if you are adding many numbers of widely differing values, some large,
some small, this becomes a problem. If you add them up first to last like you'd enter
them into a calculator, then the final answer depends very much on the order of the
numbers: you'll get a different result for different orderings.
SynthEyes uses various techniques to limit the error, but it exists. The most
accurate methods would take much longer.
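A minimal demonstration of that order dependence:

```python
big, tiny = 1e16, 1.0

# Left to right, each tiny value is lost against the big one individually:
left_to_right = (big + tiny + tiny) - big

# Grouping the small values first lets them survive the addition:
small_first = big + (tiny + tiny) - big

print(left_to_right)  # 0.0
print(small_first)    # 2.0
```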
Tasks are distributed across multiple processors
Now think about adding the numbers up, but instead they are added up by a
committee, each member of which is grabbing various pages of the numbers to total.
Sometimes some of the committee members (your processors) need to go answer the
mail, change the radio station, or whatever. So every time, the numbers get added up
different ways, depending on how many committee members are available and what
else they are doing. As a result, the answer can be slightly different.
There are many local optima
The solver is looking for the best explanation for your tracking data. You'd like for
it to find the single best explanation (a global optimum) but that isn't mathematically
possible.
It's like finding the top of a mountain. At the top, every surrounding point is
downhill from the top. But the mountain has many boulders on top of it. Which one is the
highest? And there are pebbles on top of the boulders, and grains of sand on top of the
pebbles, and.... You can survey the top of any boulder/pebble/sand grain, but there are
infinitely many and it takes a while to measure each one, so you can't do them all.
Furthermore, those height measurements are what are subject to the accuracy
limitations described above: they are the sum of many error values. So the height
measurements have some error to them. And the pebbles on the top of a boulder are
almost at the same height to start with, and the grains of sand on top of the pebbles are
even closer together in height. You're comparing their heights with tiny differences, so
the accuracy matters.
Since you probably want a result well before the end of the universe, SynthEyes
(or anything else) can't find the true global optimum (top of the mountain). It finds a
local optimum, the top of some boulder/pebble/sand grain—hopefully one of the highest.
There are many arbitrary decisions
What happens if there's a tie? This especially happens in the auto-tracker. There
are multiple choices, which are all the same (without perfect foresight). You have to pick
something. Now distribute the work across your committee of processors. The decisions
get made differently each time.
Adding randomness can improve performance
When you're climbing up the side of the mountain looking for the top, you'd like to
go right up the side. You want to go up a ridge, and not get distracted by each little
ripple that's a bit higher on the way up. Some randomness can help with that,
preventing you from getting overcommitted to a particular route that looks good at the
moment, but may be just a comparatively poor local optimum.
SynthEyes injects some randomness for that reason. It substantially reduces the
time to get to the top of the mountain. The Advanced Solver Settings give you control
over that (the fuzz settings), as well as allowing you to limit the number of trackers or
frames that are added to the growing solution at any time. Limiting the number of new
frames or trackers prevents the solver from overcommitting to a potentially bad solution
when the tracking data isn't very self-consistent (for example due to rolling shutter or
lens distortion).
When it helps
By now you're probably wondering whether SynthEyes can give you any kind of
usable result at all. Yes! Though there are basically an infinite number of possible local
optima, and we're going to give you one of them pretty much at random, the fact is that
those results are all usually very very close to one another. And unless you plan to go
snooping around looking at that 10th or 12th or something digit of RMS error, you're not
going to find any real difference between them.
Here's the caveat that makes the randomness useful: when your tracking data is
inconsistent—it has errors—then the different local optima separate out and have
larger differences. Instead of a single mountain peak with some boulders on top, you
get a mountain range with a couple of different mountains, some higher or lower than
others.
Depending on what happens, you might wind up on one of the lower mountains.
In that case, re-running the solve can allow the randomness to move you onto one of
the other peaks. You can use the advanced solver settings to increase the randomness
for poor scenes, for example by increasing the Refine fuzz value.
To summarize, randomness is usually very small, but when it is not, it is helpful!
When it hurts
Added randomness can be harmful when the shot contains very little perspective
or tripod field of view information. In those cases, for marginal shots, set the relevant
First path fuzz or Tripod fuzz value to zero!
You should also monitor the overall ZWT error, the canary output. Ideally, the
canary error decreases too, because the ZWTs can be predicted better as well.
If you see the canary error increase, that indicates overfitting. The model is
unjustifiably complex for the available tracking data you are giving it, and the solve is
overfitting that data. If that happens, you should go back to a simpler lens model, or add
substantially more well-distributed trackers, so that it is possible to accurately determine
and distinguish among the lens model parameters.
Solving Issues
If you encounter the message "Can't find suitable initial frames", it means that
there is limited perspective in the shot, or that the Constrain button is on, but the
constrained trackers are not simultaneously valid. Turn on the checkboxes next to Begin
and End frames on the Solver panel, and select two frames with many trackers in
common, where the camera or object rotates around 30 degrees between the two
frames. You will see the number of trackers in common between the two frames; you
want this to be as high as possible. Make sure the two frames have a large perspective
change as well: a large number of trackers will do no good if they do not also exhibit a
perspective change. Also, it will be a good idea to turn on the "Slow but sure" checkbox.
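The advice above amounts to a two-factor tradeoff: prefer frame pairs with many shared trackers and a rotation near 30 degrees. As a toy sketch of that tradeoff (the `score_pair` helper, the candidate data, and the weighting are all invented for illustration; SynthEyes' actual initial-frame selection is internal and more sophisticated):

```python
# A toy scoring of candidate Begin/End frame pairs, following the
# guidance above: prefer many shared trackers AND a substantial
# perspective change (rotation near ~30 degrees).
def score_pair(shared_trackers, rotation_degrees, target_rotation=30.0):
    # Penalize pairs whose rotation is far from the target.
    rotation_quality = max(0.0, 1.0 - abs(rotation_degrees - target_rotation) / target_rotation)
    return shared_trackers * rotation_quality

candidates = {
    (10, 40):  (85, 5.0),    # many shared trackers, almost no perspective change
    (10, 120): (60, 28.0),   # good rotation, decent overlap
    (10, 200): (12, 45.0),   # big rotation, too few shared trackers
}
best = max(candidates, key=lambda pair: score_pair(*candidates[pair]))
print(best)  # the pair with good rotation AND good overlap wins
```

The point of the sketch: a huge tracker count does no good without perspective change, and vice versa; the product of the two factors rewards pairs that are good on both counts.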
You may encounter "size constraint hasn't been set up" under various
circumstances. If the solving process stops immediately, probably you have no trackers
set up for the camera or object cited. Note that if you are doing a moving object shot,
you need to set the camera’s solving mode to Disabled if you are not tracking it also, or
you will get this message.
When you are tracking both a moving camera and a moving object, you need to
have a size constraint for the camera (one way or another), and a size constraint for the
object (one way or another). So you need TWO size constraints. It isn't immediately
obvious to many people why TWO size constraints are needed. This is the related to a
well-known optical illusion, relied on in shooting movies such as "Honey, I Shrunk the
Kids". Basically, you can't tell the difference between a little thing moving around a little,
up close, and a big thing moving around a lot, farther away. You need the two size
constraints to set the relative proportions of the foreground (object) and background
(camera).
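The underlying ambiguity is visible directly in the pinhole projection equation: scaling every 3-D coordinate, camera positions included, by the same factor leaves every 2-D measurement unchanged. A minimal sketch with made-up numbers (the focal length and point values are purely illustrative):

```python
# Pinhole projection: a 3-D point (X, Y, Z) in camera coordinates
# lands at image position (f*X/Z, f*Y/Z) for focal length f.
def project(f, point):
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

f = 35.0                          # focal length (arbitrary units)
small_close = (1.0, 0.5, 10.0)    # a little thing, up close
s = 4.0                           # uniform scale factor
big_far = tuple(s * c for c in small_close)  # a big thing, farther away

# Both scenes produce the identical image measurement, so 2-D tracking
# alone cannot choose between them -- hence the explicit size constraints.
assert project(f, small_close) == project(f, big_far)
```

With a camera and an object each free to scale independently, there are two such ambiguities, which is why two size constraints are needed.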
The related message “Had to add a size constraint, none provided” is
informational, and does not indicate a problem.
If you have SynthEyes scenes with multiple cameras linked to one another, you
should keep the solver panel’s Constrain button turned on to maintain proper common
alignment.
See also the Troubleshooting section.
Quad View
If you are not already in Quad view, switch to it now on the toolbar. You will see
the camera/object path and 3-D tracker locations in each view. You can zoom and pan
around using the middle mouse button and scroll wheel. You can scrub or play the shot
back in real-time (in sections, if there is insufficient RAM). See the View menu for
playback rate settings.
The current frame is identified with a small dotted vertical bar. If you see a
hotspot, you can click or drag in the error curve mini-view to go (approximately) to that
frame. Use the graph editor to locate particular issues (keep reading).
The error curve mini-view shows the playback range (ie the portion between the
two small green and red triangles in the main timebar). You can adjust the playback
range to zoom the error curve display into a specific portion. Right-double-clicking the
error curve mini-view will reset the playback range to the entire shot.
In large shots (thousands of frames and trackers), it may take some time to
generate the error curve mini-view. You can turn on the "Error View only for sole
tracker" preference (again in the User Interface section), and then the display will be
shown only when a single tracker is selected—with the exception that if you select all of
the trackers, then the average information will be shown (ie equivalent to having no
trackers selected when this checkbox is off).
the Create tool (magic wand). Select one of the built-in mesh types, such as Box
or Pyramid. Click and drag in a viewport to drag out an object. Often two drags are
required: the first sets the position and breadth, a second sets the height or
overall scale. A good coordinate-system setup will make it easy to place objects. To
adjust object size after creating it, switch to the scaling tool . Dragging in the
viewport, or using the bottommost spinner, will adjust overall object size. Or, adjust one
of the three spinners for each coordinate axis size.
When you are tracking an object and wish to attach a test object onto it (horns
onto a head, say), switch the coordinate system button on the 3-D Panel from World to
Object.
Note: the camera-view overlay is quick and dirty, not anti-aliased like the final
render in your animation package will be (it has “jaggies”), so the overlay appears to
have more jitter than it will then. You can sometimes get a better idea by zooming in on
the shot and overlay as it plays back (use Pan-To-Follow).
Warning: temporary inserts like this may exhibit large errors if rolling-shutter-
compensation is turned on, and there is substantial movement. That is
because the insert is not "rolled" to match the source image.
Shortly, we’ll show how to use the Perspective window to navigate around in 3-D,
and even render an antialiased preview movie.
time when the camera is not moving. The Clean up trackers dialog can do this
automatically.
Note: the too-far-away test can cause trouble if you have a small world size
setting but are using measured GPS coordinates. You should offset the scene towards
the origin using the Shift Constraints script.
You should also look for trackers that are behind the camera, which can occur
on points that should be labeled Far, or when the tracking data is incorrect or
insufficient for a meaningful answer.
After repairing, deleting, or changing too-far-away or behind-camera trackers,
you should use the Refine mode on the Solver panel to update the solution, or solve it
from scratch. Eliminating such trackers will frequently provide major improvements in
scene geometry.
mode to locate a rather large spike in the blue error curve of one of the trackers of
a shot.
This glitch was easy to pick out—so large the U and V velocities had to be
moved out of the way to keep them clearly visible. The deglitch tool easily fixes it.
You can look at the overall error for a tracker from the Coordinate System panel
. This is easiest after setting the main menu’s View/Sort by Error, unselecting all
the trackers (control/command-D), then clicking the down arrow on your keyboard to
sequence through the trackers from worst towards best. In addition to the curves in the
graph editor, you can see the numeric error at the bottom of the tracker panel :
both the total error, and the error on the current frame (also visible in the graph editor's
Number Zone). You can watch the current error update as you move the tracker, or set
it to zero with the Exact button.
For comparison, following is a tracker graph that has a fairly large error; it tracks
a very low contrast feature with a faint moving highlight and changing geometry during
its lifespan. It never has a very large peak error or velocity, but maintains a high error
level during much of its lifespan, with some clearly visible trends indicating the
systematic errors it represents.
The vertical scale is the same in these last three graphs. (Note that in the 3rd
one, the current time is to the left, before frame 160 or so, hence the blue arrow.)
You can sort the trackers within the graph editor’s Active Trackers node by error.
The SimulTrack view can also be helpful in quickly looking at many or all of the
trackers, especially in Sort By Error mode. The error curve for the tracker is shown in
each tile (as long as right-click/Show Error in the SimulTrack view is on).
Do not blindly correct apparent tracking errors. A spike suggesting a tracking
error might actually be due to a larger error on a different tracker that has grossly
thrown off the camera position, so look around.
There’s a spike around frame 215-220. To find it, expose the Active Trackers,
select them all (control/command-A), and use Isolate mode around that range of
frames. The result:
We’ve found the tracker that causes the spike, and can use the deglitch tool
, or switch back to the tracker control panel and camera viewport, unlock the
tracker, correct it, then re-lock it.
Tip: In the capture above, the selected tracker is not visible in the hierarchy
view. You can see where it is in the scroll bar, though—it is located at the
white spot inside the hierarchy view’s scroll bar. Clicking at that spot on the
scroll bar will pan the hierarchy view to show that selected tracker.
If that is the last glitch to be fixed, switch to the Solve control panel, and re-solve
the scene using Refine mode.
You can also use the Finalize tool on the tracker control panel to smooth one
or more trackers, though significant smoothing can cause sliding. If your trackers are
very noisy, check whether film grain or compression artifacts are at fault (these can be
addressed with image-preprocessor blur), verify that the interlace setting is correct, or
see whether you should fine-tune the trackers.
Alternatively, you can fix glitches in the object path by using the deglitch tool.
Warning: If you fix the camera path, instead of the tracker data, then later re-
solve the scene, corrections made to the camera path will be lost, and have to
be repeated. It is always better to fix the cause of a problem, not the result.
Path Filtering
If you have worked on the trackers to reduce jitter, but still need a smoother path
(after checking in your animation package), you can filter the computed camera or
object path using the Path Filtering dialog, launched from the Window menu or the
Solver panel.
Warning: filtering the path increases the real error, and causes sliding.
Remember that your objective is to produce a clean insert in the image, not
produce an artificially smooth camera trajectory that works poorly.
You can only use the dialog after successfully solving the scene. It will not run
before the scene is solved, because it operates by analyzing both 2-D and 3-D
information. You can, however, open it before tracking and solving to check and set the
cleanup parameters, if you have automatic cleanup selected for AUTO.
If you run Clean Up Trackers on a grossly incorrect solution, error data will be
grossly wrong and tracker cleanup may delete trackers that are good, and keep trackers
that are wrong!
Warning: the tracker cleanup dialog is not aware of the effects introduced by
rolling-shutter-compensation. It will report trackers at the top and bottom of
the image as having large 3D errors, even if they would be a good match if
rolling shutter compensation was taken into account. We will seek to address
that in the future.
This dialog has a generally systematic organization, with a few exceptions. Each
category of trackers has a horizontal row of controls, and the number of trackers in that
category is in parentheses after the category name. A tracker can be a member of
several categories.
Down the left edge, a column of checkboxes controls whether or not the category
of trackers will be fixed. Mostly, trackers are fixed by deleting them, but after you have
identified them, you can also adjust them manually if that is appropriate.
When clicked on, the Select buttons in the middle select that category of trackers
in the viewport. They flash as they are selected, making them easier to find. At the top
of the panel, notice that the Clean-up dialog can work on all the trackers, or only the
selected ones. It records the selected trackers when you open the panel, and they are not
affected by selecting trackers with these buttons.
At right is a column of spinners that determine the thresholds for whether a
tracker is considered to be far-ish, short-lived, etc. The initial values of these thresholds
are good starting points but not the last word.
Part of the fun of the clean-up trackers dialog is to select a category of trackers,
change the threshold up and down, and see how many trackers are affected and where
they are. It’s a quick way to learn more about your shot.
The following sections provide some more information about how to interpret and
use the panel. For full details, see the tracker clean-up reference.
Bad Frames
The bad-frames category locates individual frames on each tracker where the 3-
D error is over the threshold, if the hpix radio button is selected, or the frames within the
top percentage of errors (2%, for example), if the % radio button is selected.
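The two selection rules can be sketched as follows. This is a simplified illustration only: the helper names, error values, and exact tie-breaking are invented, and SynthEyes' actual computation may differ.

```python
# Two ways to flag "bad frames" from a tracker's per-frame 3-D error:
# an absolute threshold (like the hpix setting), or a top-percentage
# rule (like the % setting). All values here are purely illustrative.
def bad_frames_by_threshold(errors, threshold):
    return [f for f, e in enumerate(errors) if e > threshold]

def bad_frames_by_percent(errors, percent):
    worst_first = sorted(range(len(errors)), key=lambda f: -errors[f])
    count = max(1, round(len(errors) * percent / 100.0))
    return sorted(worst_first[:count])

errors = [0.2, 0.3, 5.0, 0.1, 0.4, 7.5, 0.2, 0.3, 0.2, 0.1]
print(bad_frames_by_threshold(errors, 1.0))  # frames over 1.0 hpix
print(bad_frames_by_percent(errors, 20))     # the worst 20% of frames
```

Note the practical difference: the absolute rule can flag zero frames (or all of them), while the percentage rule always flags a fixed fraction, even on a clean tracker.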
If you click the Show button, SynthEyes clears out the tracking results for each
bad frame. The intent is that you can see the overall pattern of bad frames, by having
the graph editor open in tracks mode , with squish-no keys active. Each
bad frame will be marked in red.
If you click the bad-frame’s Select button, the trackers with bad frames are
selected in the viewport. This makes the tracks thicker in the squish view, which is also
helpful.
If you turn on the Delete checkbox for Bad Frames, there are two choices for how
to handle that: Disable and Clear. The clear option does what happens during Show: it
clears out the tracking results so it looks like the frame was tracked, but the feature was
not found, resulting in the red section in the squish view. The disable option re-keys the
The percentage threshold appears to the right of the high-error trackers line as
usual. The hpix error threshold appears underneath it, to the right of the
Unsolved/Behind category, an otherwise empty space because that category requires
no thresholds.
As an example of the first criterion, consider a tracker that is visible for 20
frames. However, 8 of those frames are “bad frames” as defined for that category. The
percentage of bad frames is 8 out of 20, or 40%, and at the standard threshold of 30%
the tracker would be considered high-error and eligible for deletion. Typically such
trackers have switched to an adjacent feature for a substantial portion of their
lifespan.
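The arithmetic of that example, as a quick check (the 30% threshold is the standard value mentioned above; the variable names are just for illustration):

```python
# A tracker visible for 20 frames, 8 of which are "bad frames".
visible_frames = 20
bad = 8
bad_fraction = bad / visible_frames   # 8/20 = 0.40, i.e. 40%
threshold = 0.30                      # standard 30% threshold
is_high_error = bad_fraction > threshold
print(f"{bad_fraction:.0%} bad -> high-error: {is_high_error}")
```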
Unlocking the User Interface
The clean-up trackers dialog is modal, meaning you cannot adjust any other
controls while the dialog is displayed. However, it is often helpful to adjust the user
interface with the dialog open, for example to configure the graph editor or to locate a
tracker in the viewports.
The clean-up dialog does offer a frame spinner along the bottom row, which
allows you to rapidly scrub through the shot looking for particular trackers.
The dialog also offers the Unlock UI button, which temporarily makes the dialog
modeless, permitting you to adjust other user-interface controls, bring up new panels,
etc.
The keyboard accelerators do not work when Unlock UI is turned on. You need
to use the main menu controls instead.
The “selected trackers” list processed by Clean Up Trackers is reloaded each
time you turn off Unlock UI—if you are using the Selected Trackers option, but need to
change which trackers those are, you can unlock the user interface and change them.
But, you must have turned off all the Select buttons first, or they will affect what
happens.
Observation: many new users who say they are having trouble setting up a
coordinate system instead do not have a correct solve for the following
reason. Frequently, first "test shots" are handheld, with the camera pointing in
different directions but no real camera translation. They are nodal tripod
shots, with no 3D information present, so any attempted 3D solve will be
incorrect and impossible to set up a coordinate system on (use Tripod Mode
to address them). Other frequent new-user issues include severe lens
distortion, and trackers on moving objects such as actors or cars. Be sure to
look at the 3D views to establish that the solve is qualitatively correct before
attempting to set up a coordinate system!
what you wanted at all, but hey! That’s what you got just throwing your model into the
measuring machine all upside down.
You open up the door, pull out your model, flip it over, put it back in, and close
the door. Looking at the machine a little more carefully, you see a green button labeled
“Listen up” and push it. Inside, a hundred little feet march out a small door, crawl under
the model, and lift it up from the bottom of the machine.
Since it is still pretty low, you shout “A little higher, please.” The feet cringe a
little—maybe the shouting wasn’t necessary—but the little feet lift your model a bit
higher. That’s a good start, but now “More to the right. Even some more.” You’re making
progress, it looks like the model might wind up in a better place now. You try “Spin
around X” and sure enough the feet are pretty clever. After about ten minutes of this,
though the model is starting to have its ground plane parallel to the bottom of the
coordinate measuring machine, you’ve decided that the machine is really a much better
listener than you are a talker, and you have learned why the red button is labeled “Good
enough!” Giving up, you push it, and you quickly have the model in your computer, just
like you had positioned it in the machine.
Hurrah! You’ve accomplished something, albeit tediously. This was an example
of Manual Alignment: it is usually too slow and not too accurate, though it is perfectly
feasible.
Perhaps you haven’t given the little feet enough credit.
Vowing to do better, you try something trickier: “Feet, move Tracker37 to the
origin.” Sure enough, they are smarter than you thought.
As you savor this success, you notice the feet starting to twiddle their toes.
Apparently they are getting bored. This definitely seems to be the case, as they slowly
start to push and spin your model around in all kinds of different directions.
All is not lost, though. It seems they have not forgotten what you told them,
because Tracker37 is still at the origin the entire time, even as the rest of the model is
moving and spinning enough to make a fish sea-sick. Because they are all pushing and
pulling in different directions, the model is even pulsing bigger and smaller a bit like a
jellyfish.
Hoping to put a stop to this madness, you bark “Put Tracker19 on the X axis.”
This catches the feet off guard, but once they calm down, they sort it out and push and
pull Tracker19 onto the X axis.
The feet have done a good job, because they have managed to get Tracker19
into place without messing up Tracker37, which is still camped at the origin.
The feet still are not all on the same page yet, because the model is still getting
pushed and pulled. Tracker37 is still on the origin, and Tracker19 is on the X axis, but
the whole thing is pulsing bigger and smaller, with Tracker19 sliding along the axis.
This seems easy enough to fix: “Keep Tracker19 at X=20 on the X axis.” Sure
enough, the pulsing stops, though the feet look a bit unhappy about it. [You could say
“Make Tracker23 and Tracker24 15 units apart” with the same effect, but different overall
size.]
Before you can blink twice, the feet have found some other trouble to get into:
now your model is spinning around the X axis like a shish-kebab on a barbecue
rotisserie. You’ve got to tell these guys everything!
As Tracker5 spins around near horizontal, you nail it shut: “Keep Tracker5 on the
XY ground plane.” The feet let it spin around one more time, and grudgingly bring your
model into place. They have done everything you told them.
You push “Good enough” and this time it is really even better than good enough.
The coordinate-measuring arm zips around, and now the SynthEyes-generated scene is
sitting very accurately in your animation package, and it will be easy to work with.
Because the feet seemed to be a bit itchy, why not have some fun with them?
Tracker7 is also near the ground plane, near Tracker5, so why not “Put Tracker7 on the
XY ground plane.” Now you’ve already told them to put Tracker5 on the ground plane,
so what will they do? The little feet shuffle the model back and forth a few times, but
when they are done, the ground plane falls in between Tracker5 and Tracker7, which
seems to make sense.
That was too easy, so now you add “Put Tracker9 at the origin.” Tracker37 is
already supposed to be at the origin, and now Tracker9 is supposed to be there too?
The two trackers are on opposite sides of the model! Now the feet seem to be getting
very agitated. The feet run rapidly back and forth, bumping into each other. Eventually
they get tired, and slow down somewhere in the middle, though they still shuffle around
a bit.
As you watch, you see small tendrils of smoke starting to come out of the back of
your coordinate measuring machine, and quickly you hit the Power button.
Back to Reality
Though our story is far-fetched, it is quite a bit more accurate than you might
think. Though we’ll skip the hundred marching feet, you will be telling SynthEyes exactly
how to position the model within the coordinate system.
And importantly, if you don’t give SynthEyes enough information about how to
position the model, SynthEyes will take advantage of the lack of information: it will do
whatever it finds convenient for it, which rarely will be convenient for you. If you give
SynthEyes conflicting information, you will get an averaged answer—but if the
information is sufficiently conflicting, it might take a long time to provide a result, or even
throw up its hands and generate a result that does not satisfy any of the constraints very
well.
There are a variety of methods for setting up the coordinates, which we will
discuss in following sections:
Using the 3-point method
Using the automatic Place tool
Manual Alignment
Configuring trackers individually
Alignment Lines
Constrained camera path
Aligning to an existing mesh
Using Phases (not in Intro version)
The three-point method is the recommended approach, as it quickly produces the
most controlled and accurate results. The Place tool quickly guesses at something
usable automatically, but the setup it produces has no specific relationship to what you
want, nor is it accurate: without care, anything you add will slide, and it is your fault, not
ours! (That's true in general, actually.) Manual alignment is slow and generally not very
accurate, though it allows you to get what you ask for, for sure.
The alignment line approach is used for tripod-mode and even single-frame lock-
off shots. The constrained camera path methods (for experts!) are used when you have
prior knowledge of how the shot was obtained from on-set measurements.
You must decide what you want! If the shot has a floor and you have trackers
on the floor, you probably want those trackers to be on the floor in your chosen
coordinate system. Your choice will depend on what you are planning to do later in your
animation or compositing package. It is very important to realize: the coordinate system
is what YOU want to make your job easier. There is no correct answer, there is no
coordinate system that SynthEyes should be picking if only it was somehow
smarter… They are all the same. The coordinate measuring machine is happy to
measure your scene for you, no matter where you put it! You don’t need to set a
coordinate system up, if you don’t want to, and SynthEyes will plough ahead happily.
But picking one will usually make inserting effects later on easier. You can do it either
after tracking and before solving, or after solving.
After you set up a coordinate system and re-solve the scene, it is a good idea to
check that everything went OK using the Constrained Points View. Each constraint you
added will be listed, along with the error: the difference between what you are asking
for and what SynthEyes was able to give you. Normally the error values in the right-hand
column should be zero, or very small compared to the size of the scene. If there are
large errors, it indicates that the constraints are self-conflicting, ie that you are (often
indirectly) telling something to be in two different locations simultaneously.
Three-Point Method
Here’s the simplest and most widely applicable way to set up a coordinate
system. It is strongly recommended unless there is a compelling reason for an
alternative. SynthEyes has a special button to help make it easy. We’ll describe how to
use it, and what it is doing, so that you might understand it, and be able to modify its
settings as needed.
Switch to the Coordinate System control panel. Click the *3 button; it will now
read Or. Pick one tracker to be the coordinate system origin (ie at X=0, Y=0, Z=0).
Select it in the camera view, 3-D viewport, or perspective window. On the coordinate
system panel, it will automatically be changed from Unconstrained to Origin. Again, any
tracker can be made the origin, but some will make more sense and be more
convenient than others.
The *3 button will now read LR (for left/right). Pick a second tracker to fall along
the X axis, and select it. It will automatically be changed from Unconstrained to Lock
Point; after the solution it will have the X/Y/Z coordinates listed in the three spinners.
Decide how far you want it to be from the origin tracker, depending on how big you want
the final scene to be. Again, this size is arbitrary as far as SynthEyes is concerned. If
you have a measurement from the set, and want a physically-accurate scene, this might
be the place to use the measurement. One way or another, decide on the X axis
position. You can guess if you want, or you can use the default value, 20% of the world
size from the Solver panel. Enter the chosen X-axis coordinate into the X coordinate
field on the control panel.
The *3 button now reads Pl. Pick a third point that should be on the ground
plane. Again, it could be any other tracker―except one on the line between the origin
and the X-axis tracker. Select the tracker, and it will be changed from Unconstrained to
On XY Plane (if you are using a Z-Up coordinate system, or On XZ Plane for Y-up
coordinates). This completes the coordinate system setup, so the *3 button will turn off.
The sequence above places the second point along the X axis, running from left
to right in the scene. If you wish to use two trackers aligned stage front to stage back,
you can click the button from LR (left/right) to FB (front/back) before clicking the second
tracker. In this case, you will adjust the Y or Z coordinate value, depending on the
coordinate system setting.
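Geometrically, the three constrained trackers determine the new frame: the origin tracker fixes translation, the axis tracker fixes one direction (and, with its coordinate value, the scale), and the plane tracker fixes the remaining rotation. A Gram-Schmidt-style sketch of that construction, with invented point values (this illustrates the geometry only, not SynthEyes' solver code):

```python
import math

# Build a coordinate frame from three tracker positions: one at the
# origin, one along the +X axis, one on the XY ground plane (Z-up).
def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def norm(a):
    n = math.sqrt(dot(a, a))
    return [c / n for c in a]

def frame_from_points(origin_pt, x_pt, plane_pt):
    x_axis = norm(sub(x_pt, origin_pt))
    in_plane = sub(plane_pt, origin_pt)
    # Remove the X component, leaving the in-plane Y direction.
    y_axis = norm(sub(in_plane, [dot(in_plane, x_axis) * c for c in x_axis]))
    z_axis = cross(x_axis, y_axis)
    return x_axis, y_axis, z_axis

x, y, z = frame_from_points([1, 1, 1], [2, 1, 1], [1, 3, 1])
print(x, y, z)  # axes of the new coordinate system
```

This is also why the plane tracker must not lie on the line between the other two: the in-plane component would vanish and the rotation about the X axis would remain undetermined.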
You might wonder which trackers get selected to be constrained: Tracker37 or
Tracker39, etc. You should pick the trackers that create the coordinate system that you
want to see in the animation/compositing package, the coordinate system that makes
your later work easier.
To provide the most accurate alignment, you should select trackers spread out
across the scene, not lumped in a particular corner. You should also use trackers with
low error (from the Tracker or Coordinate System panels) that are comparatively long-
lived through the shot.
Depending on your desired coordinate system, you might select other axis and
plane settings. You can align to a back wall, for example. For the more complex setups,
you will adjust the settings manually, instead of using *3.
You can lock multiple trackers to the floor or a wall, say if there are tracking
marks on a green-screen wall. This is especially helpful in long traveling shots. If you
are tracking objects on the floor, track the point where the object meets the floor;
otherwise you’ll be tracking objects at different heights from the floor (more on this in a
little while). If you add additional constraints, you should be sure to verify that they are OK
(configured right, and match the real world) using the Constrained Points View.
accordingly. (It moves meshes too, unless you have turned off Whole affects meshes on
the right-click menu of the 3-D or perspective viewports.)
The Place tool can also run automatically after you click the full AUTO track and
solve button , if the Run auto-place checkbox is checked. (The first time
you click AUTO, you will be prompted for whether you want auto-place or not.) If the
Place tool has run automatically, but you want to use the three-tracker method, go
ahead!
The Place algorithm proceeds through five main stages: plane, rotation, origin,
scale, and results.
In the plane stage, it looks for a large collection of trackers that form a flat plane,
which might then be made into a ground plane, a back or side wall. Keep in mind that
SynthEyes examines only the position of the trackers, it does not understand what the
trackers are on. SynthEyes decides whether to use the plane as a ground, back, or side
plane based on its relative orientation to the camera.
In the rotation stage, it rotates the scene around in the plane, looking for a
rotation that causes many of the trackers to line up. For example, if SynthEyes has
found a ground plane, as it rotates the scene it may find that many trackers line up
when the axes are parallel to trackers on the back wall of the set.
In the origin stage (which itself has two steps), having decided on the plane and
rotation, SynthEyes decides where the origin should go, by looking along each
coordinate axis for a spot where many trackers are clustered together. That spot
becomes the zero coordinate for that axis.
In the scale stage, SynthEyes enforces any pre-existing distance constraints, or if
there are none, it adjusts the median scene size to a fixed nominal value.
IMPORTANT: If you have a measured distance from the real set that you
wish to use to set the size of the scene, while still doing an automatic
placement, you must set up the distance constraint before hitting Place!
In the results stage, each tracker that is part of the plane is set up with a Lock to
the coordinates calculated for it, and it is selected, so you can tell which trackers were
used for the plane.
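The origin-stage idea (find, along each axis, a coordinate where many trackers cluster, and make that the zero) can be illustrated with a simple clustering sketch. This is a conceptual illustration with invented data; the actual Place algorithm is more involved:

```python
# For one axis: find the value where the most trackers cluster,
# within a tolerance, and make that the zero coordinate.
def clustered_zero(values, tolerance=0.5):
    best_center, best_count = values[0], 0
    for v in values:
        count = sum(1 for u in values if abs(u - v) <= tolerance)
        if count > best_count:
            best_center, best_count = v, count
    return best_center

# Invented Y coordinates: several trackers sit near y = 4 (a back wall).
ys = [0.1, 1.7, 3.9, 4.0, 4.1, 4.2, 6.5]
zero_y = clustered_zero(ys)
shifted = [round(y - zero_y, 2) for y in ys]  # wall trackers end up near 0
print(zero_y, shifted)
```

Running the same idea independently on each axis puts the scene origin where tracker clusters intersect, e.g. the back right corner of a room.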
So to summarize by example, consider an ideal case where the shot is of the
inside of a room towards the back right corner, with trackers on the floor, a back wall,
and some on the right side wall. SynthEyes picks the trackers on the floor as the
ground plane (Z=0 with Z-up coordinates), spins the scene around so that the trackers
on the back wall become parallel to the X axis, finds that clump of trackers on the back
wall and sets them to be at Y=0, and finds the clump of trackers on the right wall and
puts them at X=0. So the overall scene origin is in the back right corner of the room,
which is a nice choice.
If there were more trackers on the back wall, SynthEyes might use the back wall
as the plane, spin the scene around the back wall until the floor was level, then place
the origin the same way, so we'd still get the same result.
Of course, the real world is always more complex. Often there are multiple
solutions that are just as good as each other, and the Place tool may give you a solution
that isn't what you are looking for.
The Place Tool offers a simple solution: click the Place button again! By design,
the tool randomizes the solutions, so each time you click the button, you'll get a different
solution. If there's really only one good answer, you'll see it almost all the time; if there
isn't a good choice you may see radically different solutions each time. Be sure to look
carefully in the Quad view to assess the coordinate system. If you like it, do NOT hit
Place again! Each Place is different; you can get back only by using Undo.
You can control which trackers will be considered as potential members of the
plane: if you hold down the SHIFT key while clicking the Place button, only the
currently-selected trackers will be considered. You can use this feature when you want
the plane to definitely come from one particular part of the scene where you want to do
an insert, for example. You can also use it after an initial Place, if you think the chosen
plane is a good choice, but contains a few inappropriate trackers. Shift-click those
trackers to de-select them, then shift-click the Place button to recalculate using only the
remaining trackers.
When the Place tool runs, it produces a whole set of coordinate system locks, on
each tracker used for the ground (or wall) plane. This system of constraints allows the
same coordinate system to be re-established, even if you subsequently solve the scene,
not only with a Refine-mode solve, but even starting over from an Automatic-mode solve
(note that that's different from the AUTO button on the Summary panel). If the tracking
data has changed around significantly, you'll still get the best approximation to the
previous coordinate system.
If you want to adjust the placement created by the Place tool, you can use the
Manual Alignment method described below. You might want to adjust the position of the
plane if it is actually a little above the true ground level. As you do that, the manual
adjustment will be adjusting not only the tracker and camera positions, but the lock
coordinates as well so everything stays consistent.
Once you've used the Place tool and decided on the coordinate system, stick
with it! You want to keep your existing coordinate system, or at least something very
close to it, once you have positioned objects in the scene, either in SynthEyes or
downstream in your animation package. You don't want to have to re-do any manual
adjustments or the object positioning if you correct a bump, for example (though you
might have to tweak them slightly).
To avoid interfering with your existing Place-generated coordinate system:
- do not hit the Place button again (click Undo if you do by mistake!), and
- do not turn on the Constrain checkbox on the Solve panel, as that will force the trackers to have their old coordinates exactly, distorting the scene.
If you update the trackers and re-solve, you'll see small errors in the Constrained
Points view, but they are OK. If they bother you and you want to set them back to zero,
use the Select By Type script with the "Constrained" box checked to re-select all the
ground-plane trackers, then on the Coordinate System panel, click the Set Seed button.
It will update the trackers to use their current solve coordinates as the Lock-to values,
so there will be no error.
Important: to avoid sliding when inserting an object, you must position the
object relative to the trackers near to it. If you see the object sliding with
respect to a spot in the imagery, create a tracker on that spot, and use that
tracker as a basis to reposition the object. Sliding is a user error!
If the Place tool has run automatically (or manually) and you decide you want to
set up a coordinate system using the three-tracker method instead (and we'd rather you
did), that's no problem. Click the Coords button on the Summary panel or *3 on the
Coordinate System panel, and it will first delete all the existing constraints created by
the Place tool. You can proceed to click on your three trackers to create your coordinate
system normally.
Manual Alignment
You can manually align the camera and solved tracker locations if you like. This
technique is the usual approach for tripod-mode shots. You can also use manual
alignment after using the Place tool on regular camera tracks, though it is more
accurate to set up a coordinate system using the three-point method above.
To align manually, switch to the 3-D control panel and the Quad or Quad
Perspective view. Turn on the Whole button on the 3-D control panel, which will select
the currently-active camera or moving object (typically Camera01) in the viewports and
also turn on the selection-lock button. There's no requirement for the selection-lock
to be on; it is usually convenient, so Whole turns it on automatically. Don't forget to turn
it off (when you turn Whole off, Lock will turn off as well).
Then, use the move, rotate, and scale tools to reposition the
camera/object using the viewports. As you do this, not only the camera/object will move,
but its entire trajectory and its trackers as well. With the selection lock turned on, you
can scale or rotate around any location in the 3D environment.
Retaining Manual Adjustments
Without user action, a manual alignment will not be retained if you re-solve the
shot, either from scratch (Automatic mode) or incrementally (via Refine). The solver is
affected only by constraints, and it has no way to tell what you were doing with your
manual alignment. If you manually align, add some 3D objects to the scene, then adjust
some trackers and re-solve, you will have to re-align manually, which is frequently
difficult to reproduce exactly.
Fortunately, you can avoid this problem, with a little foresight.
First, if you used the Place Tool, then manually adjust, your work will have
already been done for you: the Place Tool establishes a set of constraints that will
reposition the scene the same way after a re-solve.
If you did not use the Place Tool, then you must establish the constraints
manually. (This process works for normal, object, or tripod shots.)
- Select several reliable trackers that are distributed throughout the scene, perhaps six to ten,
- open the coordinate system panel,
- click Set Seed,
- change the Lock type drop-down to Lock Point,
- click the Seed button to turn it off if you like; it does not matter here.
If you re-solve the scene again, the scene will be re-aligned to match up these
trackers, and therefore the rest of the scene, as well as possible.
Use on Moving Objects
You can use the same technique for moving-object shots, discussed later. If you
click the World button to change to Object coordinates, you can re-align the object’s
coordinate system relative to the object’s trackers (much like you move the pivot point in
a 3-D model). As you do this, the object path will change correspondingly to maintain
the overall match.
It is also useful to stay in world coordinates and scale a moving object about its
camera, either in the viewports or using the Uniform scale spinner on the 3D Panel.
With this method, you can adjust an object's scale so that the object's position, and
even whole path, are located in the right position compared to the remainder of the 3-D
environment.
Impact on Meshes
By default, meshes will be carried along when you use Whole. However, you
can turn off Whole affects meshes, on the 3-D viewport or perspective-view right-click
menus, and meshes will not be moved. Then, you can import a 3-D model (such as a
new building), then reposition the camera and trackers relative to the building’s (fixed)
position.
Warning: Whole affects meshes used to be off by default (before build 1508);
now it is on by default, since that is the more commonly expected behavior.
Size Constraints
As well as the position and orientation of your scene, you need to control the size
of the reconstructed scene. There are four general ways to do this:
Reminder: SynthEyes uses unit-less numbers. When you enter 20 units, you
could call it 20 meters, 20 feet, 20 miles, etc. SynthEyes does not care; it is
up to you.
Suppose you have two non-ZWT trackers, A and B, and for example want them
20 units apart. You set up the distance constraint as follows.
1. Open the coordinate system control panel.
2. Select tracker A, ALT-click (Mac: Command-click) on tracker B to set it as the
target of A. In the coordinate system panel, you'll see tracker B's name now in
the Target Point button. Note: if you have set the preferences to "no middle
mouse button" then you must hold ALT/Command and right-click to link, since
ALT/Command-left would be interpreted as a pan.
3. Set the distance (Dist.) spinner to 20. (You can remove a distance constraint by
right-clicking the Target Point button.)
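Conceptually, a distance constraint just fixes the overall scale of the otherwise unit-less solve. The arithmetic can be sketched as follows (the coordinates are hypothetical; SynthEyes applies the scale during solving, not as a separate step like this):

```python
import math

def distance(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Solved (unit-less) positions of trackers A and B:
a = (0.0, 0.0, 0.0)
b = (3.0, 4.0, 0.0)            # solved distance is 5.0 unit-less units

scale = 20.0 / distance(a, b)  # constraint: A and B are 20 units apart
b_scaled = tuple(scale * c for c in b)
# The whole scene would be scaled by this factor, making the
# A-to-B distance exactly 20 units.
```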
If you set up a distance constraint and have also used (or will use) the *3 tool, use
different trackers for the distance constraint than for the *3 setup. Then, select the second
*3 point, which is locked to (20,0,0), and change its mode from Lock Point to On X Axis (On
Y Axis for front/back setups). Otherwise, you will have set up two size constraints
simultaneously, and unless both agree, you will create a conflict.
Note that your size constraint does not do anything immediately: it is an
instruction to the solver, and will have no effect until you solve or re-solve (i.e., in Refine
mode) the scene.
You can set up coordinate systems with *3 and use those points for distance
constraints, but you'll have to understand how to set them up directly, as described in
the next section.
tell them, but are happy to wreak havoc in any axis you do not give them instructions
for.
As examples of other effects you can achieve, you can use the Target Point
capability to constrain two points to be parallel to a coordinate axis, in the same plane
(parallel to XY, YZ, or XZ), or to be the same. For example, you can set up two points to
be parallel to the X axis, two other points to be parallel to the floor,
and a fifth point to be the origin.
Suppose you have three trackers that you want to define the back wall (Z up
coordinate system).
1) Go to the coordinate system control panel
2) If the three trackers are A, B, and C, select B, then hold down ALT
(Mac: Command) and click A.
3) Change the constraint type from Unconstrained to Same XZ plane.
4) Select C, and ALT-click (Command) on A, and set it to Same XZ Plane
also.
This has nailed down the translation, but only part of the rotation: B and C can still
spin freely around A (or around the Y axis about any point in the plane), so you also
need to specify another rotation.
You might have two other trackers, D and E, that should stack up vertically.
Select E and Alt/Command-Click tracker D and set it to Parallel to Z Axis
(or X axis if they should be horizontal).
Note: if you have set the preferences to "no middle mouse button" then you
must hold ALT/Command and right-click to link, since ALT/Command-left
would be interpreted as a pan.
You can use the intentional difference in the camera view, 3D View, and
perspective view's locked vs unlocked behavior to locate links among shots that go the
wrong way, for example.
Details of Lock Modes
There are quite a few different constraint (lock) modes that can be selected from
the drop-down list. Despite the fair number of different cases, they all can be broken
down to answering two simple questions: (1) which coordinates (X, Y, and/or Z) of the
tracker should be locked, and (2) to what values.
The first question can have one of eight different answers: all the combinations of
whether or not each of the three coordinate axes is locked, ranging from none
(Unconstrained) to all (Lock Point). Rather than listing each of the combinations of
which axes are locked, the list really talks about which axis is NOT locked. For example,
an X Axis lock really locks Y and Z, leaving X unconstrained. Locking to the XZ plane
actually locks only Y. The naming addresses WHAT you want to do, not HOW
SynthEyes will achieve it.
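The naming convention can be summarized as a small lookup: each mode names the freedom left to the tracker, while the lock applies to the remaining axes. This table is an illustrative restatement of the text above, not SynthEyes's internal representation:

```python
# Which coordinates each lock mode actually locks (illustrative only).
LOCKED_AXES = {
    "Unconstrained": "",       # nothing locked
    "Lock Point":    "XYZ",    # everything locked
    "On X Axis":     "YZ",     # X left free
    "On Y Axis":     "XZ",
    "On Z Axis":     "XY",
    "On XY Plane":   "Z",      # only the out-of-plane axis is locked
    "On XZ Plane":   "Y",
    "On YZ Plane":   "X",
}

def is_locked(mode, axis):
    """True if the given axis ('X', 'Y', or 'Z') is locked in this mode."""
    return axis in LOCKED_AXES[mode]
```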
The second question has three possible answers: (a) to zero, (b) to the
corresponding “Seed and Lock” spinner, or (c) the corresponding coordinate from the
tracker assigned as the Target Point. Answer (c) is automatically selected if a target
point is present, while (a) is selected for “On” lock types, and (b) for “Any” lock types.
Use the Any modes when you have some particular coordinates you want to lock a
tracker to, for example, if a tracker is to be placed 2 units above the ground plane.
Watch Out! If you select several trackers, some with targets, some without,
the lock type list will be empty. Either select fewer trackers, or right-click the
Target button to clear the target tracker setting from all selected trackers.
Lock mode     Locked axes   Locked to
Origin        X, Y, Z       Zero
On X Axis     Y, Z          Zero
On Y Axis     X, Z          Zero
On Z Axis     X, Y          Zero
On XY Plane   Z             Zero
On XZ Plane   Y             Zero
On YZ Plane   X             Zero
|| X Axis     Y, Z          Target
|| Y Axis     X, Z          Target
|| Z Axis     X, Y          Target
If you have an estimate for the field of view, you can preposition the camera, then
right-click the Nudge Tool on the Coordinate System panel to create seed/lock
coordinates that line up exactly with the 2D tracker position on the current frame. Two
such trackers will constrain the initial camera orientation.
Tip: if you have a tripod shot that pans a large angle, 120 degrees or more, small
systematic errors in the camera, lens, and tracking can accumulate to cause a banana-
shaped path. To avoid this, set up a succession of trackers along the horizon or another
straight line, and peg them in place, or use a roll-axis lock.
Constrained Points View
After you have set up your constraints, you should check your work using the
Constrained Points viewport layout, as shown here:
Hint: within the Constrained Points view, the up-arrow and down-arrow keys
scroll through only the constrained trackers, rather than through all trackers
as they do normally. This makes it easier to see where these are in the
viewports, for example.
The Axes column shows what axes are locked, among X, Y, Z, or D(distance). If
an asterisk is present, the lock is to a tracker on a different camera/object. If the letter s
is present, it is a stereo lock. If the letter m is present, there is a link to a different object,
and one or the other is a moving object, which is unusual and bears investigation.
The distance column has a value only if a distance constraint is present.
The solved position is shown, along with the 3-D error of the constraint. For
example, if a tracker is located at (1,0,0) but is locked to (0,0,0), the 3-D error will be 1.
It will have a completely different 2-D error in hpix on the coordinate system panel.
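The 3-D error quoted here is simply the Euclidean distance between the solved tracker position and its lock target, as in the example just given:

```python
import math

def error_3d(solved, locked_to):
    """3-D constraint error: straight-line distance from solve to lock target."""
    return math.dist(solved, locked_to)

# The example from the text: solved at (1,0,0) but locked to the origin.
err = error_3d((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
# err is 1.0; the 2-D (hpix) error on the coordinate system panel is a
# separate reprojection measurement and will generally differ.
```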
Hint: You can adjust the width of the columns by dragging the gutters,
located immediately to the left of the column headers (Tracker, Locked To,
Distance, etc), which extend the entire height of the viewport.
The constrained points view lets you check your constraints after solving, giving
you the resulting 3-D errors, or check your setup before solving, without any error
available yet. You can select the trackers directly from this view and tweak them with
the coordinate system panel displayed.
Upside-down Cameras: Selecting the Desired Solution
Many coordinate system setups can be satisfied in two or more different ways:
completely different camera and tracker positions that are camera matches, and satisfy
the constraints. This applies to object tracks as well as camera tracks.
To review, the most basic 3-point setup consists of a point locked to the origin
and a point locked to a specific coordinate on the X axis, plus a third point locked to be
somewhere on the ground plane (XY plane for Z-up). This setup can be satisfied two
different ways. If you start from one solution, you can get the other by rotating the entire
scene 180 degrees around the X axis. If the camera is upright in the first solution, it will
be upside-down in the second. The third point will have a Y coordinate that is positive in
one, and negative in the other (for Z-Up coordinates).
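The two solutions are related by a 180-degree rotation about the X axis, which negates the Y and Z coordinates of every point. As a sketch (hypothetical tracker coordinates, Z-up): the origin and on-axis trackers still satisfy their constraints after the flip, but the third tracker's Y changes sign, distinguishing the upright and upside-down solutions.

```python
def rotate_180_about_x(p):
    """Rotate a point 180 degrees about the X axis: Y and Z change sign."""
    x, y, z = p
    return (x, -y, -z)

origin_tracker = (0.0, 0.0, 0.0)
axis_tracker   = (20.0, 0.0, 0.0)
third_tracker  = (5.0, 7.0, 0.0)   # on the ground plane, Y > 0

flipped = [rotate_180_about_x(p)
           for p in (origin_tracker, axis_tracker, third_tracker)]
# The origin and X-axis locks are still satisfied, but the third
# tracker's Y is now -7.0: the alternate (upside-down) solution.
```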
If you take this basic 3-point setup, change the setup of the second point from a
lock to (20,0,0) to “On X Axis,” and add a separate distance (scale) constraint, there are
now four different possible solutions: the different combinations of the second point’s X
being positive or negative, and the third point’s Y coordinate being positive or negative.
SynthEyes offers two ways to control which solution is used. Without specific
instructions, SynthEyes uses the solution where the camera is upright, not upside-down.
That handles the most common case, but if you need the camera upside-down, have a
setup with four solutions, have a specific object orientation setup, or generally need one
of the other solutions, you need to be more specific about what you want.
SynthEyes lets you specify whether a coordinate should be positive or negative
(a polarity), for each coordinate of each constrained tracker. The Coordinate System
Control panel has buttons next to the X, Y, and Z spinners. The X button, for example,
sequences from X to X+ to X-, meaning that X can have either polarity, that X must be
positive, or that X must be negative.
If there are two solutions, you should set up a polarity for an axis of one point; if
there are four solutions, set the polarity for one axis of two points. For example, set a
polarity for Y of the 3rd tracker, and an X polarity for the 2nd (on-axis) tracker.
As SynthEyes considers different possible orientations, it examines whether the
resulting coordinates will satisfy the polarity constraints you have set up, and disallowed
solutions are ignored.
You can put a polarity constraint on any tracker, whether or not it is otherwise
used in the coordinate system setup.
If you put a polarity constraint on a tracker that has a target point, the polarity
constraint is based not directly on the tracker's coordinate, but on the difference
between the tracker's coordinate and its target's coordinate. This is true whether or not
any axes of the tracker are constrained to the target.
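The polarity test amounts to a simple sign check. With no target point, the tracker's own coordinate is tested; with a target, the difference between the tracker's coordinate and the target's is tested instead. This is an illustrative sketch, not SynthEyes internals:

```python
def polarity_ok(coord, polarity, target_coord=None):
    """polarity is '+', '-', or '' (either sign allowed)."""
    value = coord if target_coord is None else coord - target_coord
    if polarity == "+":
        return value >= 0.0
    if polarity == "-":
        return value <= 0.0
    return True

y = -7.0
# A Y+ polarity rejects a candidate solution with this raw coordinate:
rejected = not polarity_ok(y, "+")
# But with a target at Y = -9, the difference is +2, so Y+ is satisfied:
accepted = polarity_ok(y, "+", target_coord=-9.0)
```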
You can add poorly chosen locks, or so many locks, that solving becomes
slower, due to additional iterations required, and may even make solving impossible,
especially with lens distortion or poor tracking. By definition, there will always be larger
apparent errors as you add more locks, because you are telling SynthEyes that a
tracker is in the wrong place. Not only are the tracker positions affected, but the camera
path and field of view are affected, trying to satisfy the constraints. So don’t add locks
unless they are really necessary.
Generally, it will be safer to leave the Constrain checkbox off, so that solving is
not compromised by incorrectly configured constraints. You will want to turn the
checkbox on when using multiple-shot setups with the Indirectly solving method, or if
you are working from extensive on-set measurements. It must be on to match a single
frame.
Pegged Constraints
With the constraints checkbox on, SynthEyes attempts to force the coordinate
values to the desired values. It can sometimes be helpful to force the coordinates to be
exactly the specified value, by turning on the Peg button on the tracker’s Coordinate
system panel.
Pegs are useful if you have a pre-existing scene model that must be matched
exactly, for example, from an architectural blueprint, a laser-rangefinder scan, or from
global positioning system (GPS) coordinates. Pegging GPS coordinates is especially
useful in long highway construction shots, where overall survey accuracy must be
maintained over the duration of the shot.
Pegs are active only when the Constrain checkbox is on, and you can only peg to
numeric coordinates or to a tracker on a different camera/object, if the tracker’s
camera/object is Indirectly solved. You can not peg to a tracker on the same
camera/object; such a peg will be silently ignored.
The 3-D error will be zero when you look at a pegged tracker in the Constrained
Points view. However, the error on the coordinate system or tracking panel, as
measured in horizontal pixels, will be larger! That is because the peg has forced the
point to be at a location different than what the image data would suggest.
Constrain Mode Limitations and Workflow
The constrain mode has an important limitation, while initially solving a shot in
Automatic solving mode: enough constrained points must be visible on the solving
panel’s Begin and End frames to fully constrain the shot in position and orientation. It
can not start solving the scene and align it with something it can not see yet; that's
impossible!
SynthEyes tries to pick Begin and End frames where the constrained points are
simultaneously visible, but often that’s just not possible when a long shot moves through
an environment, such as driving down a road. The error message “Can’t locate
satisfactory initial frames” will be produced, and solving will stop.
In such cases, the Constrain mode (checkbox) must be turned off on the solving
panel, and a solution will easily be produced, since the alignment will be performed on
the completed 3-D tracker positions.
You can now switch to the Refine solving mode, turn on the Constrain checkbox,
and have your constraints and pegs enforced rigorously. As long as the constraints
aren’t seriously erroneous, this refine stage should be quick and reliable.
Here’s a workflow for complex shots with measured coordinates to be matched:
1. Do the 2-D tracking (supervised or automatic)
2. Set up your constraints (if you have a lot of coordinates, you can read
them from a file).
3. Do an initial solve, with Constrain off.
4. Examine the tracker graphs, assess and refine the tracking
5. Examine the constrained points view to look for gross errors between the
calculated and measured 3-D locations, which are usually typos, or
associating the 3-D data with the wrong 2-D tracker. Correct as
necessary.
6. Change the solver to Refine mode
7. Turn on the Constrain checkbox
8. Solve again, verify that it was successful.
9. Turn on the Peg mode for tracker constraints that must be achieved
exactly.
10. Solve again
11. Final checks that pegs are pegged, etc.
With this approach, you can use Constrain mode even when constrained trackers
are few and far between, and you get a chance to examine the tracking errors (in step
4) before your constraints have had a chance to affect the solution (i.e., possibly
messing it up and making it harder to separate bad tracking from bad constraints).
Note: if you have survey data that you are matching to a single frame, you must
use Seed Points mode, must make each tracker a seed, and you must turn on
Constrain.
There are parallel lines under the eaves and window, configured to be parallel to
the X axis. Vertical (Z) lines delineate edges of the house and door frame. The selected
line by the door has been given a length to set the overall scale.
The alignment tool gives you camera placements and FOV for completely
locked-off shots, even a single still photograph such as this.
good configuration of lines, an error of hundreds of pixels could result, and you must
rethink.
SynthEyes will take the calculated alignment and apply it to an existing solution,
such that the camera and origin are at their computed locations on the frame of
reference (indicated in the At nnnf button). Suppose you are
working on, and have solved, a 100-frame tripod-mode shot. You have built the
alignment lines on frame 30. When you click Align!, SynthEyes will alter the entire path,
frames 0-99, so that the camera is in exactly the right location on frame 30, without
messing up the camera match before or after the frame.
Meshes will be affected by the alignment. To keep them stationary, so that they
can be used as references, turn off Whole affects meshes on the 3-D viewport or
perspective-view’s right-click menus.
You should switch to the Quad view and create an object or two to verify that the
solution is correct.
If the quality of the alignment lines you have specified is marginal, you may find
SynthEyes does not immediately find the right solution. To try alternatives, control-click
the Align! button. SynthEyes will give you the best solution, then allow you to
click through to try all the other (successively worse) solutions. If your lines are only
slightly off-kilter, you may find that the correct solution is the second or maybe third one,
with only a slightly higher RMS error.
Advanced Uses and Limitations
Since the single-frame alignment system is pretty simple to understand and use,
you might be tempted to use it all the time, to use it to align regular full 3-D camera
tracking shots as well. And in fact, as its use on tripod-mode shots suggests, we have
made it usable on regular moving camera and even moving-object shots, which are an
even more tempting use.
But even though it works fine, it probably is not going to turn out the way you
expect, or be a usable routine alternative to tracker constraints for 3-D shots.
First, there’s the accuracy issue. A regular 3-D moving-camera shot is based on
hundreds of trackers over hundreds of frames, yielding many hundreds of thousands of
data points. By contrast, a line alignment is based on maybe ten lines, hand-placed into
one frame. There is no way whatsoever for the line-based alignment to be as accurate
as the tracker solutions. This is not a bug, or an issue to be corrected next week.
Garbage in, garbage out.
Consequently, after your line-based alignment, the camera will be at one location
relative to the origin, but the trackers will be in a different (more correct) position relative
to the camera, so…. The trackers will not be located at the origin as you might expect.
Since the trackers are the things that are locked properly to the image, if you place
objects as you expect into the alignment-determined coordinate system, they will not
stick in the image—unless you tweak the inserted object’s position to make them match
better to the trackers, not the aligned coordinate system.
Second, there is the size issue. When you set up the size of the alignment
coordinate, it will position the camera properly. But it will have nothing to say about the
size of the cloud of trackers. You can have the scene aligned nicely for a 6-foot tall
actor, but the cloud of trackers is unaffected, and still corresponds to 30 foot giants. To
have any hope of success using alignment with 3-D solves, you must still be sure to
have at least a distance constraint on the trackers. This is even more the case with
moving-object shots, where the independent sizing of the camera and object must be
considered, as well as that of the alignment lines.
The whole reason that the alignment system works easily for tripod and lock-off
shots is that there is no size and no depth information, so the issue is moot for those
shots.
To summarize, the single-frame alignment subsystem is capable of operating on
moving-camera and moving-object shots, but this is useful only for experts, and
probably is not even a good idea for them. If you send your scene file to tech
support looking for help with that, we are going to tell you not to do it and to use tracker
constraints instead, end of story.
But, you should find the alignment subsystem very useful indeed for your tripod-
mode and lock-off shots!
You can use the Place mode of the Perspective view to lock trackers onto an
imported mesh or lidar file; see Placing Seed Points and Objects.
SynthEyes gives you several options for how seriously the coordinate data is
going to be believed. Any 3-D data taken by hand with a measuring tape for an entire
room should be taken as a suggestion at best. At the other end of the spectrum,
coordinates from a 3-D model used to mill the object being tracked, or laser-surveyed
highway coordinates, ought to be interpreted literally.
Trackers with 3-D coordinates, entered manually or electronically, will be set up
as Lock Points on the Coordinate System panel, so that X, Y, and Z will be matched.
Trackers with very exact data will also be configured as Pegs, as described later.
If the 3-D coordinates are measured from a 2-D map (for a highway or
architectural project), elevation data may not be available. You should configure such
trackers as Any Z (Z-up coordinates) or Any Y (Y-up coordinates), so that the XY or XZ
coordinates will be matched, and the elevation allowed to float.
If most of your trackers have 3-D coordinates available to start (six or more per
frame), you can use Seed Points solving mode on the Solver control panel. Turn on the
coordinate system panel’s Seed button for the trackers with 3-D coordinates. This will
give a quick and reliable start to solving. You must use Seed Points and Constrain
modes on the solver panel if you are matching a single frame from survey data.
You can use the Nudge Tool (spinner) on the Coordinate System panel to adjust
seed locations along the depth axis, towards and away from the camera. Right-clicking
the Nudge tool will snap the seed point onto the line of sight of the 2D tracker position
on the current frame. (This works for Far trackers also.) If you have visions of doing this
from multiple frames, check out ZWTs instead, which will be faster and more accurate.
For more information on how to configure SynthEyes for your survey data, be
sure to check the section on Alignment vs Constraints.
Solver Control Panel, the Hard and Soft Lock Control dialog, and the camera’s seed
path information.
Warning: using camera position, orientation, and field of view locks is a very
advanced topic. You need to thoroughly understand SynthEyes and the
coordinate system setup process, and have excellent mental visualization
skills, before you are ready to consider camera locks. Under no
circumstances should they be considered a way to compensate for
inadequate basic tracking skills!!!
Note: there are no hard overall distance constraints, they are always soft. The
Constrain checkbox must be on for them to be effective; they are ignored
otherwise. They can not currently be used as the only means to set scene
size.
Camera position locks are more useful than orientation locks; we’ll consider
position locks separately to start with.
You can also constrain objects, but this is even more complex. A separate
subsystem, the stereo geometry panel, handles camera/camera constraints in stereo
shots.
Tip: If you must use solver locks to make some frames work, e.g. due to
extensive motion blur, you may want to consider making them zero-weighted
frames, so that those sketchy frames don't throw off tracking throughout the
rest of the shot.
Basic Operation
Constraints are generally created after the overall solve has been done, as a way
to fine-tune the path, with the camera in Refine solving mode.
Set up generally proceeds as follows:
1. Go to the Solver Control Panel. Click the more button to bring up the Hard
and Soft Lock Control panel. Or, select Solver Locking from the Window
menu.
2. Turn on Show Seed Path.
3. Position and animate the camera as desired, creating a key on each frame
where you want the position to be constrained. The Get buttons on the solver
locking panel, especially Get 1f, can help with this, by pulling positions from
any existing solved path.
4. Turn on the L/R, F/B, and/or U/D buttons as appropriate depending on the
axes to be constrained — these stand for left/right, front/back, and up/down
respectively.
5. Adjust the Constrain checkbox as needed. The camera position constraints
behave similarly to constraints on the trackers: if the Constrain checkbox is
on, they are enforced exactly during the solve, but if the Constrain checkbox
is off, they are enforced only loosely after the completion of the solve. Loosely
means that they are satisfied as best as can be, without modifying the
trajectory or overall RMS error of the solution.
The result of this process is to make the camera match the X, Y, and/or Z
coordinates of the seed path at each key. This basic setup can be used to accomplish a
variety of effects, as described above and covered in more detail below. At the end of
the section, we’ll show some even more exotic possibilities.
Hint: you can use Synthia or the graph editor's clipboard as additional ways
to move keys from the solved camera path to the seed camera path.
2. At frame 0, position the camera at the desired height above the ground
plane: 2 meters, 48 inches, whatever.
3. Turn on the U/D button on frame 0, turn it back off at frame 1.
4. Set up a main coordinate system using 3 or more trackers on the floor.
Make sure to not create a size constraint in the process: if using the *3
button on the Coordinate system panel or the Coord button on the
Summary panel, select the 2nd (on-axis) tracker, and in the Coordinate
panel, change it from Lock Point (at 20,0,0) to On X Axis or On Y Axis.
5. Solve with Go! on the Solver panel
Note that you can use whatever more complex setup you like in step 4, as long
as it completely constrains both the translation and rotation, but not the size.
WARNING: You might be tempted to think “Hmmm, the camera is on a dolly, so
the entire path must be exactly 43 inches off the floor, let me set that up!” (by not turning
U/D back off). But this is almost always a bad idea! The obvious problem is that the
dolly track is never really completely flat and free of bumps. If the vertical field of view is
2 meters, and you are shooting 1080i/p HDTV, then roughly your track must be
perfectly flat to 1 millimeter or so to have a sub-pixel impact. If your track is that flat,
congratulations.
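As a rough sanity check, the tolerance quoted above can be derived with a couple of lines of arithmetic (the 2 meter vertical coverage and 1080-line image are the example's numbers, not universal values):

```python
# Rough tolerance check: how flat must a dolly track be for bumps
# to stay below one pixel? Numbers taken from the example above.
vertical_extent_m = 2.0      # scene height covered by the vertical field of view
image_height_px = 1080       # 1080i/p HDTV

meters_per_pixel = vertical_extent_m / image_height_px
print(f"One pixel corresponds to {meters_per_pixel * 1000:.2f} mm")
# A bump must stay well under ~1.85 mm to remain sub-pixel --
# hence the "1 millimeter or so" figure in the text.
```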
The conceptually more subtle, but bigger impact problem is this: a normal tripod
head puts the camera lens very far from the center of rotation of the head—roughly 1
foot or 0.25 meter. As you tilt the head, the camera's height rises and falls
by up to that much! Unless your camera does not tilt during the shot,
or you have an extra-special nodal-pan head, the camera height will change
dramatically during the shot.
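To get a feel for the size of this effect, here is a minimal sketch of the geometry, assuming the lens sits about 0.3 meters ahead of the tilt axis (the tilt angles are illustrative):

```python
import math

# The lens sits some distance ahead of the head's tilt axis; tilting
# swings it along an arc, changing its height. Offset is an assumption
# based on the "roughly 1 foot or 0.25 meter" figure in the text.
lens_offset_m = 0.3

for tilt_deg in (5, 10, 20):
    height_change_m = lens_offset_m * math.sin(math.radians(tilt_deg))
    print(f"{tilt_deg:2d} deg tilt -> lens height changes by "
          f"{height_change_m * 1000:.0f} mm")
```

Even a modest 20-degree tilt moves the lens roughly 100 mm vertically, far beyond the millimeter-scale flatness discussed above.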
A Straight Dolly Track Setup
If your camera rides a straight dolly track, you can use the length of that track to
set the scale, and almost the entire coordinate system if desired. While the camera
height measurement setup discussed above is simpler, it is appropriate mainly for a
studio environment with a flat floor. The dolly track setup here is useful when a dolly
track is set up outdoors in an environment with no clearly-defined ground plane—in front
of a hillside, say.
For this setup, you should measure the distance traveled by the camera head
down the track, by a consistent point on the camera or tripod. For example, if you have
a 20’ track, the camera might travel only 16’ or so because there will be a 2’ dead zone
at each end due to the width of the tripod and dolly. Measure the starting/ending
position of the right front wheel, say.
Next, clear any solved path (or click View/Show seed path), and animate the
camera motion, for example moving from 0,0,0 at the beginning of the shot to 16,0,0 at
the end (or wherever it reaches the maximum, if it comes back).
You now have two main options: A) mostly tracker-based coordinate setup, or B)
mostly dolly-based coordinate setup, for side-of-hillside shots.
For setup A, turn on the L/R camera axis constraint button on the first
and last frames only. The X values you keyed on the camera establish the scene's X
positioning, so the tracker constraints you add should fully constrain rotation,
plus the front/back and up/down directions—but not the L/R
direction, since that would duplicate and conflict with the camera constraint (unless you
are careful and lucky).
For setup B, turn on L/R, F/B, and U/D on the first and last frames (only). You
should take some more care in deciding exactly what coordinate values you want to use
for each axis of the animated camera path, because those will be defining the
coordinate system. [By setting keys only at the beginning and end of the shot, you
largely avoid problems with the camera tilting up and down—at most it tilts the overall
coordinate system from end to end, without causing conflicting constraints.]
If the track is not level from end to end, you can adjust the beginning or ending
height coordinate of the camera path as appropriate. But usually we expect the track to have
been leveled from end to end.
With X, Y, and Z coordinates keyed at the beginning and end of the shot, you
have already completely constrained translation and scale, and have constrained 2 of
the 3 rotation axes. The only remaining unconstrained rotation axis is a rotation around
the axis of the dolly track.
To constrain this remaining rotation requires only a single additional tracker, and
only its height measurement! On the set, you should measure the relative height of a
trackable feature compared to the track (usually this will be to the base of the track, so
you should also measure the height of the camera versus the base). You can measure
this height using a level line (a string and a clip-on bubble level) and a ruler.
On the Coordinate System Control Panel, select the tracker and set it to Any XY
Plane and set the Z coordinate (for Z-up mode), or select Any XZ Plane and set the Y
coordinate (for Y-up mode).
Now you’re ready to go! This setup is a valuable one for outdoor shots where a
true vertical reference is required, but the features being tracked are not structured
(rocks, bushes, etc).
Again, we recommend not trying to constrain the camera to be exactly linear,
though you can easily set this up by locking Y and Z to be fixed for the duration of the
shot, with single-frame locks on X at the beginning and end of the shot. This setup
forces the camera motion to be exactly straight, but moving in an unknown fashion in X.
Although the motion will be constrained, the setup will not allow you to use fewer
trackers for the solve.
Using a Supplied Camera Path
This section addresses the case where you have been supplied with an existing
camera translation path, either from a motion-controlled camera rig, or as a result of
hand-editing a previous camera solution, which can be useful in marginal tracks where
you have a good idea what the desired camera motion is. After editing the path, you
want to find the best orientation data for the given path.
If you have an existing camera path in an external application (either from a rig,
or after editing in Maya or Max, for example), typically you will import it using Filmbox or
a standard or custom camera-path import script. Be sure that the solved camera path is
cleared first, so that the seed path is loaded.
If you have a solved camera path in SynthEyes, you can edit it directly. First,
select the camera, and hit the Blast button on the 3-D panel. This transfers the path
data from the solved path store into the seed path store. Clear the solved path and edit
the seed path.
Rewind and turn on all 3 camera axis locks: L/R, F/B, and U/D.
Next, configure the solver’s seeding method. This requires some care. You can
use the Path Seeding method only if your existing path includes correct orientation
and field of view data. Otherwise, you can use the Automatic method or maybe Seed
Points. The Refine mode is not an option since you have already cleared the solution to
load the seed path, and don’t have orientation data anyway or you’d use Path Seeding.
You can use Seed Points mode if you are editing the path in SynthEyes—but be
sure to hit the Set All button on the Coordinate System Setup Control panel before
clearing the prior solution, so that the points are set up properly as seeds. You should
probably not make them locks, unless you are confident of the positions already.
With the camera path locked to a complex path (other than a straight line), no
further coordinate system setup is required; any additional setup would be redundant.
You can solve the scene first with the Constrain checkbox off, then switch to
Refine mode, turn on Constrain, and solve again. This will make it apparent during the
second solve whether or not you have any problems in your constraint setup, instead of
having a solution fail unexpectedly due to conflicting constraints the first time.
Camera-based Coordinate System Setup
The camera axis constraints can be used in small doses to set up the coordinate
system, as we’ve seen in the prior sections. Typically you will want to use only 1 or 2
keys on the seed path; 3 or more keys will usually heavily constrain the path and require
exact knowledge of the camera move timing.
Roughly, each keyed frame is equivalent in effect to a constrained tracker
located at the same spot. You should keep that in mind as you plan your setup, to avoid
under- or over-constraining the coordinate system.
Soft Locks
So far we have described hard locks, which force the camera exactly to the
specified values. Soft locks pull more gently on the camera path, for example, to add
stability to a section of the track with marginal tracking. In either case, for a lock to be
active, the corresponding lock button (U/D, L/R, pan, etc) must be on.
The weight value on the Hard and Soft Lock dialog controls whether a lock is
hard or soft. If the weight is zero (the default), it is a hard lock. A non-zero weight
specifies a soft lock.
Weight values range from 1 to 120, with 60 a nominal neutral value. However, we
recommend that when creating soft locks, you start with a weight of 10, and work
upwards through 20, 30, etc until the desired effect is obtained.
Weight values are in decibels, a logarithmic scale in which an increase of 20
decibels is an increase by a factor of 10, and 6 decibels is a factor of two. So a weight
of 40 is 10 times stronger than 20, 60 is 100 times stronger than 20, and 26 is twice as
strong as a weight of 20. (Decibels are commonly used for sound level measurements.)
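The decibel-to-strength relationship can be sketched numerically; this assumes the amplitude convention implied above (20 dB per factor of 10):

```python
# Convert a difference in lock weight (in decibels) to a strength
# ratio, using the convention described in the text: 20 dB = 10x.
def db_to_factor(db_difference):
    return 10 ** (db_difference / 20)

print(db_to_factor(20))           # weight 40 vs 20: 10x stronger
print(round(db_to_factor(40), 1)) # weight 60 vs 20: 100x stronger
print(round(db_to_factor(6), 2))  # weight 26 vs 20: about 2x stronger
```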
A lock can switch from hard to soft on a frame-by-frame basis, ie frames 0-9 can
be hard, and 10-14 soft. You may need to key the weight track carefully to avoid slow
transitions from 20 down to 0, for example.
Soft locks are treated as such only when the Constrain check box is on: it is the
solver that distinguishes between hard and soft locks. If Constrain is off, the locks will
be applied during the final alignment, when they do not affect the path at all, just re-
orient it, so soft locks are treated the same as hard locks.
Note that the soft lock weight is not a path blending control. You might naively be
tempted to set up a nominal locked path, and try to animate the soft lock weights
expecting a smooth blend between the solved path and your animated locked path. But
that is not what will happen. The weight changes how seriously SynthEyes takes your
request that the camera should be located at the specified position—but it will affect the
tracker positions and everything else as well. (Use Phases to set up path-blending
effects.)
Overall Distance Locks
When a shot contains little perspective, virtually all the error will be in depth, ie
the distance from the camera to the rest of the scene (or from an object to its camera).
Even the tiniest amount of jitter in the tracking data can produce a large amount of jitter
in the depth: this is not a program problem, but a reflection of the poor quality of the
available information.
SynthEyes allows you to set up a soft lock (only!) on this depth value, the
distance from the camera to origin, or object to camera, using the Solver Locking panel.
The Constrain checkbox must be on for these constraints to be effective!
You can animate this desired distance based on the measured distances from an
original track, using the Get 1f button for the Overall Distance, typically on a small
number of frames.
In this fashion, you can force the depth value to have a very smooth trajectory,
while allowing a dynamic performance in the other axes. This often corresponds
relatively well to what happens on the set.
You can configure and check this constraint using the Distance channels of the
Camera: the Seed Path Distance value shows the value being constrained to, the
Solved Path value shows the actual value (whether or not a distance constraint is
active), and the Solved Velocity Distance value shows the velocity of the actual value.
The velocity value is helpful for determining whether the constraint has been fully
effective; if the velocity is not smooth enough to match the commanded value, you
should increase the weight of the distance constraint on the Solver Locking panel. Do
that on the first frame, to avoid placing weight keys in the middle of the shot (unless that
is what you want).
Note that you must set up a coordinate system on the camera or object in order
to productively use these constraints, so that the origin is not changing on each
successive solving run. And depending on where the camera or object origin falls, the
distance curve may be simple and slowly changing, or if the origin is too close to the
camera or object the motion may be too dynamic and difficult to work with.
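To make the channels concrete, here is a minimal sketch of how a distance channel and its velocity relate to a camera path; the positions here are hypothetical illustrations, not SynthEyes data:

```python
import math

# Hypothetical per-frame camera positions (X, Y, Z) relative to the
# coordinate-system origin; in SynthEyes these come from the solve.
path = [(10.0, 0.0, 1.5), (10.1, 0.2, 1.5), (10.3, 0.3, 1.6)]

# Distance channel: camera-to-origin distance on each frame.
dist = [math.sqrt(x * x + y * y + z * z) for x, y, z in path]

# Velocity channel: frame-to-frame change in that distance. Jitter
# here, with little perspective in the shot, is mostly depth error.
vel = [b - a for a, b in zip(dist, dist[1:])]

print(dist)
print(vel)
```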
While in theory an overall distance lock can be used to set the overall size of the
scene (with the Constrain checkbox off), that is not currently supported. You must have
other constraints to set the scene size, and you should make sure those constraints do
not conflict with your overall distance constraints.
Orientation Locks
You can apply Pan, Tilt, and Roll rotation locks as well as translational locks.
They can be used for path editing and, to a lesser extent, for coordinate system setup.
For example, a roll-angle constraint can be used to keep the camera forced
upright. That can be handy on tripod shots with large pans: small amounts of lens
distortion can bend the path into a banana shape; the roll constraint can flatten that
back out.
If the camera looks in two different directions with the roll locked, it constrains
two degrees of freedom: only a single pan angle is undetermined! For example, it might
look along the X axis and then along the Y axis, both with roll=0. You might want to think about
that for a minute.
The perspective window’s local-coordinate-system and path-relative handles can
help make specific adjustments to the camera path.
Inherently, SynthEyes is not susceptible to “gimbal-lock” problems. However,
when you have orientation locks, you are using pan, tilt, and roll axes that do define
overall north and south poles, and you may encounter some problems if you are trying
to lock the camera almost straight up or down. If this is the case, you may want to
change your coordinate system so those views are along the +Y and –Y axes, for
example.
Object Tracking
You can also use locks on rigid-body moving objects, in addition to cameras.
However, there are several restrictions on this, because moving objects are solved
relative to their hosting camera path, but the locks are world coordinate system values.
If a moving object has path or orientation locks, then the host camera must have
been previously solved, pre-loaded, or pre-keyed, and the camera solving mode set to
Refine or Disabled. While some object tracking is often done without a camera track, if
you want to apply object constraints in world coordinates, you'll need to track, or at least
carefully position, the camera in order to establish the world coordinate system for the
object track. You can use a tripod track for the camera, and a regular track for the
object.
NOTE: In the Intro version, the translation axis locks must either all be on, or
all off, and the rotation axis locks must either all be on, or all off.
Normally, when SynthEyes handles shots with a moving camera and moving
object, it solves camera and object simultaneously, optimizing them both for the best
overall solution. However, when object locks are present, the camera solution must be
known, in order to be able to apply the object locks.
Object locks have very hard-to-think-about impacts on the local coordinate
system of the trackers within the object. Most object locks will wind up over-constraining
the object coordinate system.
We recommend that object locking be used only to work on the object path, not
to try to set up the object coordinate system.
Hint: a simple and reliable Overall Distance constraint will usually be very
effective in improving object paths when the object is jittering closer and
further from the camera due to a lack of perspective.
While an overall distance constraint on a camera controls the distance from the
camera to the origin, an overall distance constraint on a moving object constrains the
distance from the moving object's local coordinate system origin to the camera. This
makes it especially important to set up a well-chosen coordinate system for the moving
object: the origin should be near the center of rotation of the object, so that object
rotation does not excessively impact the overall distance.
Alternatively, turn off Whole affects meshes on the Viewport right-click menu,
turn on Whole, and move the camera around so that things are in roughly the right place
(for when there isn't any actual data).
Field of View Constraints
Warning 1: This topic is for experts. Do not use field of view constraints on a
shot unless you have a specific need encountered on that shot. Do not use
them just because focal length values were recorded during shooting.
FOV/FL values calculated by SynthEyes are more accurate by definition than
recorded values.
Warning 2: Do not use focal length values unless you have measured and
entered a very good value for the plate width. Use field of view values
instead.
The Known lens mode can also be viewed as a simple form of field of view
constraint: one that allows arbitrary animation of the field of view, but that requires that
the exact field of view be known and keyed in for the entire length of the shot. We will
not discuss this mode further, except to note that the same effect, and many more, can
also be achieved with field of view constraints.
As with path constraints, field of view constraints are created with a seed field of
view track, animated lock enable, and lock weight. See the Lens panel, Solver panel,
and lock control dialog.
Both hard and soft locks operate at full effect all the time, regardless of the state
of the Constrain checkbox on the solver panel.
As with path constraints, field of view constraints affect the solution as a whole. If
you have a spike in the field of view track on a particular frame, adding a constraint on
that single frame will not do what you probably expect. All the trackers locations will be
affected, and you will have the same spike, but in a slightly different location. This is not
a bug. Instead, you need to also key surrounding frames. In all cases, identifying and
correcting the cause of the spike will be a better approach if possible.
If the lens zooms intermittently, you can determine an average zoom value for
each stationary portion of the shot, and lock the field of view to that value. You can
repeat this for each stationary portion, producing a smoother field of view track.
Sometimes you may have a marginal zoom shot where you are given the starting
and ending zoom values (field of view or focal length), but you do not know the exact
details of the zoom in between. SynthEyes might report a zoom from 60 to 120mm, but
you know the actual values were 50 to 100mm. You can address this by entering a one
frame field of view constraint at the beginning and end of the shot with the correct
values. As long as your values are reasonably correct in reality, the overall zoom curve
should alter to match your values.
If only the endpoints change, but the interior remains at other values, then
SynthEyes has significant evidence contradicting your values, which most likely
indicates that the values are wrong, the plate width is wrong, or that there is substantial
uncorrected lens distortion.
Spinal Path Editing
Warning: this is a really advanced topic. It can be used quickly and easily,
especially Align mode, but it can just as quickly reduce your solve to rubble.
We’re not kidding, this thing is complicated!
First, what is “spinal editing” and why is it called that? Spinal editing is designed
to work on an already-solved track, where you have an existing camera or object path to
manipulate. The path is the spine that we edit. It is spinal because you can think of the
trackers as being attached to it like ribs. If you manipulate the spine, the ribs move in
response. You’ll be working on the spine to improve or reposition it. The perspective
window’s local-coordinate-system and path-relative handles can help make specific
adjustments to the camera path.
After you have completed an initial solve producing a camera path, you can
initiate spinal editing by opening the small spinal control panel via the
Window/Spinal Editing menu item. You can also enable spinal
editing with the Edit/Spinal aligning and Edit/Spinal solving menu items, though then
you lose the feedback from the control panel.
There are two basic modes, controlled by the button at top left of the spinal
control panel: Align and Solve.
Note that the recalculations done by spinal editing are launched only in response
to a specific, relatively small, set of operations:
- dragging the camera or object in a 3-D viewport or perspective view,
- dragging the “seed point” (lock coordinates) of a properly-configured
tracker in a 3-D viewport or perspective view,
- changing the field of view spinner on the lens control panel or soft-lock
panel, or
- changing the weight control on the spinal editing dialog.
In order for a tracker’s seed point to be dragged and used for spinal alignment, it
must be set to Lock Point mode.
incorrect location, perhaps slanted a bit. The path needs to be bent into shape, and
spinal path editing can help you achieve that.
Please keep in mind that the results of these manipulated solves are generally
not the same result you would obtain if you started the solve again from scratch in
Automatic solving mode. You might consider re-starting the solve periodically to make
sure you’re not doing a whole lot of work on a marginal solution.
Using soft locks and spinal editing mode is a black art made available to those
who wish to use it, for whatever results can be obtained with it. It is a tool that affects
the solver in a certain way. There is no guarantee that it will do the specific thing that
you want at this moment. If it does not do what you think it “should be doing,” it is not a
bug.
Overview of Phases
Phases provide instructions to the solver. They are set up using the Phases
room, which opens the Phase View and the Phase Panel. Phases are created by right-
clicking in the phase view, and selecting the desired kind of phase from the bottom
section of the menu, where the phases are organized into categories: Coordinates, Edit,
Solver, Stereo, and Tracker. Each phase tells the solver to run a different algorithm,
affecting principally the camera or object paths, the fields of view, or the tracker
locations.
If you select Solver/Solve, you'll get a Solve phase named Phase1. The red
color indicates that it is selected, the wide border indicates that it is the root phase,
and the small green triangles on the left and right are its input and output pins,
neither of which is connected to anything.
The phase control panel will look something like this:
Your scene will solve pretty much the same as before. As you'll note from the
checkboxes on the solve panel, you can turn on or off various parts of the normal solve
process.
Now, with the Solve phase still selected, right click and select Coordinates/Set
Horizon.
The Set Horizon phase is centered on the location that we initially right-clicked. Since
we're not super-exact, we just dragged it into place next to the solve phase to make it
look better. You could also use the arrow keys. An automatically-added wire connects
the new phase to the previously selected one. (We can wire phases directly if needed.)
The Set Horizon phase is selected, and it is also now the root.
This configuration instructs SynthEyes to solve the scene, then give the results,
consisting of the camera path and tracker locations, to the Set Horizon phase. Set
Horizon will adjust the path and tracker locations, then, because it is the root phase, its
results will be stored back into the main SynthEyes scene file.
So when phases are present, SynthEyes looks for a root phase, and asks it for
its solution. The root phase will compute its solve based upon its inputs—which in turn
compute based on their inputs, and so on. Eventually, some phases aren't connected to
anything, so they pull their input from the initial scene—ie from the main control panels.
The root solve thus triggers work by many different phases. If there is no root set
up, then the solve ignores ALL the phases, operating based only on the main user
interface controls.
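The pull-based evaluation described above can be sketched abstractly; this is an illustrative model of the idea (including the green "solved tab" caching described later), not SynthEyes' actual internals:

```python
# Illustrative sketch of pull-based phase evaluation: the root phase
# asks its inputs for their results, which recurse in turn; a phase
# with no input pulls from the initial scene. Not SynthEyes code.
class Phase:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)
        self.cache = None                 # a "green solved tab" once set

    def solve(self, initial_scene):
        if self.cache is None:            # re-run only if not yet solved
            upstream = ([p.solve(initial_scene) for p in self.inputs]
                        or [initial_scene])
            self.cache = self.func(upstream[0])
        return self.cache

# A Solve phase feeding a Set Horizon phase, as in the example above.
solve_phase = Phase("Solve", lambda scene: scene + ["solved"])
horizon = Phase("SetHorizon", lambda s: s + ["horizon set"], [solve_phase])

# Asking the root for its solution triggers the whole chain.
print(horizon.solve([]))   # ['solved', 'horizon set']
```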
Note the two selected trackers out on the horizon. We set up the Solve phase feeding
into the Set Horizon phase as above, then, with these two trackers selected, click Store
Trackers on the phase panel. This tells the Set Horizon phase that these are the two
trackers to be on the horizon. (There are a number of other adjustments available if
needed). If we solve the scene, we get a perspective view like this, where the light blue
horizon line passes through the two trackers (you can see the effect of lens distortion
also—the horizon is not straight):
The camera is up in the air; there is nothing to tell it where to go. Back to the Phase
view, connect a Slide Into Position phase onto the end, to produce:
Now, we select a tracker in the foreground parking lot, and click Store trackers on
the control panel for the Slide Into Position phase. We could have selected several
trackers if we wanted: this phase translates the entire scene so that the average
position of the stored trackers is located at the Wanted XYZ coordinates. Leaving them
at zero makes the selected tracker the origin.
We can obtain the updated solve quickly by double-clicking the red tab at the
upper right of Phase3. It is red to indicate that it has not been solved, unlike the two
solved green tabs of Phase1 and Phase2.
We've now specified the position and orientation of the scene. As with setting up
a coordinate system directly using tracker constraints, we should always specify the
position, orientation, and scale (so that if we solve again, even after changing trackers,
we will get basically the same results).
To set the scale, we add yet another phase, here a Tracker/Tracker Distance
Constraint (we could use Camera Height, Camera Travel, Distance to Tracker, etc
instead, if those made more sense). With a little neatening, dragging the phases
around, we get:
and again we can double-click the red tab to update the solve, producing a final solve.
This is a much different way to set up a coordinate system! Which approach you
use, tracker constraints or phases, depends on your scene and what you want from it.
Here are some key phases you can use to set up a coordinate system. Note that
in a number of cases, some parts of the coordinate system must already be set up for
the phase to act as expected. If you think about what is happening, the need for those
cases should be self-evident.
Camera Height. Sets the camera scale to be a specific height on a specific frame. The
scene must already be properly oriented! Typically used when the camera is on a
dolly, so the height above floor level is known.
Distance to Tracker. Sets the camera scale. Use when you know a tape-measure
distance from the camera to a specific tracker on a specific frame of the imagery,
ie similar to a focus distance.
Set Heading. Spins the scene about a vertical axis so that two given trackers are
pointing in a given heading direction. The ground plane must already be oriented
(2 of 3 rotational axes). For example, two trackers along the centerline of a road;
from satellite imagery we see the road runs northeast.
Set Horizon. Two horizon trackers are made level on a given frame, and also the
heading is set to aim the average of them to the back of the scene.
Set Path 1f. Sets the position and orientation (or a subset thereof) of the camera or
object path on any specific frame to specific coordinates that you enter (possibly
by manually positioning the camera/object). Be sure to turn on "Move whole
scene." You must have a previous scene sizing constraint!
Travel of Cam/Obj. Scale the scene so that the camera or object has moved the given
distance between the two frames, for example, the length of a dolly track.
back in to a slightly tighter field of view. Here is the graph editor's view of the zoom
channel and zoom velocity.
As you can see in the velocity curve frames 0-80, there is some jitter in the zoom
while the zoom is actually stationary. This is to be expected, as matchmoving is a
measurement process, but suppose we would like to eliminate it. If we were to filter the
zoom, we would unduly affect the frames where the zoom is active. We could animate
the filter frequency and strength for a while, but there is a simpler approach.
The Flatten FOV phase will compute the average field of view over a range of
frames and replace the original FOV with that average, with several blend frames at the
in and out point. (There is a similar script.)
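The flattening operation can be sketched as follows; the linear blend shape and the sample values here are assumptions for illustration, not SynthEyes' exact algorithm:

```python
# Sketch of the flattening idea: replace a jittery FOV segment with
# its average, then blend linearly back to the original values over a
# few frames after the out point. Illustrative only.
def flatten_fov(fov, start, end, blend):
    avg = sum(fov[start:end + 1]) / (end - start + 1)
    out = list(fov)
    for f in range(start, end + 1):
        out[f] = avg                       # hold the average
    for i in range(1, blend + 1):          # transition frames past 'end'
        t = i / (blend + 1)
        idx = end + i
        if idx < len(out):
            out[idx] = (1 - t) * avg + t * fov[idx]
    return out

# A short hypothetical FOV track: jittery while static, then zooming.
fov = [40.1, 39.9, 40.2, 40.0, 42.0, 45.0, 48.0]
print(flatten_fov(fov, 0, 3, 2))
```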
We create a Solve phase, connect it to a Flatten FOV phase, and configure the
Flatten FOV phase.
It will average frames 0 to 74 with a 3-frame transition at frame 74. Note: there should
be no Blend at frame 0 since that is the start of the shot. We have run it, and the Flatten
Phase has automatically computed the average FOV and stored it in the Manual FOV
parameter for reference also. The graph editor now looks like this:
As you can see, the velocity curve is perfectly flat during the first part of the shot. There
is no jitter there at all. That was pretty good, let's do it again!
We've added two additional Flatten FOV phases, configuring one for the middle
section of the shot, frames 98-111, and the last for frames 123-176.
Hint: it is easiest to position the current time at the desired start or end frame,
then click the respective Set 'in' frame or Set 'out' frame button.
As a result, the FOV channel is very flat, except when it is changing:
This is good, but it's not the end of the story. We have a very clean FOV track
now, but we want a consistent solve: the camera path and tracker positions that
correspond to the smooth zoom trajectory, not the jiggly one. We need to refine the full
solution to reflect the changed field of view channel.
We don't want the FOV channel to change further, since we've cleaned it up, so
we add a Set Lens Mode phase that changes the Lens mode from Zoom to Known @
Solved. The "@ Solved" part means that we want to use the incoming solved FOV
channel as the "known" value that will be used for the later solve.
We don't need to completely redo the solve; a Refine will do, so we add a Set
Solver Mode phase and set its solver mode to Refine. As you can see, the controls on
the main user interface have corresponding phases that allow you to change the
settings at any point in a whole pipeline of solver phases.
To actually run the second solve, we add a Solve phase with a final Autoplace for
good measure. The result looks like this:
But wait, there's more! If we solve this scene a second time, it will behave a little
unexpectedly. You can see why when you consider what happens at Phase1.
Since Phase1 has no actual input, it takes its inputs from the scene—which is
already configured for a Known Lens and Refine mode. The later solve won't reproduce
what happened the first time.
Because of this circularity, we need to pay some attention to what we have
changed, and effectively "put it back" at the beginning of the solve.
The phase "Clean Start" helps with this. It clears any existing solve data (from
the phase pipeline) and resets any Refine solver modes back to Automatic (or Refine
Tripod back to Tripod).
We also need to set the Lens mode back to the Zoom lens mode that we want
used for the initial solve.
To add these two phases, click in the empty portion of the phase view to unselect
all the phases, then create the Clean Start and then Set Lens Mode. This will ensure
that these two are not wired to the final solve. Instead, drag from the output pin of the
Set Lens Mode to the Input pin of the Phase1 solve, creating a wire. Here's the final
collection:
Although we have a fair number of steps, most of them don't do very much:
- Clean up before we get started
- Set the lens mode to Zooming
- Do an initial solve, producing a somewhat jittery zoom track
- Flatten out the initial non-zooming section
- Flatten out the middle non-zooming section
- Flatten out the final non-zooming section
- Set the lens mode to Known (@Solved)
- Set the solver mode to Refine
- Do the final solve, corresponding to the flattened zoom track.
If we change some of the tracking data, say, we can easily rerun this process.
Furthermore, we can right-click and select Library/Save Phase File and write this
preconfigured set of phases into our phase library (or somewhere else) so that in the
future we can easily bring it in to solve a similar shot (after adjusting the frame ranges of
the Flatten FOV phases).
no root, then the phases will not be used at all in the solve. This can be helpful for quick
experimentation.)
When you double-click on the solve tab at the upper-right corner of a phase, it
causes that phase to be run. The phases that it uses will only be run if they have not
previously been run, or have been changed since that time. If those other phases have
a valid solve (a green solved tab), then they will not be solved again, which saves time.
The phases keep track of whether they have changed and need to be re-run
based on the parameters of the phase. If you make changes in the main interface that
turn out to affect a phase's operation, you need to make sure the phase is cleared so
that it will be re-run. (For example, a solve phase that does not have a Set Solver Mode
in front of it: if you change the solver mode in the main Solver control panel, the solve
phase's earlier solve data is no longer valid, but that is not apparent to the phase.)
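The dirty-flag idea behind this caching can be sketched in a few lines. This is an illustration of the concept, not SynthEyes's internal implementation; the Phase class and its fields are hypothetical:

```python
# Illustrative sketch (not SynthEyes internals): a phase re-runs only when
# its own parameters have changed since the cached result was produced.
class Phase:
    def __init__(self, name, params):
        self.name = name
        self.params = dict(params)   # hypothetical parameter set
        self._solved_with = None     # snapshot of params at the last run
        self.runs = 0

    def is_dirty(self):
        # Changes made in the main interface are invisible here, which is
        # why such phases must be cleared (Unsolve) by hand.
        return self._solved_with != self.params

    def run(self):
        if self.is_dirty():
            self.runs += 1           # stand-in for the expensive solve
            self._solved_with = dict(self.params)

phase = Phase("Flatten FOV", {"start": 130, "end": 155})
phase.run()
phase.run()                          # reuses the cached solve, no re-run
phase.params["end"] = 160            # a parameter change marks it dirty
phase.run()                          # now it solves again
```

This also shows why a change made outside the phase's own parameters leaves the cache stale: `is_dirty` only compares what the phase itself knows about.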
You can force all the phases to be cleared with right-click/Unsolve, or clear a
single phase by right-clicking it and selecting Unsolve this.
It is always safest to do a full solve via Shift-G (Run All) once you have set up the
appropriate Clean Start elements for what you are doing.
You can review the output of a series of phases, after you have closed the popup
solver dialog, using the Solver Output View.
If you have a complex series of phases and are not sure what the solve pipeline
contains at some intermediate phase, you can right-click it and select Retrieve from this,
which reads out the phase's solve data and loads the data into scene. Note that the
readout data consists primarily of path and position data, not the details of the various
mode controls. See the Phase Reference for details of what is read out.
If you want to "hide" some trackers from a given Solve phase, so that they do not
influence it, make them Zero-Weighted Trackers (ZWTs) using the Tracker Modes
phase.
Zero-Weighted Trackers
Suppose you had a visual feature you were so unsure of, you didn’t want it to
affect the camera (or object) path and field of view at all. But you wanted to track it
anyway, and see what you got. You might have a whole bunch of leaves on a tree, say,
and hope to get a rough cloud for it.
You could take your tracker, and try bringing its Weight in the solution down to
zero. But that would fail, because the weight has a lower limit of 0.05. As the weight
drops and the tracker has less and less effect, there are some undesirable side effects,
so SynthEyes prevents it.
Instead, you can click the zero-weighted-tracker (ZWT) button on the tracker
panel, which will (internally) set the weight to zero. The undesirable side effects will be
side-stepped, and new capabilities emerge.
With a weight of zero, ZWTs do not affect the solution (camera or object path and
field of view, and normal tracker locations), and can not be solved until after an initial
camera and/or object solution has been obtained. ZWTs are solved to produce their 3-D
position at the completion of normal solving.
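The effect of a zero weight can be illustrated with a toy weighted least-squares objective. The function names and the exact form of the objective are invented for illustration, though the 0.05 lower limit is the one described above:

```python
# Toy model: in a weighted least-squares solve, each tracker's residuals
# enter the objective scaled by its weight. A ZWT has weight 0, so it
# contributes nothing; normal trackers are clamped to the lower limit.
MIN_WEIGHT = 0.05  # the lower limit mentioned in the text

def effective_weight(weight, is_zwt):
    if is_zwt:
        return 0.0
    return max(weight, MIN_WEIGHT)

def objective(residuals_by_tracker):
    # residuals_by_tracker: list of (weight, is_zwt, [per-frame errors])
    total = 0.0
    for weight, is_zwt, errors in residuals_by_tracker:
        w = effective_weight(weight, is_zwt)
        total += w * sum(e * e for e in errors)
    return total

# The ZWT's large errors do not move the objective at all:
data = [(1.0, False, [0.5, 0.4]), (1.0, True, [9.0, 9.0])]
```

Because the ZWT term is exactly zero, the solver's result is identical with or without it, which is what makes the automatic re-solving of ZWTs safe.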
The main solver changes some trackers to Far when they have very little
perspective (and consequently can even show up behind the camera). You can change
those trackers to ZWTs instead, to get rough (distant) 3-D information for them, without
affecting the main solve.
Tip: There is a separate preference color for ZWTs. Though it is normally the
same color as other trackers, you can change it if you want ZWTs to stand
out automatically.
Importantly, ZWTs are automatically re-solved whenever you change their 2-D
tracking, the camera (or object) path, or the field of view. This is possible because the
ZWT solution will not affect the overall solution.
It makes possible a new post-solving workflow.
After solving, if you want to add an additional tracker, create it and change it
to a ZWT (use the W keyboard accelerator if you like). Keep the Quad view open. Begin
tracking. Watch as the 3-D point leaps into existence, wanders around as you track, and
hopefully converges to a stable location. As you track, you can watch the per-frame
and overall error numbers at the bottom of the tracker panel.
ZWTs benefit from the Track/Search from solved tracker search prediction
method. Once the ZWT is solved, that 3-D location is used to predict the tracker's 2-D
search location, so the search region size can be kept quite small (as long as the overall
solve and tracker location are good), reducing the chances of hopping to an unrelated
similar-looking feature.
Bring up the graph editor, and take a quick look at the error curve for any
spikes—since the position is already calculated, the error is valid.
Once you’ve completed tracking, change the tracker back to normal mode.
Repeat for additional new trackers as needed. You can use the same approach
modifying existing trackers, temporarily shifting them to ZWTs and back.
When you do your next Refine cycle using the Solver panel, the trackers will be
solved normally, and influence the solution in the usual way. But, you were able to use
the ZWT capability to help do the tracking better and quicker.
Juicy Details
ZWTs don’t have to be on a camera only; they can be attached to a moving
object as well. You can also configure Far ZWTs. Offset ZWT trackers cannot take
advantage of search-from-solved mode.
The ZWT calculation respects the coordinate system constraints: you can
constrain Z=0 (with On XY Plane) to force a ZWT onto the floor in Z-up mode. A ZWT
can be partially linked to another tracker on the same camera or object. It doesn’t make
sense to link to a tracker on a different object, since such links are always in all 3 axes,
overriding the ZWT calculation. Distance constraints are ignored by ZWT processing.
(Again, ZWTs do not affect the solution and any constraints on them will not affect the
coordinate system.)
If you have a long shot and a lot of ZWTs and must recalculate them often (say
by interactively editing the camera path), it is conceivable that the ZWT recalculation
might bog down the interactive update rate. You can temporarily disable ZWT
recalculation by turning off the Track/ZWT auto-calculation menu item. They will all be
recalculated when you turn it back on.
Stereo ZWTs
A stereo pair of trackers can be made into a stereo ZWT by changing either
tracker to a ZWT—the other member will be changed automatically. A stereo ZWT pair
can produce a position from as little as a single frame in each camera. After a solve has
been produced, for true moving-camera shots, you can track in one camera, make the
pair into a ZWT, and then have an excellent idea where the tracker will be in the other
camera, potentially simplifying tracking.
Important: you must not have already hit Clear All Blips on the Feature
panel or Clean Up Trackers dialog, since it is the blips that are analyzed to
produce additional trackers. If you have, click Blips all frames to restore them.
The Add Many trackers dialog, below, provides a wide range of controls to allow
the best and most useful trackers to be created. You can run the dialog repeatedly to
address different issues.
You can also use the Coalesce Nearby Trackers dialog to join multiple disjointed
tracks together: the sum is greater than the parts!
When the dialog is launched from the Track menu, it may spend several seconds
busily calculating all the trackers that could be added, and it saves that list in a
temporary store. The number of prospective trackers is listed as the Available number,
1880 above. By adjusting the controls on the dialog, you control which of these
prospective trackers are added to the scene when you push the Add button. At most,
the Desired number of trackers will be added.
Basic Tracker Requirements
The prospective trackers must meet several basic requirements, as described in
the requirements section of the panel. These include a minimum length (measured in
frames) and a minimum amplitude, plus limits on the average and peak errors.
The amplitude is a value between zero and one, describing the change in
brightness between the tracker center and background. Larger values will require more
pronounced trackers.
The error numbers measure the distance between the 2-D tracker position and
the computed 3-D position of the tracker, mapped back into the image. The average
error limits the noisiness and jitter in the trackers, while the peak error limits the largest
“glitch” error. Notice that these controls do not change any trackers; they only determine
which of the prospective trackers are selected for addition.
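The two error measures can be sketched as follows. This is a generic illustration of average and peak reprojection error, not SynthEyes code:

```python
# Sketch: the distance between a tracker's 2-D position and its solved 3-D
# position reprojected into the image, summarized per tracker as an
# average (noise/jitter) and a peak ("glitch") value.
import math

def error_stats(tracked_2d, reprojected_2d):
    dists = [math.dist(t, r) for t, r in zip(tracked_2d, reprojected_2d)]
    return sum(dists) / len(dists), max(dists)   # (average, peak)

# Hypothetical two-frame tracker: one small error, one 2-pixel glitch.
avg, peak = error_stats([(100, 50), (101, 51)],
                        [(100.5, 50.0), (101.0, 53.0)])
```

A tracker with low average error can still be rejected by the peak limit, which is exactly the "largest glitch" behavior described above.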
You can ask for only spot trackers, only corner trackers, or allow trackers of
either type to be created.
To a Range of Frames
To add trackers in a specific range of frames in the shot, set up that region in the
Frame-Range Controls: from a starting frame to an ending frame. Then, set a minimum
overlap: how many frames each prospective tracker must be valid, within this range of
frames. For example, if you have only a limited number of trackers between frames 130
and 155, you would set up those two as the limits, and set the minimum overlap to 25 at
most, perhaps 20.
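The overlap test reduces to simple range arithmetic. The function names are illustrative:

```python
# Sketch of the overlap rule: count how many frames of a prospective
# tracker's lifetime fall within the requested range; the tracker
# qualifies only if that count meets the minimum overlap.
def overlap_frames(track_start, track_end, range_start, range_end):
    return max(0, min(track_end, range_end) - max(track_start, range_start) + 1)

def qualifies(track_start, track_end, range_start, range_end, min_overlap):
    return overlap_frames(track_start, track_end, range_start, range_end) >= min_overlap

# A tracker valid on frames 120-145 overlaps the 130-155 range on 16 frames,
# so it would fail a minimum overlap of 20 but pass one of 15:
frames = overlap_frames(120, 145, 130, 155)
```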
To a Specific Area
To add trackers in a particular area of the scene, open the camera view, and go
to a frame that makes the region needing trackers clearly visible. Lasso the region of
interest—it does not matter if there are any trackers there already or not. The lassoed
region will be saved. (Fine point: the frame number is also saved, so it does not matter if
you change frames afterwards.)
Open the Add Many trackers dialog, and turn on the Only within last Lasso
checkbox. The only trackers selected will be those where the 3-D point falls within the
(2-D) lassoed area, on the frame at which the lasso occurred.
With this option, SynthEyes specifically ensures that the new trackers are evenly
distributed within the lassoed area (in 2-D). This can make it worthwhile to lasso a large
area rather than giving no lasso area at all, if the new trackers would otherwise be
clustered too close together.
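The containment part of this amounts to a standard point-in-polygon check. This ray-casting sketch is illustrative, not SynthEyes's actual code:

```python
# Sketch of the lasso test: a candidate point (a tracker's 3-D position
# projected into 2-D on the lasso frame) is accepted if it falls inside
# the lassoed polygon. Standard ray-casting point-in-polygon test.
def inside_lasso(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):               # edge crosses height y
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside            # crossing to the right
    return inside

# A simple square "lasso" for illustration:
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
```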
Zero-Weighted vs Regular Trackers
Once all the criteria have been evaluated, and a suitable set of trackers
determined, hitting Add will add them into the scene. There are several options to
control this (which should be configured before hitting Add).
The most important decision to make is whether you want a ZWT or a regular
tracker. Intrinsically, the Add many trackers dialog produces ZWTs, since it has already
computed the XYZ coordinates as part of its sanity-checking process. By using ZWTs,
you can add many more trackers without appreciably affecting the re-solve time if you
later need to change the shot. So using ZWTs is computationally very efficient, and is
an easy way to go if you need more trackers to build a mesh from.
On the other hand, if you need additional trackers to improve the quality of the
track, by adding more trackers in an under-populated region of 3-space or range of
frames, then adding ZWTs will not help, since they do not affect the overall camera
solution. Instead, check the Regular checkbox, and ordinary trackers will be created,
still pre-solved with their XYZ coordinates. You can solve again using Refine mode, and
the camera path will be updated taking into account the new trackers.
If you add hundreds or thousands of regular trackers, the solve time will increase
substantially. Designed for the best camera tracking, SynthEyes is most efficient for
long shots, not for thousands of trackers. To see why this choice was made, note that
even if all the added trackers are of equal quality, the solution accuracy increases much
more slowly than the rate at which trackers are added. You can use some of the trackers for the solve,
and keep the rest as ZWTs.
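The diminishing returns can be sketched under a simple statistical model: if tracker noise is independent and of equal quality, averaging reduces the noise-driven error roughly as 1/sqrt(N). This is a generic model, not a SynthEyes formula:

```python
# Sketch of diminishing returns: under an independent-noise model, the
# noise-driven solve error scales roughly as 1/sqrt(N), so quadrupling
# the tracker count only halves the error.
import math

def relative_error(n_trackers, baseline=100):
    # error relative to a baseline tracker count, under the 1/sqrt(N) model
    return math.sqrt(baseline / n_trackers)

for n in (100, 400, 1600):
    print(n, relative_error(n))   # 1.0, then 0.5, then 0.25
```

Going from 100 to 1600 regular trackers multiplies the solve work many times over while only cutting the modeled error to a quarter, which is why keeping the extras as ZWTs is usually the better trade.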
Other New Tracker Properties
Normally, you will want the trackers to be selected after they are added, as that
makes it easy to change them, see which were added, etc. If you do not want this, you
can turn off the Selected checkbox.
Finally, you can specify a display color for the trackers being added by selecting
it with the color swatch, and turning on the Set color checkbox. That will help you
identify the newly-added trackers, and you can re-select them all again later using the
Select same color item on the Edit menu.
It may take several seconds to add the trackers, depending on the number and
length of trackers. Afterwards, you are free to add additional trackers to address other
issues if you like—the ones already added will not be duplicated.
When you open the dialog, you can adjust the controls (described shortly) and
then click the Examine button.
SynthEyes will evaluate the trackers and select those to be coalesced, so that
you can see them in the viewports. The text field, reading “(click Examine)” in the
screen capture above, will display the number of trackers to be eliminated and
coalesced into other trackers.
At this point, you have several main possibilities:
1. click Coalesce to perform the operation and close the panel;
2. adjust the controls further, and Examine again;
3. close the dialog box with the close box (X) at top right (circle at top left on Mac), then
examine the to-be-coalesced trackers in more detail in the viewports; or
4. Cancel the dialog, restoring the previous tracker selection set.
If you are unsure of the best control settings to use, option 3 will let you examine
the trackers to be coalesced carefully, zooming into the viewports. You can then open
the Coalesce Nearby Trackers dialog again, and either adjust the parameters further, or
simply click Coalesce if the settings are satisfactory.
What Does Nearby Mean?
The Distance, Sharpness, and Consistency controls all factor into the decision
whether two trackers are close enough to coalesce. It is a fairly complex decision,
taking into account both 2-D and 3-D locations, and is not particularly amenable to
human second-guessing. The controls are pretty straightforward, though.
As an aside, it might seem that all that is needed is to measure the 3-D distance
between the computed tracker points, and coalesce them if the points are within a
certain distance measured in 3-D (not in pixels). However, this simplistic approach
would perform remarkably poorly, because the depth uncertainty of a tracker is often
much larger than the uncertainty in its horizontal image-plane position. If the distance
were large enough to coalesce the desired trackers, it would be large enough to
incorrectly coalesce other trackers.
Instead, SynthEyes uses a more sophisticated and compute-intensive approach
which is evaluated over all the active frames of the trackers.
The first and most important parameter is the Distance, measured in horizontal
pixels. It is the maximum distance between two trackers that can be considered for
coalescing. If they are further apart than this on every frame, they will definitely not be
coalesced. If they are closer some of the time, they may be coalesced; the closer they
are, the more likely that becomes.
The second most important parameter, the Consistency, controls how much of
the time the trackers must be sufficiently close, compared to their overall lifetime. So
very roughly, at 0.7 the trackers must be within the given distance on 70% of the
frames. If a track is already geometrically accurate, the consistency can be made
higher, but if the solution is marginal, the consistency can be reduced to permit matches
even if the two trackers slide past one another.
The third parameter, Sharpness, controls the extent to which the exact distance
between trackers affects the result, versus the fact that they are within the required
Distance at all. If Sharpness is zero, the exact distance will not matter at all, while at a
sharpness of one (the maximum), if the trackers are at almost the maximum distance,
they might as well be past it.
Sharpness can be used to trade off some computer time versus quality of result:
a small distance and low sharpness will give a faster but less precise result. Settings
with a larger distance and larger sharpness will take longer to run but produce a more
carefully-thought-out result—though the two sets of results may be very similar most of
the time, because the larger sharpness will make the larger distance nearly equivalent
to the smaller distance and low sharpness.
If you are handling a shot with a lot of jitter in the trackers, due to large film grain
or severe compression artifacts, you should decrease the sharpness, because those
small differences in distance are in fact meaningless.
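While the actual decision logic is, as noted, more sophisticated than any simple rule, a toy model shows how the three controls could interact. Everything here (the 0.5 nearness threshold, the linear falloff) is invented for illustration:

```python
# Toy model (not the actual SynthEyes algorithm) of the three controls.
# Per frame where both trackers are active, nearness is scored; Sharpness
# controls how much the exact distance matters, and Consistency is the
# fraction of frames that must score as "near".
def could_coalesce(dists, distance, consistency, sharpness):
    # dists: per-frame 2-D separations, in horizontal pixels
    scores = []
    for d in dists:
        if d > distance:
            scores.append(0.0)        # beyond Distance: never near
        else:
            # sharpness = 0: any separation under the limit counts fully;
            # sharpness = 1: the score falls off linearly toward the limit
            scores.append(1.0 - sharpness * (d / distance))
    near_fraction = sum(1 for s in scores if s > 0.5) / len(scores)
    return near_fraction >= consistency

# Hypothetical separations: near on 3 of 4 frames.
seps = [1.0, 1.0, 1.0, 10.0]
```

With sharpness zero, being anywhere under the Distance counts fully; with sharpness one, a separation near the limit scores almost nothing, mirroring the behavior described above.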
What Trackers should be Coalesced?
Three checkboxes on the coalesce panel control what types of trackers are
eligible to be coalesced.
First, you can request that Only selected trackers be coalesced. This allows
you to lasso-select a region where coalescing is required. (Note: if you need only 2
particular trackers coalesced for sure, use Track/Combine Trackers instead.)
Second, frequently you will only want to coalesce auto-trackers, or trackers
created by the Add Many Trackers dialog. By default, supervised non-zero-weighted
trackers are not eligible to be coalesced. This prevents your carefully-constructed
supervised trackers from inadvertently being changed. However, you can turn on the
Include supervised non-ZWT trackers checkbox to make them eligible.
SynthEyes will also generally coalesce only trackers that are not simultaneously
active: for example, it might coalesce two trackers that are valid on frames 0-10 and 15-
25, respectively, but not two trackers that are valid on frames 0-10 and 5-15. Two
autotrackers that are simultaneously active cannot be tracking the same feature. The
exception to this is if they are a large autotracker and a small one, or an autotracker and
a supervised tracker. To combine overlapping trackers, turn off the Only with non-
overlapping frame ranges checkbox.
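The default eligibility rule reduces to a standard interval-overlap test. A sketch, with illustrative function names:

```python
# Sketch of the non-overlap rule: by default, two trackers are eligible
# to coalesce only if their valid frame ranges do not overlap.
def ranges_overlap(a_start, a_end, b_start, b_end):
    return a_start <= b_end and b_start <= a_end

print(ranges_overlap(0, 10, 15, 25))  # False: eligible to coalesce
print(ranges_overlap(0, 10, 5, 15))   # True: skipped unless the checkbox is off
```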
A satisfactory approach might be to coalesce once with the checkbox on, as is
the default, then open the dialog again, turn the checkbox off, and Examine the results
to see if something worth coalescing turns up.
An Overall Strategy
Although we have talked as if SynthEyes only combines two trackers, in fact
SynthEyes considers all the trackers simultaneously, and can merge three or more
trackers together into a single result in one pass.
Locking to a Camera
The perspective window can be used to overlay inserted objects over the live
imagery, much like the camera view. Select Lock to Current Camera to lock or
release, or use the ‘L’ key. Note that when the view is locked to the camera, you can not
move or rotate the camera, or adjust the field of view.
You can have the perspective view continue to be locked to the current camera
(Active Tracker Host) even if that changes, by turning on Stay Locked to Host. Or you
can select a specific camera or object to lock to, and it will stay locked to it, even if you
unlock and then relock the view.
There are two related preferences in the Perspective area of preferences. The
Stay Locked to Host preference is used when new perspective windows are created;
this preference is ON by default. The Always Lock to Active preference causes the
Lock button to always lock to the active tracker host, instead of a previously-stored
tracker host.
Tip: To emulate the locking behavior of SynthEyes before 1608, turn the Stay
Locked to Host preference OFF, and turn the Always Lock to Active
preference ON.
The projection screen mode listed in the preferences and adjust script has four
settings: Never, Automatic, Locked and Always. It controls when the built-in
projection screen mesh is engaged, to handle distortion, green-screening, and alpha
channels. If it is disabled, the incoming image will be displayed as-is, with no distortion
correction or keying. The Automatic system turns it on when the perspective view is
locked to this camera and distortion, green-screening, or an alpha channel is present,
and leaves it off the rest of the time, using simpler and faster image draw code to
improve performance. The Locked setting generates the screen whenever the
perspective view is locked to this camera.
Finally, the Always setting generates the screen all the time, even when the
perspective view isn't locked to the camera. This lets you see the screen when you have
unlocked from the camera, or from other perspective view windows, which can be
handy for lineup with matted-out green screens, for example.
The projection screen (and the physical script-based screen described below)
operate by texture-mapping a distorted grid object. The accuracy of this process
depends on the number of segments (and thus vertices and faces) in the grid. For more
extreme distortion, you should increase the resolution of the grid.
You can also use the Projection Screen Creator script to create actual mesh
geometry within the scene that is texture-mapped with the shot imagery. As a physical
mesh, it can be exported directly to other applications. Because the mesh is created
once, when you run the script, it cannot be used with zoom shots, and you must
manually re-generate it when the lens FOV or distortion changes.
Freeze Frame
Each perspective view can be independently disconnected from the main user
interface time slider, “frozen” on a particular frame. This can be useful to view a shot
from two different frames simultaneously (to link trackers from different parts of the
same shot), or to view two shots with different lengths simultaneously and with some
independent control. That is especially helpful for multi-shot tracking, where the
reference shot is only a few frames long.
See View/Freeze on this frame on the right-click menu. Using the normal A, s,
d, F, period, and comma accelerator keys within a frozen perspective window will
change the frozen frame, not the main user interface time. To update the main user
interface time from within the perspective window, use the left and right arrows (or move
outside the perspective window!). To re-set the frozen time to the current time, hit
View/Freeze on this frame again. To unfreeze, use View/Unfreeze.
The Scrub mouse mode will scrub the frozen frame number, or the normal frame
number, depending on whether or not the frame is frozen.
Stereo Display
SynthEyes can display anaglyph stereo images: on the right-click menu, select
View/Stereo Display. If it is a stereo shot, both images will be displayed (if image
display is enabled). If it is not a stereo shot, SynthEyes will artificially create two views for the
stereo display. See the settings on View/Perspective View Settings. They include the
inter-ocular distance and vergence distance, plus the type of glasses you have. (You
should look for glasses that strongly reject the unwanted colors; some paper glasses
are best!) The normal anaglyph views still show the colors in the original images, though
if you select the Luma versions, you will get a gray-scale version of the scene for each
eye; some people prefer this for assessing depth.
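Assuming the common Rec. 601 luma weights (the exact weights SynthEyes uses are not documented here), the per-eye gray conversion looks like this:

```python
# Sketch of a Luma option: each eye's image is reduced to gray before
# being placed in the anaglyph channels. Rec. 601 weights are assumed.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luma(255, 255, 255)))  # 255: white stays white
print(round(luma(255, 0, 0)))      # 76: pure red is fairly dark
```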
Navigation
SynthEyes offers four basic ways to reposition the camera within the 3-D
environment: pan (truck), look, orbit, and dolly in/out. You can select which motion you
want in several different ways: by using the control and ALT keys, by clicking one of the
navigation mode buttons on the Mouse toolbar overlay first, or by dragging within one of
the navigation buttons themselves. You can navigate at any time by using the middle mouse button, or
with the left mouse button when the navigation mode is selected.
Navigating with the Mouse Toolbar
The simplest form of navigation uses the Pan, Look, Orbit, and Dolly buttons on
the Navigate toolbar, which you can re-open if needed by right-clicking and selecting
Toolbars/Mouse.
You can click on one of the four buttons (Pan, Look, Orbit, and Dolly) to select
that mode. It will turn light blue, as the active mouse mode. Left-clicking and dragging in
the viewport will then create the corresponding camera motion.
Alternatively, you can drag within the button itself to create the corresponding
motion without permanently changing the mode. This is good for quickly doing
something other than the current mode, without changing the mode. For example, a
quick Look while you have been panning.
You can also navigate using the middle mouse button. When the mouse mode is
set to something OTHER than Navigate, one of the Pan, Look, Orbit, and Dolly buttons
will still remain lit in a tan color. That color indicates the action the middle mouse button
will create, when depressed in the rest of the window.
Similarly, you can middle-click one of the buttons to change the action of the
middle mouse button, or middle-drag within the button to temporarily create a motion
without changing the mode.
Navigating Maya-Style
You can navigate similarly to Maya by turning on Maya-style navigation in the
Perspective section of the preferences. There's not really a drawback!
When enabled, ALT-left-drag will Orbit, ALT-middle-drag will Pan, and ALT-right-drag
will Dolly. These operations are available continuously within the perspective
view, independent of the mouse mode.
On macOS: Maya uses the Mac's Opt key for navigation. Normally,
SynthEyes would use the command key as the equivalent, but to maximize
compatibility, you can use either Opt or Command to initiate Maya-style
navigation.
On Linux: On Linux, you may have to adjust your modifier key so that ALT is
not taken by your window manager, as you do for Maya itself.
Once you start a drag with one button, it continues in that mode. If you click
either of the other two mouse buttons, the move will be canceled, which is often helpful;
keep reading!
Navigating with Control and ALT
The idea behind using Control and ALT (command on the Mac) is that you can
change the motion at any time, as you are doing it, by pressing or releasing a key, while
keeping the mouse moving. That makes it a lot faster than switching back and forth
between various tools and then moving the camera.
When neither control nor ALT are pressed, dragging will pan the display (truck
the camera sideways or up and down). Control-dragging will cause the camera to look
around in different directions, without translating. Control-ALT-dragging will dolly the
camera forwards or backwards. ALT-dragging will cause the camera to orbit.
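The modifier mapping above can be summarized in a small sketch (command replaces ALT on the Mac). The function is purely illustrative:

```python
# Sketch of the modifier-key mapping described above; the motion can
# change mid-drag as keys are pressed or released.
def drag_motion(ctrl, alt):
    if ctrl and alt:
        return "dolly"
    if ctrl:
        return "look"
    if alt:
        return "orbit"
    return "pan"

print(drag_motion(ctrl=False, alt=False))  # pan
print(drag_motion(ctrl=True, alt=True))    # dolly
```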
Navigating When Locked
Often, the perspective view will be locked to the camera using the Lock button,
so that the view tracks with the solved or seed camera. If you use a 3D navigation mode
when the camera is locked, the lock will be removed (generally causing the background
shot imagery to disappear), and the move will proceed from the initial camera position. If
you cancel the navigation movement by right-clicking during it, the view will return to the
initial position, and the lock and background shot imagery will be restored.
This behavior makes it easy to take a "quick peek" in the vicinity of the solved
camera, to better view the geometry of nearby features.
If you use the Perspective Projection Screen Adjustment script to change the
projection screen mode to Always, you will be able to still see the shot imagery, at the
screen distance you have selected.
More Navigation Details
The center of the orbit will be selected from the first (lowest-numbered)
applicable item in the following list:
1. the net center of all selected trackers, or in place mode, about their seed point,
2. the net center of all selected meshes,
3. the center of any selected object or light,
4. the net center of all selected vertices in the current edit mesh (more on that later), or
5. around a point in space directly ahead of the camera.
HINT: if you are trying to orbit around selected vertices and it is not working
as expected, do a control-D/command-D to clear the current (tracker and
mesh) selection.
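The priority list can be sketched as a first-non-empty selection. The function and its arguments are illustrative, not the actual implementation:

```python
# Sketch of the orbit-center priority list: the first non-empty candidate
# set wins, which is why clearing the selection (control-D) changes which
# center is used.
def orbit_center(selected_trackers, selected_meshes, selected_object,
                 selected_vertices, camera_ahead_point):
    for candidates in (selected_trackers, selected_meshes,
                       [selected_object] if selected_object else [],
                       selected_vertices):
        if candidates:
            n = len(candidates)      # net (average) center of the candidates
            return tuple(sum(p[i] for p in candidates) / n for i in range(3))
    return camera_ahead_point        # fallback: point ahead of the camera

# With only vertices selected, their net center is the orbit center:
center = orbit_center([], [], None, [(0, 0, 0), (2, 2, 2)], (9, 9, 9))
```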
You can see which motion will happen by looking at the top-left of the
perspective window at the text display.
The shift key will create a finer motion of any of these types.
The mouse’s scroll wheel will dolly in and out if the perspective window is not
locked to the camera, or if it is, it will change the current time. If locked, shift-scrolling
will zoom the time bar.
If you hold down the Z or the apostrophe/double-quote key when left-clicking, the
mouse mode will temporarily change to navigation mode; the mode will switch back
when the mouse button is released. You can also switch to navigate mode using the ‘N’
key. So it is always quick and easy to navigate without having to move the
mouse to select a different mode.
Zooming and Panning in 2-D
The navigation modes described above correspond to physically moving or
rotating the camera. Sometimes we wish to zoom into the image itself, as we can do
with the camera view—which does not correspond to any camera motion at all.
You can zoom into the image by selecting the Zoom mouse mode, then dragging
in the viewport (or in the button itself). Or, use right-drag to zoom as in the camera view.
The Pan 2D mode then allows you to pan within the zoomed-in image.
A 2-D pan and a 3-D pan can be difficult to distinguish, which can be confusing.
Whenever the image has been zoomed into, SynthEyes displays a wide pinkish border
around the entire image, and the normal Pan, Look, Orbit, and Dolly buttons are
disabled. Clicking any of those buttons will reset the zoom so that the entire image is
displayed. You can also reset the zoom by right-clicking the zoom button.
Other Mouse Toolbar Buttons
The mouse toolbar also contains a Scrub mouse mode button and a Lock button.
The Scrub mode allows you to move rapidly through the entire shot, independent
of the main toolbar, even if Freeze Frame is engaged in this window.
The lock button provides a quick way to turn the lock (background image) on and
off. Note that you may see a slight rotation as you turn it off, as an unlocked perspective
view always keeps the camera rotation angle at zero.
Using the Save as Defaults item on the Toolbars submenu will store the current
toolbar positions and status for use as defaults for following SynthEyes runs in the per-
user pertool14.xml file.
Knowledgeable users can hand-edit the pertool14.xml file (or camtool14.xml for
the camera view) to alter or add new toolbars if they follow the format carefully.
Tip: Use Mesh De-duplication features (not in Intro) to cut disk usage with
large meshes and multiple SNI file versions.
If you later change the source mesh file, you can reload it inside the SynthEyes
scene, without having to reestablish its other settings, by selecting it and clicking the
Reload button on the 3-D Panel.
Similarly, if you need to replace a mesh with another version, for example a
lower- or higher-resolution version, use File/Import/Replace Mesh. This will change the
vertex, face, normal, vertex color, and texture coordinate information, but not other
information.
Note: Use File/Import Mesh of an FBX file for files containing a single mesh.
Files with multiple meshes will result in an error.
Note: SynthEyes may not be able to read files written by FBX versions later
than its own. The current FBX version is 2018.1.1. You can also confirm that
from the version dropdown of the Filmbox exporter.
Use File/Import/Filmbox Scene Import to read files with multiple meshes. It allows
multiple meshes with positioning, cameras, lights, rigs, vertex caches, etc to be read. A
mesh imported by Filmbox Scene Import cannot be reloaded, however.
Similarly, you can use File/Import/3DS Scene, Alembic ABC Scene, COLLADA
DAE Scene, DXF Scene, OBJ Scene, and Zipped FBX Scene importers to read multiple
meshes and potentially other objects from the corresponding file types. These are
variants of the Filmbox importer.
Note that you can import DXF and OBJ files via both the built-in Import Mesh or
the corresponding DXF Scene or OBJ Scene importers. The built-in versions are faster
and support easily reloading the mesh from a file. The Scene importers support reading
multiple meshes from a file and breaking them into component meshes, for different
coloration or texturing. The DXF Scene importer additionally supports binary DXF files
and perhaps other newer DXF file features.
The Alembic ABC Scene importer variant is currently intended mainly for vertex
cache files, according to Autodesk. As such, it doesn't necessarily import cameras or
other features. Hopefully Autodesk will rectify that over time.
The Zipped FBX Scene importer variant allows you to open a ZIP file containing an FBX file plus other associated files, according to Autodesk. This may save a step or two and some storage; more subtly, Autodesk may have enhanced the reader to better capture related texture files from within the ZIP file, bypassing filename issues, but that isn't clear.
Vertex Caches
You can set up a vertex cache file for a mesh, which allows it to reproduce
previously-stored vertex animation, ie from animation programs such as 3dsmax, Maya,
Lightwave, Blender, etc, or from SynthEyes itself if you've done Geometric Hierarchy
Tracking and written a cache file.
A vertex cache will be set up automatically when you import filmbox scenes that
contain vertex caches.
To configure the vertex cache, see the button on the 3D panel. Currently, .ABC
(Alembic), .MCX (Maya), .MDD (Lightwave), and .PC2 (3ds max) file types are
supported.
Note: Don't delete a vertex cache file that is used by a SynthEyes scene—the
files are large and are not embedded in the .sni file. Only the name is
recorded. The file must continue to exist for SynthEyes to be able to display
the animated mesh.
Note: Don't change the name of a mesh that is being driven by an Alembic
(.abc) vertex cache. Several vertex caches can be stored in a single Alembic
file (unlike other file types), so the mesh name is used to select the proper
vertex cache.
After setting or viewing the vertex cache file setting, you'll be asked which
coordinate system to use: the typical Y-Up or the less-common Z-Up (maybe for PC2
files from 3dsmax). You can change this whenever you like. If you want to avoid the
popup, change the preference in the MESHES section of the preferences.
If you have suppressed the question and need to change the setting anyway, you
can either turn it back on, or feed Synthia a command such as these, where mymesh is
the name of your mesh:
make mymesh's vertex cache axis mode z-up.
make mymesh's vertex cache axis mode y-up.
To remove an existing vertex cache file setting from a mesh, click the Vtx Cache button, then Cancel the file selector and answer Yes to the question about removing the setting (or tell Synthia: make mymesh's vertex cache file the file "".).
What the Heck?: This is an advanced topic. If the discussion so far doesn't make sense to you, or if this isn't a problem you have, you can just skim this section. (It will help guide you on whether or not to delete vertex normals when importing.)
SynthEyes stores meshes using a scheme where each vertex position has at
most a single normal and/or a single texture coordinate, which means that vertices
may be duplicated/renumbered when meshes are imported. For example, along the
edges of a cube, vertices may have two different normals, one for each side. Such
vertices are duplicated into two copies, with the same position, each with a single
normal. This results in rapid processing and display for typical use in SynthEyes.
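As a rough illustration of this storage scheme, here is a minimal Python sketch (invented names, not SynthEyes internals) that duplicates any vertex carrying more than one normal, so that each resulting vertex has exactly one:

```python
def split_vertices(positions, faces, corner_normals):
    """Duplicate vertices so that each (position, normal) pair becomes a
    distinct vertex, mirroring the one-normal-per-vertex storage scheme.
    faces is a list of index triples; corner_normals gives one normal
    per face corner. Purely illustrative; not SynthEyes code."""
    new_positions, new_normals, new_faces = [], [], []
    index_of = {}  # (original vertex index, normal) -> new vertex index
    for face, normals in zip(faces, corner_normals):
        new_face = []
        for v, n in zip(face, normals):
            key = (v, n)
            if key not in index_of:  # first time this position/normal pair appears
                index_of[key] = len(new_positions)
                new_positions.append(positions[v])
                new_normals.append(n)
            new_face.append(index_of[key])
        new_faces.append(new_face)
    return new_positions, new_faces, new_normals

# Two triangles sharing an edge, but with different face normals (as along
# a cube edge): the two shared positions are duplicated, so 4 vertices become 6.
positions = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [[0, 1, 2], [0, 1, 3]]
corner_normals = [[(0, 0, 1)] * 3, [(0, -1, 0)] * 3]
p2, f2, n2 = split_vertices(positions, faces, corner_normals)
print(len(positions), "->", len(p2))  # 4 -> 6
```

This is why the vertex count can grow after import when per-face normals are supplied.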
If your object is facet-shaded (instead of smooth-shaded) upon export from your application, processing in SynthEyes can be inefficient, as SynthEyes honors the not-so-significant face normals. The result can be much larger meshes (3-4x more vertices), and it prevents smooth vertex normals from being generated. If at all possible, supply per-vertex normals, or none at all (SynthEyes builds face normals automatically if needed).
To address this, when an OBJ is imported, SynthEyes may ask you if you want to
strip the normals from the object. If there will be a large change in the number of
vertices resulting from the import, this typically indicates that per-face normals were
supplied, so deleting them is a good idea. You can then generate smooth per-vertex
normals, or allow SynthEyes to efficiently generate its own per-face normals. (The
Filmbox importer simply ignores per-facet normals automatically.)
If your workflow calls for passing meshes into SynthEyes then back out and into
your application, and you then expect to have the same exact vertex numbering, you
will have to be selective in your work, as follows.
Some SynthEyes importers and exporters use additional data from the time of
import to be able to recreate the original vertex numbering upon export.
Important: At present, the .obj reader and "Mesh as A/W .obj" and Filmbox
importers and exporters are the ones that use original-vertex information. In
other cases, such as Alembic and Blender, it is not necessary.
Lidar Meshes
SynthEyes imports Lidar data from set surveys as described above, from XYZ
files. The XYZ files can contain not only XYZ data for millions of vertices, but RGB color
information for each vertex as well. The vertices will be displayed in the viewports as a
point cloud.
Tip: Use File/Import/Mesh to read XYZ lidar files. The Intro version can store
meshes containing at most 2 million points.
Typical GIS-type lidar files use Z-up coordinate data, so SynthEyes assumes that
by default. To process files with other axis settings, select the appropriate setting on the
lidar import settings panel (see the preference in the Meshes section). Note that the
setting for the lidar file is independent of the setting of your scene! Data will be
converted if necessary.
Lidar files will automatically be re-centered if they are too lopsided, ie the
average offset is much larger than the bounding box of the data. Lopsided files cause
numeric inaccuracy, and can be hard to find in the viewports and hard to work with in
general. If you are reading lidar in sections, or otherwise need to disable this, turn off
the Re-center lopsided Lidar checkbox (there is a preference for this in the Meshes
section).
Tip: If you read a lidar file with re-centering turned off and don't see it, press control-shift-HOME and look in the viewports for a tiny dot that is your scene. You can then zoom in on it. In the perspective view, press control-F.
Note: The Intro version limits lidar data to at most 2 million points.
Use decimation to control the number of points input and displayed. Large
numbers of points will severely impact performance. The default 10 million points will
generally maintain reasonable performance, while still giving a very dense point cloud.
Although Lidar data does not contain facets, you can use the Edit Mesh tools in
the perspective view to create facets from the vertices, if your task requires it. Use well-
positioned clipping planes to control the Lasso Vertices tool while triangulating lidar data
(any SynthEyes-generated plane will do—Lasso Vertices won't look behind it/them).
Opacity
It can be helpful to make one or more meshes partially transparent; this can be
achieved using the opacity spinner on the 3-D Panel, which ranges from an opacity of
zero (fully transparent and invisible) to an opacity of one (fully opaque; the default).
The opacity setting affects the mesh in the perspective view and in the camera
view only if the OpenGL camera view is used. See View/OpenGL Camera View and
the preference Start with OpenGL camera view. The OpenGL camera view is the
default on macOS and Linux.
Note that while in an ideal world the opacity setting would simulate turning a solid mesh into an attenuating solid, in reality opacity is simulated using some fast but approximate techniques.
Tip: You can lock a mesh's position, rotation, scale, and parenting against
inadvertent changes by clicking its lock icon in the Hierarchy View, or entering
lock selected object or similar in Synthia.
Tip: See Using a 3-D Model for directions on how to use Place mode for
object tracking.
For this latter workflow, set up trackers on the image and import the reference model.
(If you are setting up a mesh for object tracking, eg a head track, leave it at the origin
without rotating it, as coordinates will be generated in the world coordinate system, so
that the origin becomes the object location.) Go to the Camera and Perspective
viewport configuration. Set the perspective view to Place mode. Select each tracker in
the camera view, then place its seed point on the reference mesh in the perspective
view. You can reposition and reorient the perspective view however you like to make
this easy—it does not have to be, and should not be, locked to the source imagery to do
this. This work should go quite quickly.
If you need to place trackers (or meshes) at the vertices of the mesh, not on the surface, hold the control key down as you use Place mode, and the position will snap onto the vertices. The vertices of lidar meshes are also snappable.
Grid Operations
The perspective window’s grid is used for object creation and mesh editing. It can
be aligned with any of the walls of the set: floor, back, ceiling, etc. A move-grid mode
translates the grid, while maintaining the same orientation, to give you a grid 1 meter
above the floor, say.
A shared custom grid position can be matched to the location of several vertices
or trackers using the right-click|Grid|To Facets/Verts/Trackers menu item. If 3 or more
trackers (or vertices) are selected, the grid is moved into the plane defined by the best-
fitting approximation to them (ignoring outliers). If two are selected, the grid is rotated to
align the side-to-side axis along the two. If one is selected, the grid slides to put that
tracker at the origin.
To take advantage of this, you should first select 3 or more vertices or trackers,
and align once to set the overall plane. Then select only one of them, and align again to
set it to be the origin. Select two of them, and align again to spin the plane so the X axis
is parallel to the two. (You can do it in the order 3+,1,2 or 3+, 2, 1 ... the 3+ should
always be first!)
You can easily create an object on the plane defined by any 3 (or more) trackers
by selecting them, aligning the grid to the trackers, then creating the object, which will
be on the grid.
You can toggle the display of the grid using the Grid/Show Grid menu item, or the
‘G’ key.
Shadows
The perspective window generates shadows to help show tracking quality and
preview how rendered shots will ultimately appear.
The 3-D panel includes control boxes for Cast Shadows and Catch Shadows.
Most objects (except for the Plane) will cast shadows by default when they are created.
If there are no shadow-catching objects, shadows will be cast onto the ground
plane. This may be more or less useful, depending on your ground plane; if the ground
is very irregular or non-existent, this will be confusing.
If there are shadow-catching objects defined, shadows will be cast from shadow-
casting objects onto the shadow-catching objects. This can preview complex effects
such as a shadow cast onto a rough terrain.
NOTE: shadow catching relies on some features of OpenGL that may not be
present in older video cards (or "integrated graphics"). In this case, only
ground-plane shadows will be produced.
Shadows may be disabled from the main View menu, and the shadow black level
may be set from the Preferences color settings. The shadow enable status is “sticky”
from one run to the next, so that if you do not usually use it, you will not have to turn it
off each time you start SynthEyes.
You can control the resolution of the shadow map from the preferences; you may
want to increase it if you are working with image resolutions over HD or notice jaggies
along the edges of the shadows (ie if a mesh's shadows extend over the entire size of
the image).
Tip: You can create a texture map for the shadows on a mesh, so that you
can export and render the shadow independently in other apps. See the
Shadow Maker button on the Texture Control Panel.
Note that as with most OpenGL fast-shadow algorithms, there can be shadow
artifacts in some cases. Final shadowing should be generated in your 3-D rendering
application.
Note that the camera viewport does not display shadows by design.
Edit Mesh
The perspective window allows meshes to be constructed and edited, which is
discussed in Building Meshes from Tracker Positions. One mesh can be selected as an
edit mesh at any time―select a mesh, then right-click Set Edit Mesh or hit the ‘M’ key. A
mesh's status as the edit mesh is independent of its status as a selected mesh: it can
be the edit mesh while not being selected. To not have any edit mesh, right-click Clear
Edit Mesh or Set Edit Mesh with nothing selected.
When a mesh is an edit mesh, every vertex will be displayed in the perspective
view, as long as it is not used only on back faces (changing the view will make them
visible). Selected vertices are always shown, even if they are on backfaces.
Tip: the selected vertices of the edit mesh are shown in the 3D viewports, to help you understand where you are working. To take advantage of this, make sure the edit mesh is not also selected: if it is selected, it will be red, and the selected vertices will be too. Select something else, or click on empty space to select nothing at all.
The Lasso Vertices tool respects the "solidity" of the mesh: it will not select vertices that are "through" the object from the current viewpoint.
Lasso Vertices also respects any SynthEyes planes ("clipping planes") that you
may have positioned in the scene. By placing one or more clipping planes in the middle
of an unfaceted object (tracker positions or lidar data), you can select only the vertices
on the front or back, say, and then triangulate them in a controlled way.
To work from a standard mesh to a customized version, the Lightsaber deletion
tool may be helpful; it rapidly hacks off unwanted portions of meshes.
See the descriptions of the modes in the perspective view reference.
Preview Movie
After you solve and add a few test objects, you can render a test movie or
sequence with anti-aliasing and motion blur. The movie can then play (in Quicktime
Player or Windows Media Player etc) at the full rate regardless of length. Movies can be
produced with these file extensions, depending on the platform: ASF(Win), AVI(Win),
BMP, Cineon, DPX, JPEG, MP4(Win), MOV (Quicktime), OpenEXR, PNG, SGI, Targa,
TIFF, or WMV(Win). Only image sequences are available on Linux.
You can create a movie from the vantage point of the active camera or moving object, if the perspective view's Lock button is engaged (ie so that the background image is shown, and the view animates with the camera). Or, if the camera is not locked, the movie will be made from the unchanging current viewpoint of the perspective view.
To make the movie, either click RENDER on the main mouse toolbar in the
perspective view, or right-click in the perspective window to bring up the menu and
select the Preview Movie item. Either way, you'll bring up a dialog allowing the output
file name, compression settings, and various display control settings to be set. The
sequence/movie will be produced at the effective resolution of the images, as affected
by the image preprocessor.
To change the movie's image resolution, use the Output Resampling options on
Output tab of the Image Preprocessor, or the DownRez option on the Rez tab. You
should also set the desired Interpolation method on the Rez tab, and some Blur (on
Filtering) if the resolution is being reduced. This will reduce artifacts due to the image
size change. (Resolution changes are done through the image preprocessor, rather than directly in the render on the Preview Movie panel, to maximize image quality.)
Usually you should pick a resolution that results in square pixels, so that the preview will not be stretched or squished horizontally. There is a checkbox for that on the Preview Movie panel; when selected it will convert 1440x1080 to 1920x1080, or 720x480 source to 640x480, for example. However, as described previously, you will get higher-quality output by adjusting the resolution using the Output Resampling options of the Image Preprocessor instead.
Important: Video codecs often have specific requirements for the size of the
image being written, such as a width that is a multiple of 16 or can't be larger
than a certain size. These restrictions cannot be determined by SynthEyes; if
a preview movie fails to be written, you might double-check the image size
and consider a more standard alternative.
SynthEyes can anti-alias and motion blur meshes you have inserted in the scene using SynthEyes. Both are controlled by the Anti-aliasing and motion blur dropdown; motion blur settings start with "MoBlur," while the other settings do not apply motion blur.
Shutter angle and phase for perspective view previews are set on the Perspective
portion of the preferences panel. The motion blur settings include anti-aliasing, for
frames where the camera/object/mesh is stationary.
Tip: You can't generate preview movies of full-range 360 VR footage. Use
Save Sequence from the image preprocessor instead; it can render meshes.
Older Tip: the Windows 32-bit H.264 codec requires that the Key every N
frames checkbox be off, and the limit data-rate to 90 kb/sec checkbox be off:
otherwise there will be only one frame.
Note that image sequences written from the Preview Movie are always 8
bit/channel. An alpha channel can be written for OpenEXR EXR, PNG, SGI, Targa TGA,
and TIFF files (turn OFF right-click/View/Show Image, so it doesn't cover the entire
image). If you are trying to save a sequence as part of a lens distortion compensation
workflow, you should be using Save Sequence on the Output tab of the Image
Preprocessor instead.
Technical Controls
The Scene Settings dialog contains many numeric settings for the perspective
view, such as near and far camera planes, tracker and camera icon sizes, ambient
illumination, etc. You can access the dialog either from the main Edit menu, or from the
perspective window’s right-click menu.
By default, these items are sized proportionate to the current “world size” on the
solver control panel. Before you go nuts changing the perspective window settings,
consider whether it really means that you need to adjust your world size instead!
Motion – 2-D
Nuke
Particle Illusion
PC2. Point cache format
Photoscan
Poser
Realsoft 3D
Shake (several 2-D/2.5-D plus Maya for 3-D scenes)
SoftImage XSI, via a dotXSI file
Toxik (earlier versions, not updated for 2009)
trueSpace
Vue 5 Infinite
Vue 6 Infinite and Later
VIZ (via Filmbox FBX or 3ds Max scene)
SynthEyes offers a scripting language, SIZZLE™, that makes it easy to modify
the exported files, or even add your own export type. See the separate SIZZLE User
Manual for more information. New export types are being added all the time, check the
export list in SynthEyes and the support site for the latest packages or beta versions of
forthcoming exporters.
General Procedures
You should already have saved the scene as a SynthEyes file before exporting.
Select the appropriate export from the list in the File/Exports area. SynthEyes keeps a
list of the last 3 exporters used on the top level of the File menu as well.
Hint: SynthEyes has many exports. To simplify the list, click Script/System
Script Folder, create a new folder “Unused” in it, and move all the scripts for
applications you do not use into that folder. You will have to repeat this
process when you later install new builds, however.
There is also an export-again option, which repeats the last export performed by this particular scene file, with the most-recently-used export options, without bringing up the export-options dialog again, to save time for repeated exports.
When you export, SynthEyes uses the file name, with the appropriate file
extension, as the initial file name. By default, the exported file will be placed in a default
export folder (as set using the preferences dialog).
In most cases, you can either open the exported file directly, or if it is a script, run
the script from your animation package. For your convenience, SynthEyes puts the
exported file name onto the clipboard, where you can paste it (via control-V or
command-V) into the open-file dialog of your application, if you want. (You can disable
this from the preferences panel if you want.)
Note that the detailed capabilities of each exporter can vary somewhat. Some
scripts offer popup export-control dialogs when they start, or small internal settings at
the beginning of each Sizzle script. For example, 3ds max does not offer a way to set the units from a script before version 6, and the render settings are different, so there are slightly different versions for 3dsmax 5 and 6+. Settings in the Maya script control the
re-mapping of the file name to make it more suitable for Maya on Linux machines. If you
edit the scripts, using a text editor such as Windows’ Notepad, you may want to write
down any changes as they must be re-applied to subsequent upgraded versions.
Be aware that not all packages support all frame rates. Sometimes a package may interpret a rate such as 23.98 as 24 fps, causing mismatches in timing later in the shot. Or one package may produce 29.96 vs 29.97 in another. Use image sequences and frame counts rather than AVIs, QTs, frame times, or drop-frame time codes wherever possible.
The Coordinate System control panel offers an Exportable checkbox that can be
set for each tracker. By default, all trackers will be exported, but in some cases,
especially for compositors, it may be more convenient to export only a few of the
trackers. In this case, select the trackers you wish to export, hit control-I to invert the
selection, then turn off the checkbox. Note that particular export scripts can choose to
ignore this checkbox.
Multiple Exports
You can configure SynthEyes to produce several exports simultaneously from a
single operation using File/Export Multiple and File/Configure Multi-export. The multi-
export configuration dialog lets you create a list of exports to be run. The multiple-export
list will be used automatically, if it is present, for Batch exports.
The multi-export system uses the name of your sni file to create the names of all your exports. Each export is placed in the first available of: the folder of the last single export you performed; the folder specified by the default export folder preference; or the folder of the scene file, if neither of the above is available.
When the multi-export is run, the parameter dialog for the individual exports will
not be shown, as with Export Again. Accordingly, you should pre-set the options you
want for each export format, by running each export manually the first time and
configuring their options.
If you are producing multiple different exports that have the same file extension
(for example .py for Python, which is used for a number of applications), the exported
files will overwrite each other, so you will lose all but the last without further direction.
You have two methods to keep them separate: file suffixes and subfolder prefixes. Both
options are set by double-clicking an exporter in the multi-export configuration dialog,
bringing up a dialog that allows you to set the subfolder or suffix.
TIP: Use the subfolder method when exporters produce multiple supporting
files, such as meshes, texture maps or vertex caches.
Image Sequences
Different software packages have different conventions and requirements
regarding the numbering of image sequences: whether they start at 0 or 1, whether
there are leading zeroes in the image number, and whether they handle sequences that
start at other numbers flexibly.
For example, if you have a shot that originally had frames img1.tif-img456.tif, but
you are using only images img100.tif-img150.tif of it, SynthEyes will normally consider it
as a 51 frame shot, starting with frame 0 (img100.tif) or, with First frame is 1
preference on, as frame 1 at img100.tif.
Other software sometimes requires that their frame numbers match the file
number, so img100.tif must always be frame 100, no matter what frame# they normally
start at.
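The numbering behavior described above can be sketched as follows (a hypothetical helper for illustration; SynthEyes does this internally):

```python
def displayed_frame(file_number, first_file_number, first_frame_is_one=False):
    """Map a file's number (eg the 100 in img100.tif) to the frame number
    SynthEyes displays: counting starts at 0, or at 1 when the
    "First frame is 1" preference is on."""
    return file_number - first_file_number + (1 if first_frame_is_one else 0)

print(displayed_frame(100, 100))        # 0: img100.tif is frame 0 by default
print(displayed_frame(100, 100, True))  # 1: with "First frame is 1" on
print(displayed_frame(150, 100))        # 50: last frame of the 51-frame shot
```

Packages that insist the frame number match the file number would instead use the file number directly, which is where timing shifts can arise.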
See also the section on Advanced Frame Numbering for opening shots.
By being aware of these differences, you will be able to recognize when your
particular situation requires an adjustment to the settings—typically when there is a shift
between the camera path animation and the imagery.
this extreme tilt back, there will be a large discontinuity, and a similarly blurred-out
image.
Problem #3 arises when the camera looks exactly straight up or down (or in some other specific direction depending on the exact form of angles required by the application). In this case, the pan and roll angles line up exactly, and you can pick a value for either arbitrarily as long as you correct the other (ie pan=30, roll=50 and pan=95, roll=-15 both refer to the same orientation). Motion blur problems occur with the transition to the immediately preceding or following frame, each of which has different specific required pan and tilt angles.
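To make problem #3 concrete, this small sketch (assuming a generic Y-up pan/tilt/roll composition order; SynthEyes's exact rotation convention may differ) shows how two different pan/roll pairs can produce the identical orientation matrix once the camera points straight down:

```python
import math

def rx(a):  # rotation about X (tilt), degrees
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):  # rotation about Y (pan), degrees
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):  # rotation about Z (roll), degrees
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def camera_rot(pan, tilt, roll):
    # Assumed composition: pan about Y, then tilt about X, then roll about Z.
    return matmul(ry(pan), matmul(rx(tilt), rz(roll)))

a = camera_rot(30, -90, 50)   # looking straight down, pan+roll = 80
b = camera_rot(95, -90, -15)  # different pan and roll, but pan+roll = 80
same = all(abs(a[i][j] - b[i][j]) < 1e-9 for i in range(3) for j in range(3))
print(same)  # True: at the pole, only the combination of pan and roll is determined
```

An exporter (or application) re-extracting angles from such a matrix can therefore jump between equivalent solutions from frame to frame, which is exactly what produces the motion blur glitches.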
Some SynthEyes exporters include special code to correct or minimize these situations (#3 is correctible only in a few specific cases). Exporters currently including this code: After Effects, Blender, Filmbox, Lightwave (enable/disable checkbox), Maya. It is not needed for 3dsmax.
When motion blur issues arise for exporters without this code (submit a feature
request as needed), or when no correction is feasible, you have a few options:
- try a different exporter, such as Filmbox,
- switch to a different rotation angle ordering (either directly in the exporter settings or on the preferences panel),
- change the overall scene coordinate system setup to a configuration that avoids that specific problem,
- in SynthEyes, slightly tweak the orientation of the camera/object on a gimbal-lock frame to move it away from the degenerate orientation,
- hand-edit the rotation angle curves in the downstream application.
Switching to a different coordinate system setup is not unusual for moving-object
setups: a moving object may often be set up to be exactly aligned to coordinate axes,
and thus subject to a precise problem #2 gimbal lock. Switching to a differently oriented
object null addresses this easily.
On a related note, you should always use linear interpolation between keys in
your 3-D application (SynthEyes sets this as the default where possible). If you use a
spline-based interpolation, small amounts of jitter on the frames can amplify to much
larger excursions between the frames, and excessive motion blur: this should be
avoided.
need to import the tracker data file (produced by the correct SynthEyes exporter) into a
particular existing tracker in your compositing package.
There is also a 2-D exporter that exports all tracker paths into a single file, with a
variety of options to change frame numbers and u/v coordinates. A similar importer can
read the same file format back in. Consequently, you can use the pair to achieve a
variety of effects within SynthEyes, including transferring trackers from SynthEyes file to
SynthEyes file, as described in the section on Merging Files and Tracks. This format
can also be imported by Fusion.
The exporter asks about your After Effects version and customizes its export to
match. Be sure to select the correct version. There are settings for versions from
CS3 onwards (to match distortion in After Effects, CS4 or later is required). Note that
depending on the details of Adobe releases, there may not be a setting for your
particular version; if not, use the setting for the last version preceding yours.
Using the Export Action control, you can have SynthEyes start After Effects
automatically, as soon as you run the export. Or, you can save the file to run later, or on
a different machine. If you save the file to run it in After Effects later, do so by selecting
File/Scripts/Run Script File... in After Effects, then select the file you exported from
SynthEyes. The exporter controls are described below.
Important: If the Run now options aren't starting After Effects, you probably
don't have the right AE version selected on the exporter. If it is installed to a
different location, see the controls listing below.
Demo version: Sorry, the demo version is not able to start external
applications, including After Effects. The file will be written and can be run
using After Effects' File/Scripts/Run Script.
The Add Cards operation within SynthEyes's perspective view is very powerful; you may find it easiest to set up your 3D layers within SynthEyes's richer 3D environment, then export the cards to After Effects, where they become layers with the texturing already applied.
After Effects does not support meshes directly, as they are somewhat out of its
scope; they are not exported (except for Cards). However, the adventuresome might
export meshes from SynthEyes, import them in Photoshop, then load the Photoshop
"image" containing the mesh into After Effects. Judicious use of layers is a simpler
approach!
Note for CS5 or earlier: the After Effects Javascript exporter exports
SynthEyes lights, but it is not possible for a script to set the correct type of the
light before CS5.5. So you will need to set their type to Point or Directional
accordingly after completing the import. The default is Spot, so it is never
right! Adobe fixed this in CS5.5.
Important: Be sure that you configure the AE Version setting for your AE
version, or the version of whoever will be running the export, or errors will
occur when the script is run, or SynthEyes may not be able to start
AfterEffects automatically.
Export Action: The script can be written to create a new project, to add (a new comp) to an existing After Effects project, to add (new layers) to the selected or first existing After Effects comp in the current project, or to update the previous export. See the Updating After Effects Projects section. There are two versions of each type, Run Now and Don't Run. Both produce the javascript; the Run Now version causes After Effects to run it immediately.
For AE Version: Select the version of After Effects you are using; more precisely, the version you want to export for.
Timeline setup: Three choices controlling how the shot is positioned in time on the After Effects timeline. One or the other option may be more useful for your needs. Active part: the active portion of the shot is positioned at the start of the composition, with the same displayed frame numbers as in SynthEyes. Entire shot: the composition will be the entire duration of the original shot, with the After Effects "work area" controls set up to indicate the tracked portion of the shot. Match frames: the shot is repositioned so that each frame number has the position dictated by its file name (number). The composition may be the same size (first frame# is 0), one frame longer (first frame# is 1), or substantially longer (a larger frame#). If you anticipate editorial changes, we recommend opening the maximum shot size in SynthEyes initially, tracking the desired portion in SynthEyes, and exporting using the "Entire Shot" option, not Match Frames.
Display in frame#s: When checked, the After Effects user interface will operate using frame numbers, matching SynthEyes. Unchecked, it operates in seconds, making comparison to SynthEyes more difficult.
Extra Scaling: Multiply all the position numbers in SynthEyes by this value, making the scene larger in After Effects (which measures in pixels).
Force CS centering: Offsets the SynthEyes coordinates so that the SynthEyes origin falls at the center of the After Effects workspace. Although this makes the coordinate values less convenient to work with, it reduces the amount of zooming and middle-panning in the After Effects Top/Left/Front/etc views. It must and will be turned off automatically for 360VR shots.
Include all cameras: Export each camera in the scene in its own composition. If unchecked, only the active camera is exported.
Layer for moving objects: When checked, a 3D layer will be produced for each moving object, and trackers and meshes will be parented to it, as appropriate. When unchecked, no layer will be produced, and those trackers and meshes will be animated frame-by-frame.
Camera Shutter Controls the composition's shutter angle and phase, separated by a
comma. The current After Effects default of 180 corresponds to
film. Keeping the shutter opening centered (-90, or -half the angle)
prevents relative shifts between the imagery and animation.
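For example, keeping the opening centered works out as follows (a sketch; the 180/-90 pairing matches the After Effects default mentioned above):

```python
def centered_shutter(angle_degrees):
    # Phase is minus half the shutter angle, keeping the opening
    # centered: a 180-degree shutter pairs with a phase of -90.
    return angle_degrees, -angle_degrees / 2.0
```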
Include Trackers No/All as regular/Planar as such. On No, trackers are not exported
at all. Otherwise, a null layer will be produced for each non-planar
tracker. When Planar as such is selected, planar trackers will be
exported as planar trackers, producing a matching placeholder
layer ready to receive imagery or effects. When All as regular is
selected, both regular and planar trackers appear as nulls. Note
that generally you should use the Exportable checkbox on the
Coordinate System panel to control which trackers are exported, to
reduce clutter in After Effects. AE is not particularly efficient at
handling hundreds or thousands of trackers.
Relative Tracker Size Controls the size of the tracker nulls in After Effects; adjust
and repeat as necessary. This is a percentage of the layer size,
which is then multiplied by the Extra Scaling setting.
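How the two settings combine can be sketched as follows (an assumption based on the description above, not SynthEyes's actual code):

```python
def tracker_null_size(percent_of_layer, layer_size_px, extra_scaling=1.0):
    # A percentage of the layer size, then multiplied by Extra Scaling
    return percent_of_layer / 100.0 * layer_size_px * extra_scaling
```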
Sort back to front on Choice: None/Start Frame/Middle Frame/End Frame. When
set to one of the frames, the tracker layers are stacked in order,
from near on top to far at the bottom, as measured on the selected
frame. When None is selected, they are stacked in their creation
order, ie typically by tracker number.
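The stacking rule can be sketched in Python (tracker names and depth values here are hypothetical; SynthEyes measures the actual camera distance on the selected frame):

```python
def stack_order(trackers, depths=None):
    """Return tracker layers top-of-stack first. `depths` maps each
    tracker to its camera distance on the chosen frame; pass None
    for the 'None' setting, which keeps creation order."""
    if depths is None:
        return list(trackers)
    # near trackers go on top, far trackers at the bottom
    return sorted(trackers, key=lambda t: depths[t])
```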
Layer Type Choice: Null/Solid/Single Place Holder/Many Place Holders/Single
Comp/Many Comps/Editable Shape. Controls what type of layer is
created in After Effects. The default null is suitable for parenting
other things to and as a reference for positioning. You might use a
solid to cover tracking marks in the scene; also the solid option
allows the (non-animated, non-illumination-based) tracker's color in
SynthEyes to be used by After Effects. Or you can have one or
many AE Place Holders created, which can then be filled with
images and textures. A single one is shared by all trackers, or
one can be created for each tracker (you may want to use the
Exportable control to create them for specific trackers only).
Similarly, you can have one comp created that is shared by all the
trackers, or have a comp created for each tracker; either way you
can then fill in the comp. Or, create (animatable) Editable Shapes
for each tracker; these shapes have a fill color which will be
animated if the tracker illumination curve has been set. This latter
capability might be used to hide markers. NOTE: use the
Exportable checkbox on the 3D panel to control which trackers are
exported. Asking AE to create 100s of layers, placeholders, or
comps will bog it down considerably!
Comp/Placeholder Size. Text: Comma-separated horizontal and vertical resolution.
When a value is present, it is used as the resolution of each tracker
layer, for example 100,50 to create layers 100 pixels by 50 pixels. If
the field is blank, then the shot's resolution is used instead.
Working with Cinema 4D is an indirect way to get 3D rendering in After Effects,
which otherwise lacks this capability.
Here's what the process looks like:
1. You run the After Effects exporter, and turn on the C4D integration feature
(which is OFF by default).
2. The exporter produces a jsx javascript as usual, plus a Filmbox fbx for
C4D. (It also creates an empty .c4d file as a temporary placeholder.)
3. Optionally, SynthEyes can launch Cinema 4D (see below).
4. You File/Open the .fbx file in Cinema 4D, then File/Save as a Cinema 4D
.c4d file.
5. Meanwhile, After Effects has run the javascript, resulting in a CINEWARE
layer that looks at the C4D file.
6. Meshes now appear in your After Effects comp; you can work on compositing
them into the scene.
CINEWARE has many options and capabilities to aid your efforts, though it
requires that you know Cinema 4D. You can access the CINEWARE help from the
button on the CINEWARE effects panel in After Effects.
You can use the Don't overwrite .c4d checkbox to avoid having SynthEyes
overwrite the existing C4D file with an empty file, so that you can carefully re-import the
modified FBX file. The drawback of using that option is that you then can't see that the
C4D file hasn't been updated (it's very apparent when the file is cleared), so if you leave
this option on, it is easy to accidentally use mismatching JSX and C4D files.
SynthEyes can start Cinema 4D and have it open the .fbx file, which can save
some time. You can specify which kind of Cinema 4D should be opened: the full
Cinema 4D product, the Cinema 4D Demo version, or the Cinema 4D Lite version
included with After Effects.
The full Cinema 4D product is the best choice, of course, if you have purchased
it: it allows full operation without any extra limitations.
The Demo version can be started as well, but the Demo version has limitations
that may make it unusable. Most importantly, in order to be able to save .c4d files,
which is necessary for them to be seen by After Effects, the C4D Demo version must be
activated, which starts a countdown clock before the C4D Demo becomes non-
operational.
The C4D Lite version included with After Effects can only be started by After
Effects, not by SynthEyes. This is a licensing restriction imposed by MAXON. On
Windows, there is a small exception: an already-open C4D can be redirected to open a
new file. Normally, you will need to start C4D using the File/New/Cinema 4D menu item
from within After Effects, create a dummy file, then open the Filmbox file. For additional
exports, you can continue to File/Close and File/Open within Cinema 4D.
Updating After Effects Projects
The "New project" action in the After Effects export builds a new project file from
scratch, ie it does a File/New within After Effects. You will be prompted to save any
existing project by After Effects.
The "Add to project" action assumes you already have some non-SynthEyes
project, to which you would like to add the SynthEyes export as a new comp. This
option will do that.
If you select "Add to comp," then the layers for the export will be added to an
existing comp in the existing project: either the currently-active comp, or to the first
comp if none are active. Usually you will want to "Add to project," however.
If the "Update existing export" action is selected, the script will modify the
existing, already-open, After Effects project that you have already exported this exact
same scene to. This is helpful when you already have worked on an After Effects
scene, adding additional layers, etc, but have updated some of the tracking in
SynthEyes and don't want to lose what you added in After Effects.
The update script will update:
camera and object paths
lens distortion information
tracker positions and orientations
plane positions and orientations
light positions
Any new trackers, planes, or lights will be added (though new trackers will always
appear at the top of the layer stackup). The script will not add new shots or moving
objects. Trackers, planes, or lights that have been deleted from SynthEyes will not be
deleted from the After Effects scene.
When you update an existing scene, elements you have added within After
Effects will not be updated in any fashion, since SynthEyes does not know about them.
The SynthEyes Distortion effect is a much better choice than trying to use an AE
distortion filter: it offers an exact match to the computed distortion, offers better image-
quality filtering options, and it integrates cleanly into proper lens-distortion workflows.
Lens distortion for After Effects CC/CS6
For CC/CS6, we supply a native After Effects Effect (see below for installation).
Be sure to set the After Effects version to CS6 or CC as appropriate when exporting.
You can use the After Effects exporter and distortion effect in the following
situations:
1) After doing a solve, using the Calculate Distortion control to calculate a
distortion value on the main Lens panel. This method doesn't control the delivered
imagery particularly well.
2) After solving and then clicking the Lens Workflow button on the Summary
Panel, using the 1-pass deliver-undistorted method.
3) After solving and then clicking the Lens Workflow button on the Summary
Panel, using the 2-pass deliver-distorted method.
In each case, when you do the export, a matching SynthEyes Distortion effect
will be created, so that the raw footage will be undistorted within After Effects, and you
can add effects in its 3D environment that line up with the undistorted imagery.
To help out in case #3 above, the export script produces an additional comp that
redistorts the 3D environment back to match the original distorted imagery. This comp is
named ReCamera01 for the usual Camera01; the 3D environment is Camera01_3D.
(For cases #1 and #2 just ignore the extra comp).
To use this 2-pass setup, add your 3D elements inside the Camera01_3D
environment. Turn off visibility for the original footage layer within Camera01_3D, so
that its output is only the added elements. The added elements get redistorted then
overlaid onto the original distorted imagery within ReCamera01.
Image Mapper
There's an additional lens-distortion component available for those who may wish
to use it. It implements the map-based distortion approach within After Effects. It can be
used to apply lens presets not supported by the main After Effects distortion plugin, for
example.
To use it, apply the SynthEyes Image Mapper effect to the layer containing the
image map, ie the still image or image sequence. Then on its Effect Controls, set the
Image to map to be the appropriate layer.
The composition should have the same resolution as the map: this will be the
resolution of the mapped image. You will also have to set the project settings to 16- or
32-bit per channel depth.
macOS:
1. Create a new "SynthEyes" folder within the (existing) folder
/Applications/Adobe After Effects CC 20yy/Plug-ins/Effects
2. Copy SyAFXCube.plugin, SyAFXLens.plugin, SyAFXMapper.plugin, and
SyAFXVRS.plugin into the new SynthEyes folder
3. Restart After Effects if it is already running
Lens distortion for CS4/CS5
The exporter can use the Pixel Bender effects for CS4 and CS5. The exporter
configures these only for case #1 above: distortion on the solver.
You can manually configure the SynthEyes Pixel Bender effects to implement
one-pass and two-pass lens distortion workflows. A third "Advanced Distortion" node
handles undistortion and redistortion that matches the complex high-order distortion and
off-centering generated by the SynthEyes lens preset generator. See the tutorials on
our SynthEyesHQ youtube channel for details on usage.
We have provided a matching "redistort" filter which allows you to redistort
footage to match, and be re-composited with, the original footage. You can read more
about Lens Workflow in this manual.
The SynthEyes/After Effects Pixel Bender filters perform only bi-linear
interpolation, which softens the images slightly compared to the Lanczos and Mitchell-
Netravali options available in SynthEyes. So you may still find it desirable to use
SynthEyes for the better filtering and its additional preprocessing options.
Installation for CS4/CS5
To use the Pixel Benders, you must install three files, seadvdistort.pbk,
seundistort.pbk, and seredistort.pbk, for After Effects to use. These files are in the
SynthEyes plugins folder (click Script/System Script Folder then go up a level to see
plugins) and get installed to the following folders:
Windows C:\Users\username\Documents\Adobe\Pixel Bender\
Mac OS ~/Documents/Adobe/Pixel Bender/
Note that you may have to create the folder. The folder is specific to your
userid; other users of the same machine, if it is shared, will have to do the same thing.
After Effects will notice the filters when it next starts up.
Distortion effects are not applied to the 2D footage; this export is for simpler 2D
tracking. If you do need to send 2D tracking data based on a distorted shot, write the
undistorted version to disk first, either from SynthEyes directly or by using the 3D export
to After Effects first.
Important note: After Effects uses "pixels" as its unit within the 3-D
environment, not inches or feet (ie it does not convert units at all). The default
SynthEyes coordinate system setup keeps the world less than 100 units
across. As AE interprets that as pixels, your 3-D scene can appear to be quite
small in AE, as is the case in the tutorial on the web site, which is why we had
to scale down the object we created and inserted in AE. It is much easier to
adjust the coordinate system in SynthEyes first, so the 3-D world is bigger, for
example by changing the coordinates of the second point used in coordinate
system setup from 20,0,0 to be 1000,0,0, say. You can also use the rescaling
parameter of the exporter to produce larger scenes, especially if you are
exporting to several different packages.
You'll find it listed as AfterEffects 2-D from Planars on the Export menu, or as the
AE Corner Pin button on the Planar Tracking script bar.
The 2-D planar exporter has the same ability to Run Now as the 3-D exporter,
and supports New, Add, and Update options. You'll find all the controls described above
in the 3-D exporter writeup.
But remember, this exporter exports only planar trackers (2- or 3-D).
Tip: The SynthEyes export puts the contents of the file onto the clipboard
by itself. As long as you proceed immediately to After Effects, you can
paste directly into the layer without having to open the exported file. This
can save quite a bit of time. You only need to open the file if you come
back to it later.
Alembic 1.5+
Alembic (http://www.alembic.io) is a multi-application interchange format along
the general lines of Filmbox (FBX). It is a bit more tailored than Filmbox, however,
specifically emphasizing meshes, especially large animated meshes such as those from
GeoH tracking. While Alembic contains the mesh data and hierarchy, it does not talk
about how the data was created, so that it might be changed by a downstream
application. Example uses include sending final animations to renderers, or to send
simulation results into an animation program. See their website for more information on
Alembic philosophy.
NOTE: This exporter uses the "Ogawa" back end of Alembic 1.5 and later,
which is faster and produces smaller files than earlier Alembic libraries. But...
earlier Alembic libraries are unable to read Ogawa files, so you will need
versions of your applications that also use Alembic 1.5 or later. Alembic 1.5
was released in early 2014.
Alembic does not include match-move shot image setup or mesh texturing or
even such simple capabilities as assigning different colors to various trackers or
meshes, so it is a bit less friendly than other formats (such as FBX). Alembic's
strengths are speed and ability to handle large files including animated geometry within
the file itself, rather than as separate point caches. Some users may prefer it based on
prior experiences with FBX; it remains to be seen whether Alembic importers work
better on average than Filmbox importers. SynthEyes offers it as an option; you can
determine if it meets your particular workflow needs.
Alembic Controls
Export which shots. Selector. Export only the active shot (ie the shot of the current
Active Tracker Host), all shots with the same frame rate as the active shot, all
shots with the same frame rate and start/end frame values (typically, a stereo
pair), or all shots. When a shot is exported, the camera and all moving objects
and their children are exported, as well as meshes parented to them, and
unparented meshes.
Timeline setup. Selector. Controls where the active portion of the shot winds up in the
downstream application, roughly equivalent to asking what part of the timeline is
exported (see also Frame Offset below). With "Active part", the first active frame
of the shot becomes the first frame on the timeline downstream. The shot
imagery should be set up so that the active part of the shot starts at the
beginning of the timeline. With "Entire shot", if the first active frame is #15, it will
be #15 in the downstream application: the entire original shot should be applied
starting at the beginning of the downstream timeline. With "Match Frames" (for
image sequences), the animation is output so that for example image
shot27_img0200.png will appear at frame 200 in the downstream application,
even if for example SynthEyes was only given the shot starting at frame 100, and
only frame 150 onwards is being used.
Frame Offset. Numeric. Additional number of frames of shifting applied to the output
timing, subsequent to the Timeline setup processing above. For example, a value
of 100 would move the animation 100 frames later, -100 would move the
animation 100 frames earlier.
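Combining Timeline setup and Frame Offset, the mapping to the downstream timeline can be sketched as follows (illustrative only; function and argument names are assumptions, not SynthEyes's code):

```python
def output_frame(mode, frame, first_active=0, frame_offset=0):
    """Map a SynthEyes frame number to the downstream timeline.
    For 'match_frames', `frame` is the file-name frame number."""
    if mode == "active_part":
        out = frame - first_active   # first active frame becomes frame 0
    elif mode in ("entire_shot", "match_frames"):
        out = frame                  # numbering carried through unchanged
    else:
        raise ValueError(mode)
    # Frame Offset is applied after the Timeline setup processing
    return out + frame_offset
```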
Output Axis Setting. Selector. Select Y-Up output, which is typical for Alembic, or
optionally Z-Up output, which may be useful for 3dsmax.
Additional scene scaling. Numeric. Use this to scale the entire scene up or down by
that factor, for example to make it 10x larger or perhaps apply some units
conversion (Alembic doesn't have any idea about "units"). A definitive scene
scaling should always be set up in SynthEyes; this is only for adjusting it to
accommodate some downstream application.
Create tracker chisels. Checkbox. When checked, small inverted pyramids (chisels)
will be created at each tracker location.
Chisel size override. Numeric. When zero, the size of the chisel is determined
automatically based on the SynthEyes world size value. Set a specific value
here, if desired. It is a "SynthEyes" value: the Additional scene scaling will be
applied to this value.
Create screen. Checkbox. When checked, a projection screen will be created to be the
eventual recipient of the shot imagery as a backdrop during effects development.
Screen's Distortion Mode. Selector. The screen can be built in such a way that it
compensates for the distortion calculated during the solve, ie on the Lens panel.
It can also be built to re-apply that distortion, if needed.
Screen distance override. Numeric. The projection screen is normally (ie when this
value is zero) located a distance from the camera determined from the Solver
panel's world size value. Set a specific value here, if desired. It is a "SynthEyes"
value: the Additional scene scaling will be applied to this value.
Screen vertical grids. Numeric. The number of grids in the vertical direction for the
generated screen. The horizontal number is determined by multiplying this value
times the image aspect ratio. A lower number is suitable if there is no distortion,
but an increasingly larger value should be used if image distortion is present.
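The horizontal count then follows from the vertical count and the image aspect ratio (the rounding used here is an assumption of this sketch):

```python
def screen_grid_counts(vertical_grids, image_aspect):
    # Horizontal count = vertical count times the image aspect ratio
    return round(vertical_grids * image_aspect), vertical_grids
```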
Set far and near clips. Checkbox. If set, the Alembic file will contain the clipping values
used by SynthEyes, ie on the Perspective View Settings panel. Downstream
applications may or may not use these values if they are present.
Bentley MicroStation
You can export to Bentley’s Microstation V8 XM Edition by following these
directions.
Exporting from SynthEyes
1. MicroStation requires that animated backgrounds consist of a consecutive
sequence of numbered images, such as JPEG or Targa images. If necessary,
the Preview Movie capability in SynthEyes’s Perspective window can be used to
convert AVIs or MOVs to image sequences.
2. Perform tracking, solving, and coordinate system alignment in SynthEyes.
(Exporting coordinates from MicroStation into SynthEyes may be helpful)
3. File/Export/Bentley MicroStation to produce a MicroStation Animation (.MSA) file.
Save the file where it can be conveniently accessed from MicroStation. The
export parameters are listed below.
SynthEyes/MicroStation Export Parameters:
Target view number. The view number inside MicroStation to be animated by
this MSA file (usually 2)
Scaling. This is from MicroStation’s Settings/DGN File Settings/Working Units, in
the Advanced subsection: the resolution. By default, it is listed as 10000 per distance
meter, but if you have changed it for your DGN file, you must have the same value here.
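The conversion implied by this setting can be sketched as follows (hypothetical helper; 10000 per meter is MicroStation's listed default resolution):

```python
def to_working_units(length_meters, resolution_per_meter=10000):
    # Convert a SynthEyes length (meters) into MicroStation
    # working-unit counts at the DGN file's resolution setting.
    return length_meters * resolution_per_meter
```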
Relative near-clip. Controls the MicroStation near clipping-plane distance. It is
a “relative” value, because it is multiplied by the SynthEyes world size setting. Objects
closer than this to the camera will not be displayed in MicroStation.
Relative view-size. Another option to adjust as needed if everything is
disappearing from view in MicroStation.
Blender Directions
Blender has a tendency to change around frequently, so the details of these
directions might best be viewed more as a guide than the last word. Unlike with
commercial software, new versions of Blender may not run perfectly good scripts that
ran in previous Blender versions, such as scripts produced by SynthEyes. If you have to
use a very recent Blender version that can no longer run the SynthEyes-produced files,
you can use an older working version of Blender that can import your SynthEyes export.
Import your SynthEyes output there, save that scene, then load that scene into the
latest version of Blender, which typically retains backward compatibility for at least
scene files. Emailing a detailed description of required changes in the blender script will
facilitate a timely update.
The normal Blender exporter handles shot and texture imagery as well as
meshes; it is a full-function export. The shot imagery is placed on a "projection
screen" (a piece of physical geometry) as an animated texture.
Important: When importing OBJ meshes from blender, you should be sure to
delete facet normals supplied by Blender. The blender export does not
renumber vertices, because blender uses a one vertex/one texture coordinate
system similar to SynthEyes. As long as you delete normals, the export will
maintain the same vertex numbers as in Blender.
The Blender exporter has some limited support for using the Cycles panorama
camera when exporting pure VR-mode = Present or Apply shots. This permits a simpler
insertion workflow for 360 VR shots with Blender. You will have to set up the texturing
and other materials within the Cycles environment yourself.
When exporting, you can select whether animated meshes produce an armature
and bones, or a point cache file placed in a secondary folder (ie exporting scene37b will
utilize a scene37b_bpc folder).
Directions for the blender export follow. These are tailored primarily to Blender
2.80 and later. The first directions use the auto-run feature, which is fast and easy once
you have it set up, but cannot be used to update existing scenes.
WARNING: using auto-run, be sure to close any OLD blender first! The
new one can open so quickly it's hard to tell which is which, and if you're
working on an old one you will get very confused.
If you don't have auto-run on (for example because you want to update a file, or
will run blender on a different machine), here's what to do to open the exported python
file in Blender.
1. Once you've completed the export from SynthEyes (hit OK), start Blender.
2. Click on the Scripting tab at top (2.80), or change one of the views to the blender
Text Editor.
3. In the text editor, Open the blender script you exported from SynthEyes.
4. Hit ALT-P or the Run button to run the script
5. To see the match, you'll need to look through the SynthEyes camera, typically
Camera01.
6. The export will turn off Relationship Lines in the main view to reduce clutter.
7. Use a Timeline view to scrub through the shot.
8. If parts of the scene or some of the trackers are inexplicably missing, you
probably need to use larger projection screen or clipping distances; export
again with larger values.
Blender Settings
Here's some information about the blender export control panel settings. Note
that each control on the blender export panel has an explanatory tooltip if you hover
over it.
Blender version. Dropdown. Select the version you are using, or the most recent
version before it. If you don't set this correctly, blender will report errors when you
try to run the exported script.
Script type. Radio. Normally, the export will create a whole new Blender scene with all
the SynthEyes elements. If you have previously done that, then modified the
blender scene, and then changed the tracking, you can use the Update existing
scene to bring the modified tracking data into Blender, so you don't have to
manually merge the scenes. Note that Update mode concentrates on information
affected by solves: position information for the camera and trackers—it doesn't
update mesh structures or textures that you may have changed in blender.
Create armatures. Checkbox. When checked, GeoH animation will produce an
animated blender armature and bones. When not checked, they will result in an
animated vertex cache. The armature is better when you want to edit the
animation itself in blender. The vertex cache is good when you want to lock in the
exact results from SynthEyes.
Update mesh caches. Checkbox. Applies only when Create armatures is off, and you
are using Update existing scene. This checkbox controls whether or not vertex
caches are regenerated each time you update.
Use quads. Checkbox. When checked, meshes are output as quads when possible. If
off, triangles are used.
Rescale scene. Number. You can rescale the entire scene by this factor. Usually the
scene scaling should be done as part of the coordinate system setup; the rescale
value is useful only when you are exporting the same scene to several
downstream applications that have different requirements.
Tracker size. Number. Size of the axis nulls for trackers.
Use Camera Background. Checkbox. When set, SynthEyes will use Blender 2.80's
camera background for the shot images. Otherwise, a textured mesh projection
screen will be produced. Note that a projection screen will always be used if lens
distortion is present (always run Lens Workflow!).
Horizontal Grids. Number. This is the number of horizontal grids in any projection
screen that the exporter produces. The vertical count is determined by dividing
by the aspect ratio.
Screen RelDist. Number. Relative distance to the screen, as a multiple of the world
size.
Clipping RelDist. Number. Far clipping plane distance, as a multiple of the world size.
Make shot start at. Number. Normally the first frame of the shot imagery will start at
frame zero in blender. This value allows you to override that.
Delete pre-existing meshes. When checked, the default blender Cube will be deleted.
Remove path prefix. Text string. This prefix is removed from filenames as they are
exported, typically when the resulting python file will be run on a different
machine. For example, you might remove a C: drive letter or /Volumes.
Add path prefix. Text string. This gets added to filenames, for example so that you can
add a C: or /Volumes/ to the beginning of the filenames.
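The two prefix options amount to a simple rewrite of each exported filename; a sketch (the helper name is hypothetical):

```python
def rewrite_path(path, remove_prefix="", add_prefix=""):
    # Strip the Remove path prefix, then prepend the Add path prefix,
    # e.g. retargeting "C:/shots/..." to "/Volumes/shots/..."
    if remove_prefix and path.startswith(remove_prefix):
        path = path[len(remove_prefix):]
    return add_prefix + path
```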
Open in Blender automatically. Checkbox. Check this to have blender start and run
the exported python file quickly and automatically. Not applicable when updating
an existing scene (it will be ignored).
Blender application. This tells SynthEyes where to find the desired version of Blender.
Extra arguments. These command-line arguments are given to blender during auto-
run; for example you can control where it starts up, or pass SynthEyes SyPy3
port/pin information.
Blender Directions─Before 2.57
(This older export has limited feature support.) When working with image
sequences and blender, it is a good idea to ensure that the overall frame number is
the same as the number in the image file name. Although you can adjust the offset,
Blender incorrectly eliminates a frame number of zero.
1. In SynthEyes, export to Blender - Earlier (Python)
2. Start Blender
3. Delete the default cube and light
4. Change one of the views to the blender Text Editor
5. In the text editor, open the blender script you exported in step 1.
6. Hit ALT-P to run the script
7. Select the camera (usually Camera01) in the 3-D Viewport
8. In a 3-D view, select Camera on the View menu to look through the imported,
animated, SynthEyes camera
9. Select View/Background image
10. Click Use Background Image
11. Select your image sequence or movie from the selection list.
12. Adjust the background image settings to match your image. Make sure the shot
length is adequate, and that Auto Refresh is on. If the images and animation do
not seem to be synced correctly, you probably have to adjust the offset.
13. Decrease the “blend” value to zero, or you can go without the background, and
set up compositing within blender.
14. On the View Properties dialog, you might wish to turn off Relationship Lines to
reduce clutter.
15. Use a Timeline view to scrub through the shot.
Cinema 4D Procedure
The Cinema 4D python exporter requires Cinema 4D Version R12 or later
without exception. To export to earlier versions, use the Lightwave export as described
below.
NOTE: Cinema 4D works with whole-integer frame rates only. This can be a
problem with movie files at 29.97 or 23.976 fps, which will appear as 29 or 23
fps. You will see sync errors even if you set the values to 30 or 24 fps. To
work around this C4D limitation, use image sequences, changing the frame
rate to the integer value before exporting.
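The drift this causes adds up quickly; here is a quick sketch of the truncation and the resulting sync error (illustrative arithmetic, not C4D's actual code):

```python
import math

def c4d_frame_rate(fps):
    # Cinema 4D keeps only the whole-integer part of the rate,
    # so 29.97 becomes 29 and 23.976 becomes 23
    return math.trunc(fps)

def sync_error_frames(true_fps, used_fps, seconds):
    # Frames of drift between imagery and animation after playback
    return (true_fps - used_fps) * seconds
```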
COLLADA Exports
The COLLADA export ("DAE" extension) uses an industry-standard format that is
very capable and can directly export full set reconstruction information, ie all object
positions, meshes, and texturing. Unfortunately it is not necessarily read effectively by
all programs that read COLLADA files. Programs that read Filmbox (FBX files) should
also be able to read COLLADA files. You will have to see what the capabilities are of
the programs you use.
You can export COLLADA files using the "COLLADA DAE" export or via the
"COLLADA DAE via Filmbox" export. The former occurs via a standard Sizzle script that
you can modify, if needed. The latter comes via Autodesk's Filmbox exporter, which can
also export COLLADA. If you need to use COLLADA to get into a particular package
with no direct import and it has trouble, you might try both COLLADA exports to see if
that package reads one or the other better.
Similarly, in some software such as 3ds max and maya, there may be three
different ways to import COLLADA files: a built-in COLLADA importer; via the Filmbox
importer; or via various open-source COLLADA importers. The capabilities of each of
these can be expected to change over time.
DotXSI Procedure
1. In SynthEyes, after completing tracking, do File/Export/dotXSI to create a .xsi file
somewhere.
2. Start Softimage, or do a File/New.
3. File/Import/dotXSI... of the new .xsi file from SynthEyes. The options may vary
with the XSI version, but you want to import everything.
4. Set the camera to Scene1.Camera01 (or whatever you called it in SynthEyes).
5. Open the camera properties.
6. In the camera rotoscopy section, select New from Source and then the source
shot.
7. Make sure “Set Pixel Ratio to 1.0” is on.
8. Set “Use…” pixel ratio to “Camera Pixel Ratio” (should be the default)
9. In the Camera section, make sure that Field of View is set to Horizontal.
10. Make sure that the Pixel Aspect Ratio is correct. In SynthEyes, select Shot/Edit
Shot to see the pixel aspect ratio. Make sure that XSI has the exact same value:
0.9 is not a substitute for 0.889, so fix it! Back story: XSI does not have a setting
for 720x480 DV, and 720x486 D1 causes errors!
11. Close the camera properties page.
12. On the display mode control (Wireframe, etc), turn on Rotoscope.
ElectricImage
The ElectricImage importer requires somewhat more manual work than usual,
because EI has no scripting language. You can export either a camera or object path,
and its associated trackers.
1. After you have completed tracking in SynthEyes, select the camera/object you
wish to export from the Shots menu, then select File/Export/Electric Image.
SynthEyes will produce two files, an .obm file containing the trajectory, and an
.obj file containing geometry marking the trackers.
2. In ElectricImage, make sure you have a camera/object that matches the name
used in SynthEyes. Create new cameras/objects as required. If you have
Camera01 in SynthEyes, your camera should be "Camera 1" in EI. The zero is
removed automatically by the SynthEyes exporter.
3. Go to the Animation pull-down menu and select the "Import Motion" option.
4. In the open dialog box, select "All Files" from the Enable pop-up menu, so that
the .obm file will be visible.
5. Navigate to, and select, the .obm file produced by SynthEyes. This will bring up
the ElectricImage motion import dialog box which allows you to override values
for position, rotation, etc.
Normally, you will ignore all these options as it is simpler to parent the
camera/object to an effector later. The only value you might want to change is
the "start time" to offset when the camera move begins. Click OK and you will get
a warning dialog about the frame range.
This is a benign warning that sets the "range of frames" rendering option to
match the length of the incoming camera data. Hitting cancel will abort the
operation, so hit OK and the motion data will be applied to the camera.
6. Select "Import Object" from the Object pull-down menu.
7. Enable "All Files" in the pop-up menu.
8. Select the .obj file produced by SynthEyes.
9. Create a hierarchy by selecting one tracker as the parent, or bringing in all
trackers as separate objects.
10. If you are exporting an object path, parent the tracker object to the object holding
the path.
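The automatic name mapping from step 2 (Camera01 in SynthEyes becomes "Camera 1" in EI) can be sketched as follows. This is an illustrative helper, not the exporter's actual code; the function name is ours.

```python
import re

def ei_name(syntheyes_name):
    """Map a SynthEyes name such as 'Camera01' to the ElectricImage-style
    name 'Camera 1' by splitting off the trailing number and dropping its
    leading zeros. (Hypothetical helper; the exporter does this internally.)"""
    m = re.match(r"(.*?)(\d+)$", syntheyes_name)
    if not m:
        return syntheyes_name
    base, num = m.groups()
    return "%s %d" % (base.rstrip(), int(num))
```

So if your SynthEyes scene contains Camera01, create (or rename) a camera called "Camera 1" in EI before importing the .obm file.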
Filmbox FBX
Filmbox (FBX) format is an Autodesk-controlled proprietary format that is
available to applications such as SynthEyes under simple free license terms. It is widely
supported across the industry, but not necessarily deeply: like COLLADA, different
applications will read different amounts of information from the file. While we can write
the information into the FBX file, we can't make any other application read the file
correctly. (See the end of this section for application-specific filmbox tips.)
In particular, since FBX was originally intended as a means of motion-capture
data interchange, many applications do not read animated textures correctly, which are
necessary to set up the match-moved shot on the background plate. So you may need
to do that manually.
Despite these limitations, FBX files do contain a lot of data, including multiple
cameras with zoom; a projection screen that can be distorted by lens distortion; moving
objects; the trackers including far and mo-cap style trackers; all regular and far meshes
in the scene, including UV map, normals, and texturing; and lights.
While SynthEyes has a Filmbox importer as well, the FBX format is not a
substitute for the SNI file: you cannot export a SNI file to an FBX, import the FBX, and
expect to see the same thing as the original SNI file. There is no equivalent for much of
the data in the SNI file in an FBX, so much data (including all tracking data) will be lost.
Warning: Filmbox has a degeneracy in its rotation angles that will cause
motion blur problems when the camera faces exactly backwards.
When the SynthEyes scene is exported, the currently active shot is set up as the
main shot for downstream software. You can select which other cameras will be
exported...
The FBX exporter has a lot of options, which we'll describe here.
Export which cameras. This selects which cameras (and associated moving objects,
trackers, and far meshes) will be exported: only the currently-active camera (or
camera of the currently-active moving object), all cameras with the same frame
rate as the active camera's frame rate (typically to drop survey shots), or all
cameras.
Output format. Filmbox can output several different formats: binary or ASCII, in
current or older versions. The binary version is the usual choice: faster and
smaller. The ASCII exports can be examined by hand in a text editor.
FBX Version. Allows you to export files using older versions of the Filmbox format.
Interpret SynthEyes units as. This option makes a note that the unit-less SynthEyes
values should be interpreted as the given unit by the importing application. For
example, you can tell the importing application to interpret a 12.5 in SynthEyes
as 12.5 inches or 12.5 meters (if the application supports it).
Additional scaling. Multiplies all the SynthEyes coordinates by the given number,
typically to impose a units interpretation on an application that ignores the units
setting above. Use the SynthEyes coordinate system to set up a proper scene
scaling, not this!
Export Trackers. Keep this checked to export all exportable trackers; turn it off to not
export any trackers.
Marker Type. Trackers in SynthEyes will appear in downstream applications as the
given kind of object. FBX supports many marker types, but support for them may
be limited. The Chisel is a small piece of geometry and should therefore be
supported everywhere; it is the default. Null is another good choice, as most
packages support Null axes. Note that "None" is different; it is a marker with no
specific type, ie it will be determined by the importer.
Specify view. Says whether the field of view should be specified by a horizontal or
vertical value, or a focal length. This is the track that will be animated if there is a
zoom. The recommended default is horizontal. We do not recommend focal
length, because it depends on the back plate size, which you rarely know exactly.
Relative tracker size. The geometric tracker size will be this fraction of the camera's
world size; used to auto-scale trackers to match the rough scene size.
Relative far override. If this value is non-zero, it is multiplied by the world size to
determine how far the projection screen is placed from the camera.
Create screen. If set, a mesh projection screen is created and textured with the
animated shot imagery. This screen can be deformed to remove the lens
distortion. If not set, the shot imagery is supplied to the downstream application
as a background texture, which cannot be deformed.
IMPORTANT: Once you have imported the FBX scene, do not modify the
projection screen in any fashion to correct any perceived problem. If you do,
you will destroy the match! Make any changes in SynthEyes and re-export.
Vertical grids. If a projection screen is created, this is the number of vertical grid lines.
The horizontal count is computed based on the shot aspect ratio. While a lower
value can be used if the screen is not being deformed, a much higher value may
be needed for significant distortion.
Screen's Lens Mode. If set to Remove (Normal), the projection screen will be
deformed in order to remove any distortion computed during the solve and
present on the main Lens panel (not the values on the image preprocessor's
Lens panel!). This will let you see the match in your 3D package, without it
needing to understand SynthEyes lens distortion. Set to Apply, the screen will be
distorted in such a way that undistorted imagery becomes distorted. You will
need to know what to do with that. Set to None, the projection screen will not be
distorted in any way; the lens distortion values are ignored.
Use original numbering. When checked, SynthEyes will export the mesh using the
same vertex, normal, and texture indices as when the file was imported, if this
information is available. This helps workflows that rely on these numbers being
the same.
No repeating vertices. When checked, SynthEyes reworks the mesh data so that no
vertex position occurs more than once in the output. This will prevent susceptible
applications (Maya) from incorrectly creating seams at repeated vertices (when
UV coordinates or normals are present). However, it results in larger and more
complex FBX files compared to the usual approach.
Use quads if possible. When set, the export will contain quads, where possible, as
well as triangles. When off, only triangles will be exported, which may be
necessary for some downstream packages.
Alpha as Transparency. When set, if a mesh's texture has a nontrivial alpha channel,
the alpha will be output in the Transparency Factor channel, which FBX-reading
applications may use to configure their alpha/transparency. Either way, any alpha
channel in the texture file continues to be available to FBX-reading applications,
which may or may not use it.
Deformed mesh format. Select None, 3dsmax (PC2), Maya (MCX), or Bones. If the file
contains GeoH tracking that deforms any meshes, this setting will determine
whether a Skin deformer (bones) is exported, or alternatively which file format is
used for point cache files. The best setting depends on the capabilities of your
downstream application. Applications such as Blender may or may not support
bones, and may require one or the other format. (PC2 is used for Blender,
though you should typically use the Blender exporter instead of the Filmbox,
since Blender's support of Filmbox is incomplete.)
Texture Lighting. Dropdown. Controls which objects will be lit by scene lights, and
which will be unlit (the right choice for objects textured with shot imagery). You
should usually use the default choice, allowing each individual mesh to control
this directly, as set from the Texture panel. There's also an option to light the
projection screen, largely for compatibility with an earlier error.
Start at frame. Set this value to the first desired frame number, typically 0 or 1 (Maya
uses 1). It corresponds to the first frame of the raw shot, independent of any
start/end framing you have set up in SynthEyes to limit the part of the shot that is
tracked. Read on for more options.
First *used* frame goes there. When set, images and animation are shifted so that the
first frame in the "used" portion of the shot will appear at the frame number given
by the "Start at frame" setting. For example, the raw shot goes from frame 0 to
300, but only frames 100 to 200 are used in the edit (see for example the top of
the Shot/Edit Shot panel). With Start at frame set to 1000 and this checkbox
turned on, frame 100 will appear at frame 1000 in the downstream animation
application. With this checkbox off, it will appear at 1100.
Start at sequence's frame#. When set, the animation is shifted so that if an image
sequence starts at frame 39, so will the matching animation downstream. This
checkbox overrides the First *used* frame and Start at frame settings above.
Embed Media. When set, the shot imagery and still textures will be embedded inside
the filmbox file. This prevents them from being misplaced and makes them easier
to move around, but makes the file that much larger.
Set far/near clips. When set, the SynthEyes near and far clipping distances are put into
the FBX file for consistency. When not set, these values will be left to the
discretion of the importing software.
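The interplay of the three frame-numbering settings above can be sketched as follows. This is an illustrative helper (the name is ours), assuming frame numbers are counted from the start of the raw shot.

```python
def fbx_frame(raw_frame, start_at, first_used, shift_to_used, seq_first=None):
    """Downstream frame number for a raw shot frame.
    start_at:      the 'Start at frame' setting
    first_used:    first frame of the used portion of the shot
    shift_to_used: the 'First *used* frame goes there' checkbox
    seq_first:     first frame number of the image sequence, if the
                   'Start at sequence's frame#' checkbox is on (overrides
                   the other two settings).
    Hypothetical sketch of the rules described in the manual text."""
    if seq_first is not None:
        return seq_first + raw_frame
    if shift_to_used:
        return start_at + (raw_frame - first_used)
    return start_at + raw_frame
```

Using the manual's example (raw shot 0-300, used portion 100-200, Start at frame 1000): frame 100 lands at 1000 with the checkbox on, and at 1100 with it off.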
Application-Specific Filmbox Tips
These are hints for particular apps, to better use FBX data from SynthEyes. Note
that often there may be a direct export to the application that will eliminate the need for
such tweaks.
Cinema 4D. Note that C4D imports FBX directly from its File/Open menu item. To
get the animated shot imagery to live-update in the display, do the following:
Click on the Camera01ScreenMaterial in the Material Manager to open it for
editing.
In the Attribute Manager, click on the material's Editor tab, and turn on the
Animate Preview checkbox.
Click on the Color tab, then on the very wide button containing the shot's file
name, to bring up its Bitmap Shader in the Attribute Manager.
Click on the Animation tab, then on the Calculate button at bottom.
NOTE: Cinema4D works with whole-integer frame rates only. This can be a
problem with movie files at 29.97 or 23.976 fps, which will appear as 29 or 23
fps. You will see sync errors even if you set the values to 30 or 24 fps. To
work around this C4D limitation, use image sequences, changing the frame
rate to the integer value before exporting.
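A quick way to check whether a shot's frame rate will run into this Cinema 4D limitation is to test whether it is a whole number. A minimal sketch; the function name is ours:

```python
def needs_c4d_workaround(fps, tol=1e-6):
    """True if the frame rate is not a whole number and so will be
    mishandled by Cinema 4D (e.g. 29.97 or 23.976 fps)."""
    return abs(fps - round(fps)) > tol
```

Shots at 29.97 or 23.976 fps need the image-sequence workaround; shots at 24, 25, or 30 fps do not.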
NOTE: Filmbox files have different versions. SynthEyes can only read FBX
files written with the same, or an earlier, version of the Filmbox library than
the one it uses; it cannot read FBX files written by later versions of the
library. You can see SynthEyes's library version listed in the dropdown for
the FBX exporter; the current version is FBX 2018.1.1. Other applications
can generally write FBX files compatible with earlier libraries as well, so
you may need to have them write the FBX file in a version compatible with
your current SynthEyes version.
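The compatibility rule in this note amounts to a simple version comparison. A sketch, assuming purely numeric dotted version strings like "2018.1.1":

```python
def can_read_fbx(reader_version, file_version):
    """True if a reader built against FBX library `reader_version` can
    read a file written by `file_version` (same or earlier version).
    Hypothetical helper illustrating the note above."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(file_version) <= to_tuple(reader_version)
```

So a SynthEyes built against FBX 2018.1.1 can read files written with 2016-era libraries, but not files written by a later library.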
NOTE: When you use the Scene importer on a scene with multiple meshes,
the meshes cannot be reloaded from the file using the 3D panel's Reload
Mesh.
The Scene importer has many more options. You'll definitely need to take a look
at them and adjust them depending on what you're trying to accomplish.
Read cameras. Checkbox. When on, the cameras, their field of view, and their path
information are read in from the file. Shot imagery must be set up manually,
however.
Use existing camera. Checkbox. When on (and Read cameras is on), the existing
default SynthEyes camera is reconfigured to be the first camera from the file,
rather than creating a second camera in the SynthEyes scene. Typically, the FBX
file contains a single camera which is used to configure the default SynthEyes
camera.
Read nulls. Checkbox. Reads the null objects (hierarchy) from the scene, producing
moving objects in SynthEyes. This is required if a rig is being read, or if the
scene requires moving objects. In other cases it may produce unnecessary
objects that can be deleted.
Read meshes. Checkbox. Read the meshes from the scene, creating them in
SynthEyes. When a scene is being read, meshes are positioned at the locations
specified in the scene file.
Keep face normals. Checkbox. Face normals can substantially increase the number of
vertices that SynthEyes must produce for a given mesh; since SynthEyes
generates face normals automatically when needed, it is generally a good
strategy to delete them. Turning on this checkbox will keep them, so you can see
the difference and in case the true original ones are required for whatever you
are doing downstream. This checkbox also affects whether or not per-polygon-
vertex normals are kept, which require even more vertices (one per vertex in
each polygon, ie 3 or 4 times the number of triangles/quads).
Bake mesh scaling. Checkbox. When checked, the scaling factor specified in the
scene is applied to the mesh's vertex data, so that the final result has no scaling
in SynthEyes (ie for rigs). When unchecked, any scaling is maintained in
SynthEyes (visible on the 3D panel with the scaling tool selected). Keep off if
vertex caches are present.
Show backfaces. Checkbox. When set, the back faces of the imported meshes will be
shown in SynthEyes, or not if this is unchecked. Saves the trouble of configuring
it on the 3D panel after import.
Read lights. Checkbox. Check this to read directional and point lights from the file.
(Animated intensity data is not currently read.)
Enable shadowing. Checkbox. Large complex scenes with many objects and large
meshes can result in poor redraw performance due to shadowing. This checkbox
gives a quick way to turn off shadow processing in the imported SynthEyes
scene, so that you can re-enable it selectively if required.
Read Vertex Caches. Checkbox. When checked, SynthEyes will set up vertex caches
found in the FBX file in the SynthEyes scene. Important: the .PC2 or .MCX files
that contain the actual data must continue to exist after the import. You
cannot delete them, or the vertex animation can no longer be generated in
SynthEyes. If you move them, use the 3D Panel's Vtx Cache button to update the
file location. (The older .mcc format is not supported; configure the exporting app
to use the modern .mcx format.)
Read rigs. Checkbox. When checked, SynthEyes will read entire rigs with bones and
Fbx Skin deformers from the file. Read nulls and Read meshes must be on. Note
that SynthEyes always uses "linear" bones mode. Be sure to check this next
dropdown!
Key rigs on frame. Dropdown selector. Controls which frames will receive keys on the
Lock value channels: None, First, Last, Custom, or All. If the imported rig is
completely unposed, use None. If the rig has already been pre-posed to match a
specific frame, from which you will continue to track, you'll want the rig keyed on
the frame, either First, Last (for backward tracking), or a custom frame (see next
control, for when the rig is posed somewhere in the middle of the shot). Use the
All option when the rig has been pre-animated over the entire shot. You can
delete unnecessary keys from the graph editor, of course.
Custom key frame. Frame number. This is the frame on which keys are generated
when Key rigs on frame is set to Custom and Read rigs is checked.
Read scene settings. Checkbox. When checked, the frame rate and shot length
information is read from the file. This is needed for camera paths and animated
rigs, but not for reading plain meshes.
Shift to frame 0. Checkbox. When checked, the first active frame in the shot is brought
down to become frame 0 (or 1, depending on preference) in SynthEyes.
Additional scaling. Numeric factor. Use this to uniformly expand or contract the size of
the scene in SynthEyes, ie to reduce very large scenes or increase tiny ones.
Fusion
Tip: Once you've read this, see separate section on how to get the exported
data into Fusion in Davinci Resolve.
Important: The resize in the Corner Pin export is required so that imagery of
any resolution can be pinned into the main flow, due to how the fusion
cornerposition node works. The resize can be eliminated only if the image
being pinned is the same resolution as the main flow.
There is a less capable legacy 3D comp exporter for Fusion 5, and a generic 2-D
path ".dfmo" exporter.
Fusion Export Controls
Timeline setup. Selector: Active part starts at ..., Entire shot starts at..., or Match Image
frame numbers. Controls how frame numbers are mapped from SynthEyes to
Fusion, as with After Effects.
Export for Fusion 9 and up. Check if the file is to be imported on Fusion 9 or later,
unchecked for earlier versions.
Force spotlights. Checkbox. When set, SynthEyes point and directional lights will be
exported to Fusion as spotlights. This is useful because Fusion can only do
shadowing for spotlights. However, if you do this, you may need to reposition
directional lights (not reaim!), and reaim point lights (not reposition!) to ensure
that as a spotlight they illuminate the correct area.
Include ambient. Checkbox. Turn this on to create an ambient light in the Fusion 3D
scene, initialized from the SynthEyes ambient light setting.
HQ on by default. Controls the HiQ setting at bottom right of Fusion.
Relative tracker size. Number. Sets the size of the marker for trackers, as a multiple of
the object's world size.
Relative screen distance. Number. Sets the distance from the camera to the image
plane holding the source imagery in the 3D environment, as a multiple of the
object's world size.
Use PointCloud3Ds. Checkbox. Normally on, producing PointCloud3D nodes. When
off, many Locator3Ds will be produced, which has additional features such as
tracker colors. Turn this off only if there are few exportable trackers (ie turn off
exportable for most).
Planars as planes. Checkbox. When on, 3D planar trackers will be exported as actual
planes, so that a texture image (ie sign replacement) can easily be added. When
off, planar trackers are treated as normal trackers.
Renderable trackers. Checkbox. Controls the Make Renderable checkbox on the
PointCloud3D nodes, ie the tracker axis markers will be visible in renderers and
viewports. Useful initially, but not for final renders.
OpenGL renderer. Checkbox. When set, the OpenGL renderer is used for the
Renderer3D node, if not set, the software renderer is used.
Animate FL, not FOV. Checkbox. When set, the focal length is animated in Fusion.
Normally, when the checkbox is clear, the field of view is animated. The standard
caveat applies: a focal length is only half the information, useless unless you
know the exact back plate size, which you rarely do. So FOV is preferred.
Use Fusion meshes. Checkbox. When set, simple meshes will be exported using
Fusion's builtin Shape3D node. When not set, all meshes are exported as OBJs
and read with an FBX node, enhancing consistency with SynthEyes grid counts
and UV coordinates, etc.
Include prefs/comments. Checkbox. When checked, includes additional scene setup
information (recommended).
Include distortion fixes. Checkbox. When checked, nodes will be created to undistort
the footage before it is fed to the 3D environment, and to re-distort the rendered
footage for compositing with the original (see below). Use in conjunction with the
1- or 2-pass lens workflows.
Handle 360VR shots. Checkbox. When checked (basically always!) 360 VR scenes will
have a 360VR camera and stabilization rig generated.
Map file type. Selector. Selects the image file type written to control undistortion and
redistortion nodes when Include distortion fixes is set.
Remove path prefix. Text. When this text appears at the beginning of a path, for
example an image name, it is removed. Use the add and remove path prefixes to
retarget the exported file to a different machine, for example remove
"/Volumes/SHOTS" and add "S:" to go from Mac to PC, or use them to change
from local to network drives.
Add path prefix. Text. Whenever the path prefix above is removed, this prefix will be
added.
Export to clipboard? Checkbox. When checked, the scene will be put on the
clipboard, where you can paste it into Fusion or Resolve. This allows you to
paste a modified export into your existing comp, and might be quicker than
closing and reopening a scene.
Open the exported file. Checkbox. When set, SynthEyes will launch a new Fusion
that opens the just-exported comp. Not that helpful for repeated reopening due to
Fusion's extended launch time.
Custom Fusion location. Text/File field. This selects the version of Fusion that gets
started, either the .exe on Windows, the .app on Mac, or the executable on Linux.
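The Remove/Add path prefix pair above behaves roughly like the following sketch (the helper name is ours; the exporter applies this internally to each image path):

```python
def retarget(path, remove_prefix, add_prefix):
    """Apply the Remove/Add path prefix settings: if `path` starts with
    `remove_prefix`, strip it and prepend `add_prefix`; otherwise the
    path is left unchanged."""
    if remove_prefix and path.startswith(remove_prefix):
        return add_prefix + path[len(remove_prefix):]
    return path
```

Using the manual's example, removing "/Volumes/SHOTS" and adding "S:" retargets a Mac path to the corresponding PC drive.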
Working with Lens Distortion
The export will build image undistortion and redistortion nodes (using distortion
image maps) into the flow when lens distortion is present and Include distortion fixes is
checked. If you use a one-pass workflow, where undistorted imagery is delivered, you
can delete the re-distortion node.
When you use a 2-pass lens distortion workflow, where the rendered 3D images
are re-distorted, after you have the desired 3-D setup you should disconnect the input of
the Camera3D node (or click "Unseen by Cameras" on the Camera3D's Image tab), so
that the resampled (undistorted) image is not used as a background during the 3-D
render. Instead, you should connect a 2D Merge node to the output of the
CameraRedistort node as the foreground, and connect the original footage loader as
the background for the Merge. This ensures that the original footage is never
resampled, for maximum image quality. It also ensures that any little edge artifacts
around the undistorted then redistorted images are not used.
Known Fusion Issues
There are currently some issues with Fusion's handling of point clouds:
1. When a PointCloud3D node is selected, the tracker points may be shown
in the lower-left corner of camera views, instead of at their correct
location, on some machines.
HitFilm Project
Outputs a HitFilm project file for the 3D scene.
Note: The current export works for HitFilm Pro 2017 and later. It does not
work for HitFilm Express 4. When HitFilm Express 2017 is released, we will
evaluate how feasible it is for the exporter to work, in full or in part, with
HitFilm Express 2017.
This is a very full-featured exporter; here are a few highlights. But be sure to read
the Important HitFilm notes below!
Cameras, including both lens distortion workflows.
Moving objects (as Point Layer hierarchies, including GeoH hierarchies).
Trackers (to Point layers).
3D Planar Trackers become empty comps that can be filled with anything
desired.
Mesh planes are exported as a Media layer if they are textured, or as an
empty comp if not.
Arbitrary 3D Meshes are exported and embedded in the HitFilm project file,
including opacity and texture (HitFilm uses filenames, not embedded data).
Lights.
You can simply open the exported .hfp file with HitFilm, or use the auto-run
features of the export (see below).
HitFilm Export Controls
Open the exported file. Checkbox. When checked, SynthEyes will submit the just-
written project file to the operating system to be opened in HitFilm. Windows:
HitFilm currently produces an error message if it is already open.
Display in frame#s. Checkbox. When checked, the timelines for the composites show
frame numbers, better corresponding to SynthEyes. Not checked, time code is
shown (and is always shown for the editing timeline).
Extra Scaling. Number. Coordinates from SynthEyes are multiplied by this factor. The
default is 10, since SynthEyes scenes typically are a few hundred units by
default, while HitFilm uses pixels, so numbers in the low thousands are more
typical. You can set up your coordinate system scaling specifically in SynthEyes
to either accommodate some Extra Scaling, or set it to one.
in SynthEyes itself, instead of in the export script, which would make it more efficient
and able to use multiple cores.
The Displacement Effect used to implement lens undistortion and redistortion in
HitFilm is not very accurate; in particular when using Lens Distortion Workflow #2
the redistorted image typically exhibits several pixels of error compared to the
original version. Hopefully future HitFilm and exporter developments will make it
possible to avoid this.
Tracker Layers must be selected in HitFilm to be visible in the camera view. (We've
pointed out that this can be inconvenient.)
Large meshes require substantial storage in HitFilm's project file format, so they may
take some time to export and import.
We're aware of HitFilm's Alembic animation capability and SynthEyes's Alembic
export capability. Unfortunately they can't yet talk, as HitFilm requires the mesh and
vertices to be in a special simplified (unrolled) format. That will require a special
future HitFilm option for SynthEyes's Alembic exporter, or HitFilm to support the
MDD or PC2 animated mesh formats.
Houdini Instructions
You can use File/Run Script to run the exported file in Houdini. Here's the older
procedure also:
1. File/New unless you are adding to your existing scene.
2. Open the script Textport
3. Type source "c:/shots/scenes/flyover.cmd" or equivalent.
4. Change back from COPs to OBJs.
Lightwave
The Lightwave exporter produces a Lightwave scene file (.lws) with several
options, one of them crucial to maintaining proper synchronization.
The Lightwave exporter writes a Lightwave object (.lwo) file for meshes in the
scene, including the texture, and references them from the .lws file. There is a separate
exporter if you would like to export only a Lightwave LWO2 file.
The Lightwave exporter includes these exporter features:
Advanced-Camera setup for 360VR, rolling shutter, and motion blur;
automatically-generated lens distortion projection screen (you must have
run the Lens Workflow script);
trackers can be exported as a vertex-only mesh with vertex colors, instead
of a large number of nulls, or all trackers can go in the mesh and only
selected trackers appear as nulls;
mesh export supports quads and vertex color maps, far meshes, and
GeoH deformation by writing an MDD vertex cache;
object hierarchy export from GeoH tracking;
animated mesh and light colors;
ambient and shadow scene colors;
Modo
The modo exporter handles normal shots, tripod shots, object shots, zooms etc.
It transfers any meshes you've made, including the UV coordinates if you've frozen a
UV map onto a tracker mesh. Handles 360 VR. Can start modo automatically. Requires
modo 11+.
The UI includes the units (you can override the SynthEyes preferences setting);
the scaling of the tracker widgets in Modo—this is a percentage value, adjust to suit;
plus there is an overall scaling value you can tweak if you want to (better to set up the
coordinates right instead).
Limitations
1. Only Image Sequences can be transferred to and displayed by Modo -- modo does
not support AVI or Quicktime backdrops.
2. Image sequences in modo MUST have a fixed number of digits: the first and last
frames must have the same number of digits (may require leading zeroes). YES:
img005..img150. NO: img2..img913. This may be a problem for Quicktime-
generated sequences.
3. (Older versions:) Modo occasionally displays the wrong image on the first frame of
the sequence after you scrub around in Modo. Do not panic.
4. The export is set up to use Modo's default ZXY angle ordering.
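Limitation 2's fixed-digit rule can be checked with a small sketch (hypothetical helper; assumes frame names end in their frame number, without an extension):

```python
import re

def fixed_digit_count(first_name, last_name):
    """True if the numeric suffixes of the first and last frame names
    have the same number of digits, as modo requires
    (e.g. img005..img150 is OK, img2..img913 is not)."""
    digits = lambda name: len(re.search(r"(\d+)$", name).group(1))
    return digits(first_name) == digits(last_name)
```

If the check fails, renumber the sequence with leading zeroes before bringing it into modo.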
Directions
1. Track, solve, etc, then export using the Modo Perl Script item (will produce a file
with a ".pl" extension). Be sure to select the correct modo version for the export
(40x for 401 or 402, 50x for 501..., etc).
2. Start modo, on the System menu select Run Script and give it the file you exported
from SynthEyes.
3. To see the match, you may need to re-set the modo viewport to show the exported
camera, typically Camera01
Nuke
The nuke exporter produces a nuke file you can open directly. Be sure to select
the exporter appropriate to your version of Nuke—the files are notably different between
Nuke versions. The 5.0 exporters are substantially more feature-rich than the Nuke 4
exporter, handling a wide variety of scene types.
The pop-up parameter panel lets you control a number of features. The Nuke
exporter will change SynthEyes meshes to Nuke built-ins where possible, such as for
boxes and spheres. It can export non-primitive meshes as OBJ files and link them in
automatically. If the ‘other’ meshes are not exported, they are changed to bounding
boxes in Nuke. Note that SynthEyes meshes can be scaled asymmetrically; you can
either burn the scaling into the OBJ file (especially useful if you wish to use the OBJ
elsewhere), or you can have the scaling duplicated by the Nuke scene.
You can indicate if you have a slate frame at the start of the shot, or select
renderable or non-rendering tracker marks. The renderable marks are better for
tracking, the non-rendering marks better for adding objects within Nuke’s 3-D view. The
size of the renderable tracker marks (spheres) can be controlled by a knob on the
enclosing group. You can ask for a sticky note showing the SynthEyes scene file, or a
popup message with the frame and tracker count.
Note that Nuke 5.1 and earlier accept only INTEGER frame rates throughout.
SynthEyes will force the value appropriately, but you may need to pay attention
throughout your pipeline if you are using Nuke on 23.976 fps shots, which is “24 fps”
from an HD/HDV camera.
Distortion Grid
There is a Nuke Distortion grid exporter that will output a nuke distortion node
that matches the distortion configured within SynthEyes. You can create a distortion for
adding or removing distortion. Be sure to select the "IP Lens Profile" on the exporter if
you are using a SynthEyes lens distortion profile, not just quadratic or cubic distortion
values.
Planar Tracker Corner Pin Export
The Nuke Corner Pin export creates an animated nuke corner pin node for each
selected exportable planar tracker. If no trackers are selected, then all suitable planar
trackers are exported.
The layers are merged together according to the layer stackup on the Planar
Options panel, i.e., trackers on top occlude those further down.
PhotoScan
SynthEyes exports an XML file that PhotoScan can read; when PhotoScan can't
solve a complex set of images by itself, this allows you to use the full set of manual and
automated tracking tools in SynthEyes instead.
The exported XML file contains the image names, camera position, orientation,
focal length, and back plate size data. You should export the XML file into the same
folder as the images.
All cameras are exported except those that are disabled on the solver panel. Any
frames marked on the skip-frames track are not output; a setting internal to the script
can cause them to be output but disabled instead. Aside from that, the export is very
simple; there are no user settings needed for it.
Poser
Poser struggles to handle a match-moved camera, so the process is a bit
involved. Hopefully Curious Labs will improve the situation in future releases.
The shot must have square pixels to be used properly by Poser; it doesn't
understand pixel aspect ratios. So if you have a 720x480 DV source, say, you need to
resample it in SynthEyes, After Effects or something to 640x480. Also, the shot has to
have a frame rate of exactly 30 fps. This is a drag since normal video is 29.97 fps, and
Poser thinks it is 29.00 fps, and trouble ensues. One way to get the frame rate
conversion without actually mucking up any of the frames is to store the shot out as a
frame sequence, then read it back in to your favorite tool as a 30 fps sequence. Then
you can save the 640x480 or other square-pixel size.
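The arithmetic behind the square-pixel size is simple: keep the height, and pick the width that preserves the display aspect ratio. A quick sketch (square_pixel_size is an illustrative helper, not a SynthEyes function):

```python
def square_pixel_size(height, display_aspect=4 / 3):
    """Width and height for a square-pixel resample that preserves the
    display aspect ratio; 720x480 DV displays as 4:3, giving 640x480."""
    return (round(height * display_aspect), height)

print(square_pixel_size(480))  # (640, 480)
```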
Note that you can start with a nice 720x480 29.97 DV shot, track it in SynthEyes,
convert it as above for Poser, do your poser animation, render a sequence out of Poser,
then composite it back into the original 720x480.
One other thing you need to establish at this time is exactly how many frames
there are in your shot. If the shot ranges from 0 to 100, there are 101 frames; from 10 to
223, there are 214.
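Because the range is inclusive on both ends, the count is last minus first plus one:

```python
def frame_count(first, last):
    """Number of frames in an inclusive range: 0..100 gives 101 frames."""
    return last - first + 1

print(frame_count(0, 100))   # 101
print(frame_count(10, 223))  # 214
```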
1. After completing tracking in SynthEyes, export using the Poser Python exporter.
2. Start Poser.
3. Set the number of frames of animation, at bottom center of the Poser interface, to
the correct number of frames. It is essential that you do this now, before running
the Python script.
4. File/Run Python Script on the python script output from SynthEyes.
5. The Poser Dolly camera will be selected and will have the SynthEyes camera
animation on it. There are little objects for each tracker, and SynthEyes boxes,
cones, etc. are also brought over into Poser.
Open Question: How to render out of Poser with the animated movie
background. The best approach appears to be to render against black with an alpha
channel, then composite over the original shot externally.
Resolve's Fusion
Blackmagic Design's Davinci Resolve contains an embedded version of Fusion.
Because Resolve uses a closed internal database to store projects, instead of files, it's
more difficult to get information into the project files. Fortunately, there's a relatively
easy workaround: pasting it.
Tip: You can refer to a tutorial Using SynthEyes with Resolve 15 by Andrew
Hazelden.
10. You'll now be able to see the output of the Fusion comp in Resolve, and
you can add elements etc in the Fusion comp.
If you later update the tracking, you can export from SynthEyes again, then
delete the SynthEyes-originated nodes, paste again, and re-wire as needed. If you
anticipate doing this repeatedly, you may want to lay out the nodes in Fusion to make
this easy.
The impact of doing or not doing the two optional steps is not obvious. If you do
them, the shot imagery is being managed by Resolve and supplied to Fusion, while if
you don't, the shot is being loaded directly by Fusion.
If you exported a comp to Fusion at an earlier time and have since gone on to
other things, such that the data is no longer on the clipboard, you can open the
exported comp file in any text editor, such as Notepad, TextEdit, or Vim.
Control/Command-A to select the
entire contents, then Edit/Copy. Now you are ready to paste into the Fusion pane of
Resolve as described above.
Shake
SynthEyes offers three specific exporters for Shake, plus one generic one:
1. MatchMove Node.
2. Tracker Node
3. Tracking File format
4. 3-D Export via the “AfterFX via .ma” or Maya ASCII exports.
The first two formats (Sizzle export scripts) produce Shake scripts (.shk files); the third
format is a text file. The fourth option produces Maya scene files that Shake reads and
builds into a scene using its 3-D camera.
We’ll start with the simplest, the tracking file format. Select one tracker and
export with the Shake Tracking File Format, and you will have a track that can be
loaded into a Shake tracker using the load option. You can use this to bring a track from
SynthEyes into existing Shake tracking setups.
Building on this basis, #2, Tracker Node, exports one or more selected trackers
from SynthEyes to create a single Tracker Node within Shake. There are some fine
points to this. First, you will be asked whether you want to export the solved 3-D
positions, or the tracked 2-D positions. These values are similar, but not the same. If
you have a 3-D solution in SynthEyes, you can select the solved 3-D positions, and the
export will be the “ideal” tracked (predicted) coordinates, with less jitter than the plain
2-D coordinates.
Also, since you might be exporting from Windows to a Mac or Linux machine, the
image source file(s) may be named differently: perhaps X:\shots1\shot1_#.tga on
Windows, and /Users/tom/shots/shot1_#.tga on the Mac. The Shake export script’s
dialog box has two fields, PC Drive and Mac Drive, that you can set to automatically
translate the PC file name into the Mac file name, so that the Shake script will work
immediately. In this example, you would set PC Drive to “X:\\” and Mac Drive to
“/Users/tom/”.
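The substitution the two fields perform can be sketched as follows (translate_path is a hypothetical helper illustrating the idea, not part of the export script):

```python
def translate_path(path, pc_drive, mac_drive):
    """Swap a Windows drive prefix for a Mac/Linux one and fix the path
    separators, mimicking the PC Drive / Mac Drive dialog fields."""
    if path.startswith(pc_drive):
        path = mac_drive + path[len(pc_drive):]
    return path.replace("\\", "/")

print(translate_path("X:\\shots1\\shot1_#.tga", "X:\\", "/Users/tom/"))
# /Users/tom/shots1/shot1_#.tga
```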
Finally, the MatchMove node exporter looks not for trackers to export, but for
SynthEyes planes! Each plane (created from the 3-D panel) is exported to Shake by
creating four artificial trackers (in Shake) at the corners of the plane. The matchmove
export lets you insert a layer at any arbitrary position within the 3-D environment
calculated by SynthEyes. For example, you can insert a matte painting into a scene at a
location where there is nothing to track. You can use a collection of planes, positioned
in SynthEyes, to obtain much of the effect of a 3-D camera. The matchmove node
export also provides Windows to Mac/Linux file name translation.
trueSpace
Warning: trueSpace has sometimes had problems executing the exported script
correctly. Hopefully Caligari will fix this soon.
Vue 5 Infinite
The export to Vue Infinite requires a fair number of manual steps pending further
Vue enhancements. But with a little practice, they should only take a minute or two.
1. Export from SynthEyes using the Vue 5 Infinite setting. The options can be
left at their default settings unless changes are needed. You can save the
Python script
produced into any convenient location.
2. Start Vue Infinite or do a File/New in it.
3. Select the Main Camera
4. On its properties, turn OFF "Always keep level"
5. Go to the animation menu, turn ON the auto-keyframe option.
6. Select the Python/Run python script menu item, select the script exported
from SynthEyes, and run it.
7. In the main camera view, select the "Camera01 Screen" object (or the
equivalent if the SynthEyes camera was renamed)
8. In the material preview, right-click, select Edit Material.
9. The material editor appears, select Advanced Material Editor if not
already.
10. Change the material name to flyover or whatever the image shot name is.
11. Select the Colors tab.
12. Select "Mapped picture"
13. Click the left-arrow "Load" icon under the black bitmap preview area
14. In the "Please select a picture to load" dialog, click the Browse File icon at
the bottom (a left arrow superimposed on a folder)
15. Select your image file in the Open Files dialog. If it is an image sequence,
select the first image, then shift-select the last.
16. On the material editor, under the bitmap preview area, click the clap-board
animation icon to bring up the Animated Texture Options dialog
17. Set the frame rate to the correct value.
18. Turn on "Mirror Y"
19. Hit OK on the Animated Texture dialog
20. On the drop-down at top right of the Advanced Material Editor, select a
Mapping of Object-Parametric
21. Turn off "Cast shadows" and "Receive shadows"
22. Back down below, click the Highlights tab
23. Turn Highlight global intensity down to zero.
Vue 6 Infinite
1. Export from SynthEyes using the Vue 6 Infinite option, producing a maxscript file.
2. Import the maxscript file in Vue 6 Infinite
3. Adjust the aspect ratio of the backdrop to the correct overall aspect ratio for your
shot. This is important since Vue assumes square pixels, and if they aren't square
(as for all DV, say), the camera match will be badly off.
Basic Overlays
The simplest kind of insertion is the "what you see is what you get" approach,
where the output is what you see in the 3-D views of your 3-D application. The new CG
imagery is simply superimposed on top of the original shot.
Tip: Make sure that your rendering application is not applying lighting to the
projection screen holding the imagery.
While this looks like the thing to do, because it's generally already set up and
showing you the result, the overlay is there to help you develop the shot and verify
lineup, rather than to serve as the final shot. Using an overlay directly is suited only to
simple examples and previews.
Generally it is easiest to sell the new elements as part of the shot if you can
adjust them in a compositing application, such as After Effects, Fusion, Motion, or Nuke,
rather than trying to force the material settings on your 3-D render to exactly match the
final shot.
Your renderer will produce an alpha channel that shows the newly rendered
pixels and is used to overlay the 3-D imagery over the original images.
For example, you can get this in Fusion by disconnecting the image loader from
the input to the Camera3D node, or by un-checking the Enable Image Plane checkbox
on the Image tab of the Camera3D node.
With the 3-D portions of the scene rendered against black, you then composite
the rendered images over the original images. Again in Fusion, you can connect the
final Renderer3D node to a Merge as the foreground, and the original imagery to the
background input of the Merge. (Use the output of the Custom nodes if undistortion and
redistortion nodes are present.)
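For reference, the over operation that such a Merge performs on premultiplied-alpha images can be sketched in NumPy (a sketch of the standard compositing math, not Fusion's actual implementation):

```python
import numpy as np

def over(fg, bg):
    """Composite a premultiplied-alpha foreground over a background.
    fg and bg are float RGBA arrays with values in [0, 1]."""
    a = fg[..., 3:4]                      # foreground alpha, kept broadcastable
    return np.clip(fg + bg * (1.0 - a), 0.0, 1.0)
```

Where the foreground alpha is 1, only the render shows; where it is 0, the original plate shows through unchanged.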
Sliding
Sliding is usually caused by improper placement of inserted objects: they are in
the air, underground, etc. The real world isn't flat or "square." If you see sliding at a
particular location, put a supervised tracker there, and use its computed 3-D location to
place the inserted object. Use tracker-based mesh modeling to create or adjust the
object as needed. For more information, see the Understanding Sliding tutorial.
Rotoscoping
Overlays are great if the new objects are uniformly in front, but this is often not
the case. If not, you must perform rotoscoping, typically in your compositing application,
so that you have the isolated foreground object(s) and an alpha channel, as in the
rendering-for-compositing examples above.
Rotoscoping can be especially difficult and time-consuming for detailed natural
objects such as bushes and trees. For this reason, consider overlaying existing objects
with new CG elements as an alternative. This is especially typical for architectural
projects.
Matching Lighting
You should match the amount and direction of the lighting in the scene. You
should match not only the brightness of your objects to existing objects in the scene
(especially to ensure that your objects aren't too bright), but you should also match the
depth of the shadows, so that the shadows aren't darker than existing shadows in the
scene.
SynthEyes has a system to help you determine the direction to lights in the scene
(especially sunlight), based on tracking both the location of the shadow and the location
of the object casting the shadow. While the scene may not always have appropriate
trackers available, this method can immediately produce accurate results.
You can use the Shadow Map Maker script to help create a physical object for a
shadow; it can be rendered as a separate layer if needed. To do this, create a mesh
from the trackers in the area the shadow will fall, then run the script to generate a
texture (stored ON the mesh) that corresponds to the shadow. You'll see that texture
with texture mapping turned on in the perspective window. To save the map to disk,
open the Texture Panel, click Create Texture (you've already done that, but this sets up
the next step), click Set and select the filename where it should be stored, then click the
Save button to write the texture map. As a last step, turn
Matching Defocus
You should also match the degree and depth of focus in the original shot:
typically cameras and lenses aren't as sharp as CG images. Depending on your
software, you may be able to use a depth of field setting while rendering, or you may
need to apply a 2-D blur in your compositing software.
Matching Noise
You should apply noise to the CG render to match what the camera has
produced, so that the CG effect doesn't look too clean. The noise should change with
each frame, of course. Compositing packages typically have a variety of features for
doing this.
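A minimal sketch of per-frame grain in NumPy (your compositing package has dedicated grain tools; the per-frame reseeding and the sigma value here are illustrative):

```python
import numpy as np

def add_grain(img, frame_number, sigma=0.02):
    """Add zero-mean Gaussian grain to a float image in [0, 1],
    reseeded per frame so the noise changes on every frame,
    as real camera noise does."""
    rng = np.random.default_rng(frame_number)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
```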
Eroding Alpha
CG inserts can stick out when the boundary between the CG effect and the
background is too sharp. This is different from Matching Defocus, which applies to the
interior of the effects.
Instead, you want to blur the alpha channel but only inside the existing area.
You don't want any non-zero alpha creeping outside the portion which has been
rendered.
In Fusion, you can configure the Blur to blur only alpha. Then a Custom tool with
i1=1-2*(1-a1), r=r1*i1, g=g1*i1, b=b1*i1, a=i1 will pull the alpha in while maintaining the
premultiplying of Fusion's pipeline. (There's probably a better way; this is a
sledgehammer solution.) Just a pixel or two of alpha blur will do the trick.
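The combined effect of that Blur-plus-Custom-tool setup can be sketched in NumPy (a rough stand-in using a box blur; the function name and radius are illustrative):

```python
import numpy as np

def erode_alpha(rgba, radius=1):
    """Blur the alpha channel only (box blur standing in for Fusion's
    Blur), then pull it inward with i = 1 - 2*(1 - a) and re-premultiply
    RGB, as in the Custom-tool expression i1=1-2*(1-a1), r=r1*i1, ..."""
    a = rgba[..., 3]
    k = 2 * radius + 1
    pad = np.pad(a, radius, mode="edge")
    blurred = np.zeros_like(a)
    for dy in range(k):                  # accumulate the k x k box blur
        for dx in range(k):
            blurred = blurred + pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    blurred = blurred / (k * k)
    i = np.clip(1.0 - 2.0 * (1.0 - blurred), 0.0, 1.0)
    out = rgba.astype(float)
    out[..., :3] *= i[..., None]         # keep RGB premultiplied by new alpha
    out[..., 3] = i
    return out
```

Pixels well inside the rendered region keep full alpha, while the blurred edge is pulled to zero, so no non-zero alpha creeps outward.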
Tip: See also the sections on Creating Mesh Objects and Importing Mesh
Objects.
Cards may be parented to moving objects: simply have the moving object active
on the main toolbar when you create the card. The card will inherit, and be aligned with
respect to, the object's coordinate system rather than the world coordinate system.
Creating the Card
You can create cards using the "Add Cards" mouse mode of the perspective
window, found in the Other Modes section of the right-click menu, or on the mesh
toolbar. Note that cards are simply planes, positioned and configured easily: you can
place a plane manually and texture it and the result is just as much a card.
With the Add Cards mode active, you can lasso-select within the perspective
view. The trackers that fall within the lasso are examined and used to determine the
plane of the card, in 3-D. If there are a few trackers that are much further away than the
rest, that do not fit well on the plane of the others, they will be ignored.
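Determining the plane of the card from the lassoed trackers is, at heart, a least-squares plane fit. A sketch of the underlying math (not SynthEyes' actual code, which also rejects the outlying trackers):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D tracker positions.
    Returns (centroid, unit normal); the singular vector with the
    smallest singular value of the centered points is the normal."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]
```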
The bounding box of the lassoed area determines the size of the plane. You can
move the mouse around as you are lassoing to bump up the size of the plane. You
might notice the plane jumping around a bit if the trackers you have lassoed don't form a
particularly flat plane: keep moving until you get the plane you want!
Alternatively, you can pre-select the trackers to be used to locate the plane,
using any method of tracker selection you want. While the Add Card mode is active, use
control-drag to do a simple lasso of the trackers, without creating the plane yet. This
makes it easier to navigate around in 3-D and verify that you have selected the trackers
you want, and that they are reasonably planar, before creating the card.
If there are already trackers selected as you start to add a card, those trackers
are used to locate it, and instead of a lasso, the location where you first push the mouse
button and where you release it are used as the corners of the plane; it is a simple
rectangular drag (in the plane defined by the trackers). Pre-selecting the trackers is
more convenient when you want to carefully locate the edges of the plane. You can
also do a control-A to select all the trackers, and find the best overall (ground, typically)
plane.
As you create a new card, its texturing parameters are copied from the previous
card, if any, so you can configure the first one as you like, then create additional cards
quickly. You may want to choose the resolution specifically for each card—smaller
cards need less, bigger cards need more—to maintain an adequate but not excessive
amount of oversampling of the actual pixels.
Once you have created a card, it will be selected in the viewports, so that you
can work on the texturing. If you would like to compare the position of the plane to the
trackers used to align it, click Undo once. This will show the created, but unselected
card, plus the selected trackers used to align it. If you did not pre-select the trackers,
instead lassoing them within the tool, only the trackers actually used to locate the card
will be selected, not any outliers. You can unlock from the camera and orbit about to get
a better idea of what you have, then click Redo to re-select the card.
Moving a Card
If you want to reposition the card along its own axes, be sure to switch the
perspective window to View/Local coordinate handles, so that you will be sliding the
plane along its own plane, not the coordinate axes. But be sure to read the next
paragraph!
If you re-solve the scene, for example after adding trackers and hitting refine, or
after changing the coordinate system setup, the position of the card can be updated
based on the new positions of the trackers originally used to create it. Run the
Linking/Align Via Links dialog on the perspective window's right-click menu.
Since the trackers may shift around arbitrarily, or if you moved the card after
creating it, the card may no longer be in some exact location you wanted, and you will
need to manually adjust it.
Texturing
For the full details of texture extraction, see the texture extraction chapter. Here
is a quick preview. SynthEyes will pull an average texture from the scene and write it to
a file. You have control over the resolution, format, and other parameters, as set from
the Texture Control Panel (opened from the Window menu).
When you create a card using Add Cards, the texturing parameters will be
preconfigured to create a texture and write it to disk. The file name is based on the card
name and the overall saved SynthEyes scene file name: you must have already saved
the file at least once for this to work.
The texture will not be produced until you click the Run or Run All buttons on the
Texture Control Panel, or until you solve, if the Run all after solve checkbox is turned
on.
Once the texture is produced, you can paint on its alpha channel in SynthEyes or
externally. If you do so externally, you should turn off the Create checkbox so the
texture is not later accidentally overwritten.
Tricks
Cards can exactly represent only perfectly flat surfaces, since they are
themselves flat. It is relatively common to use flat cards in 3-D compositing setups
because the amount of perspective shift that should occur, if the item on the card was
more accurately modeled, is not discernable. You can extract and build up composites
with multiple levels of cards with extracted textures, if you build appropriate alpha
channels for them.
If the camera moves a lot, a single card can start to present an inaccurate view,
one that shows its essential flatness. You can create multiple cards, oriented differently
to match different parts of the shot, compute the texture based on the correspondingly
limited portion of the shot, and fade them in and out over the duration of the shot (in
your composition or 3-D animation app).
Note: the triangulation occurs with respect to a particular point of view; a top-
down view is preferable to a side-on one which will probably have an
interdigitated structure, rather than what you likely want. The automatic
viewport-dependent triangulation is a quick and dirty head start; you can use
clipping planes or the Assemble Mesh mouse mode of the perspective
window to set up a specific triangulation reasonably rapidly.
Lock the view back to the camera. Click on the tracker mesh to select it. Select
the 3-D control panel and click Catch Shadows. Select Cylinder as the object-
creation type on the 3-D panel, and create a cylinder in the middle of the mesh
object (it will be created on the ground plane). You will see the shadow on the tracker
mesh. Use the cylinder’s handles to drag it around and the shadow will move across the
mesh appropriately. For more fun, right-click Place mode and move the cylinder around
on the mesh.
In your 3-D application, you will probably want to subdivide the mesh to a
smoother form, unless you already have many trackers. A smoother mesh will prevent
shadows from showing sharp bends due to the underlying mesh.
In practice, you will want to exercise much finer control over the building of the
mesh: what it is built from, and how. The mesh built from the flyover trackers winds up
with a lot of bumpiness due to the trees and sparsity of sampling. SynthEyes provides
tools for building models more selectively.
If you are following along, keep SynthEyes open at this point, as we'll continue on
from here in the section on Front Projection, right before we start with Texture
Extraction.
Adding Vertices
To produce more accurate geometry, especially with natural ground surfaces,
you can increase the mesh density with the Track menu’s Add many trackers dialog,
rapidly creating additional trackers after an initial auto-track and solve has been
performed, but before using Convert To Mesh.
Especially for man-made objects, there may not be a tracker where you need
one to accurately represent the geometry. SynthEyes uses spot tracking which favors
the interior of objects, not the corners, which are less reliable due to the changing
background behind them. So even if you used auto-tracking and the Add many trackers
dialog, you will probably want to add additional supervised trackers for particular
locations.
To produce additional detail trackers, especially at the corners of objects, the
offset tracking capability, an advanced form of supervised tracking, can be very helpful.
With offset tracking, you can use existing supervised trackers to add new nearby
trackers on corners and fine details without much additional time. You can clone off a
number of offset trackers to handle small details in a building, for example. But beware!
The accuracy of an offset tracker is entirely determined by the quality of the offsets you
build; if you do too quick a job, the offset tracker will not be accurate in 3-D. Offset
tracking works better when the camera motion is simple, even if it is bumpy; for
example, a dolly to the right.
Whether you use Add Many Trackers, create new supervised trackers, or clone
out new offset trackers, you can use Convert to Mesh to add them to the existing edit
mesh if you are auto-triangulating.
Controlling Auto-Triangulation
The convert-to-mesh and triangulate tools operate only on selected trackers or
vertices, respectively. Usually you will want to select only a subset of the trackers to
triangulate. After doing so, you may find that you want to take out some facets and re-
triangulate them differently to better reflect the actual world geometry or your planned
use.
You can accomplish that by deleting the offending facets (after selecting them by
selecting all their vertices), and then selectively re-triangulating.
The triangulation depends on the direction from which the perspective view
camera observes the trackers—it is an essentially 2-D process that works on the view
as seen from the camera. You should ensure the camera does not view the group of
trackers edge-on, as an unusable triangulation will result. Instead, for trackers on a
ground plane, the camera should look down from above.
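A toy version of such a view-dependent triangulation shows why the viewing direction matters (a quick-and-dirty sketch, not SynthEyes' triangulator):

```python
import numpy as np

def fan_triangulate(points3d, view_axis=2):
    """Quick-and-dirty view-dependent triangulation: project the points as
    seen looking straight down view_axis, sort them by angle around the
    centroid, and fan out triangles from the first point. It degenerates
    if the view is edge-on to the points, just as the text warns."""
    pts = np.asarray(points3d, float)
    uv = np.delete(pts, view_axis, axis=1)   # 2-D projection in the view
    c = uv.mean(axis=0)
    order = np.argsort(np.arctan2(uv[:, 1] - c[1], uv[:, 0] - c[0]))
    return [(order[0], order[i], order[i + 1])
            for i in range(1, len(order) - 1)]
```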
The triangulator works nicely for trackers and vertices that are roughly planar.
When you are working on a convex object such as a head, or even a non-
convex object such as a body scan, you need to isolate a particular subset of vertices
that is nearly planar in order to triangulate them automatically. You can do that using
one or more clipping plane(s), which are any well-positioned SynthEyes planes.
The mesh vertex selection tool does not look behind any plane in the workspace,
so by positioning a plane right in the middle of a head, say, you can select and work on
only those vertices on the face. By moving the view and plane around, you can easily
select and triangulate any portion of the model.
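The behind-the-plane test amounts to a signed-distance check against the clipping plane; a sketch (in_front is an illustrative helper):

```python
import numpy as np

def in_front(vertices, plane_point, plane_normal):
    """Select vertices on the working side of a clipping plane:
    those with positive signed distance along the plane normal."""
    v = np.asarray(vertices, float)
    d = (v - np.asarray(plane_point, float)) @ np.asarray(plane_normal, float)
    return d > 0.0
```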
Using the Assemble Mesh Mouse Mode
You can manually triangulate a mesh using the Assemble Mesh mode of the
perspective window. Obviously that takes some more time, at a bit over one click per
facet, but it goes quickly. It allows the very specific control necessary for objects such
as detailed building models, or for stitching together separately-triangulated meshes.
Tip: You may want to use Assemble Mesh Mode when the vertices form a
volume and you need to work on the front half of a head, say, where
auto-triangulation of all the vertices will produce unusable results (unless
you use a clipping plane).
To use Assemble Mesh mode, go to the perspective window and select it from
the Mesh Operations submenu of the right-click menu. Do not use the Convert to Mesh
item on any trackers. (Assemble Meshes does also work directly on vertices, such as
an imported Lidar scan.)
Instead, begin clicking on the three trackers you want to form the first facet. As
you click on the third, the facet will be created. Click on a fourth tracker, and a new facet
will be created from two of the prior three and the new one in a reasonably intelligent
fashion. As you click on each additional tracker, a new facet will be created.
You can hit Undo if the triangle created isn't the one you want. To get the triangle
you want, click on the vertex you do not want, to deselect it; with only two vertices then
selected, clicking on another vertex will create a facet from it and the two selected.
To start a new triangle in a different location, hold down the control key as you
click a tracker: the previously selected vertices will be deselected, leaving only the new
vertex selected. Clicking two more trackers will produce the first new facet.
Mesh Editing
Often an outlying tracker may need to be removed from the mesh, for example,
the top of a phone pole that creates a “tent” in an otherwise mostly flat landscape. You
can select that vertex, and right-click Remove and Repair. Removed vertices are not
deleted, to give you the opportunity to reconnect them. Use the Delete Unused Vertices
operation to finally remove them.
Tip: There are toolbars with many perspective view operations. Right-click
and look in the toolbars submenu; for example see the Mesh toolbar for
editing operations.
Important!: if you move vertices or add new ones, they will not be updated when
you update the mesh after a new solver operation; see the next section.
After we get into object tracking, you will see that you can use the mesh
construction process to generate starting points for object modeling efforts as well.
What Happens If I Refine?
After you have built a mesh from tracker locations, you may need to update the
tracking and solution. The tracker locations will change, and without further action, the
mesh vertices will not. The tracker locations are stored in the mesh as the mesh is
generated.
However, SynthEyes contains a Linking subsystem that can record the original
tracker producing a vertex. This subsystem stores the linkages automatically when you
use the Convert to Mesh and Assemble Mesh operations of the perspective window.
After a solve, you can update the mesh using the Linking/Update meshes using
links operation on the right-click menu. This does not happen automatically, as there are
several possibilities involved with linking, as you will see below.
Important: the links cover only the vertices generated from trackers. If you
create additional vertices, either manually or by subdividing, those vertices can not and
will not be updated.
Links apply for a specific shot, i.e., the shot the tracker is a part of. A given vertex
can be linked to different trackers on different shots! When you use Update meshes
using links, the currently-active shot determines which links are used to update the
vertices.
Tip: If you have a mesh selected and the 3-D panel open, you can use the
scaling tool's uniform scale value to not only rescale the mesh, but if you drag
while holding down the ALT/command key, you can reposition it
simultaneously along the line of sight from the camera so that its visual size
doesn't change at all! (Similar to the Nudge mini-spinner on the Coordinates
panel.)
Tip #2: It's also possible to do this from a 3D viewport and the XYZ panel, by
turning on the selection lock and scaling tool, then scale-dragging in the
viewport starting from the camera as the pivot point.
Three-Point Alignment
In the standard coordinate system setup, you click on 3 trackers to set up a
coordinate system: an origin, and X-axis, and a point on the ground plane. You can do
something very similar to align a mesh to three trackers.
In this case, you set up three links between a tracker and the corresponding
vertex on the mesh. You must set the mesh as the edit mesh so that its vertices are
exposed.
Select the first vertex and the first tracker—both can be selected at once. Use the
corresponding Lasso operation for the vertices and tracker. It can be convenient to
zoom the perspective window, or use two different perspective or 3D viewports to do
that. Select the Add link and align mesh menu item on the Linking submenu, and a
new link will be created and the mesh moved to match them up.
Select the second vertex and second tracker, and do the Add link and align mesh
item again, and both vertices will match, achieved by moving, rotating, and scaling the
mesh.
Repeat for the third vertex and tracker. These do not have to be an exact match!
This time, the mesh will rotate about the 3-D line between the first two trackers/vertices
so that the third vertex and tracker fall on the same plane, just like for the coordinate
system setup. The first two links are "lock points" whereas the third pair is like "on
plane." So the third link is of a different type than the first two, the tracker is linked to a
position relative to the three vertices.
If you change the solve, you should update the position of the primitive using the
Linking/Align via Links dialog, described next.
Align Via Links Dialog
The three-point alignment method produces exact tracker/vertex matches for a
small number of points. Alternatively, you can use the Align Via Links dialog (launched
from the Linking submenu) to align a mesh as best possible to any number of links.
You should establish all your links first, using the Add Links to selected menu
item. Then launch the dialog.
You can align the mesh to the trackers, or cause the camera and all the trackers
to be moved to match the mesh, leaving it right where it is! This second option is useful
when you have been given an entire existing 3-D model of the set that you must match
the solve to.
On the dialog, you'll notice that you can control whether or not the mesh is
allowed to scale to match the links, and whether or not that scaling is separate on each
axis, or uniform. When you have a complex existing mesh to match, you'll certainly
want uniform scaling. But if you are matching a box to some trackers, and do not know
the correct relative sizes of the axes, you should use the non-uniform scaling.
You can also cause the tracker locations to be locked to the mesh locations,
which is handy when you have aligned the world to the mesh position—the created links
will cause the solve to reproduce this same alignment with the mesh later, in additional
solves, without having to re-run this dialog.
To aid your linking operations, you can also remove links, or show the trackers
with links, using operations on the Linking submenu.
When you show (flash) the trackers with links, if there are selected vertices, then
only trackers that are linked to the selected vertices will be flashed.
The U,V coordinates of the mesh can be exported and used in other animation
software, along with the source-image frame as a texture, in the rare event it does not
support camera mapping. Frozen Front Projection is the prelude to the texture
extraction capabilities described next.
Important: You will need to create garbage mattes to block out any edges of
the image that contain black margins, or any other area that does not contain
valid image (e.g., time-code). See Garbage Mattes.
Tip: If you would like to generate animated texture maps, use the ViewShift
system.
The texture display and extraction capabilities are controlled from the Texture
Control Panel, launched from the Window menu or the button at the bottom of the 3-D
Panel, which contains a small subset of the texture controls for information. There are
also some preferences in the Mesh section.
Important: Make sure all meshes are subdivided until each triangle covers
only a relatively small portion of the screen & 3-D environment. This is
especially true of tracker meshes.
360VR. When doing texture extraction from 360VR shots, be sure that the
meshes are especially well subdivided, especially when using blocking
meshes. Inadequate subdivision can cause seams to open in the extracted
texture, because straight lines in the 3D environment are not straight in
360VR images.
Many of the calculations during texture extraction are performed only once per
triangle, not once per pixel, which saves a tremendous amount of time. However, if
triangles are too big, perspective changes from one end to the other will cause
unnecessary blurring. (The interior pixels of a triangle use bilinear interpolation, instead
of the full perspective transform and lens distortion compensation used for the vertices).
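The blurring from oversized triangles can be illustrated with a small sketch. This is not SynthEyes code; the `project` function and its parameters are invented for the example. It compares linear interpolation of a triangle edge's screen position (as bilinear interior filling does) against the true perspective projection of the edge's 3-D midpoint:

```python
# Illustrative sketch: why large triangles blur during texture extraction.
# Interior pixels use linear interpolation of the vertex values, while a
# true perspective mapping is nonlinear in depth.

def project(x, z, focal=1.0):
    """Simple pinhole projection of a point at lateral offset x, depth z."""
    return focal * x / z

# A triangle edge spanning from depth 2 to depth 4 at lateral offsets 0..1.
p0 = project(0.0, 2.0)
p1 = project(1.0, 4.0)

# Linear (screen-space) interpolation of the edge's midpoint:
linear_mid = 0.5 * (p0 + p1)
# True perspective projection of the 3-D midpoint:
true_mid = project(0.5, 3.0)

error = abs(linear_mid - true_mid)
# The error shrinks as the triangle is subdivided, which is why
# well-subdivided meshes extract crisper textures.
```

Subdividing the edge halves the depth span of each piece, so the interpolation error drops quickly with subdivision.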
If you are extracting from a tracker mesh, you can add additional trackers in problem
areas: use Track/Add Many to increase tracker density before meshing, or use Punch
In Tracker mode in the perspective window to add individual trackers to the mesh later.
Alternatively, you can write your mesh to an OBJ file, increase its density in a
third party application, perhaps with smoothing, and reimport it for texture extraction.
Texture Display
SynthEyes can display a texture on meshes, whether the texture was generated
by SynthEyes or by a different CG or 2-D painting package.
With the mesh selected, the texture control panel open, and the Create checkbox
off, click Set and open the texture image.
Note: Only static textures can be applied here, not sequences or movies.
There is a subtle way to apply animated textures, as described in Animated
Texture Map Extraction.
You can then set the horizontal and vertical resolution of the image to be
computed. Note that these controls have drop-downs for quick and accurate selection of
standard values, but you can also enter the values directly into the edit fields.
Tip: You can enter the image size you want on the Texture Control Panel;
you do not have to stick to the preconfigured drop-down values, and you can
use values larger than the 4K maximum in the drop-down if the situation
warrants it (e.g., a traveling shot).
Warning: as you go to generate that 32K x 16K texture, keep in mind that the
amount of viable information in it will depend on what images are fed into the
process; for example, you can not produce a high resolution texture from a
single 720x480 input image. You will produce a high-resolution image with
many blurry pixels!
With the image depth drop-down, select the desired resolution to be saved in the
output file: 8-bit, 16-bit, half-float, or floating point. Which channel depths are supported
depends on the output file format; for example, JPEG is only 8-bit, and there is no
notification of that. You will see the effect in downstream packages, or you can check
file sizes.
Though it is here with the format controls, the filter type drop-down controls how
the internal texture processing is done, as the resolution changes from the input image
resolution to the texture image resolution. The default 2-Lanczos should be used almost
always; bilinear may be a little faster and less crisp, 3-Lanczos will take longer and
might possibly produce slightly sharper images. 2-Mitchell (Mitchell-Netravali) is
between bilinear and 2-Lanczos in sharpness.
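The filter names correspond to standard resampling kernels. As a rough sketch of the trade-off (this is illustrative background, not SynthEyes code), the bilinear filter is a simple tent function, while the Lanczos family is a windowed sinc whose support radius gives the "2" or "3" in the name, trading speed for sharpness:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, a):
    """Lanczos kernel with support radius a (a=2 for 2-Lanczos, 3 for 3-Lanczos)."""
    return sinc(x) * sinc(x / a) if abs(x) < a else 0.0

def bilinear(x):
    """Triangle (tent) kernel used by bilinear filtering."""
    return max(0.0, 1.0 - abs(x))
```

The wider the kernel's support, the more input pixels contribute to each output pixel: more computation, but a crisper result (and more ringing on noisy footage).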
Blocking
If there is an actor or other object moving continuously in front of the mesh,
shutting down extraction for the entire time may not be an option. Instead, SynthEyes
offers two methods to prevent portions of the image from being used for extraction:
blocking meshes and garbage mattes.
These may not be necessary, if the disturbance is small and short-lived and there
are enough other frames: the image may not be materially affected.
Blocking Meshes
The idea behind blocking meshes is simple, direct, and physically based. It is
applicable for static portions of the set, rather than actors.
If you have modeled a wall and a desk in front of it, then the mesh for the desk
will block the portions of the wall behind it, preventing those sections from participating
in the wall's texture extraction. If the camera moves sufficiently that the portion behind
the desk is exposed elsewhere in the shot, then the wall's entire texture can be
computed. If not, the wall texture will have a blank spot—one that can not be seen as it
is behind the desk!
A complex scene may have many meshes in it, some used for texture extraction,
some used for blocking, some not. To allow performance to be optimized, the texture
control panel contains a control over whether or not a particular mesh should be tested
to see if it blocks any texture extractions.
It may be set to Blocking or Non-blocking and defaults to non-blocking; usually
only a few meshes must be set to blocking, if any. The blocking control can and should
be adjusted as needed, whether or not a mesh is having its texture extracted. There is a
notable calculation overhead for blocking meshes, as something similar to a render
must be performed on each frame for each blocking mesh.
Garbage Mattes
If an actor or a hard-to-model object is moving in front of the mesh being
extracted, you can instead set up an animated garbage matte to exclude that area.
Important: you will need to set up (un-moving) garbage mattes for black
edges or corners of the image. If you do not, you will have large black
artifacts in a trail through your texture!
The garbage mattes are set up with SynthEyes's roto system, which is normally
used to control the automatic tracker. By the time you get to texture extraction, the time
for that use is well past, and you can add additional garbage mattes or remove
existing ones without any ill effect.
To make the texture extraction for a particular mesh sensitive to the garbage
splines, turn on the Blocked by Garbage Splines checkbox. (The control defaults to off
to prevent many needless examinations of the complex animated spline shapes.) You
can animate the spline's enable control on the roto panel if needed.
As with their normal use, the garbage mattes set up for texture extraction do not
have to be particularly exact; they can be quick and dirty. For tracking mattes, don't
forget to use the Import Tracker to CP (control point) capability.
lighting changes over the duration of the shot will appear in the patch
structure (rather than being averaged out)
In a long traveling shot, this last limitation can be a useful feature.
You may find it helpful to touch up these textures in Photoshop or equivalent to
mitigate any seams or add a slight blur.
Weighting Control
The pixel averaging or best-pixel determination uses a weighting factor based
on the tilt angle of the triangle relative to the camera and on the distance from the
camera to the triangle (center). These controls should not have to be adjusted (except
perhaps to experiment), so they are preferences rather than controls on the texture
control panel itself. We describe them here for your understanding.
The tilt angle weight is 1.0 when the camera is looking head-on at the triangle. It
drops off rapidly as the tilt angle increases, based on the texture falloff parameter in the
Meshes section of the preferences. A larger value emphasizes head-on triangles; a zero
value makes all triangles equal. Literally, the value is exp(-falloff*sin^2(angle)); you
can plot it in Excel on a rainy day.
The distance component is simpler: the camera world size value divided by the Z
distance to the triangle center. So a triangle twice as far has half the weight, and vice
versa.
Tech note: Since doubling all the weights or halving all the weights makes no
difference, the world size value just keeps the numbers well normalized,
without affecting the results.
The tilt and distance components are multiplied to form the final weight.
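Putting the tilt and distance components together, the weighting can be sketched as follows. The formula follows the description above; the function and parameter names are invented for the example:

```python
import math

def triangle_weight(tilt_angle_rad, z_distance, world_size, falloff):
    """Sketch of the per-triangle weight described above.

    tilt_angle_rad: angle between the camera direction and the triangle,
                    0 = head-on.
    z_distance:     Z distance from the camera to the triangle center.
    world_size:     the scene's world size, used only for normalization.
    falloff:        the texture falloff preference; 0 makes all tilts equal.
    """
    # Tilt component: 1.0 head-on, dropping off rapidly with tilt angle.
    tilt_w = math.exp(-falloff * math.sin(tilt_angle_rad) ** 2)
    # Distance component: world size divided by Z distance, so a triangle
    # twice as far has half the weight.
    dist_w = world_size / z_distance
    # The two components are multiplied to form the final weight.
    return tilt_w * dist_w
```

Consistent with the tech note, scaling `world_size` multiplies every weight equally, so it normalizes the numbers without changing the weighted result.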
Tilt Control
Consider a vertical cylinder with a camera orbiting it in a horizontal plane. At any
point in time, the camera gets a good view of the part of the cylinder facing it, but the
portions seen edge-on can not be seen well, as the texture is fore-shortened with the
pixels crunched together. As the camera orbits, the portions that can be seen well,
and those that can not, change continuously.
Although the tilt weighting reduces the effect of edge-on triangles, we provide an
additional hard-and-fast control.
To ensure that texture extraction proceeds on the portions that can be accurately
extracted, and ignores the portions that can not, there is a Tilt control.
With the tilt control at zero, all grazing angles are considered. At one, extraction
proceeds only on the portions that precisely face the camera, with intermediate
cutoff angles at values in between. (Multiply by 90 to get the relevant angle.)
The tilt angle check is performed on a facet-by-facet basis, so meshes where this
calculation is necessary should have adequate segmentation. If there are only 12 facets
around, for example, then there will be only 3 facets between straight-on and edge-on,
so roughly 30 degrees at a time will be turned off or on. A segment count of 48 would be
a better choice.
Run Control
The texture extraction process can run under manual or automatic control. You
can run it on only the selected mesh(es) using the Run button, or on all meshes using
the Run All button. In both cases, of course only meshes set up for extraction or
blocking will be processed. It will take less time in total to process all meshes
simultaneously than to process each mesh separately.
You can also have the mesh extraction process run automatically at the end of
each solve operation (normal or refine). You should use this option judiciously, as
extraction can take quite some time and you will not want to do it over and over while
you are working on unrelated tracking or solving issues.
Alpha Generation
When a mesh covers only the interior of an object, the pixels underneath it repeat
reliably in every frame. For example, a side of a building, a poster on a wall, etc.
However, the edge of a castle or a natural scene with rocks or trees will have an
irregular border that is tough to model with mesh geometry.
If the geometry extends past the edge of the object, those pixels will vary over
time, depending on what is behind the object. As the camera moves, those pixels that
are not over a part of the object will sweep across different parts of the background,
potentially producing a broad spread of pixel values.
We can exploit that to produce an alpha channel for the mesh that is opaque
where it covers the object of interest, and transparent for the background. Such meshes
and textures can then readily be used in traditional compositing, especially if the mesh
is always a flat plane.
SynthEyes measures the RMS error (repeatability) of pixels, and offers the Alpha
Control section of the texture control panel. To generate an alpha channel, turn on the
Create Alpha checkbox.
The Low error spinner sets the level at which the alpha channel will be fully
opaque (below this lower limit). The High error spinner sets the level at which the alpha
channel will be fully transparent (above this upper limit). The Sharpness spinner
controls what happens in between those limits, much like a gamma control.
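The described mapping from RMS error to alpha can be sketched as below. This is illustrative, not the exact SynthEyes formula; the function and parameter names are invented for the example:

```python
def alpha_from_error(rms_error, low, high, sharpness):
    """Sketch of the alpha mapping described above: fully opaque below
    the Low limit, fully transparent above the High limit, with a
    gamma-like transition controlled by Sharpness in between."""
    if rms_error <= low:
        return 1.0                           # fully opaque
    if rms_error >= high:
        return 0.0                           # fully transparent
    t = (rms_error - low) / (high - low)     # 0..1 across the transition
    return 1.0 - t ** sharpness              # sharpness acts like a gamma
```

Raising the Low limit widens the guaranteed-opaque band; lowering the High limit clears more of the background, matching the tuning procedure described above.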
You can increase the Low limit until portions of the alpha channel start to drop
out that should not, and decrease the High limit to the point that the background is fully
clear.
The alpha channel will update immediately as you do this, without having to
recalculate the texture.
You should not expect this process to be perfect; it depends strongly on what the
background is behind the object and how much variability there is in the background
itself. For example, a green-screen background will always stick with the foreground,
because it never varies!
To make it easier to see what you have in the alpha channel, you can use the
Show only texture alpha checkbox and of course the Hide mesh selection checkbox.
You can also use your operating system's image-preview tools to look at the texture
images that have been stored to disk.
To clean up the alpha channel, or create one from scratch, you can paint in it
directly, as described in the next section.
Alpha Painting
SynthEyes offers a painting system that allows you to directly paint in the alpha
channel of extracted textures. You can paint fine detail into textures, especially cards,
rather than trying to create extremely detailed geometry. And you can better capture
natural elements.
Painting is controlled entirely from the Paint toolbar, accessed from
the perspective view's right-click menu. There must be exactly one mesh selected; it
should not be the edit mesh. The toolbar has convenience buttons to show only the
alpha channel, hide the selection, and hide the mesh completely.
There are four mouse modes for painting, all of which use the same three
settings: Size, Sharpness, and Density. These settings are adjusted by dragging
vertically, starting within the respective box on the Paint toolbar.
Fine Point!: The settings affect the last stroke drawn (so you can change it),
as well as the next stroke you draw. To change the parameters before
starting a new stroke, without affecting the old stroke, either click one of
the drawing mode buttons again, or right-click one of the setting buttons.
While the setting buttons are attached to an existing stroke, there is an
asterisk (*) after the name of the button. You can re-attach to the last stroke
by double-clicking one of the settings buttons.
The Size setting controls the size of the brush, in pixels. The sharpness setting
controls the type of fall-off away from the center of the brush. The Density setting
ranges from -1 to +1: at -1, painting makes the pixels immediately transparent, at +1,
painting makes the pixels immediately opaque. In between, pixels are made only
somewhat more transparent or opaque. The transparent and opaque buttons on the
toolbar set the density quickly to the respective value.
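The Density behavior can be sketched as a per-pixel blend. This is an illustration of the described behavior, not SynthEyes code; the function names are invented, and the Size and Sharpness falloff is folded into a single factor for brevity:

```python
def apply_stroke(alpha, density, brush=1.0):
    """Sketch of how one stroke sample could update a pixel's alpha.

    alpha:   current alpha at the pixel, 0 (transparent) to 1 (opaque).
    density: -1 makes pixels immediately transparent, +1 immediately
             opaque; values in between act only partially.
    brush:   0..1 falloff factor standing in for the Size/Sharpness
             brush profile (illustrative simplification).
    """
    strength = abs(density) * brush
    target = 1.0 if density > 0 else 0.0
    # Blend the current alpha toward the target by the stroke strength.
    return alpha + (target - alpha) * strength
```

At density +1 or -1 the pixel snaps fully opaque or transparent in one stroke; at intermediate densities, repeated strokes move it gradually toward the target.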
The Paint Alpha mode is for 'scribbling' on a mesh while holding down the (left)
mouse button, making the extracted texture at those pixels transparent or opaque, as
controlled by the settings. You can paint away extra pieces of geometry where the
texture is the blurry background, adjust and soften edges to match the desired portion of
the texture, etc.
Note: you must paint on the mesh; you really are painting on the geometry. If you
click off the edge of the mesh, expecting the size of the pen to affect the mesh anyway,
nothing will happen at all.
The Paint Alpha Loop mode is also a scribble-type mode, but it creates filled
regions: use it to rapidly fill a slightly noisy interior in an automatically-created alpha
channel, or to knock a hole around some unwanted texture.
The Pen Z Alpha mode produces straight-line segments between endpoints, with
one endpoint created for each click of the mouse. The "Z" in "Pen Z" refers to the shape
of the paths created, not to any particular meaning. Use Pen Z mode to create clean
straight lines along edges, to mask the edge of a building, for example.
The Pen S Alpha mode produces smooth spline-based curves between
endpoints, with an endpoint created per mouse click. Again, the "S" refers to the shape
produced, though you can think of it as spline or smooth as well. Use Pen S mode to
create smoother curved edges.
In addition to Undo, the paint toolbar contains buttons to delete the last stroke
(and then the one before that, and before that, ....) and to delete all the strokes.
After finishing painting, click the Save button on the texture panel to re-write the
altered texture(s) to disk.
Your paint strokes are recorded, so that if you later re-calculate the texture, the
paint strokes will be re-applied to the new version of the texture. If you have changed
the mesh or solve substantially, you may need to re-paint or touch up the alpha to
adjust.
Far Meshes
You may want to create background textures for large spherical or planar
background meshes, i.e., sky maps. This can be inconvenient: to work properly, the sky
map or distant backdrop must be very large and very far away, i.e., several thousand
times farther away than the maximum distance of the camera motion.
To simplify this, SynthEyes allows you to create "Far Meshes" similar to far
trackers. Far meshes automatically translate along with the camera, allowing a
conveniently-sized small mesh to masquerade as a large one.
Set the mesh to Far using the button on the 3-D Control Panel. Afterwards, you
will see it move with the camera, do not be alarmed!
in Scene as .obj exporter. When you import these meshes into your target application,
you should position them at the origin with no rotation, just as with the tracker-based
meshes.
The meshes-in-scene exporter can export multiple meshes in one go, but you
should not use that capability if you have extracted textures for them! Instead, select
and export them one at a time, using the Selected Meshes option of the export.
Once you have repositioned the meshes within your animation package, you
should re-apply their respective textures, using the appropriate texture file on disk. (This
is why you should not export multiple meshes as one object, because they will each
need a separate texture file.)
Note: See the section on 360VR for information on how to stabilize 360VR
footage.
You might wonder why we’ve buried such a wonderful and significant capability
quite so far into the manual. The answer is simple: in the hopes that you’ve actually
read some of the manual, because effectively using the stabilizer will require that you
know a number of SynthEyes concepts, and how to use the SynthEyes tracking
capabilities.
If this is the first section of the manual that you’re reading, great, thanks for
reading this, but you’ll probably need to check out some of the other sections too. At the
least, you have to read the Stabilization quick-start.
Also, be sure to check the web site for the latest tutorials on stabilization.
We apologize in advance for some of the rant content of the following sections,
but it’s really in your best interest!
In-Camera Stabilization
Many cameras now feature built-in stabilization, using a variety of operating
principles. These stabilizers, while fine for shooting baby’s first steps, may not be fine at
all for visual effects work.
Electronic stabilization uses additional rows and columns of pixels, then shifts the
image in 2-D, just like the simple but flawed 2-D compositing approach. These are
clearly problematic.
One type of optical stabilizer apparently works by putting the camera imaging
CCD chip on a little platform with motors, zipping the camera chip around rapidly so it
catches the right photons. As amazing as this is, it is clearly just the 2-D compositing
approach.
Another optical stabilizer type adds a small moving lens in the middle of the
collection of simple lenses comprising the overall zoom lens. Most likely, the result is
equivalent to a 2-D shift in the image plane.
A third type uses prismatic elements at the front of the lens. This is more likely to
be equivalent to re-aiming the camera, and thus less hazardous to the image geometry.
Doubtless additional types are in use and will appear, and it is difficult to know
their exact properties. Some stabilizers seem to have a tendency to intermittently jump
when confronted with smooth motions. One mitigating factor for in-camera stabilizers,
especially electronic, is that the total amount of offset they can accommodate is small—
the less they can correct, the less they can mess up.
Recommendation: It is probably safest to keep camera stabilization off when
possible, and keep the shutter time (angle) short to avoid blur, except when the amount
of light is limited. Electronic stabilizers have trouble with limited light so that type might
have to be off anyway. A caveat to this: stabilized video should require a lower bit rate,
or admit a higher quality for the same bit rate.
3-D Stabilization
To stabilize correctly, you need 3-D stabilization that performs “keystone
correction” (like a projector does), re-imaging the source at an angle. In effect, your
source image is projected onto a screen, then re-shot by a new camera looking in a
somewhat different direction with a smaller field of view. Using a new camera keeps the
optic center at the center of the image.
In order to do this correctly, you always have to know the field of view of the
original camera. Fortunately, SynthEyes can tell us that.
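The re-imaging described above can be sketched as a per-pixel ray rotation. This is an illustrative reconstruction of the geometry, not SynthEyes code; the function and parameter names are invented, and only a horizontal pan is shown:

```python
import math

def reproject_direction(x, y, focal_in, focal_out, pan_rad):
    """Sketch of keystone correction: treat each pixel as a 3-D viewing
    ray, rotate it by the stabilizing rotation, and re-project through a
    new virtual camera with a longer focal length (smaller field of view).
    Coordinates are measured from the optic center.
    """
    # Back-project the pixel into a viewing ray for the source camera.
    rx, ry, rz = x / focal_in, y / focal_in, 1.0
    # Rotate the ray (a pan about the vertical axis, for illustration).
    c, s = math.cos(pan_rad), math.sin(pan_rad)
    rx, rz = c * rx + s * rz, -s * rx + c * rz
    # Re-project through the new virtual camera.
    return focal_out * rx / rz, focal_out * ry / rz
```

Because the rotated ray's depth `rz` varies across the image, off-center pixels move by different amounts; that is exactly the keystone effect a plain 2-D shift can not reproduce, and it is why the field of view must be known.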
Stabilization Concepts
Point of Interest (POI). The point of interest is the fixed point that is being
stabilized. If you are pegging a shot, the point of interest is the one point on the image
that never moves.
POI Deltas (Adjust tab). These values allow you to intentionally move the POI
around, either to help reduce the amount of zoom required, or to achieve a particular
framing effect. If you create a rotation, the image rotates around the POI.
Stabilization Track. This is roughly the path the POI took—it is a direction in 3-
D space, described by pan/tilt/roll angles—basically where the camera (POI) was
looking (except that the POI isn’t necessarily at the center of the image).
Reference Track. This is the path in 3-D we want the POI to take. If the shot is
pegged, then this track is just a single set of values, repeated for the duration of the
shot.
Separate Field of View Track. The image preparation system has its own field
of view track. The image prep's FOV will be larger than the main FOV, because the
image prep system sees the entire input image, while the main tracking and solving
works only on the smaller stabilized sub-window output by image prep. Note that an
image prep FOV is needed only for stabilization, not for pixel-level adjustments,
downsampling, etc. The Get Solver FOV button transfers the main FOV track to the
stabilizer.
Separate Distortion Track. Similarly, there is a separate lens distortion track.
The image prep's distortion can be animated, while the main distortion can not. Either
the image prep distortion or the main distortion should always be zero; they should
never both be nonzero simultaneously. The Get Solver Distort button transfers the main
distortion value (from solving or the Lens-panel alignment lines) to the stabilizer, and
begs you to let it clear the main distortion value afterwards.
Stabilization Zoom. The output window can only be a portion of the size of the
input image. The more jiggle, the smaller the output portion must be, to be sure that it
does not run off the edge of the input (see the Padded mode of the image prep window
to see this in action). The zoom factor reflects the ratio of the input and output sizes,
and also what is happening to the size of a pixel. At a zoom ratio of 1, the input and
output windows and pixels are the same size. At a zoom ratio of 2, the output is half the
size of the input, and each incoming pixel has to be stretched to become two pixels in
the output, which will look fairly blurry. Accordingly, you want to keep the zoom value
down in the 1.1-1.3 region. After an Auto-scale, you can see the required zoom on the
Adjust panel.
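The relationship between jiggle and zoom can be sketched numerically. This is an illustration of the trade-off described above, not SynthEyes code; the function and parameter names are invented:

```python
def required_zoom(width, max_offset_px):
    """Sketch: the more jiggle (peak excursion of the stabilized window
    from center, in pixels), the smaller the output window must be to
    stay inside the input image, and the larger the zoom factor."""
    window = width - 2.0 * max_offset_px  # room for excursions both ways
    return width / window

# Example: a 1920-pixel-wide shot whose stabilized window wanders up to
# 160 px needs a 1600 px output window, a zoom of 1.2, which sits inside
# the desirable 1.1-1.3 region mentioned above.
```

At zoom 2 the output window is half the input size, so each source pixel must stretch to cover two output pixels, which is why large zooms look blurry.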
Re-sampling. There’s nothing that says we have to produce the same size
image going out as coming in. The Output tab lets you create a different output format,
though you will have to consider what effect it has on image quality. Re-sampling 3K
down to HD sounds good; but re-sampling DV up to HD will come out blurry because
the original picture detail is not there.
Interpolation Filter. SynthEyes has to create new pixels “in-between” the
existing ones. It can do so with different kinds of filtering to prevent aliasing, ranging
from the default Bi-Linear, through 2-Mitchell, to the most complex 3-Lanczos. The
bi-linear filter is fastest but produces the softest image. The Lanczos filters take longer,
but are sharper, although this can be a drawback if the image is noisy.
Tracker Paths. One or more trackers are combined to form the stabilization
track. The trackers’ 2-D paths follow the original footage. After stabilization, they will not
match the new stabilized footage. There is a button, Apply to Trkers, that adjusts the
tracker paths to match the new footage; but again, they then match that particular
footage and must be restored to match the original footage (with Remove f/Trkers)
before making any later changes to the stabilization. If you mess up, you either have to
return to an earlier saved file, or re-track.
Overall Process
We’re ready to walk through the stabilization process. You may want to refer to
the Image Preprocessor Reference.
1. Track the features required for stabilization: either a full auto-track, supervised
tracking of particular features to be stabilized, or a combination.
2. If possible, solve the shot either for full 3-D or as a tripod shot, even if it is not truly
nodal. The resulting 3-D point locations will make the stabilization more accurate,
and it is the best way to get an accurate field of view.
3. If you have not solved the shot, manually set the Lens FOV on the Image
Preprocessor’s Lens tab (not the main Lens panel) to the best available value. If you
do set up the main lens FOV, you can import it to the Lens tab.
4. On the Stabilization tab, select a stabilization mode for translation and/or rotation.
This will build the stabilization track automatically if there isn’t one already (as if the
Get Tracks button was hit), and import the lens FOV if the shot is solved.
5. Adjust the frequency spinner as desired.
6. Hit the Auto-Scale button to find the required stabilization zoom.
7. Check the zoom on the Adjust tab; using the Padded view, make any additional
adjustment to the stabilization activity to minimize the required zoom, or achieve the
desired shot framing.
8. Output the shot. If only stabilized footage is required, you are done. Alternatively,
use the stabilization rig creator to prepare for the later 3D export, and generation of
the stabilized footage downstream.
9. Update the scene to use the new imagery, and either re-track or update the trackers
to account for the stabilization.
10. Get a final 3-D or tripod solve and export to your animation or compositing package
for further effects work.
NOTE: If the stabilizer doesn't seem to be doing anything at all, you may have
bypassed (by doing things out of sequence) the automatic "Get Tracks" that
normally happens when you first turn on the stabilization mode. Either select
some specific trackers, or deselect them all, then click the Get Tracks button
to import that tracking data for use in stabilization.
There are two main kinds of shots and stabilization for them: shots focusing on a
subject, which is to remain in the frame, and traveling shots, where the content of the
image changes as new features are revealed.
Stabilizing on a Subject
Often a shot focuses on a single subject, which we want to stabilize in the frame,
despite the shaky motion of the camera. Example shots of this type include:
- The camera person walking towards a mark on the ground, to be turned
into a cliff edge for a reveal.
- A job site to receive a new building, shot from a helicopter orbiting
overhead.
- A camera car driving by a house, focusing on the house.
To stabilize these shots, you will identify or create several trackers in the vicinity
of the subject, and with them selected, select the Peg mode on the Translation list on
the Stabilize tab.
This will cause the point of interest to remain stationary in the image for the
duration of the shot.
You may also stabilize and peg the image rotation. Almost always, you will want
to stabilize rotation. It may or may not be pegged.
You may find it helpful to animate the stabilized position of the point of interest, in
order to minimize the zoom required (see below), and also to enliven a shot somewhat.
Some car commercials are shot from a rig that shows both the car and the
surrounding countryside as the car drives: they look a bit surreal because the car is
completely stationary—having been pegged exactly in place. No real camera rig is that
perfect!
Note: When using filter-mode stabilization, the length of the shot matters. If
the shot is too short, it is not possible to accurately control the frequency and
distinguish between vibration and the desired motion, especially at the
beginning and end of the shot. Using a longer version of the take will allow
more control, even if much of the stabilized shot is cut after stabilization.
Start with a larger cut value, and decrease it (to a value that filters more) only
after assessing the impact on the shot. If a shot contains large bumps and you try to
filter them too severely, the entire source image will go offscreen: not only will zoom be
excessive, but the output will be totally black.
Minimizing Zoom
The more zoom required to stabilize a shot, the less image quality will result,
which is clearly bad. Can we minimize the zoom, and maximize image quality? Of
course, and SynthEyes provides the controllability to do so.
Stabilizing a shot has considerable flexibility: the shot can be stable in lots of
different ways, with different amounts of zoom required. We want a shot that everyone
agrees is stable, but minimizes the effect on quality. Fortunately, we have the benefit of
foresight, so we can correct a problem in the middle of a shot, anticipating it long before
it occurs, and provide an apparently stable result.
Animating POI
The basic technique is to animate the position of the point-of-interest within the
frame. If the shot bumps left suddenly, there are fewer pixels available on the left side of
the point of interest to be able to maintain its relative position in the output image, and a
higher zoom will be required. If we have already moved the point of interest to the left,
fewer pixels are required, and less zoom is required.
Earlier, in the Stabilization Quick Start, we remarked that the 28% zoom factor
obtained by animating the rotation could be reduced further. We’ll continue that example
here to show how. Re-do the quick start to completion, go to frame 178, with the Adjust
tab open, in Padded display mode, with the make key button turned on.
From the display, you can see that the red output-area rectangle is almost near
the edge of the image. Grab the purple point-of-interest crosshair, and drag the red
rectangle up into the middle of the image. Now everything is a lot safer. If you switch to
the stabilize tab and hit Autoscale, the red rectangle enlarges—there is less zoom, as
the Adjust tab shows. Only 15% zoom is now required.
By dragging the POI/red rectangle, we reduced zoom. You can see that what we
did amounted to moving the POI. Hit Undo twice, and switch to the Final view.
Drag the POI down to the left, until the Delta U/V values are approximately 0.045
and -0.035. Switch back to the Padded view, and you’ll see you’ve done the same thing
as before. The advantage of the padded view is that you can more easily see what you
are doing, though you can get a similar effect in the Final view by increasing the margin
to about 0.25, where you can see the dashed outline of the source image.
If you close the Image Prep dialog and play the shot, you will see the effect of
moving the POI: a very stable shot, though the apparent subject changes over time. It
can make for a more interesting shot and more creative decisions.
Too Much of a Good Thing?
To get the most benefit, scrub through your shot and look for the worst frame,
the one where the output rectangle has the most missing area, and adjust the POI
position on that frame.
After you do that, there will be some other frame which is now the worst frame.
You can go and adjust that too, if you want. As you do this, the zoom required will get
less and less.
There is a downside: as you do this, you are creating more of the shakiness you
are trying to get rid of. If you keep going, you could get back to no zoom required, but all
the original shakiness, which is of course senseless.
Usually, you will only want to create two or three keys at most, unless the shot is
very long. But exactly where you stop is a creative decision based on the allowable
shakiness and quality impact.
Warning: SynthEyes uses spline interpolation between keys. If you have keys
close together, other frames may have surprisingly high values. If this is likely or already
happening, open the Graph Editor before the Image Preprocessor.
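The specific spline SynthEyes uses isn't detailed here, but a Catmull-Rom segment (a common choice for key interpolation) shows how closely spaced keys can produce unexpected in-between values; the helper below is purely illustrative:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment between keys p1 and p2, t in [0, 1].

    Illustrative only; SynthEyes's actual spline may differ in detail.
    """
    m1 = (p2 - p0) / 2.0  # tangent at p1, shaped by the neighboring key p0
    m2 = (p3 - p1) / 2.0  # tangent at p2, shaped by the neighboring key p3
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p1 + (t3 - 2 * t2 + t) * m1
            + (-2 * t3 + 3 * t2) * p2 + (t3 - t2) * m2)

# Two equal keys at 0 followed by a jump to 10: between the *equal* keys,
# the upcoming jump bends the tangent and the curve dips below zero.
dip = min(catmull_rom(0, 0, 0, 10, i / 100.0) for i in range(101))
print(dip)  # about -0.74, even though both surrounding keys are 0
```

Checking the actual curves in the Graph Editor reveals exactly this kind of excursion.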
Auto-Scale Capabilities
The auto-scale button can automate the adjustment process for you, as
controlled by the Animate listbox and Maximum auto-zoom settings.
With Animate set to Neither, Auto-scale will pick the smallest zoom required to
avoid missing pieces on the output image sequence, up to the specified maximum
value. If that maximum is reached, there will be missing sections.
If you change the Animate setting to Translate, though, Auto-scale will
automatically add delta U/V keys, animating the POI position, any time the zoom would
have to exceed the maximum.
Rewind to the beginning of the shot, and control-right-click the Delta-U spinner,
clearing all the position keys.
Change the Animate setting to Translate, reduce the Maximum auto-zoom to 1.1,
then click Auto-Scale. SynthEyes adds several keys to achieve the maximum 10%
zoom. If you play back the sequence, you will see the shot shifting around a bit—10% is
probably too low given the amount of jitter in the shot to begin with.
The auto-scale button can also animate the zoom track, if enabled with the
Animate setting. The result is equivalent to a zooming camera lens, so be sure to
note that in the main lens panel settings if you will 3-D solve the shot later. This is
probably only useful when there is a lot of resolution available to begin with, and the
point of interest approaches the boundary of the image at the end of the shot.
Keep in mind that the Auto-scale functionality is relatively simple. By considering
the purpose of the shot as well as the nature of any problems in it, you should often be
able to do better.
When the selected trackers are combined to form the single overall stabilization
track, SynthEyes examines the weight of each tracker, as controlled from the main
Tracker panel.
This allows you to shift the position of the point-of-interest (POI) within a group of
trackers, which can be handy.
Suppose you want to stabilize at the location of a single tracker, but you want to
stabilize the rotation as well. With a single tracker, rotation cannot be stabilized. If
you select two trackers, you can stabilize the rotation, but without further action, the
point of interest will be sitting between the two trackers, not at the location of the one
you care about.
To fix this, select the desired POI tracker in the main viewport, and increase its
weight value to the maximum (currently 10). Then, select the other tracker(s), and
reduce the weight to the minimum (0.050). This will put the POI very close to your main
tracker.
If you play with the weights a bit, you can make the POI go anywhere within a
polygon formed by the trackers. But do not be surprised if the resulting POI seems to be
sliding on the image: the POI is really a 3-D location, and usually the combination of the
trackers will not be on the surface (unless they are all in the same plane). If this is a
problem for what you want to do, you should create a supervised tracker at the desired
POI location and use that instead.
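Numerically, the combined POI behaves like a weighted centroid of the tracker positions. Here is a minimal sketch of that idea (a hypothetical helper for illustration, not a SynthEyes API):

```python
def weighted_poi(positions, weights):
    """Weighted centroid of tracker positions (illustrative sketch).

    positions: list of (x, y, z) tuples; weights: matching list of floats.
    """
    total = sum(weights)
    return tuple(
        sum(w * p[i] for p, w in zip(positions, weights)) / total
        for i in range(3)
    )

# Weighting one tracker at 10 and the other at 0.05, as in the text,
# pulls the POI almost exactly onto the heavily weighted tracker:
poi = weighted_poi([(0.0, 0.0, 0.0), (4.0, 0.0, 0.0)], [10.0, 0.05])
print(poi[0])  # about 0.02: very close to the x=0 tracker
```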
If you have adjusted the weights, and later want to re-solve the scene, you
should set the weights back to 1.0 before solving. (Select them all then set the weight to
1).
The Output tab on the Image Preparation controls resampling, allowing you to
output a different image format than the one coming in. The incoming resolution should be
at least as large as the output resolution, for example, a 3K 4:3 film scan for a 16:9
HDTV image at 1920x1080p. This will allow enough latitude to pull out smaller
subimages.
If you are resampling from a larger resolution to a smaller one, you should use
the Blur setting to minimize aliasing effects (Moire bands). You should consider the
effect of how much of the source image you are using before blurring. If you have a
zoom factor of 2 into a 3K shot, the effective pixel count being used is only 1.5K, so you
probably would not blur if you are producing 1920x1080p HD.
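That rule of thumb is easy to compute. The helper names below are hypothetical, and the decision is a sketch of the guideline above, not SynthEyes's internal logic:

```python
def effective_pixels(source_width, zoom):
    """Horizontal source pixels actually used once zoomed in by `zoom`."""
    return source_width / zoom

def should_blur(source_width, zoom, output_width):
    # Blur to suppress aliasing (Moire bands) only when the source pixels
    # actually in use still outnumber the output pixels.
    return effective_pixels(source_width, zoom) > output_width

# Zoom factor of 2 into a 3K (3072-wide) scan: only 1536 effective pixels,
# fewer than 1920, so no blur is needed for 1920x1080 output.
print(should_blur(3072, 2.0, 1920))  # False
# With no zoom, the full 3072 pixels are downsampled to 1920: blur helps.
print(should_blur(3072, 1.0, 1920))  # True
```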
Due to the nature of SynthEyes’ integrated image preparation system, the re-
sampling, keystone correction, and lens un-distortion all occur simultaneously in the
same pass. This presents a vastly improved situation compared to a typical node-based
compositor, where the image will be resampled and degraded at each stage.
After Stabilizing
Once you've finished stabilizing a shot, you'd like to get the updated stable
footage. You can do that either by writing it out of SynthEyes directly, or by writing
information about how to stabilize the shot, so that your downstream software, typically
a compositing package, can generate the stabilized images as part of the other things it
is doing to the shot, ranging from color grading to 3D inserts.
Outputting the Shot
You can write the stabilized shot out to disk using the Save Sequence button on
the Output tab. It
is also possible to save the sequence through the Perspective window’s Preview Movie
capability.
Each method has its advantages, but using the Save Sequence button will be
generally better for this purpose: it is faster; does less to the images; allows you to write
the 16 bit version; and allows you to write the alpha channel. However, it does not
overlay inserted test objects like the Preview Movie does.
Note: configure the bit-depth settings appropriately. The default settings
reduce footage to 8 bits for tracking; you need to set it back to get the full
bit depth out.
You can use the stabilized footage you write for downstream applications such
as 3dsmax and Maya.
But before you export the camera path and trackers from SynthEyes, you have a
little more work to do. The tracker and camera paths in SynthEyes correspond to the
original footage, not the stabilized footage, and they are substantially different. Once
you close the Image Preparation dialog, you’ll see that the trackers are doing one thing,
and the now-stable image is doing something else.
You should always save the stabilizing SynthEyes scene file at this point for
future use in the event of changes.
You can then do a File/New, open the stabilized footage, track it, then export the
3-D scene matching the stabilized footage.
But… if you have already done a full 3-D track on the original footage, you can
save time.
Click the Apply to Trkers button on the Output tab. This will apply the
stabilization data to the existing trackers. When you close the Image Prep, the 2-D
tracker locations will line up correctly, though the 3-D X’s will not yet. Go to the solver
panel, and re-solve the shot (Go!), and the 3-D positions and camera path will line up
correctly again. (If you really wanted to, you could probably use Seed Points mode to
speed up this re-solve.)
Important: if you later decide you want to change the stabilization parameters
without re-tracking, you must not have cleared the stabilizer. Hit the Remove
f/Trkers button BEFORE making any changes, to get back to the original
tracking data. Otherwise, if you Apply twice, or Remove after changes, you
will just create a mess.
Also, the Blip data is not changed by the Apply or Remove buttons, and it is not
possible to Peel any blip trails, which correspond to the original image coordinates, after
completing stabilization and hitting Apply. So you must either do all peeling first;
remove, peel, and reapply the stabilization; or retrack later if necessary.
Creating a Stabilizing Rig
SynthEyes stabilization is physically-based: it corresponds to an actual physical
gag consisting of a projection screen carrying the original footage, which moves rapidly
relative to a secondary camera to "re-shoot" the scene to produce the stabilized images.
This normally happens only mathematically inside the image preprocessor, but we can
recreate this setup in SynthEyes's 3D environment in a way that can be exported to 3D
animation packages.
Better yet, this can all be done by a straightforward script, the Stabilization Rig
Creator.
We'll run through two overall workflows using it, one where a final 3D track is NOT
required, and one where it is. A final subsection describes how to remove the rig and
restore the underlying tracking if you need to refine it.
Workflow S1: No overall 3D solve
Begin with 2D tracking, either automatic or supervised, with no 3D solve
Set up an estimated camera field of view on the Lens tab of the image preprocessor
(required to avoid keystoning problems in the corners of the image). This is typically
determined from other shots that are 3D-solved.
Set up the desired stabilization, using the Stabilize tab of the image preprocessor.
Be sure to select the desired trackers, or all trackers, before clicking Get Tracks. Once
stabilization is set up, there is no need to use Apply to Trkers on the Output tab.
In the image preprocessor, apply any lens preset you might have.
Run the Stabilization Rig Creator script with the following settings:
o Camera already stable: ON (It's stationary!)
o Update 2D tracking: ON
o Simple plane: OFF for most 3D apps, ON to export to After Effects or any
other app that can accept only simple planes, and handles 2D distortion
itself.
o Answer NO about copying the seed FOV to the lens panel.
Further examination of the shot can only be done in the Perspective View!
o Turn OFF right-click/View/Show Image (you want to see stabilization screen,
not the perspective view's normal screen).
o Turn on Lock to see the final stabilized output
Run a full 3D export to your 3D animation and/or compositing packages.
Disable the normal original background image display!
You might need to do some additional work downstream, to make sure the original
shot imagery is applied as a texture to the projection screen.
Workflow S2: Full 3D Solve
Begin with 2D tracking, either automatic or supervised.
3D solve, including lens distortion calculation if needed.
If lens distortion calculated, run the Lens Workflow script in either mode.
If no lens distortion present, click Get FOV on the Lens tab of the image
preprocessor.
Set up the desired stabilization, using the Stabilize tab of the image preprocessor.
Be sure to explicitly select the desired trackers, or all trackers, before clicking
Get Tracks—otherwise a few bad trackers flagged by the solve may be used! Once
stabilization is set up, there is no need to use Apply to Trkers on the Output tab.
Run the Stabilization Rig Creator script with the following settings:
o Camera already stable: OFF
o Update 2D tracking: ON
o Simple plane: OFF for most 3D apps, ON to export to After Effects or any
other app that can accept only simple planes, and handles 2D distortion
itself.
o If there is animated distortion or a zoom with Simple Plane off, an animated
vertex cache (.pc2) will be created. It will go in the same folder as the sni file,
or you can specify the location.
Further examination of the shot can only be done in the Perspective View!
o Turn OFF right-click/View/Show Image (you want to see stabilization screen,
not the perspective view's normal screen).
o Turn on Lock to see the final stabilized output
Run a full 3D export to your 3D animation and/or compositing packages.
Disable the normal original background image display!
You might need to do some additional work downstream, to make sure the original
shot imagery is applied as a texture to the projection screen, and/or to apply any
vertex cache.
Removing the Rig To Update Tracking
If you've created a stabilization rig, but need to go back and work on the tracking
or stabilization some more, you can run the Stabilization Removal script. It will:
Restore from the Stabilizing preset created with the stabilization rig (and
disconnect from it so that subsequent changes won't affect it);
Remove the effect of stabilization from the trackers;
Restore the camera to its original wiggly state, if it isn't stationary;
Restore the solved field of view to the original wider field of view;
Turn off stabilization and clear the adjust channels;
Delete the screen and screen holder object.
If you'd previously run the Lens Workflow script, you'll see the post-lens-
workflow results now. You can run the Lens Workflow script with the
"Undo earlier run" option if desired.
If you need to do things a little differently, you can do the same things manually.
Flexible Workflows
Suppose you have written out a stabilized shot, and adjusted the tracker
positions to match the new shot. You can solve the shot, export it, and play around with
it in general. If you need to, you can pop the stabilization back off the trackers, adjust
the stabilization, fix the trackers back up, and re-solve, all without going back to earlier
scene files and thus losing later work. That’s the kind of flexibility we like.
There’s only one slight drawback: each time you save and close the file, then
reopen it, you’re going to have to wait while the image prep system recomputes the
stabilized image. That might be only a few seconds, or it might be quite a while for a
long film shot.
It’s pretty silly when you consider that you’ve already written the complete
stabilized shot to disk!
Approach 1: do a Shot/Change Shot Images to the saved stabilized shot, and
reset the image prep system from the Preset Manager. This will let you work quickly
from the saved version, but you must be sure to save this scene file separately, in case
you need to change the stabilization later for some reason. And of course, going back to
that saved file would mean losing later work.
Approach 2: Create an image prep preset (“stab”) for the full stabilizer settings.
Create another image prep preset (“quick”), and reset it. Do the Shot/Change Shot
Images. Now you’ve got it both ways: fast loading, and if you need to go back and
change the stabilization, switch back to the first (“stab”) preset, remove the stabilization
from the trackers, change the shot imagery back to the original footage, then make your
stabilization changes. You’ll then need to re-write the new stabilized footage, re-apply it
to the trackers, etc.
Approach 1 is clearly simpler and should suffice for most simple situations. But if
you need the flexibility, Approach 2 will give it to you.
Hint: Often you can let the autotracker run, then manually delete the
unwanted trackers. This can be a lot quicker than setting up mattes. To help
find the undesirable trackers, turn on Tracker Trails on the Edit menu.
Note: the order of the spline list has been changed from SynthEyes 2011 and
earlier, to reflect typical industry conventions.
There are two buttons, Move Up and Move Down, that let you change the order of
the splines.
A drop-down listbox, underneath the main spline list, lets you change the camera
or object to which a spline is assigned.
This listbox always contains a Garbage item. If you assign Garbage to a spline,
that spline is a garbage matte and any blips within it are ignored.
If a blip isn’t covered by any splines, then the alpha channel determines to which
object the blip is assigned.
Spline Workflow
When you create a shot, SynthEyes creates an initial static full-screen rectangle
spline that assigns all blips to the shot’s camera. You might add additional splines, for
garbage matte areas or moving objects you want to track. Or, you might delete the
rectangle and add only a new animated spline, if you are tracking a full-screen moving
object.
Ideally, you should add splines before running the autotracker the first time; that
will be simplest. However, if you run the autotracker, then decide to add or modify the
splines (using the Roto panel), you can then use the Features panel to create a new
set of trackers:
Delete the existing trackers using control-A and delete, or the Delete
Leaden button on the features panel,
Click the Link Frames button, which updates the possible tracker paths
based on your modified splines. Don’t worry, you will be prompted for this
if you forget in almost all cases.
Click the Peel All button to make new trackers.
The separate Link step is required to accommodate workflows with manual
peeling using the Peel mode button. (You may also be prompted to Link when entering
Peel mode.)
Stereo Shots: You can use the Copy Splines script to copy a spline from one
eye to the other. You'll usually need to adjust the keyframes somewhat for the
other eye.
Animated Splines
Animated splines are created and manipulated in the camera viewport only while
the rotoscope control panel is open. At the top of the rotoscope panel, a chart shows
what the left and right mouse buttons do, depending on the state of the Shift key.
Each spline has a center handle, a rotate/scale handle, and three or more vertex
control handles. Splines can be animated on and off over the duration of the shot, using
the stop-light enable button.
Vertex handles can be either corners or smooth. Double-click the vertex handle
to toggle the type.
Each handle can be animated over time, by adjusting the handle to the desired
location while SynthEyes is at the desired frame, setting a key at that frame. The handle
turns red whenever it is keyed on that frame. In between keys, a control handle follows
a linear path. The rotospline keys are shown on the timebar, and the “advance to
key” buttons apply to the spline keys.
To create an animated spline, turn on the magic wand tool, go to the spline’s
first frame and left-click the spline’s desired center point. Then click on a series of points
around the edge of the region to be rotoscoped. Too many points will make later
animation more time consuming. You can switch back and forth between smooth and
corner vertex points by double-clicking as you create. After you create the last desired
vertex, right click to exit the mode.
You can also turn on and use the create-rectangle and create-circle spline
creation modes, which allow you to drag out the respective shape.
After creating a spline, go to the last frame, and drag the control points to
reposition them on the edge. Where possible, adjust the spline center and rotation/scale
handle to avoid having to adjust each control point. Then go to the middle of the shot,
and readjust. Go one quarter of the way in, readjust. Go to the three quarter mark,
readjust. Continue in this fashion, subdividing each unkeyed section until the spline is in
the correct location already, which generally won’t be too long. This approach is much
more effective than proceeding from beginning to end.
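The keying order described above (ends, middle, quarters, and so on) can be sketched as a simple bisection queue; this is an illustration of the method, not a SynthEyes feature:

```python
def subdivision_order(first, last):
    """Frames to visit when keying a spline: both ends first, then the
    midpoint of each remaining unkeyed interval, breadth-first."""
    order = [first, last]
    pending = [(first, last)]
    while pending:
        a, b = pending.pop(0)
        mid = (a + b) // 2
        if mid in (a, b):  # interval too small to subdivide further
            continue
        order.append(mid)
        pending += [(a, mid), (mid, b)]
    return order

# For a 100-frame shot: end points, then the middle, then the quarter marks.
print(subdivision_order(0, 100)[:5])  # [0, 100, 50, 25, 75]
```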
You may find it helpful to create keys on all the control points whenever you
change any of them. This can make the spline animation more predictable in some
circumstances (or to suit your style). To do this, turn on the Key all CPs if any
checkbox on the roto panel.
Note that the splines don’t have to be accurate. They are not being used to matte
the objects in and out of the shot, only to control blips which occur relatively far apart.
The tracker import capability gives a very flexible capability for setting up your
splines, with a little thought. Here are a few more details. The import takes place when
you click on the spline control point. Any subsequent changes to the tracker are not
“live.” If you need them, you should import the path again. The importer creates spline
keys only where the tracker is valid. So if the tracker is occluded by an actor for a few
frames, there will be no spline keys there, and the spline’s linear control-point
interpolation will automatically fill the gap. Or, you can add some more keys of your
own. You’ll also want to add some keys if your object goes off the edge of the screen, to
continue its motion.
Finally, the trackers you use to help animate the spline are not special. You can
use them to help solve the scene, if they will help (often they will not), or you can delete
them or change them into zero-weighted trackers (ZWTs) so that they do not affect the
camera solution. And you should turn off their Exportable flag on the Coordinate System
panel. If they are sitting alone on a camera or moving object that doesn't have other
trackers, you should set the camera or moving object to Disabled, so that the solver
doesn't attempt to solve it.
Note: OpenEXR is currently the only format where you can write a true alpha-
only sequence.
The alpha channel is a fourth channel (in addition to Red, Green, and Blue) for
each pixel in your image. You will need an external program, typically a compositor, to
create such an alpha channel. Plus, you will need to store the shot as sequenced DPX,
OpenEXR, SGI, TARGA, or TIFF images, as these formats accommodate an alpha
channel.
Or, you can store the alpha channel in separate files, named appropriately. See
Separate Alpha Channels. In this case, the original files are left with no alpha data, and
the alpha is written in separate files, typically as gray-scale PNG files.
Suppose you wish to have a camera track ignore a portion of the images with a
“garbage matte.” Create the matte with the alpha value of 255 (1.0, white) for the areas
to be tracked, and 0 (0.0, black) for the areas to be ignored. You’ll need to do this for
every frame in the shot, which is why the features of a good compositing program can
be helpful. [Note: if a shot lacks an alpha channel, SynthEyes creates a default channel
that is black(0) for all hard black pixels (R=G=B=0), and white(255) for all other pixels.]
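The default-alpha rule in the bracketed note amounts to a simple per-pixel test, sketched here for clarity:

```python
def default_alpha(r, g, b):
    """Default alpha SynthEyes generates when a shot has none:
    hard black pixels get 0 (ignored); all others get 255 (tracked)."""
    return 0 if (r, g, b) == (0, 0, 0) else 255

# Even a nearly black pixel counts as "other":
print(default_alpha(1, 0, 0))  # 255
print(default_alpha(0, 0, 0))  # 0
```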
You can make sure the alpha channel is correct in SynthEyes after you open the
shot by temporarily changing the Camera View Type on the Advanced Feature Control
dialog (launched from the Feature Panel) to Alpha, or using the Alpha channel selection
in the Image Preprocessing subsystem.
Next, on the Rotoscoping panel, delete the default full-size-rectangular spline.
This is very important, because otherwise this spline will assign all blips to its
designated object. The alpha channel is used only when a blip is not contained in any
spline!
Change the Shot Alpha Levels spinner to 2, because there are two potential
values: zero and one. This setting affects the shot (and consequently all the objects and
the camera attached to it).
Change the Object Alpha Value spinner to 255. Any blip in an area with this
alpha value will be assigned to the camera; other blips will be ignored. This spinner sets
the alpha value for the currently-active object only.
If you are tracking the camera and a moving object along with a garbage matte
simultaneously, you would create the alpha channel with three levels: 0, garbage; 128,
camera; 255, object. Note that this order isn’t important, only consistency.
After creating the matte, you would set the Shot Alpha Levels to 3. Then switch
to the Camera object on the Shot menu and set the Object Alpha Value to 128. Finally,
switch to the moving object on the Shot menu, and set the Object Alpha Value to 255.
Note that the Shot Alpha Levels setting controls only the tolerance permitted in
the alpha level when making an assignment, so that other nearby alpha values that
might be incidentally generated by your rotoscoping software will still be assigned
correctly. If you set Shot Alpha Levels to 17, the nominal alpha values would be 0, 16,
32, … 255, and you could use just any 3 of them if that is all you need.
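The nominal values and the snap-to-nearest tolerance can be sketched as follows (illustrative helpers; SynthEyes's exact rounding may differ):

```python
def nominal_levels(n_levels):
    """Nominal alpha values for a given Shot Alpha Levels setting.

    With 17 levels: 0, 16, 32, ... 255. Illustrative; the exact rounding
    inside SynthEyes may differ slightly.
    """
    step = 255.0 / (n_levels - 1)
    return [round(i * step) for i in range(n_levels)]

def assign_level(alpha, n_levels):
    # Snap an incidental alpha value (e.g. a soft roto edge) to the
    # nearest nominal level, as the tolerance rule describes.
    return min(nominal_levels(n_levels), key=lambda v: abs(v - alpha))

print(nominal_levels(3))     # [0, 128, 255]
print(assign_level(130, 3))  # 128: close enough to the camera level
```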
IMPORTANT: You need to have six or more different trackers visible pretty
much all of the time (technically it can get down to 3 for short periods of time
but with very low accuracy). Generally you should plan for at least 8-10 to
make allowance for short-term problems in some of them. More trackers
means more accuracy and less jitter.
Fine Print: no matter how many trackers you have, if they are all on the same
plane (the floor, a piece of paper, a tablet, etc), they only count as four, and
as the rule says, you must have six! If the object being tracked is flat, you will
have to use a known (flat) mesh as a reference, entering known coordinates
for each tracker.
Warning: if the object occupies only a portion of the image, it will not supply
enough perspective shift to permit the field of view of the lens to be estimated
accurately. You must either also do a camera track (even a tripod solve will
do), or you must determine a lens field of view by a different method (a
different shot, say), and enter it as a Known Field of View.
Automatic Tracking
Open the lazysue.avi shot, using the default settings.
Supervised Tracking
The shot is best tracked backwards: the trackers can start from the easiest spots,
and get tracked as long as possible into the more difficult portion at the beginning of the
shot. Tracking backwards is suggested for features that are coming towards the
camera, for example, shots from a vehicle.
Open the lazysue.avi shot, using the default settings.
On the Solver panel, set the camera’s solving mode to Disabled.
On the shots menu, select Add Moving Object. You will see the object at the origin
as a diamond-shaped null object.
On the Tracker panel, turn on Create . The trackers will be associated with the
moving object, not the camera.
Switch to the Camera viewport, to bring the image full frame.
Create a tracker on one of the dots on the shelf. Decrease the tracker size to
approximately 0.015, and increase the horizontal search size to 0.03.
Create a tracker on each spot on the shelf. Track each as far as possible back to the
beginning of the shot. Use the tracker mini-view to scroll through the frames and
reposition as needed. As the spots go into the shadow, you can continue to track
them, using the tracker gain spinner. When a tracker becomes untrackable, turn off
Enable, and Lock the tracker. Right-click the spinner to reset it for the next
tracker.
Continue adding trackers from the end of the shot in this fashion.
Begin tracking from the beginning, by rewinding, changing the playback direction to
forward, then adding additional trackers. You will need to add these additional
trackers to achieve coverage early in the shot, when the primary region of interest is
still blocked by the large storage container.
Switch to the graph editor in graph mode, sort by error mode. Use the
mouse to sweep through and select the different trackers. Or, select Sort by error on
the main View menu, and use the up and down arrows on the keyboard to sequence
through the trackers. Look for spikes in the tracker velocity curves (solid red and
green). Switch back to the camera view as needed for remedial work.
Switch to the Coordinate System control panel and camera viewport, at the end of
the shot.
Select the tracker at center back on the surface of the shelf; change it to an Origin
lock.
Select the tracker at bottom left on the shelf, change it to Lock Point with coordinate
X=10.
Select the tracker at front right; change it to an On XY Plane lock (or On XZ if you
use Y-axis up for Maya or Lightwave).
Switch to the Solver control panel.
Switch to the Quad view; zoom back out on the Camera viewport.
Hit Go! After solving completes in a few seconds, hit OK.
Continue to the After Tracking section, below.
After Tracking
Switch to the 3-D Objects panel, with the Quad viewport layout selected.
Click the World button, changing it to Object.
Turn on the Magic Wand tool and select the Cone object.
In the top view, draw a cone in the top-right quadrant, just above and right of the
diamond-shaped object marker.
Hint: it can be easier to adjust the cone’s position in the Perspective view, locked to
the camera, with View/Local coordinate handles turned on.
Scrub the timeline to see the inserted cone. In your animation package, a small
amount of camera-mapped stand-in geometry would be used to make the large
container occlude the inserted cone and reveal correctly as the shelf spins.
Turn on 3-D Pan-to-follow in the camera view, or pan-to-follow in a perspective view,
to continuously recenter on the selected object (or a mesh).
Advanced techniques: use Coalesce Trackers and Clean Up Trackers.
Use the Hierarchy View to move trackers or objects back and forth from camera to
object if needed.
Important: In all cases the tracker coordinates are relative to the moving
object which is their parent. That means that you must not change the
positioning or orientation of the mesh with respect to the moving object, or you risk
breaking the match, which is why we tell you to lock it above!
Difficult Situations
When an object occupies only a relatively small portion of the frame, there are
few trackers, and/or the object is moving so that trackers get out of view often, object
tracking can be difficult. You may wind up creating a situation where the mathematically
best solution does not correspond to reality, but to some impossible tracker or camera
configuration. It is an example of the old adage, “Garbage In, Garbage Out” (please
don’t be offended, gentle reader).
overall coordinate system origin). For moving object tracks with distance constraints,
you should pay some attention to the location of the origin. You must be certain to set
up a coordinate system for the object!
You should set up the origin either at the natural center of rotation of the object
(for example, at the neck of a head), or at a point located within the point cloud of the
object. If the origin is well outside the object, any small jitter in the orientation of the
object will have unwarranted effects on its position as well.
Note: If the camera is not moving, that's fine, it's a straight object track that
will track the object relative to the camera. If the camera is on a tripod,
panning and tilting, that's also fine, set the camera to tripod mode. This
discussion here about scaling both camera and object applies when both
camera and object translate.
Workflows
Typically it is best to track the camera first, then the object(s), and maybe only
then test for small amounts of lens distortion. With both tracked, you can then apply axis
locks as needed.
There is some subtlety associated with the camera field of view (for non-360VR
cameras), since the field of view affects both camera and object.
So here we're going to point out a common subtle mistake, and what to do
about it.
If you solve the camera, disable the camera, then solve the object, you will create
a subtle mismatch, because without further action, the object solve will change the
field of view: typically you will have left the lens set to Fixed, Unknown (or maybe
Zooming). SynthEyes will compute the object solve and field of view that produces the
lowest error for the object. As the camera field of view is changed to optimize the object
solve, the camera solve becomes incorrect. The stated error may be low, but will no
longer match the situation exactly, because of the field of view change.
Instead, if you disable the camera, you should also set the lens to Known mode,
and accept the question to copy the solved field of view to be the seed field of view.
Of course, this means that the field of view can no longer be changed to optimize
the object solve, and it may be the case that the object solve has more and better
trackers than the camera solve, such that you want to have the object solve impact the
ultimate field of view.
To do that, you can instead change the camera to Refine, rather than Disabled,
when beginning the object solve. Then the field of view will be chosen to optimize both.
An alternative is to Disable the camera, complete the object tracking, then set
both camera and object to refine mode, to optimize them together. Note that for small
differences, typically the field of view will stick at its current value.
If the shot is suitable, you can use the Moving-Obj Path or Moving-Obj Position
Phases to set the relative scale of the camera and object. In both cases, the camera
must be moving, not locked or on a tripod. Use the Moving-Obj Path phase when the
object comes to a stop for some portion of the shot, or less desirably, has a section
where it travels in an exactly straight line. Use the Moving-Obj Position phase when
trackers on the moving object come very close to, or become aligned with, trackers on
the camera.
For further details, see the phase overview and the Phase Reference manual.
Note: what we're describing here is different than having a survey for a set,
which may consist of anything from a few tape measurements to a Lidar point
cloud. SynthEyes can work with those, but that's not the subject of this
section!
Once you've tracked a survey shot, you'll usually use it as part of a multi-shot
setup, or in conjunction with measured 3-D data.
How to Shoot
Here are our recommendations for survey photography. These will help you
produce the best results in the least time, though they are not requirements. They can
all be violated.
Shoot all images from a single camera
Use a low-distortion prime lens
If the camera has a zoom lens, do not use it at its widest setting. Reduce
the field of view by 20% or so to reduce distortion
Do not change the zoom setting from one image to the next
All images should be the same resolution
Shoot the images in a defined order to save time tracking (ex: take an
image starting at far left, then 2 steps to the right for the next image, 2
more steps for the next, etc)
All images will be padded up to the width of the widest and height of the highest.
In the future we may re-scale them instead. Either way, a uniform image size is necessary!
The files from any Add are inserted after the currently-selected image in the
image list. They are inserted in a sorted order determined by Windows/Linux/OSX.
Since it will be useful to have the images in at least a rough order that makes
sense, you can use Add files as many times as needed to explicitly set up the order. Or,
you can use multiple Adds if the images are coming from different folders.
You can also use the Move Up and Move Down buttons to change the order of
the images into something that makes sense.
After you click OK on the IFL editor, you will be presented with the standard Shot
Settings panel. You should not have to change anything. Specifically: do not change
the image or pixel aspect. They are set automatically, based on the required padding.
Pixel aspect must always be 1.0 for survey images. Do not turn on rolling shutter
compensation; that will make a terrible mess!
Once the survey has been created, you can edit the list again via Shot/Edit
Survey Shot. While editing, you'll see the images update in the main camera view as
you move images around, and clicking an image in the list will show it in the camera
view.
You can add additional images to a survey shot at a later time, but....
Warning: Once you have started tracking, always add new images at the end
of the shot, otherwise you will get the tracking data out of sync with the
images.
Warning 2: The image file list is NOT restored if you later use Undo or Redo
in the main SynthEyes interface. It is not part of the SynthEyes file. You can
cancel out of the Survey Shot IFL editor and the initial IFL will be restored, but
using Undo later will not restore the IFL.
Tracking
Survey shots must always be supervised-tracked, since the images generally
jump around quite a lot. You will place each tracker in the right location on each relevant
frame. SynthEyes provides the workflow to do that efficiently.
To set up for tracking a survey shot:
Go to the Track room
Turn on Track/Lock z-drop on
Turn on Track/Hand-held: Sticky
Do not turn on the Create tool on the Tracker panel!
You will add trackers one at a time, locating that tracker on each frame in the
survey before going on to the next tracker.
For each image feature to be tracked:
Go to a frame that contains it
Hold down the C key and click and drag in the camera view to position the
tracker
Use the 's', 'd', ',', or '.' keys to move to another frame containing that
image feature
Click and drag again in the camera view to position the tracker
Repeat for each frame containing the feature (tracker) being created
To remove the tracker from a specific frame, click (off) the Enable stoplight
on the tracker control panel
Notice that you do not have to add frames in any particular order, and you do not
have to toggle the enable track on and off repeatedly. Each time you place the tracker, it
is enabled for that frame only. Similarly, turning the tracker enable off affects only that
particular frame. This behavior is specific to Survey shots, because the frames are
largely unordered.
Important: Once you have added trackers, any additional images must be
added at the end of the shot.
Solving
In most cases, you must solve survey shots as a Zooming lens. This is a
consequence of the different image sizes, lenses, etc. SynthEyes configures survey
shots as Zooms automatically.
If all images are exactly the same size, and were shot on the same physical
camera with the same prime lens, or you are 100% positive that the zoom lens was
never zoomed because you shot it, only then can you solve the shot as a Fixed,
Unknown lens.
Survey shots are processed slightly differently during solving, because the
images are largely unordered.
If the solver cannot locate initial seed frames, you can set them manually on the
Solver panel. In this case, or if the solver is using an incorrect solution, you can also set
the direction hint (the second drop-down on the solve panel). In any case, you are
looking for two frames with many trackers in common, that look at the same trackers
from about 30 degrees apart. The direction hint is then the direction of motion from the
first camera position to the second.
The solver decides what additional frames to solve, once it gets started, without
regard to their order.
You may need to think a bit more about your trackers if the solver does not solve
all the frames: that means that you do not have sufficient overlap between any solved
trackers and the trackers on unsolved frames. Whereas normally the tracker count
channel in the Graph Editor helps to identify trouble spots, the unordered nature of
surveys makes that information uninformative.
In that situation, you may find it helpful to use Script/Select by Type to select all
the solved trackers, examine the bar graph in the graph editor, and see if you can locate
the solved trackers on some of the unsolved frames as well.
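The diagnosis above amounts to counting, for each unsolved frame, how many of its trackers are already solved. A minimal sketch of that bookkeeping (illustrative Python only, not a SynthEyes API; the data shapes and the `min_overlap` threshold are assumptions):

```python
def frames_needing_links(unsolved_frames, solved_trackers, min_overlap=3):
    """For each unsolved frame, count how many of its trackers are in
    the solved set. Frames with too little overlap are the ones to fix,
    by placing more of the solved trackers on them.
    unsolved_frames: dict of frame -> list of tracker names on it."""
    solved = set(solved_trackers)
    report = {}
    for frame, trackers in unsolved_frames.items():
        overlap = len(solved & set(trackers))
        if overlap < min_overlap:
            report[frame] = overlap
    return report

# Frame 30 shares only two solved trackers, so it is flagged.
print(frames_needing_links({30: ["A", "B", "X"], 31: ["A", "B", "C"]},
                           ["A", "B", "C"]))  # {30: 2}
```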
In this section, we’ll demonstrate how to use a collection of digital stills as a road-
map for a difficult-to-track shot: in this case, a tripod shot for which no 3-D recovery
would otherwise be possible. A scenario such as this requires supervised tracking,
because of the scatter-shot nature of the stills. The tripod shot could be automatically
tracked, but there’s not much point to that because you must already perform
supervised tracking to match the stills, and there’s not much gained by adding a lot
more trackers to a tripod shot. This example takes around 2 hours to perform; it is
intentionally elaborate, to illustrate a more complex scenario.
The required files for this example can be found at
https://www.ssontech.com/download.html: both land2dv.avi and DCP_103x.zip are
required. The zip file contains a series of digital stills, and should be unpacked into the
same working folder as the AVI. You can also download multix.zip, which contains the
.sni scene files for reference.
Start with the digital stills, which are 9 pictures taken with a digital still camera,
each 2160 by 1440. Start SynthEyes and do a File/Add Survey Shot. Add the
DCP_####.JPG images.
Create trackers for each of the balls: six at the top of the poles, six near ground
level on top of the cones. Create each tracker, and track it through the entire (nine-
frame) shot using the survey-shot workflow. You can use control-drag to make final
positioning easier on the high-resolution still. Create the trackers in a consistent order,
for example, from back left to front left, then back right to front right. After completing
each track, Lock the tracker.
The manual tracking stage will take less than an hour. The resulting file is
available as multi1.sni.
Set up a coordinate system using the ground-level (cone) trackers. Set the front-
left tracker as the Origin, the back-left tracker as a Lock Point at X=0,Y=50,Z=0, and
the front-right tracker as an XY Plane tracker.
You can solve for this shot now: switch to the Solver panel and hit Go! You
should obtain a satisfactory solution for the ball locations, and a rather erratic and
spaced out camera path, since the camera was walked from place to place. (multi2.sni)
It is time for the second shot. On the Shot menu, select Add Shot (or
File/Import/Shot). Select the land2dv.avi shot. Set Interlacing to No; the shot was
taken with a Canon Optura Pi in progressive scan mode.
Bring the camera view full-screen, go to the tracker panel, and begin tracking the
same ball positions in this shot with bright-spot trackers. Set the Key spinner to 8, as the
exposure ramps substantially during the shot. The balls provide low contrast, so some
trackers are easiest to control from within the tracker view window on the tracker panel.
The back-right ground-level ball is occluded by the front-left above-ground ball, so you
do not have to track the back-right ball. It will be easiest to create the trackers in the
same order as in the first shot. (multi3.sni)
Next, create links between the two sets of trackers, to tell SynthEyes what
trackers were tracking the same feature. You will need a bare minimum of six (6) links
between the shots. Switch to the coordinate system panel, and the Quad view. Move far
enough into the shot that all trackers are in-frame. Select a tracker in the camera view,
then ALT-click (Mac: Command-click) the corresponding point in the Top view to create
a link. Select the next tracker in the camera view, and ALT-click the
corresponding point in the Top view; repeat until all are assigned. If you created the
trackers consistently, you can sequence through them in order.
You can do this with a 3D view and camera view, or two camera views. The
disadvantage of the two camera views is that both are linked to the same timebar. The
perspective view affords a way around that...
Match By Name
Another approach is to give each tracker a meaningful name. In this case,
clicking the Target Point button will be helpful: it brings up a list of trackers to choose
from.
A more subtle approach is to have matching names, then use the Track/Cross
Link By Name menu item. Having truly identical names makes things confusing, so the
cross link command ignores the first character of each name. You can then name the
trackers lWindowBL and rWindowBL and have them automatically linked. After setting
up a number of matching trackers, select the trackers on the video clip, and select the
Cross Link By Name menu item. Links will be created from the selected trackers to the
matching trackers on the reference shot.
Note! Cross Link By Name will not link to a camera/object that has its solve
mode set to Disabled, to allow you to control possible link targets if needed.
You must temporarily change the solve mode to link to a disabled object.
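The first-character rule can be sketched in a few lines. This is illustrative Python, not SynthEyes code; `cross_link_by_name` is a hypothetical stand-in for the menu command's matching logic:

```python
def cross_link_by_name(selected, reference):
    """Two trackers match when their names agree after dropping the
    first character (e.g. 'lWindowBL' links to 'rWindowBL'), as the
    Cross Link By Name command does."""
    # Index reference trackers by name-minus-first-character.
    ref_by_suffix = {name[1:]: name for name in reference}
    links = {}
    for name in selected:
        target = ref_by_suffix.get(name[1:])
        if target is not None:
            links[name] = target
    return links

links = cross_link_by_name(["lWindowBL", "lDoorTR"], ["rWindowBL", "rRoof"])
print(links)  # {'lWindowBL': 'rWindowBL'}
```

Unmatched trackers (like lDoorTR above) are simply left unlinked.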
Details on links: a shot with links should have links to only a single other shot,
which should not have any links to other shots. You can have several shots link to a
single reference.
Ready to Solve
After completing the links, switch to the Solver panel. Change the solver mode to
Indirect, because this camera’s solution will be based on the solution initially obtained
from the first shot. (multi4.sni) Make sure Constrain is off at this time.
Hit Go! SynthEyes will solve the two shots jointly, that is, find the point positions
that match both shots best. Each tracker will still have its own position; trackers linked
together will be very close to one another.
In the example, you should be able to see that the second (tripod) shot was
taken from roughly the location of the second still. Even if the positions were identical,
differences between cameras and the exact features being tracked will result in
imperfect matches. However, the pixel positions will match satisfactorily for effect
insertion. The final result is multi5.sni.
ViewShift Fundamentals
The basic idea of ViewShift is that you've done a 3D track, have some 3D
meshes in a scene (call them 'reflectors'), and project the (source) shot's imagery onto
them; you can then "re-shoot" the reflector meshes from a second (viewing) camera
from a different vantage point. That viewing camera might be in a different shot
altogether, or might be the same source camera on a different frame in the same shot.
It's a combination camera-mapping projection and render. The ViewShifted renders can
then be composited into the viewing camera's shot.
Tip: When the source and viewing shots are different, they should share a
coordinate system, either by using the same setup in each, or by using linked
trackers and Indirectly solving mode.
Tip: ViewShift pays attention to the Back Faces and Invert Normals settings.
Back faces are invisible unless the mesh's Back Faces checkbox (on the 3D
panel) is turned on, which makes both front and back faces visible.
Note: The ViewShift can only be as good as the camera track(s) and the
meshes used.
Here's what a simple object removal example can look like, where we want to
remove one of the cars from the shot:
There's a big plane—that's the reflector for this example. Here it's just a plane,
but the reflector can be any 3D mesh, for example a reconstructed terrain model.
Tip: ViewShift does not require a UV texture map for the reflector mesh(es)—
because it's integrated, it's more direct than trying to patch together an
equivalent in a 3D package.
Despite the number of controls, the fundamentals should be pretty clear, such as the
viewing camera and source camera at the top, as well as the output file and
compression. You can consult the ViewShift reference section for descriptions of each
of the controls. They have reasonable default values so you don't have to worry about
them.
We'll work through the more complex issues and controls in following sections.
Output Modes
The output mode selector has four choices:
With alpha
Over background (as shown above)
UVMap distortion map [Not in SynthEyes Intro]
Texture map [Not in SynthEyes Intro]
With alpha. In this mode, only the actually-rendered pixels (to be discussed)
appear in the output image, with an appropriate alpha channel. This is the typical mode
to feed a downstream compositing application.
Over background. Here, the rendered pixels are actually composited over the
original viewing shot. Use this for previews, and for finals if the situation warrants.
UVMap distortion map. [Not in SynthEyes Intro] This mode enables you to defer
the pixel-moving to your compositing app, by outputting the same kind of colorful UV
image distortion map used for lens distortion. While this can be handy, it prevents the
use of two key ViewShift features: illumination matching and any source frame mode
except 1:1, since there's no way for the illumination or frame# information to be carried
in the image distortion maps. You can turn off Clip UVmaps if your images have an
overscan area and you save the maps in a floating-point format.
Texture map. [Not in SynthEyes Intro] Use this mode to generate animated
texture maps, instead of doing the normal ViewShift processing. For this mode, there
should be only a single reflector mesh, and it must have a UV map. Use the texture
width and height controls to set the texture size.
Note: ViewShift shifts pixels from a single source frame to the destination: it
does not mix pixels from different frames, which can result in banding (as
sometimes seen in static texture extraction).
Common to all of the modes, there are two Start Frame and End Frame fields,
one for the Viewing shot and one for the Source. Each has a checkbox for Limited
output (source) frames, and two buttons to set the Start and End frames.
If the Limited output frames checkbox is off, then output will be generated for the
entire length of the output shot. If the checkbox is on, then output will be generated only
from the Start Frame to the End Frame (inclusive). Clicking the Set Start Frame or Set
End Frame button will set that value from the current frame of the shot, as well as turn
on the Limited output frames checkbox.
The Limited source frames checkbox and controls work similarly, except that the
values are only the range of frames that might be used, as determined by the source
mode, rather than an absolute statement or requirement that they be used.
Here are all the Source modes:
Same
1:1
Scaled
Absolute
Relative
Distance
Same direction
Disjoint leading
Disjoint following
We'll discuss most of the modes in the following sub-sections, but we'll leave the
Disjoint modes for a later section, after introducing some more preliminaries about how
to control ViewShift.
Basic Timing Modes
The first six modes are deterministic, based solely on the range of frames
specified for the source and viewing shot. We'll discuss these here.
Same. The same frame is used for the source as for the destination.
1:1. The Start Frame of the source is used for the Start Frame of the output, the
frame after the source's Start Frame is used for the frame after the output's Start Frame,
etc, on a one for one basis throughout the shot. No frame will be taken past the source's
End Frame.
Scaled. Frames are drawn proportionately from the source shot: for a frame 25%
into the output frame range, a frame is drawn 25% of the way into the source frame
range.
Absolute. The animated Frame control spinner directly controls which source
frame is used for each output frame. It can initially be animated by hand; the similarity
and disjoint overlap modes also write their results into this track, so that you can
then switch to Absolute mode and adjust or extend those results if needed.
Note: The Frame Control value is always zero-based, ie 0 means the first
frame of the shot sequence or movie, regardless of any Start at 1 or Match
Frame Numbers processing. This is because it is also used as a relative
frame number in this next mode...
Relative. The source frame is the output frame offset by the value of the Frame
control spinner (constant or animated, positive or negative). Typically this is used to
create simple time-shift effects.
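The deterministic mappings just described can be summarized in one small function. This is an illustrative Python sketch of the mode descriptions above, not SynthEyes code; the function name and range conventions are assumptions:

```python
def source_frame(mode, out_frame, out_range, src_range, offset=0):
    """Sketch of the deterministic source-frame mappings.
    Ranges are (start, end), inclusive frame numbers."""
    o0, o1 = out_range
    s0, s1 = src_range
    if mode == "same":
        return out_frame
    if mode == "1:1":
        # One-for-one from the source Start Frame, never past its End Frame.
        return min(s0 + (out_frame - o0), s1)
    if mode == "scaled":
        # Proportional: 25% into the output range -> 25% into the source range.
        t = (out_frame - o0) / (o1 - o0)
        return round(s0 + t * (s1 - s0))
    if mode == "relative":
        # Output frame offset by the Frame control value (can be negative).
        return out_frame + offset
    raise ValueError(mode)

print(source_frame("scaled", 25, (0, 100), (0, 40)))  # 10
```

Note how Scaled retimes the source (100 output frames drawn from 40 source frames here), which is exactly why only Same and 1:1 preserve the original shot timing.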
It's important to see that only the Same or 1:1 modes maintain the original
shot timing, and should be used wherever possible for inserts (split takes). The other
modes are suitable for removals, not so much inserts, because they can alter the timing
of the source shot, speeding it up or slowing it down. Even more importantly, judder
effects can appear, especially if the source and output timing is only slightly different,
and the timing dictates that a frame be repeated or dropped.
Distance Mode
Distance mode is useful for shots with fairly steady camera translation, either for
a single shot, or when combining split takes. It looks at the path of the source and
viewing cameras, and selects the source frame that matches the Distance spinner
(which can be animated based on the viewing frame).
Combining split takes, Distance mode allows you to use a source perspective
similar to the viewing perspective, to minimize perspective disparity.
Important: Split takes, where the source and viewing cameras are different,
should both use a common coordinate system, so that the camera paths are
similar. Otherwise this mode doesn't make sense. Lining them up could be
done by setting up a coordinate system the same way in both shots, using
trackers that are present in both; or by using links and Automatic/Indirectly
solving modes (or maybe From Seed Points/Indirectly).
When working with a single shot, Distance mode is a simple approach to make
sure that the source pixels for a removal don't contain the object being removed. (That
requirement is addressed directly with the Disjoint modes, however.)
If the distance is zero, this mode selects the source frame where the source
camera is "physically" closest to the viewing camera in 3D coordinates. A distance of
zero is used only when the source and viewing cameras are different, since otherwise
you might as well use the Same frame mode!
If the distance is positive, it selects a source frame where the source camera is at
least that far in front of the viewing camera. Here, "in front" means that source frame is
later in the shot than the frame where the two are closest. This mode can be used when
the source and viewing cameras are the same, or different.
If the distance is negative, it selects a source frame where the source camera is
at least that far behind of the viewing camera. Here, "behind" means that source frame
is earlier in the shot than the frame where the two are closest.
Distance mode initially finds the closest source frame to the first viewing frame.
Subsequently it looks for the closest source frame to each following viewing frame
within a 30-frame window around the previous source frame. Then it looks for
the frame at the earlier or later distance. This strategy prevents it from jumping wildly
around within the shot, though there are probably some situations where it may be
problematic.
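The windowed search just described can be sketched as follows, for the distance-zero case. This is illustrative Python only, not SynthEyes code; camera paths are simplified to lists of 3D positions, and the tie-breaking behavior is an assumption:

```python
def distance_mode_track(view_path, src_path, window=30):
    """For each viewing frame, pick the source frame whose camera
    position is closest, searching only within +/-window frames of the
    previously chosen source frame to avoid jumping wildly around."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # First viewing frame: search the whole source path.
    prev = min(range(len(src_path)),
               key=lambda f: dist2(src_path[f], view_path[0]))
    picks = [prev]
    for vf in range(1, len(view_path)):
        lo = max(0, prev - window)
        hi = min(len(src_path), prev + window + 1)
        prev = min(range(lo, hi),
                   key=lambda f: dist2(src_path[f], view_path[vf]))
        picks.append(prev)
    return picks
```

For a positive or negative Distance, the real mode then walks later or earlier in the shot from each closest frame until the path separation is reached.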
After a shot is run using Distance mode, the utilized frame numbers are written to
the Frame control track. You can examine those values to check the operation, and if
necessary, edit them then switch to Absolute mode.
Reminder: if you use Distance mode for split takes, the timing of the source
shot will not be constant, and may contain speed-ups, slow-downs, jumps, or
drops, depending on the camera motions.
Important: You should align the two tripod shots to the same common
coordinate system. That could be done by setting up a coordinate system the
same way in both shots, using two trackers present in both; by using links and
Tripod/Indirectly solving modes; or temporarily by some manual tweaking
using the 3D panel's Whole mode.
Same direction mode initially finds the best source frame matching the first
viewing frame. Subsequently it looks for the best-matching source frame to each
following viewing frame within a 30-frame window around the previous source
frame. This strategy prevents it from jumping wildly around within the shot.
Same Direction considers the actual amount of overlap, not just the same
direction: by considering overlap, the relative roll is accounted for as well. Note that
there's nothing comparable to the Distance mode's distance in Same Direction.
After a shot is run using Same Direction mode, the utilized frame numbers are
written to the Frame control track. You can examine those values to check the
operation, and if necessary, edit them then switch to Absolute mode.
Reminder: if you use Same Direction mode for split takes, the timing of the source
shot will not be constant, and may contain speed-ups, slow-downs, jumps, or
drops, depending on the camera motions.
If there are several viewing splines, pixels are transferred if they fall within any of
the viewing splines, similarly if they fall within any of the source splines, or any of the
garbage splines.
ViewShift must be told whether or not to use each type of spline via the Use
Source Splines, Use Viewing Splines, and Use Garbage Splines checkboxes (all off by
default). Those settings determine not only whether splines are examined at all, but
whether splines should be interpreted as source or viewing splines when the source and
viewing camera are the same (more details on that case below).
Each shot has a large default spline (around the entire image) used for auto-
tracking. To prevent interference with ViewShift, it must be deleted or disabled at the
beginning of the shot.
When creating new splines, be sure to set the spline's object selector to the
camera (Camera01 etc) or Garbage appropriately — new splines are garbage by
default.
Tip: Use the Import Tracker to CP button to speed up creating splines if you
can create a tracker that follows the feature of interest, for example on the
car; you can then import it to the spline's center point to cause the entire
spline to move. Then just tweak the corners over the length of the shot as
needed.
If you have a ViewShift where you want to use both (different) source and
viewing splines, and the source and viewing camera are the same, here's what to do...
create a dummy moving object (make its solving mode Disabled), assign the splines
you want to use as sources to this moving object, and set the source camera(/object) to
be the moving object. (ViewShift will use the source camera of the moving object for the
rest of its processing.)
Tip: If you have many removal meshes set up, or at least ones with names
that are too long to fit, you can right-click the set removal meshes button, and
the relevant meshes will be selected in the scene.
Warning: Don't have removal meshes extend past the edge of the reflector
meshes (especially if the reflector's edge is not straight). The extending
portion will be simplified to a straight line—it won't pick up the shape of the
reflector's edge. That can result in some pixels not being removed that you
might have expected to be removed. Make the reflector mesh bigger!
Note that you'll still have to have suitable reflector meshes behind the removal
mesh(es). For a distant scenic view, that might be a "Far" mesh parented to the camera,
or even a sphere surrounding it for tripod shots.
Tip: ViewShift pays attention to the Back Faces and Invert Normals settings—
if you put the camera inside a sphere, be sure to turn on Invert Normals or
Back Faces, or the sphere will be invisible!
When using removal meshes, you'll need to pay close attention to the tracking
accuracy and mesh accuracy, at penalty of having a thin outline around the removal
mesh. (See the next section for information on controlling the edge). For complex
effects on an automobile in a tighter shot, the mesh boundary will typically be important,
and an actual car mesh may be needed.
The Ignore meshes on the panel do just that: they cause specific meshes to be
ignored as if they were hidden, so that they are not used as reflector meshes. (If you don't
want them used as Removal meshes, don't select them as such.)
Ignore meshes are not "garbage meshes"... there is no such thing as garbage
meshes. Ignored meshes are ignored, whereas garbage meshes would knock actual
holes in the set of pixels being shifted. There doesn't seem to be any actual use case
for garbage meshes.
Similarly, there is no mesh version of source splines: any mesh that would be a
"source mesh" should simply be used as a regular reflector mesh, where it is more
useful and informative.
The disjoint overlap modes, Disjoint leading and Disjoint following, automate the
process of finding the closest frame from which replacement pixels can be pulled.
Operation is fundamentally simple: starting from the frame we want to replace on, scan
forwards or backwards until the area being replaced, as seen from the source camera
on the source frame, doesn't overlap the area containing the bad thing, again on the
source frame. An illustration will help:
In the image on the left, we have a spline surrounding the car being removed.
Matching the spline, we've put an extra little blue mesh onto the road plane to serve as
a stationary reference for where the car is on this frame (frame 20), which is the region
of frame 20 that we want to replace. Scrubbing forward through the shot, the blue mesh
matching frame 20 stays put in 3D, while the car and its spline move forward. If we
looked at frame 26, say, the car and spline would still be sitting partially on the blue
mesh. But by the frame above on the right, frame 34, the car has moved completely off
the blue mesh. Victory! We can use 34 as the source frame, pulling the pixels under the
blue mesh back into frame 20 to remove the car. The Disjoint modes repeat this
analysis for each viewing frame.
The difference between Disjoint leading and Disjoint following is the direction in
which ViewShift looks for a non-overlapping frame. In Disjoint leading mode, ViewShift
scans to larger frame numbers: the source camera leads the viewing camera. In Disjoint
following mode, ViewShift scans backward to earlier frames: the source camera follows
the viewing camera.
At the beginning or end of the shot, there typically won't be any additional frames
in the desired direction to pull from. If ViewShift can't find a source frame in the intended
direction, it will try in the other direction instead.
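The scan-with-fallback behavior can be sketched like this. Illustrative Python only, not SynthEyes code; `overlaps(view, src)` is a hypothetical caller-supplied predicate standing in for the real geometric overlap test:

```python
def find_disjoint_frame(start, n_frames, overlaps, leading=True):
    """From viewing frame `start`, scan forward (leading) or backward
    (following) for the first source frame where the replaced region no
    longer overlaps the removed object; near the shot ends, fall back
    to scanning the other direction."""
    step = 1 if leading else -1
    for direction in (step, -step):   # preferred direction, then fallback
        f = start + direction
        while 0 <= f < n_frames:
            if not overlaps(start, f):
                return f              # first disjoint frame found
            f += direction
    return None  # no usable frame in either direction
```

For example, with an object that clears its frame-20 position five frames later, leading mode returns frame 25; starting at frame 38 of a 40-frame shot, the forward scan runs out of frames and the fallback finds an earlier frame instead.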
If you have multiple regions being removed, they are considered as a group. For
example, if you're removing three cars, it looks for a frame where all three cars can be
completely removed without bringing in any part of the three cars.
ViewShift will also make sure that it doesn't shift any pixels that are part of a
garbage spline on the source shot. (See the note at the end of Controlling ViewShift
with Splines for how to have separate source and viewing splines even for the same
shot.)
Note: Disjoint modes aren't generally useful with different source and viewing
shots, as there's no issue with overlap between the source and target pixels.
You might use the mode if you want the source frame to avoid some
animated source garbage splines. But that seems unlikely.
After a shot is run using a Disjoint overlap mode, the utilized frame numbers are
written to the Frame control track. You can examine those values to check the
operation, and if necessary, edit them then switch to Absolute mode.
Note that ViewShift doesn't consider or indicate whether the areas being shifted
extend off the edge of the source image, which can leave the desired removal
incomplete. In many cases that is unavoidable; when it isn't, you can address it yourself
by adjusting the Frame control track or perhaps by adding source garbage splines at the
edge of the image.
Reminder: if you use Disjoint mode, the timing of the source shot will not be
constant, and may contain speed-ups, slow-downs, jumps, or drops,
depending on the camera motions.
alpha and soften in there. Doing the softening in SynthEyes is very convenient for quick
previews in either case.
Tip: When using Pull in or Push out, you should turn up the Edge antialiasing
to at least Mid, to generate more accurate results.
Illumination Compensation
When performing removal effects with time shifts, changes in scene lighting and
camera irising during the shot can create very noticeable mismatches that give away
the effect. Fortunately, SynthEyes and ViewShift offer the tools to compensate for them very exactly. Although this can take a little more time, the results can be well worth it.
While turning on ViewShift's Compensate Illumination checkbox is an obvious
first step, you must make the illumination data available to ViewShift on the illumination
track of one or more lights. That data will be based on actual measurements by one or
more trackers. For more reference on this topic, see Light Illumination. The general
process follows here.
Create a light, ie New Light on the Light panel. Drag it around in 3D to
someplace plausible. Note that lighting on your meshes won't affect the
ViewShift process.
Create a "probe" tracker on a relatively bright, static, portion of the image—in
a shot like the car removal above, it should be on a patch of road that cars do
not drive over. The tracker size should be relatively large, because its
average interior will be measured and the larger area produces lower noise
levels. You can use more than one if there is too much occlusion going on, or
if the length of the shot warrants. If so, have overlap between successive
trackers for better results. (You can also force the tracker to a set of locations
that do not correspond to a fixed feature, but migrate along with whatever
you're doing—in this case be sure to make the tracker zero-weighted so it
does not affect any subsequent solves.)
Select your probe tracker(s).
Run the Trackers/Set Illumination from Trackers script. Enter the name of
your light (typically Light01). Set 3D position to Use 2D only. This sets the
light's illumination track as the average of illumination of the trackers.
Don't forget to turn on Compensate Illumination before running your
ViewShift.
Note: you can examine and refine the Light's illumination track from the
Graph Editor after creating it.
The illumination track on a light is associated with the shot of the trackers used to
generate it. ViewShift uses only the light(s) associated with the source and viewing
shots. If the source and viewing shot/camera are the same, then there can be only one
such light; if they are different there can be at most two, one for the source and one for
the viewer. Lights on other shots, or lacking an illumination track, are ignored.
Whether there is one light involved or two, pixels being shifted are normalized
(divided) by the illumination on the source frame, then multiplied up to match the
illumination of the target frame.
Note: That math assumes that you have linear color with a black level at zero.
If that isn't the case, and the illumination levels vary substantially, visual mismatches may result. You may need to correct the color or do additional
tweaks. We could add yet more settings for the black level, but there's
enough clutter and they would be difficult to adjust accurately.
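That per-pixel scaling can be sketched as follows. This is a minimal illustration of the math described above, not SynthEyes's actual code, and the linear-color, zero-black-level assumption from the note applies:

```python
def compensate_illumination(pixel, source_illum, target_illum):
    """Scale a shifted pixel value from the source frame's measured
    illumination level to the target frame's level.

    Assumes linear color with the black level at zero. `pixel` is a
    single channel value; apply per channel for RGB.
    """
    # Normalize out the source frame's lighting, then re-apply the
    # target frame's lighting.
    return pixel / source_illum * target_illum

# A pixel pulled from a frame twice as bright as the target is halved:
# compensate_illumination(0.8, 2.0, 1.0) -> 0.4
```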
If the source and destination shots are different, and only one of the two has a
light with an illumination track, that single light will be used to normalize the transferred
pixels to match the first frame of the shot. That may not be ideal, but it will give a stable
level that facilitates downstream compositing without requiring further frame-by-frame
adjustments.
Note: A clean plate can result in noiseless replaced areas; you may have to add noise back in!
Note: Cleaning a single plate makes sense only for relatively compact shots,
since on longer traveling shots the cleaned plate will wind up off the edge of
other parts of the shot. In such cases, you should probably use the original
shot as the ViewShift source with a Disjoint timing mode, or perhaps do a
static Texture Extraction on an extended reflector mesh.
In the simplest case, you clean up one of the frames of the live, already-tracked,
shot that you've prepped for ViewShift with reflector mesh(es), splines, etc. If you
cleaned up frame 37, then the camera information needed to use that clean plate is...
exactly frame 37's information.
To quickly handle setup once you've got your clean plate and initial ViewShift
scene setup, set the current SynthEyes frame to the frame corresponding to the clean
plate (eg frame 37 here), then run the Script/ViewShift/Prepare for Clean Plate script. It
will do the following (all of which you could do yourself):
Prompt you for the location of the clean plate.
Add it to your scene as a very short (single frame) shot, ie Camera02.
Copy the camera position and field of view information from the current
frame to frame zero of the new shot.
Adjust the ViewShift to set the source to the single-frame Camera02.
Tip: Before running, ViewShift automatically flushes the source shot's cache if
it is only a single frame, so that you can modify your clean plate and see the
modifications the next time you ViewShift, without having to Script/Flush
Cache on the source camera, as would otherwise be necessary.
If your clean plate is not from the shot you're cleaning, you'll need to either
add the single-frame shot yourself, or use the script to get started.
Then, you'll need to set up the exact camera location. There are a variety of
methods depending on what you have and know. The coordinate system for the second
shot must exactly match the first. The main possibilities are:
Use Seed Points solving mode with some known XYZ positions, even on
the single frame.
Use Pin Mesh in camera-pinning mode.
You'll also need the camera field of view; depending on your clean plate, you
might be able to use the field of view determined from the main shot.
The source side camera should match the viewing camera, with a 1:1 timing
mode and the same range of frames. Source splines will be effective if enabled, though
viewing and garbage splines and removal meshes will not.
You can compensate for illumination, but you should think carefully about what
you're doing downstream to decide if you want to do so or not. (Ie if you compensate for
it, you'll need to have a matching animated light downstream.)
You cannot use animated texture movies or sequences directly as mesh textures
in SynthEyes. While we could do so, it would perform poorly when scrubbing. The RAM
caching engine is required for suitable performance.
To get that, open the texture map as an additional shot, with Shot/Add Shot.
Then apply the shot as a texture to the mesh from the perspective view's right-click
menu/Texturing/Texture Mesh from Shot. If the images have an alpha channel, be sure
to turn on Keep Alpha on the Shot Setup panel, then use Texture from Shot with Alpha.
(These modes are typically used for projection screens. Use Remove Front Projection to remove any of these modes.)
To prevent any added cameras from interfering with any later solves, set their
solver mode to Disabled.
Multiple ViewShifts
Once you've gotten the hang of ViewShift, you're probably going to want to cram
several of them, or even lots of them, into a single SynthEyes .sni scene. ViewShift has
you covered!
You might have noticed the word "phase" scattered about the ViewShift panel,
because ViewShift setup information is stored as a Phase. You can see the ViewShift
phase(s) on the Phase panel.
You can create ViewShift phases a number of ways:
Shot/ViewShift — a new ViewShift is created if there are none existing,
Duplicate Phase on the ViewShift panel,
right-click/ViewShift/ViewShift in the Phase view (creating a totally new
one),
right-click/Library/Copy Phases and then Paste Phases,
the equivalent control/command-C and V in the Phase view,
right-click/Library/Read Phase File if you've previously stored ViewShift(s)
away to a library file.
Notice that the Phase panel shows a condensed summary of the currently-
selected ViewShift phase in the Phase view. You can get to its full dialog from there by
clicking the Details button on the phase panel (in addition to Shot/ViewShift).
NOTE: The Phase panel is not available in SynthEyes Intro. You must open
the ViewShift dialog using Shot/ViewShift. Create additional ViewShifts using
Duplicate; move among them using the Previous and Next buttons that are
present only in the Intro version of ViewShift. ViewShift is subject to the Intro's
resolution limit.
You can open multiple floating ViewShift dialogs at the same time. Normally they
display the currently-selected phase (in the phase view).
You can click the Lock/unlock to this phase button to lock a specific ViewShift
dialog to the currently-selected ViewShift phase. Repeat as necessary to lock several
dialogs to different ViewShifts.
When you have multiple ViewShifts, you may wish to have different splines on
the same shot each associated with different ViewShift phases. That can be achieved
with some care: you can create multiple moving objects as stand-ins for their primary
camera. For example, create one moving object for each ViewShift phase using
different splines. Associate each ViewShift's splines to its associated moving object (ie
on the Roto panel), and use its moving object instead of the camera as the Viewing
cam/obj or Source cam/obj on the ViewShift. (Now you know why objects are mentioned
there!) The ViewShift knows that you really mean to use the camera on the shot, not the
moving object. To avoid interfering with any later solving that you do, you should put
these extra moving objects into Disabled solving mode, and you can turn off their
Exportable checkbox.
The situation is simpler for meshes: on each ViewShift, you can just tell it to
ignore unrelated meshes. (Ie, select them and click Set ignore meshes.)
Lights for illumination matching are used only for the associated source or
viewing shots, based on the camera/shot of the trackers that are used to create the
illumination track.
The phase chain is only sort-of a nodal compositing chain (it's really intended for solving!): when a phase runs, it re-reads the output file of its input node (instead of the original shot) and writes its own new output file, which can then serve as input for the next phase, if there is one. Each phase is completed before the next begins.
This might be upgraded to a more compositing-like pipeline if interest warrants.
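The sequential execution just described can be sketched as a plain loop. Representing each phase as a callable is our illustration only, not SynthEyes's API:

```python
def run_phase_chain(phases, original_shot_file):
    """Run phases in order: each phase reads its input node's output
    file (the original shot for the first phase), writes its own
    output file, and finishes completely before the next begins.

    `phases` is a list of callables taking an input file name and
    returning an output file name (hypothetical representation).
    """
    current = original_shot_file
    for phase in phases:
        current = phase(current)
    return current
```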
Important: ViewShifts always take their input from the real scene. Don't
connect solving phases to any ViewShift's input: the results of solving phases
are ignored!
ViewShifting 360VR
You can ViewShift 360VR shots too! The primary catch is ... splines! Splines in
360VR really live on the surface of a sphere, not on the (equi)rectangular image.
SynthEyes doesn't have spherical splines; they'd be a bit difficult to work out and to use.
You can use regular 2D splines on the image plane, though they give some
pretty strange shapes: for example, lassoing the ground requires a horizontal line
across the image that then sneaks around the sides and bottom. And a smaller spline
that moves off a side edge needs to wrap onto the opposing edge. That can't be done
automatically since that would conflict with the previous case. Instead, you need to have
a second spline. But if you can keep track of what splines on the equirectangular
images mean, you can do it.
It's similarly complex for the 360VR outlines of meshes, so those aren't supported. Nor are intersecting splines on the surface of the sphere, whether they are regular splines or mesh outlines.
As a result, the disjoint timing modes are not available for 360VR shots. You'll
have to hand-animate an equivalent absolute or relative Frame control track.
For camera operator/platform removals, you may find it convenient to create a
mesh representing the "footprint" of the camera operator/platform on the ground plane.
You can parent that mesh to the camera, and set it to be a Far mesh on the 3D panel.
That keeps it directly under the camera. As long as the camera is at a fairly-constant
height (so that the footprint stays right at ground level), you can then remove the
camera operator/platform using the footprint mesh as a Removal mesh, in conjunction
with an overall ground mesh.
Tip: Try to use a source patch from the same camera field as where the
platform must be removed, if possible. Mismatches in color and brightness
between different camera fields (especially if the sun is in one of them) will
result in a mismatch between the patch being moved and the rest of the
region.
Further, if you have a Far footprint mesh set up, you can use the
Script/Mesh/Trail of Vertices script to create a ground 'plane' mesh even if the ground is
not flat (but still must be fairly smooth). You can also do it from tracked moving objects,
if your object-to-world scaling is correct.
So there are some fascinating possibilities for the suitably motivated and skilled.
You'll need to understand what you're trying to do and why, as well as how to
accomplish the subtasks. There's no list of buttons to push!
You will need to know a fair amount about 3-D movie-making to be able to
produce watchable 3-D movies. 3-D is a bleeding edge field and you should allow lots of
time for experimentation. SynthEyes technical support is necessarily limited to
SynthEyes; please consult other training resources for general stereoscopic movie
theory and workflow issues in other applications.
SynthEyes's perspective view can display anaglyph (red/cyan is a good
combination) or interlaced views from both cameras simultaneously, controlled by the
right-click menu's View/Stereo Display item and the Scene Settings panel. You can
select either normal color-preserving or gray-scale versions of the anaglyph display.
When using an interlaced display, you should probably have more than one display and
float the interlaced perspective to that monitor. Only the actual perspective view will be
interlaced, not the entire user interface.
24/7 Perspective
With stereo, there are two different views all the time, and even a single frame
from each camera is enough to produce a 3-D solve. At least in theory, you never have
to worry about “tripod shots” that do not produce 3-D. Every shot can produce 3-D.
Every stereo shot can also be used in a motion-capture setup to produce a separate
path for even a single moving feature. That’s clearly good news.
But before you get too excited about that, recall that in a stereo camera rig, the
cameras are usually under 10 cm apart. Compare that to a dolly shot or crane shot with
several meters of motion to produce perspective. And each of the hundreds of frames in
a typical moving-camera shot contributes additional data to help produce a more accurate
solution.
So, even though you can produce 3-D from a very short stereo shot, the
information will not be very accurate (that’s the math, not a software issue), and longer
shots with a moving camera will always help produce better-quality 3-D data.
On a short shot with no camera rig translation (with the rig on a tripod), you can
get 3-D solves for features near to the camera(s). Features that are far from the
cameras must still be configured as “Far” to SynthEyes, meaning that no 3-D depth can
be determined. Similarly, for motion-capture points, accuracy in depth will degrade as
the points move away from the camera. The exact definition of “far” depends on the
resolution and field of view of the cameras; you might consider something far if it is several hundred times the inter-ocular distance from the camera.
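As a rough illustration of that rule of thumb (the factor of 300 and the function name are ours, chosen only for the sketch):

```python
def is_effectively_far(feature_distance, inter_ocular, factor=300.0):
    """Rule-of-thumb test: a feature several hundred inter-ocular
    distances away yields no usable stereo depth and should be
    configured as "Far". The factor of 300 is illustrative only; the
    real threshold depends on camera resolution and field of view.
    """
    return feature_distance > factor * inter_ocular

# With a 65 mm rig, anything beyond roughly 19.5 m is effectively far:
# is_effectively_far(25000.0, 65.0) -> True
```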
Easier Sizing
If we know the inter-ocular distance (and we always should have a measurement
for the beginning or end of the shot), then we know the coordinate system sizing
immediately. There is no need for distance measurements from the set, and no problem
with consistency between shots.
That makes coordinate system setup much simpler. On a stereo shot, when an
inter-ocular distance is set up, the *3 coordinate system tool generates a somewhat
different set of constraints, one that aligns the axes, but does not impose its own size,
allowing the inter-ocular distance to have effect.
Keep in mind that the sizing is only as good as the measurement. If the
measurement is 68 +/- 1 mm, that is over 1% uncertainty. If you have some other
measurement that you expect to come out at 6000 mm and it comes out at 6055, you
shouldn’t be at all surprised. Some scenes with little perspective will not vary much
depending on inter-ocular distance, so the inter-ocular distance may size the scene
accurately.
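The arithmetic behind that example is easy to check; the numbers below come straight from the text:

```python
# A 68 +/- 1 mm inter-ocular measurement carries roughly 1.5%
# relative uncertainty, so scale errors of about that size are expected.
iod_mm, tolerance_mm = 68.0, 1.0
relative_uncertainty = tolerance_mm / iod_mm      # ~0.0147, i.e. over 1%

# The 6000 mm check measurement coming out at 6055 mm is about a 0.9%
# scale error, within what the IOD measurement alone can guarantee.
scale_error = (6055.0 - 6000.0) / 6000.0          # ~0.0092

assert scale_error < relative_uncertainty
```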
If you have a crucial sizing requirement, you should use a direct scene
measurement; it will be more accurate. (In that case, switch to a Fixed inter-ocular
distance, instead of Known.)
CMOS cameras are also subject to the Rolling Shutter problem, which affects
monocular projects as well as stereoscopic ones. The rolling shutter problem will also
result in geometric errors, depending on the amount of motion in the imagery. To cover
a common misconception, this problem is not reduced by a short shutter time. If at all
possible, use synchronized CCD or film cameras. Note that when CMOS cameras are
used in common mirror-based stereo rigs, the rolling shutter effect will go in the
opposite direction (bottom to top) for the mirrored image: if you have to mirror the image
vertically, you have to correct the rolling shutter with a negative value as well.
One-Toe vs. Two-Toe Camera Rigs
Ideally, a camera rig has two cameras next to each other, perfectly aligned. If
both camera viewing axes are perfectly parallel, they are said to be converged at
infinity, and this is a particularly simple case for manipulation. Usually, one or both
cameras toe in slightly to converge at some point closer to the camera, just as our eyes
converge to follow an approaching object. Mechanically, this may be accomplished
directly, or by moving a mirror. We refer to the total inwards angle of the cameras as the
vergence angle.
It might seem that there is no difference between one camera toeing in or two,
but there is. Consider the line between the cameras. With both cameras properly
aligned and converged at infinity, the viewing direction is precisely perpendicular to the
line between the cameras. If one camera toes in, the other remains at right angles to the
line between them. If both cameras toe in, they both toe in an equal amount, with
respect to the line between them.
If you consider an object approaching the rig along the centerline from infinity,
the two-toe rig remains stationary with both cameras toeing in. The one-toe rig moves
backwards and rotates slightly, in order to keep one camera at right angles to the line between the camera centers.
SynthEyes works with either kind of rig. Though the one-toe rigs seem a little
unnatural (effectively they make the audience turn their heads), the motions are very
small and not really an issue for people, except for those who are trying to do their
tracking to sub-pixel accuracy! The one-toe rigs are mechanically simpler and seem
more likely to actually produce the motion they are supposed to (are the two-toe rigs
really moving at exactly matching angles? Are the axes parallel? Maybe, maybe not!).
From Where to Where?
The inter-ocular distance is a very important number in stereo movie-making: it
is the distance between the eyes, or the cameras, with a typical value around 65 mm. It
is frequently manipulated by filmmakers, however; more on that in a minute.
Although you can measure the distance between your buddy’s eyes within a few
millimeters pretty easily, when we start talking about cameras it is a little less obvious
where to measure.
It turns out that this question is much more significant than you might think as
soon as you allow the camera vergence to change: if the cameras are tilted inwards
towards each other, the point at which you measure will have a dramatic effect.
Depending on where you measure, the distance will change more or less or not at all.
The proper point to consider is what we call the nodal point, as used for tripod
mode shots and panoramic photos. It is not technically a nodal point in the optician's sense. It is
the center of the camera aperture, as seen from the outside of the camera. See this
article on the pivot point for panoramic photography for more details.
The inter-ocular distance (IOD) is the distance between the nodal points of the
cameras.
Dynamic Rigs
Though the simplest rigs bolt the two cameras together at a fixed location, more
sophisticated rigs allow the cameras to move during a shot.
The simplest and most useful motion may not be what you think: it is to change
the inter-ocular distance on the fly. This preserves the proper 3-D sensation, while
avoiding extreme vergence angles that make it difficult to keep everything on-screen in
the movie theater.
The more complex effect is to change the vergence angle on the fly. This must
be done with extreme caution: unless the rig is very carefully built, changing the
vergence angle may also change the inter-ocular distance—or even change the
direction between them as well. If a rig is to change the vergence angle, it must be
constructed to locate the camera nodal point exactly at the center of the vergence
angle’s rotation.
A rig that changes only the inter-ocular distance does not have to be calibrated
as carefully. A changing IOD should always be exactly parallel to the line between the
camera nodal points, which in turn means that on a one-toe camera, the non-moving
camera must be perpendicular to the translation axis, or a two-toe camera must have
equal toe-in angles relative to the translation axis.
The penalty for a rig that does not maintain a well-defined relationship between
the cameras is simple: it must be treated as two separate cameras. The most
dangerous shots and rigs are those with changing vergence, either with mirrors or
directly, where the center of rotation does not exactly match the nodal point. Unless you
have calibrated, it will be wrong. You will be in the same boat as people who shoot
green-screen with no tracking markers—and that boat has a hole…
example Pan, Tilt, and Roll, or Roll, Pitch, and Yaw). The same six numbers are used
for the basic position and orientation of any object.
Those particular six numbers are not convenient for describing the relationship
between the two cameras in a stereo pair, however! In the real world, there is only one
real position measurement that can be made accurately, the inter-ocular distance, and it
controls the scaling of everything.
Accordingly, SynthEyes uses spherical coordinates—which have only a single
distance measurement—to describe the relationship between the cameras.
Of the two cameras, we’ll refer to one as the dominant camera (the one we want
to think about the most, typically the right), and the other as the secondary camera. The
camera parameters describe the relationship of the secondary (child) camera to the
dominant (parent) camera. Which camera is dominant is controlled on the Stereo
Geometry panel. In each case, when we talk about the position of a camera, we are
talking about the position of its nodal point (inside the front of the lens), not of the base
of the camera, which doesn’t matter.
You can think about the stereo parameters in the coordinate space of the
dominant camera. The dominant camera has a “ground plane” consisting of its side
vector, which flies out the right side from the nodal point, and its “look” vector, which
flies forward from the nodal point towards what it is looking at. The camera also has an
up vector, which points in the direction of the top of the camera image. All of these are
relative to the camera body, so if you turn the camera upside down, the camera’s “up”
vector is now pointing down!
Here are the camera parameters. They have been chosen to be as human-
friendly as possible. Most of the time, you should be concerned mainly with the Distance
and Vergence; SynthEyes will tell you what the other values are and they shouldn’t be
messed with much.
Distance. The inter-ocular distance between the cameras. Note that this value is
measured in the same units as the main 3-D workspace units. So if you want an overall
scene to be measured in feet, the inter-ocular distance should be measured in feet as
well. Centimeters is a reasonable overall choice.
Direction. This is the direction (angle) towards the nodal point of the secondary
camera from the dominant camera, in the ground plane. If the secondary camera is
directly next to the dominant camera, in the most usual configuration, the direction value
is zero. The Direction angle increases if the secondary camera moves forward, so that
at 90 degrees, the secondary camera is in front of the primary camera (ignoring relative
elevation). See additional considerations in Two-Toe Revisited, below.
Elevation. This is the elevation angle (above the dominant camera’s ground
plane). At zero, the secondary camera is on the dominant camera’s ground plane. At 90
degrees, the secondary camera is above the dominant camera, on its up axis.
Vergence. This is the total toe-in angle by which the two cameras point in
towards each other. At zero, the look directions of the cameras are parallel; they are
converged at infinity. At 90 degrees, the look directions are at right angles. See Two-
Toe Revisited below.
Tilt. At a tilt of zero, the secondary camera is looking in the same ground-plane
as the dominant camera. At positive angles, the secondary camera is looking
increasingly upwards, relative to the dominant camera. At a tilt of 90 degrees, the
secondary camera is looking along the dominant camera’s Up axis, perpendicular to the
dominant camera viewing direction (they aren’t looking at the same things at all!).
Roll. Roll of the secondary camera relative to the dominant. At a roll angle of zero, the cameras aren't twisted with respect to one another at all; both camera
look vectors point in the same direction. But as the roll angle increases, the secondary
camera rolls counter-clockwise with respect to the dominant camera, as seen from the
back.
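Under one plausible reading of the parameters above (this is our sketch of the geometry, not SynthEyes's internal code), the secondary nodal point's offset in the dominant camera's side/look/up frame is an ordinary spherical-to-Cartesian conversion:

```python
import math

def secondary_offset(distance, direction_deg, elevation_deg):
    """Offset of the secondary camera's nodal point, expressed along
    the dominant camera's side, look, and up axes.

    direction = 0 puts the secondary directly beside the dominant
    camera; direction = 90 puts it in front; elevation = 90 puts it
    on the dominant camera's up axis. (Our reading of the parameter
    descriptions, not a documented formula.)
    """
    d = math.radians(direction_deg)
    e = math.radians(elevation_deg)
    side = distance * math.cos(e) * math.cos(d)
    look = distance * math.cos(e) * math.sin(d)
    up = distance * math.sin(e)
    return side, look, up
```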
You can experiment with the stereo parameters by opening a stereo shot,
opening the Stereo Geometry panel, clicking More… and then one of the Live buttons.
Adjusting the spinners will then cause the selected camera to update appropriately with
respect to the other camera.
Two-Toe, Revisited
The camera parameters, as described above, describe the situation for “single-
toed” camera rigs, where only one camera (the secondary) rotates for vergence. The
situation is a little more complex for two-toe rigs, where both cameras toe inwards for
vergence. These modes are “Center-Left” and “Center-Right” in the Stereo Geometry
panel’s dominance selection.
The dominant camera never moves during two-toed vergence, yet we still
achieve the effect of both cameras toeing in evenly. How is that possible?
Consider a vergence angle of 90 degrees. With a one-toe rig, the secondary
camera has turned 90 degrees in place without moving, and is now looking directly at
the primary camera.
With a two-toe rig at a vergence of 90 degrees, the secondary has turned 90
degrees so it is looking at right angles to the look direction of the dominant camera.
But, and this is the key thing, at the same time the secondary camera has swung
forward to what would otherwise be Direction=45 degrees, even though the Direction is
still at zero. As a result, the secondary camera has tilted in 45 degrees from the
nominal look direction, and the dominant camera is also 45 degrees from the nominal
look direction—which is the perpendicular to the line between the two cameras.
The thing to keep in mind is that the line between the two cameras (nodal points)
forms the baseline; the nominal overall ‘rig’ look direction is 90 degrees from that.
SynthEyes changes the baseline in centered mode to maintain the proper matching
vergence for the two cameras; it does that by changing the definition of where the zero
Direction is. The Direction value is offset by one-half the vergence in centered mode.
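The half-vergence offset just described can be sketched numerically; this is our illustration of the relationship, not SynthEyes code:

```python
def effective_direction(direction_deg, vergence_deg, centered):
    """Angle at which the secondary camera actually sits, relative to
    the nominal zero-Direction baseline.

    In a centered (two-toe) mode the displayed Direction is offset by
    half the vergence, so at Direction = 0 and vergence = 90 the
    secondary has swung forward 45 degrees.
    """
    if centered:
        return direction_deg + vergence_deg / 2.0
    return direction_deg

# effective_direction(0.0, 90.0, centered=True) -> 45.0
```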
If you put the stereo pair into one of the Centered modes and use Live mode,
you’ll see the camera swinging forward and backward in response to changes in the
vergence. Once you understand it, it should make sense. If it seems a bit more complex
and demanding than single-toe rigs… you’re right!
Electronic Calibration
To electronically calibrate, print out the calibration grid from the web site using a
large-format black and white (drafting) printer, which can be done at printing shops such
as Fedex Kinkos. Attach the grid to a convenient wall, perhaps with something like Joe’s
Sticky Stuff from Cinetools. Position the rig on a tripod in front of the wall, as close as it
can get with the entire outer frame visible in both cameras (zoom lenses wide open).
Adjust the height of the rig so that the nodal point of the cameras is at the same height
as the center point of the grid.
Re-aim the cameras as necessary to center them on the grid. This will converge
them at that distance to the wall; you may want to offset them slightly outwards or
inwards to achieve a different convergence distance, depending on what you want.
Shoot a little footage of this static setup. Record the distance from the cameras
to the wall, and the width of the visible grid pattern (48” on our standard grid at 100%
size).
For camcorders with zoom lenses, you should shoot a sequence, zooming in a
bit at a time in each camera. You can use one remote control to control both cameras
simultaneously. This sequence will allow the optic center of the lens to be determined—
camcorder lenses are often far off-center.
Once you open the shots in SynthEyes, create a full-width checkline and use the
Camera Field of View Calculator script to determine the overall field of view. Use the
Adjust tools on the image preprocessor to adjust each shot to have the same size and
rotation angle. Use the lens distortion controls to remove any distortion. Correct any
mirroring with this pass as well, see the mirror settings on the image preprocessor’s Rez
tab. Use the Cropping and re-sampling controls to remove lens off-centering. A small
Delta Zoom value will equalize the zoom. See the tutorial for an overview of this
process.
Your objective is to produce a set of settings that take the two different images
and make them look exactly the same, as if your camera rig was perfect. Once you’ve
done that, you can record all of the relevant settings (see the Export/Stereo/Export
Stereo Settings script), and re-use them on each of your shots (see
Import/Stereo/Import Stereo Settings) to make the actual images match up properly.
Then, you should save a modified version of each sequence out to disk for subsequent
tracking, compositing, and delivery to the audience.
Obviously this process requires that your stereo rig stay rigid from shot to shot (or that periodic calibrations be performed). The better the shots match, the less image quality
and field of view will be lost in making the shots match.
Opening Shots
SynthEyes uses a control on the shot parameters panel to identify shots that
need stereo processing. Open the left shot, and on the shot settings panel, click the Stereo control (initially Off) until it says Left. After you adjust any other parameters and click OK, SynthEyes will
immediately prompt you to open the right shot. Any settings, including image
preprocessor settings, will be copied over to the right shot to save time.
If you do not configure the stereo setting when you initially open the shot, you
can do so later using the shot settings dialog. You can turn it on or off as your needs
warrant. To get stereo processing, you must open the left shot first and the right shot
second, and set the first shot to left and the second to right. Both shots must have the
same shot-start and -end frame values.
Stereo rigs that include mirrors will produce reversed images. If the camera was
mechanically calibrated, use the Mirror Left/Right or Mirror Top/Bottom checkboxes on
the Rez tab of the image preprocessor to remove the mirroring. (If the cameras are
electronically calibrated using the image preprocessor, you should remove mirroring
then as described above.)
Once the shot is open, you can select a regular or stereo-friendly viewport
configuration in which to work, depending on what you are doing at the moment. For
supervised tracking, a stereo layout with two camera views is highly recommended.
Note that you can use the Stereo view in the Perspective view to show both
images simultaneously as an anaglyph; see View/Stereo Display and View/Perspective
View Settings. You can select left-over-right mode here, but it is intended for
preview-movie output, not for interactive use. For interactive use, use multiple
perspective or camera views to display both views simultaneously.
Stereoscopic Tracking
In a stereo setup, there are links from trackers on the secondary camera to the
corresponding trackers on the dominant camera; they tell SynthEyes which features are
tracked in both cameras. These links are always treated as peg-type locks, regardless
of the state of the Constrain checkbox.
Automatic
If you use automatic tracking, SynthEyes will track both shots simultaneously and
automatically link trackers between the two shots. For this to work well, your two shots
need to be properly matched, both in overall geometric alignment, and in color and
brightness grading. A quick fix, if needed, is to turn on high-pass filtering in the image
preprocessor. Be sure to set the proper dominant camera before auto-tracking, as
SynthEyes will examine that to determine in which direction to place the links (from
secondary trackers to primary camera trackers, which can be left to right or right to left).
Tip: you can use the Copy Splines script to copy splines from one eye to the
other, if you want to mask some things out in both eyes.
Supervised
If you use supervised tracking, you will need to create both trackers and the link
between. There are a number of special features to make handling stereo tracker pairs
easier (for automatic trackers also). After you have created stereo pairs, you will track
them both, sequentially or simultaneously (the SimulTrack view can help there).
It is easiest to do supervised stereo tracking with one of the viewport
configurations with two camera views, one for each eye, ie Stereo, Stereo SbS (Side by
Side), or Stereo SimulTrack.
To create stereo pairs from one of these configurations, turn on the Create
Tracker button on the Tracker Control panel, then click alternately in the camera view of
each eye. For example, you can create tracker 1 in the left view and its match in the
right, then tracker 2 in the right view and its match in the left. A link is created
whenever a new tracker is created while there is exactly one selected tracker in the
other eye that is not already linked to any other tracker. If you forget to alternate, you
can just click one of the existing trackers to select it, then click to create the matching
tracker in the other eye.
Tip: If you only need to create a single tracker pair, you can do that quickly,
without having to adjust the mode of either the camera or perspective views:
hold down the C key and click in the camera view or other camera or
perspective view.
Note: linking trackers in the camera view requires that the Coordinate System
Panel be open (so that you can see the results). The panel does not have to
be open to link trackers in a stereo shot from the camera view to the opposing
camera or perspective view—you can link them while keeping the Tracker
panel open continuously, to save time.
Fine point: if more than one tracker is selected, and you click on one of the
matching, cross-selected, trackers in the other view, then all of the cross-
selected trackers become selected, not just the one you clicked, so you don't
have to re-select that particular set.
The situation is a bit different if you are using the camera+perspective view
setup:
ACHTUNG! Stay awake, this next one is tricky: if you click on a tracker in the
perspective view, that tracker will not be selected, because it is not on the
currently-active camera, but instead the matching tracker on the other camera
(in the camera view) will be selected. That will in turn make the tracker you
just clicked (in the perspective window) turn orange, because its matching
tracker is now selected. The active tracker host will not change. This will be
more clear when you try it. If you want to select and edit a tracker displayed in
the perspective view (on the opposite eye), you should switch the views with
the minus key—the camera view is the place to do that.
Note that the settings of the left and right trackers in a pair are independent once
the pair has been created. You will need to adjust each separately, and that may well be
necessary depending on the shot.
Stereo ZWTs
You can use zero-weighted trackers (ZWTs) to help you track: once you set up a pair,
and if the shot is already solved, any two frames of valid data, on the same camera or
on opposing cameras, are enough to determine the tracker's 3D location. If you have
it enabled, Track/Search from solved will predict where a tracker will show up in both
images, even if it goes off- and back on-screen. If your solve is not good, it may predict
an unusable location, in which case you may need to turn off Search from solved
temporarily.
Checking Stereo Correspondence
In a stereo tracking setup, the trackers on each eye may be tracked perfectly, but
the track can be a disaster if the trackers on each eye do not track the same feature on
each side.
The SimulTrack view can help identify mismatches. Select the Stereo SimulTrack
layout, which has two camera views and two SimulTracks. You can select trackers in
one camera, and not only will you see the matching trackers in the other camera view
(in orange), you will see the active side's tracker on one SimulTrack, and the other
side's tracker on the other SimulTrack. The ordering of trackers within each SimulTrack
will be the same (driven by the active side), so you can look from one view to the other
to identify mismatches.
After you have a reasonable solve, you can also take a look at the error on the
trackers to identify mismatches, using the Sort by Error option and the SimulTrack or
Graph Editor.
no longer be determined, and the stereo solve goes from a nice stereo tripod situation
to a combination of two tripod shots, see the section on Stereo Tripod Shots.
Stereo Solving Mode
The stereo mode uses the trackers that are linked between cameras to get things
going. It does not rely on any camera motion at all: the camera can be stationary, and
even a single still for each camera can be used, as long as there are enough nearby
features (compared to the inter-ocular distance).
Important: The Begin and End frames on the Solver panel should be
configured directly for Stereo Solving mode—somewhat differently than for a
usual automatic solve, so please keep reading. To begin with, the
checkboxes should be checked so that the values can be set manually.
The Stereo solve will literally start with the Begin frame from both cameras; it
should be chosen at a frame with many trackers in common between the two cameras.
However, this offers a limited pool of data with which to get started. A much
larger, and thus more reliable, pool of data is considered when the End frame is set as
well. The Stereo solver startup considers all the frames between the Begin and End
frames as source data.
The one caveat: none of the camera parameters may change between the Begin
and End frames, including the distance or vergence, even if they were marked as
changing (see the next section).
If any of the camera parameters change constantly throughout the shot, or
you cannot determine that they do not, then you must set the End frame and Begin
frame to the same frame, and forego having any additional data for startup. Such a
frame should have as many trackers as possible, and they should be carefully
examined to reduce errors.
If you do not select the Begin/End frames manually (leave them in automatic
mode), then SynthEyes will select a single starting/ending frame that has as many
trackers in common as possible. But as described, supplying a range is a better idea.
Note that you might be able to use the entire shot as a range, though probably
this will increase run time and a shorter period may produce equivalent results.
Automatic/Indirectly Solving Mode
The stereo mode effectively uses the inter-ocular distance as the baseline for
triangulating to initially find the tracking points. If the camera is moving, a larger portion
of the motion can be used to get solving started, producing a more accurate starting
configuration.
To do that, use the normal Automatic solving mode for the dominant camera, and
the Indirectly mode for the secondary camera. Assuming the moving camera path is
reasonable, SynthEyes will solve for the dominant camera path, then for the secondary
path, applying the selected camera/camera constraints at that time.
This approach will probably work better on shots where most of the trackers are
fairly far from the cameras, and the camera moves a substantial distance, thus
establishing a baseline for triangulation. If the camera moves (translates) little, you
should use the Stereo solving mode.
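The role of the baseline in triangulation can be illustrated with the standard rectified-stereo depth relation. This is a generic sketch of the geometry involved, not SynthEyes code, and the function and parameter names are illustrative:

```python
def triangulated_depth(baseline, focal_px, disparity_px):
    """Depth of a feature seen by two parallel cameras a `baseline`
    apart (scene units): depth = focal * baseline / disparity.
    The smaller the disparity (the farther the feature, or the shorter
    the baseline), the more a fixed amount of tracking noise corrupts
    the recovered depth."""
    if disparity_px <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_px * baseline / disparity_px

# A 6.5 cm inter-ocular rig, 1000 px focal length, 10 px disparity:
print(triangulated_depth(0.065, 1000.0, 10.0))  # -> 6.5 scene units
```

A camera that translates several meters provides a far larger effective baseline than a typical inter-ocular distance, which is why the Automatic/Indirectly setup can produce a more accurate starting configuration on moving shots.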
Tripod/Indirectly Solving Mode
With two cameras, nodal tripod shots are less of an issue, because distances and 3-D
coordinates can be determined if there are enough nearby features. However, you
may encounter shots that are nodal by virtue of not having anything nearby; call them
"all-far" shots. For example, consider a camera on the top of a mountain, which must be
attacked by CG birds. With no nearby features, the shot will be nodal, and there will be
no way to determine the inter-ocular distance. Any inter-ocular distance can be used,
with no way to tell if it is right or wrong.
Like a (monocular) tripod shot, no 3-D solve is possible, only what amounts to
two linked tripod solves.
Use the Tripod/Indirectly setup (tripod mode on dominant camera, indirectly on
secondary). When refining, use Refine Tripod mode for both cameras.
On the stereo geometry panel (see below), you should set up your best estimate
of the inter-ocular distance, either from on-set measurements or from other shots. You
can animate it if you have the information to do so. Set the Direction and Elevation
numbers to zero, or known values from other shots.
SynthEyes will solve the shot to produce two synchronized tripod solves.
Then, it will compute adjusted camera paths, based on the inter-ocular distance
and the pointing direction of the camera, as if the camera had been on a tripod. These
will typically be small arc-like paths. If you need to later adjust the inter-ocular distance,
Refine (Tripod) the shot to have the paths recalculated.
As a result, you will have two matching camera paths so that you can add CG
effects that come close to the camera. Since SynthEyes has regenerated the camera
paths at a correct inter-ocular distance, even though all the tracked features are far, you
will still be able to add effects nearby and have them come out OK.
Setting Up Constraints
The Stereo Geometry panel can be used to set up constraints between the two
cameras. If you will be using the inter-ocular distance to set the overall scale of the
scene, then you should do that initially, before setting up a coordinate system using the
*3 tool. The *3 tool will recognize the inter-ocular distance constraint, and generate a
modified set of tracker constraints to avoid creating a conflict with the inter-ocular
distance constraint.
The left-most column on the Stereo Geometry panel sets the solving mode for
each of the six stereo parameters; they can be configured individually and often will be.
The default As-Is setting causes no constraint to be generated for that parameter. To
constrain the Distance, change its mode to Known, and set the Lock-To Value to the
desired value.
The Lock-To value can be animated, under control of the Make Key button at top
left of the panel. With Make Key off, the lock value shown and animated is that at the
beginning of the shot. Beware, this can hide any additional keys you have already
created.
Usually it will be best to solve a shot once first, with at most a Distance
constraint, and examine the resulting camera parameters. The stereo parameters can
be viewed in the graph editor under the node “Stereo Pairs.” The colors of the
parameters are shown on the stereo panel for convenience.
Sudden jumps in a parameter will usually indicate a tracking problem, which
should be addressed directly. The error is like an air bubble under plastic—you can
move it around, but not eliminate it. The stereo locks are all ‘soft’ and can not
necessarily overcome an arbitrarily large error. If you do not fix the underlying errors in
the tracking data, even if you force the stereo parameters to the values you wish, the
error will appear in other channels or in the tracker locations.
Usually, the other four stereo parameters (other than distance and vergence) are
constant at an unknown value. Use the Fixed mode to tell SynthEyes to determine the
best unknown value (like the Fixed Unknown lens-solving mode).
If you are very confident of your calibration, or wish to have the best solve for a
specific set of parameters, you can use the Known mode for them also.
In the Varying solving mode, you can create constraints for specific desired
ranges of frames, by animating the respective Lock button on or off. The parameter will
be locked to the Lock-To value for those specific frames. The Hold button may also be
activated (for vergence and distance); see the following section on Handling Shots with
Changing IOD or Vergence.
Note that usually you should keep solving “from scratch” after changing the
stereo constraint parameters, rather than switching to Refine mode. Usually after a
change the desired solution will be too far away from the current one to be determined
without re-running the solve.
Weights
Each constraint has an animated weight track. Weights range from 0 to 120, with
60 being nominal. The scale is in decibels, meaning a change of 20 units changes the
weight by a factor of 10. Thus, the total weight range is 0.001 to 1000.0.
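The decibel relationship can be written out explicitly. A minimal sketch (the function name is illustrative, not a SynthEyes API):

```python
def weight_factor(weight):
    """Convert a constraint weight (0..120, 60 nominal) to a linear
    multiplier: every 20 units is a factor of 10, so 60 -> 1.0,
    0 -> 0.001, and 120 -> 1000.0."""
    return 10.0 ** ((weight - 60.0) / 20.0)

print(weight_factor(60))   # 1.0 (nominal)
print(weight_factor(80))   # 10.0 (ten times stronger)
print(weight_factor(0))    # 0.001 (weakest)
```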
Excessively large weight values can de-stabilize the equations, producing a less-
accurate result. We advise sticking with the default values to begin with, and only
increasing a weight if needed to reduce difference values after a solve has been
obtained. On a difficult solve where there is much contradictory information, it may be
more helpful to reduce the weight, to make the equations more flexible and better able
to find a decent solution.
Post-Stereo-Solve Checkup
After a stereo solve with constraints, you should verify that they have been
satisfied correctly, using the graph editor. If the tracking data is wrong, or calls for a
stereo relationship too different from the constraints, the constraints may not be
satisfied, and adjusting the constraints or tracking data may be necessary.
You can render stereo preview movies using left-over-right stereo mode using
the right-click/Preview Movie capability. This will let you verify tracking on other stereo
displays in real time.
Object Tracking
SynthEyes can perform stereo object tracking, where the same rigid object is
tracked from both cameras. (It can also do motion capture from stereo imagery, where
the objects do not have to be rigid, and each feature is tracked separately.) Or you can
do single-eye object tracking on a stereo shot, if the object moves enough for that to
work well.
To set up the stereo moving-object setup, do a Shot/Add Moving Object when a
stereo shot is active. You will be asked whether to create a regular or stereo moving
object. The latter is similar to adding a moving object twice, once when each camera
(left and right) is selected as the main active object (the currently-selected camera is
used as the parent when a new object is created), except that SynthEyes records that
the two are linked together. Each object will have the Stereo solving mode.
An object can only be in one place at a time: there should only be a single path
for the object in world space, the same path as seen in the left and right cameras, just
like each 3-D tracker pair only has a single location. The world path depends on the
camera path, though! This creates challenging “what comes first” issues. Camera
solves always start with an initial estimate, and that is what happens with object solves
as well: each starts with an initial camera-type estimate, before being converted to an
object-type solve.
Simple object solves start out with two separate motion paths, one for the “left
object” and one for the “right object.” As the camera path becomes available, additional
constraints are applied that force the left and right paths, in world space, to become
practically identical by adjusting the camera and object positioning.
Because of this inherent interaction, it is wise to work on the camera solve first,
before proceeding to the objects.
Warning: if you disable the cameras while you work on the object, you'll need
to lock the fields of view to avoid creating a mismatch. Alternatively, set the
cameras to Refine mode.
You can use one or more axis locks on the camera and/or object, typically once
you've got the initial solves (and then set everything to Refine mode). For stereo
cameras and objects, place the locks on the dominant camera or object. Locks on the
secondary camera or objects will be ignored!
To perform motion-capture style tracking in a stereo shot, after completing the
camera tracking, do an Add Moving Object, then set the solver mode for the object(s) to
Individual Mocap. Add stereo tracking pairs on the moving object, and each pair will be
solved to create a separate independent path. See the online tutorial for an example,
and see the Motion Capture section of the manual.
Moving stereo objects will produce the export for both objects, which may
be unnecessarily redundant. You can use the usual control from the 3D
panel or Hierarchy view to disable the export of one of the two if you like.
When you work with 360 VR images, you don't have to be concerned about
determining distortion, field of view, or plate/sensor sizes, which is definitely convenient!
Note that SynthEyes does not perform stitching, ie removing distortion and
combining multiple images to initially create a 360 VR image. That is handled by
specialized applications.
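For orientation, the defining property of the equirectangular 360 VR format is that pixel coordinates map directly to view directions, with no lens model required. A minimal sketch of that mapping (axis conventions vary between packages; the ones used here are illustrative assumptions, not SynthEyes's):

```python
import math

def equirect_to_ray(u, v, width, height):
    """Map a pixel (u, v) in an equirectangular 360 VR frame to a unit
    view ray. This sketch puts +Z forward at image center, +Y up,
    with longitude increasing to the right."""
    lon = (u / width - 0.5) * 2.0 * math.pi     # -pi .. +pi
    lat = (0.5 - v / height) * math.pi          # +pi/2 (top) .. -pi/2
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center pixel of a 1920x960 frame looks straight ahead:
print(equirect_to_ray(960, 480, 1920, 960))  # -> (0.0, 0.0, 1.0)
```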
You tell SynthEyes that an incoming shot is 360 VR using the 360 VR setting on the
SynthEyes Shot Setup panel: None or Present, plus the further option of Remove!
We'll start the discussion by describing how to do a simple stabilization to 360 VR
shots, if you will not be doing 3D effects.
ATTENTION! If this is the first part of the SynthEyes manual that you are
reading, you'll need to eventually read quite a bit more of it! Working with 360
VR footage requires a wide variety of techniques from throughout SynthEyes:
the image preprocessor, automatic and supervised tracking, solving,
coordinate system setup, etc. The material here builds upon that earlier
material; it is not repeated here because it would be exactly that, a repetition.
Simple Stabilization
If you only need to stabilize a 360 VR shot, you can do that relatively simply in
the image preprocessor using a 2.5D-type technique, rather than a full 3D solve. The
drawback is that longer shots can drift, though you can easily hand-animate corrections
or art direction.
Open the shot, selecting the 360 VR mode of "Present."
On the Summary panel, click Run Auto-tracker.
Use shift-lasso while scrubbing through the shot to select all the distant trackers,
ie by the horizon line. Don't select any independently-moving trackers, such as
those on vehicles.
(If there aren't many trackers to select, start over but go to the Advanced tab on
the Features panel and increase the number of trackers per frame to 60 or more
and the maximum tracker count to 300, 500, or more depending on the length of
the shot; then rerun the Auto-tracker).
Open the Image pre-processor (hit the P key)
On the Stabilize tab, click Get Tracks.
Change the two Stabilize Axes dropdowns to either Peg or Filter. Peg
stabilization maintains the initial orientation; Filter smooths the orientation
over time, controlled by the Cut Frequency spinner.
To examine the stabilization, scrub through the shot using the spinner at bottom
of the image preprocessor.
You can adjust the aim direction of the stabilized image using the spinners on the
Adjust tab. (Note that these spinners and the Filter frequency can be animated by
first turning on the add-key button at lower right of the image preprocessor
window.)
To have the trackers match up with the modified footage, click Apply to Trkers on
the Output tab before closing the image preprocessor. If you modify the stabilizer
settings later, you must hit Remove f/Trkers before doing so.
You can close the image preprocessor and experiment and play back the shot in
the main camera view.
Once the shot is ready, reopen the image preprocessor, go to the Output tab,
and click Save Sequence.
Or, use the exports to AfterEffects or the generic Save Sequence approach.
It's important to note that 360 VR solves, whether done natively or via
linearization, typically will have much higher final errors than a typical conventional shot,
due to 1) uncorrected residual distortion in the stitching; 2) synchronization errors
because the cameras typically aren't genlocked; and 3) the rolling shutter effect in the
small CMOS cameras typically used in VR rigs.
While SynthEyes will let you do an amazing job stabilizing the footage by
reorienting it on each frame, it cannot repair the image damage done by problems with
stitching, unsynchronized cameras, or the rolling shutter effect.
Note: See the next section if your app has an equirectangular panoramic
camera or an equivalent 360VR rendering rig (for example, AfterEffects +
Cinema 4D). That will frequently be the case, making the procedure
described in this section irrelevant.
Repeat the following process to render each mesh you want to insert in your 3D
application:
Place an imported, created, or bounding-box proxy 3D mesh into the
(SynthEyes) scene.
in your 3D scene. If each face render (top, bottom, left, ...) uses its own
camera-dependent default light, there will be shading discontinuities in the
final result.
If you want to verify that images rendered this way are correct, do a Shot/Change
Shot Images with the “Other—Don't do anything special” option. Then, on the image
preprocessor's Adjust tab, go to frame zero, then control-right-click the Delta U and
Delta Rotation spinners to remove the stabilization. The rendered footage will then
match the meshes in the camera viewport.
The SynthEyes to Blender exporter has some limited support for using Blender's
Cycles panorama camera when exporting shots as above (ie with no Follow Mesh). It
will configure for Cycles and the proper camera. You will have to set up any mesh
texturing and other materials within the Cycles environment yourself.
Tip: Use the normal "Lock" to lock the perspective view to your camera and
show the image on the spherical backdrop. On the right-click/View menu, turn
on "Lock position only". You can then use "Look" mode to re-aim the camera
to explore in all directions. And when you scrub, you will continue looking in
that direction. Do not leave Lock position only on for normal operations as it
may have indeterminate effects. You can use right-click/Other modes/Field of
view to change the perspective view's field of view.
Settings for the built-in projection screen are accessed via the Perspective
Projection Screen Adjust script. The Screen Distance is the main parameter of interest.
Horizontal and vertical grid sizes are multiplied by 6 and 4 respectively for the spherical
screen. Don't forget to click the Apply Settings button!
NOT adapted for 360 VR: Motion capture processing (mocap from 360VR
cameras?), planar and GeoH tracking, coalesce nearby trackers, rolling front projection
display. While the supervised and automatic tracking code works fine on 360VR
images, it processes them as regular images, so they do not compensate for the
distortion at the top and bottom of a 360VR image, or the wrapping from left edge to
right.
Irrelevant for 360 VR: just a reminder that items such as distortion,
cropping/padding, region of interest, and "scale" in the image preprocessor are
irrelevant for 360 VR footage, and should not be changed!
If there are items that seem crucial and well-suited for 360 VR adaptation, you
can bring them to our attention.
Tip: Many of these scripts can be found on the handy 360VR toolbar, which
you can open via the Script/Script bars/360 VR menu item.
3D Tracking Setup
Use this script to prepare a 360 VR shot for linearized 3-D tracking. You'll
configure the desired linear field of view for the conventional camera, the direction it will
point within the image, and the resolution and aspect ratio of the linearized image. Note
that all these controls are available elsewhere in SynthEyes; this script simply
consolidates them to save time. You can access them directly if you need to animate
the camera direction or field of view during the shot.
3D Tracking Wrap Unstabilized
You run this script after linearizing it and solving it without assigning a coordinate
system. This script converts back to the 360 VR mode such that the 3-D solve
corresponds to the original unstabilized shot.
3D Tracking Wrapup
This script changes a shot, linearized for 3D tracking, back to 360 VR mode after
the completion of 3D tracking. It updates the trackers to correspond to the 360 VR view.
After running this script, you can do roto-masking to block out the camera mount,
then re-run autotracking on the entire shot to generate zero-weighted trackers, and
finally run Stabilize from Camera Path. Alternatively you can run Stabilize from Camera
Path immediately.
Align Camera to Path
After a Stabilize from Camera Path operation (see below), the orientation of the
VR sphere is fixed in space: the viewer will always perceive "north" (and every other
direction) to correspond to a particular fixed direction in their viewing environment. That
world-oriented view makes it easy for the viewer to understand and follow particular
items in their environment.
Alternatively, if the camera is mounted on a moving vehicle, you may wish to
portray a "front looking" (cockpit), "side looking" (bus), or downwards (satellite) view.
You can achieve that using the Align Camera to Path script.
You run this script late in your processing and effects process, upon a world-
oriented scene. (The camera path should be filtered already, if it will be.) The script re-
keys the camera orientation relative to the camera path. You can set up a fixed pan/tilt
offset to the path, for example 0,0 is straight ahead; 90,0 is off to the left; 180,0 is
backwards; 0,-45 is forwards but down a bit.
You have the option to have the path create only the heading (pan), or both
heading and tilt. By default only the heading is created, and the horizon stays level. But
if you wish to better portray a plane diving towards the ground, say, you can have the tilt
follow the path as well.
To minimize disruptive jitter, path keys are generated only so often, with
interpolation between them. And the path's direction is determined over the course of
several frames (typically +/-3), rather than just the adjacent frame.
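The idea of deriving the travel direction from positions several frames apart, rather than only adjacent frames, can be sketched as follows (purely illustrative; the axis and angle conventions are assumptions, not SynthEyes's):

```python
import math

def path_heading(path, frame, radius=3):
    """Pan angle (degrees) of travel at `frame`, estimated from camera
    positions +/- `radius` frames away. Using a wider span averages out
    per-frame jitter in the solved path. `path` is a list of (x, y, z)
    positions; here 0 degrees means travel along +Y, 90 along +X."""
    a = path[max(frame - radius, 0)]          # clamp at shot start
    b = path[min(frame + radius, len(path) - 1)]  # clamp at shot end
    return math.degrees(math.atan2(b[0] - a[0], b[1] - a[1]))

# Camera driving straight down the +Y axis: heading stays 0.
straight = [(0.0, float(i), 0.0) for i in range(20)]
print(path_heading(straight, 10))  # -> 0.0
```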
This script can also be run on renders of meshes that are to be inserted, once
they have been world-stabilized, so that they can match the path-alignment of the main
footage.
AfterEffects 360VR Stabilization
Exports a javascript that creates a comp to stabilize the footage within
AfterEffects, using the data from SynthEyes. When using this, you don't have to render
the footage from SynthEyes. You can do all your (2D) compositing inside of AfterEffects
instead. This export relies on an AfterEffects plugin included with SynthEyes that can
perform the required image manipulation. For installation instructions, see the section
on Installing for AfterEffects CC/CS6 in the main Exporting section.
Blend Stabilizations Importer
This importer is a more complex version of Import Stabilization. It imports not
only the text-based stabilization file that's selected, but any consecutively-named
stabilization files as well. Smooth blending is generated in the overlap region between
the files. See the Piece-wise 360VR Stabilization of Long Shots tutorial.
Copy ONLY Adjustment Tracks
This script copies ONLY the selected camera's shot's image preprocessor
adjustment tracks and camera keys to the current active object's shot. It does not copy
the (effect of) the stabilization data. This is useful for copying a setup that has not been
stabilized, since it doesn't produce a key for each frame.
Copy Stabilization
This script copies the selected camera's shot's stabilization (including
adjustments) and camera keys to the current active object's shot. This is used to create
additional copies of a stabilized and 3D tracked shot in order to create a linearized
subwindow that tracks with a specific mesh. The resulting subwindow can be exported
to another 3D package, a render produced, and the resulting render warped back into
the 360 VR image.
Create Next Piece
Helper scripts (Intro and Pro) for generating successively-named .sni files
containing overlapping ranges of frames. See the Piece-wise 360VR Stabilization of
Long Shots tutorial.
Note that the Synthia version (labeled "Intro") can be used by Intro or Pro users.
The Pro version can be used only by Pro users who have set up SyPy python on their
machine. The principal advantage of the python version is that it can be set up to run
without any user interface appearing when the script runs.
is part of the workflow that allows renders from other 3D applications to be smoothly
integrated with 360VR shots, even if those apps do not have 360 VR capabilities.
The script offers control over the aspect ratio of the image produced: it can either
be chosen automatically (recommended) or you can choose a specific value. You can
choose whether to use a worst-case (widest) camera field of view, or to have the
generated camera field of view exactly track the size of the mesh as it approaches
and/or recedes. You also control the amount of margin around the mesh. It's good to
have some safe area, and also a more substantial margin is convenient when
evaluating your tracking.
While you can select multiple meshes, they should be close together. If they are
spread apart, separate renders are likely a better idea. (And it's likely to not work at all
if meshes on opposite sides of the camera are involved!)
Note that the time required for Follow Mesh depends on the complexity of the
mesh(es) being followed. So don't select meshes that don't matter to determining the
area to be rendered, such as interior meshes. If you need to, you can replace the mesh
with a lower-resolution proxy for this operation—even a simple bounding box.
Fusion 360VR Stabilization
Outputs a Fusion comp that applies the stabilization to the shot in Blackmagic
Design's Fusion, even the free Fusion version. The comp contains elements able to spin
an equirectangular image, keyed on every frame.
You can then add a Saver to this comp, or add additional 2D or 3D compositing
nodes. For that, Andrew Hazelden's KartaVR toolset may be helpful.
When enabled by an Export to clipboard checkbox, the entire exported comp is
placed on the clipboard, where you can paste it into a new or existing comp, instead of
opening the file.
You also have the option of having SynthEyes ask Fusion to open the just-
exported file, though Fusion's startup time makes this somewhat less useful than
pasting if you are repeatedly re-exporting.
Generate VR Lens Map Importer
This importer in the 360 VR area reads a small file of data from a lens
manufacturer, and produces a lens distortion map that converts images to 360 VR
format. The initial example for this is Entaniya super-wide-angle lenses (220, 250, and
280 degrees), which cannot be calibrated with typical lens grids due to the high field of
view. (The random-dot method can be used, see the Camera Calibration manual.)
In addition to the possibility of creating your own 360VR cameras with a back-to-
back pair of cameras and lenses (sometimes with a drone in between), these maps can
also be used to get 3D solves of wide-angle still images to make quicker on-set surveys
for regular non-360VR projects.
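The manufacturer data amounts to a small table of incoming ray angle versus image-plane radius; interpolating it yields a continuous lens mapping. A sketch, with a hypothetical two-column format and made-up values (not actual Entaniya data):

```python
# Hypothetical two-column "angle radius" text, one sample per line.
SAMPLE = """0.0 0.000
10.0 0.052
20.0 0.104
30.0 0.155"""

def load_table(text):
    """Parse the angle/radius pairs and sort them by angle."""
    rows = [tuple(map(float, line.split())) for line in text.splitlines()]
    return sorted(rows)

def radius_for_angle(table, angle):
    """Linearly interpolate the image-plane radius for an incoming ray angle."""
    for (a0, r0), (a1, r1) in zip(table, table[1:]):
        if a0 <= angle <= a1:
            t = (angle - a0) / (a1 - a0)
            return r0 + t * (r1 - r0)
    raise ValueError("angle outside calibrated range")
```

A distortion map is essentially this function evaluated densely over the image.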
The input file consists of multiple lines, each an incoming ray angle and the
resulting image-plane radius. For example, the 250 degree 3mm lens data starts out

(determined by your coordinate system setup), no matter what the original camera
platform did. After the script has been run, the camera will always be in the default
Front-facing orientation, staring straight ahead, so that its local coordinate system
orientation matches the world coordinate system. Note that the new world-coordinate
stabilization bakes in any existing stabilization or camera animation.
Unfollow Mesh
This script is run during the workflow for rendering a mesh externally. After the
mesh has been rendered, the images are loaded back into SynthEyes for conversion to
360VR format using Shot/Change Shot Images with the Re-distort option. The resulting
images match up with the original unstabilized images.
The Unfollow Mesh script is then run; it reloads the stabilizer from the camera
path so that world-stabilized 360 VR images (and matching camera path) result. The
Align Camera to Path script can be re-run at this time, if that was done for the original
footage.
First, why and when is motion capture necessary? The moving-object tracking
discussed previously is very effective for tracking a head, when the face is not doing all
that much, or when trackable points have been added in places that don’t move with
respect to one another (forehead, jaws, nose). The moving-object mode is good for
making animals talk, for example. By contrast, motion capture is used when the motion
of the moving features is to be determined, and will then be applied to an animated
character. For example, use motion capture of an actor reading a script to apply the
same expressions to an animated character. Moving-object tracking requires only one
camera, while motion capture requires several calibrated cameras.
Second, we need to establish a few very important points: this is not the kind of
capability that you can learn on the fly as you do that important shoot, with the client
breathing down your neck. This is not the kind of thing for which you can expect to
glance at this manual for a few minutes, and be a pro. Your head will explode. This is
not the sort of thing you can expect to apply to some musty old archival footage, or
using that old VHS camera at night in front of a flickering fireplace. This is not
something where you can set up a shoot for a couple of days, leave it around with small
children or animals climbing on it, and get anything usable whatsoever. This is not the
sort of thing where you can take a SynthEyes export into your animation software, and
expect all your work to be done, with just a quick render to come. And this is not the sort
of thing that is going to produce the results of a $250,000 custom full body motion
capture studio with 25 cameras.
With all those dire warnings out of the way, what is the good news? If you do
your homework, do your experimentation ahead of time, set up technically solid
cameras and lighting, read the SynthEyes manual so you have a fair understanding of
what the SynthEyes software is doing, and understand your 3-D package well enough
to set up your character or face rigging, you should be able to get excellent results.
In this manual, we’ll work through a sample facial capture session. The
techniques and issues are the same for full body capture, though of course the tracking
marks and overall camera setup for body capture must be larger and more complex.
Introduction
To perform motion capture of faces or bodies, you will need at least two cameras
trained on the performer from different angles. Since the performer’s head or limbs are
rotating, the tracking features may rotate out of view of the first two cameras, so you
may need additional cameras to shoot more views from behind the actor.
Tip: if you can get the field of view and accuracy you need with only two
cameras, that will make the job simpler, as you can use stereo features,
which are simpler and faster because only two cameras are involved.
The fields of view of the cameras must be large enough to encompass the entire
motion that the actor will perform, without the cameras tracking the performer (OK,
experts can use SynthEyes for motion capture even when the cameras move, but only
with care).
You will need to perform a calibration process ahead of time, to determine the
exact position and orientation of the cameras with respect to one another (assuming
they are not moving). We’ll show you one way to achieve this, using some specialized
but inexpensive gear.
Very Important: You’ll have to ensure that nobody knocks the cameras out of
calibration while you shoot calibration or live action footage, or between
takes.
You’ll need to be able to resynchronize the footage of all the cameras in post.
We’ll tell you one way to do that.
Generally the performer will have tracker markers attached, to ensure the best
possible and most reliable data capture. The exception to this would be if one of the
camera views must also be used as part of the final shot, for example, a talking head
that will have an extreme helmet added. In this case, markers can be used where they
will be hidden by the added effect, and in locations not permitting trackers, either natural
facial features can be used (HD or film source!), or markers can be used and removed
as an additional effect.
After you solve the calibration and tracking in SynthEyes, you will wind up with a
collection of trajectories showing the path through space of each individual feature.
When you do moving-object tracking, the trackers are all rigidly connected to one
another, but in motion capture, each tracker follows its own individual path.
You will bring all these individual paths into your animation package, and will
need to set up a rigging system that makes your character move in response to the
tracker paths. That rigging might consist of expressions, Look At controllers, etc; it’s up
to you and your animation package.
Alternatively, you can set up a rig in SynthEyes using the GeoH tracking facilities.
By attaching your motion-capture trackers to it, the rig will be animated to match up with
the trackers in 3D. You can then export the rig in BVH format, and import it into
character animation software.
Camera Types
Since each camera’s field of view must encompass the entire performance
(unless there are many overlapping cameras), at any time the actor is usually a small
portion of the frame. This makes progressive DV, HD, or film source material strongly
suggested.
Progressive-scan cameras are strongly recommended, to avoid the factor of two
loss of vertical resolution due to interlacing. This is especially important since the
tracking markers are typically small and can slip between scan lines.
While it may make operations simpler, the cameras do not have to be the same
kind, have the same aspect ratio, or have the same frame rate.
Lens distortion will substantially complicate calibration and processing. To
minimize distortion, use high-quality lenses, and do not operate them near their
maximum field of view, where distortion is largest. Do not try to squeeze into the
smallest possible studio space.
Camera Placement
The camera placements must address two opposing factors: the cameras
should be far apart, to produce a large parallax disparity with good depth perception,
yet close together, so that they can simultaneously observe as many trackers as
possible.
You’ll probably need to experiment with placement to gain experience, keeping in
mind the performance to be delivered.
Cameras do not have to be placed in any special pattern. If the performance
warrants it, you might want coverage from up above, or down below.
If any cameras will move during the performance, they will need a visible set of
stationary tracking markers, to recover their trajectory in the usual fashion. This will
reduce accuracy compared to a carefully calibrated stationary camera.
Lighting
Lighting should be sufficient to keep the markers well illuminated, avoiding
shadowing. It should also be bright enough to allow the cameras' shutter times to be
kept as short as possible, consistent with good image quality.
The porcupine is hung by a support wire in the location of the performer’s head,
then rotated as it is recorded simultaneously from each camera. The porcupine’s
colored pom-poms can be viewed virtually all the time, even as they spin around to the
back, except for the occasional occlusion.
Similar fixtures can be built for larger motion capture scenarios, perhaps using
dolly track to carry a wire frame. It is important that the individual trackable features on
the fixture not move with respect to one another: their rigidity is required for the
standard object tracking.
The path of the calibration fixture does not particularly matter.
Camera Synchronization
The timing relationship between the different cameras must be established.
Ideally, the cameras would all be gen-locked together, snapping each image at
exactly the same time. In practice, there are a variety of possibilities, which can be
arranged and communicated to SynthEyes during the setup process.
Motion capture has a special solver mode on the Solver Panel: individual
mocap. In this mode, the second dropdown list changes from a directional hint to
control camera synchronization.
If the cameras are all video cameras, they can be gen-locked together to all take
pictures identically. This situation is called “Sync Locked.”
If you have a collection of video cameras, they will all take pictures at exactly the
same (crystal-controlled) rate. However, one camera may always be taking pictures a
bit before the other, and a third camera may always be taking pictures at yet a different
time than the other two. The option is “Crystal Sync.”
If you have a film camera, it might run a little more or a little less than 24 fps, not
particularly synchronized to anything. This will be referred to as “Loose Sync.”
In a capture setup with multiple cameras, one can always be considered to be
Sync Locked, and serve as a reference. If it is a video camera, other video cameras are
in Crystal Sync, and any film camera would be Loose Sync.
If you have a film camera that will be used in the final shot, it should be
considered to be the sync reference, with Sync Locked, and any other cameras are in
Loose Sync.
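Under Crystal Sync, each camera runs at the same rate but with a constant sub-frame offset relative to the reference; conceptually, its data must be resampled at the reference camera's instants. A sketch of that resampling for a 2-D track (illustrative only; SynthEyes handles this internally):

```python
import math

def sample_at_offset(track, frame, offset):
    """2-D tracker position at (frame + offset) via linear interpolation.
    'track' is a list of (x, y) per frame; 'offset' is this camera's
    constant sub-frame delay relative to the reference camera.
    Illustrative sketch only."""
    t = frame + offset
    i = int(math.floor(t))
    frac = t - i
    (x0, y0), (x1, y1) = track[i], track[min(i + 1, len(track) - 1)]
    return (x0 + frac * (x1 - x0), y0 + frac * (y1 - y0))
```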
The beginning and end of each camera’s view of the calibration sequence and
the performance sequence must be identified to the nearest frame. This can be
achieved with a clapper board or electronic slate. The low-budget approach is to use a
flashlight or laser pointer flash to mark the beginning and end of the shot.
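The flash approach works because a flash shows up as a sharp jump in mean image brightness, which is easy to find programmatically. A sketch (the threshold value is a guess; tune it for your footage):

```python
def find_flash_frames(mean_luma, thresh=0.5):
    """Return frames where mean image brightness jumps sharply from the
    previous frame, a cheap way to locate a flashlight/laser 'slate'
    flash when syncing footage in post. Illustrative sketch only."""
    return [i for i in range(1, len(mean_luma))
            if mean_luma[i] - mean_luma[i - 1] > thresh]
```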
Next, we will need to set up a set of links between corresponding trackers in the
two shots. The links must always be on the Camera02 trackers, to a Camera01 tracker.
This can be achieved at least four different ways.
Matching Plan A: Temporary Alignment
This is probably easiest, and we may offer a script to do the grunt work in the
future.
Begin by assigning a temporary coordinate system for each camera, using the
same pom-poms and ordering for each camera. It is most useful to keep the porcupine
axis upright (which is where pom-poms along the support wire would come in useful, if
available); in this shot three at the very bottom of the porcupine were suitable.
With matching constraints for each camera, when you re-solve, you will obtain
matching pairs of tracker points, one from each camera, located very close to one
another.
Now, with the Coordinate System panel open, Camera02 active, and the
Top view selected, you can click on each of Camera02’s tracker points, and then alt-
click (or command-click) on the corresponding Camera01 point, setting up all the links.
As you complete the linking, you should remove the initial temporary constraints
from Camera02.
Matching Plan B: Side by Side
In this plan, you can use the Camera & Perspective viewport configuration. Make
Camera01 active, and in the perspective window, right-click and Lock to current
camera with Camera01’s imagery, then make Camera02 active for the camera view.
Now camera and perspective views show the two shots simultaneously. (Experts: you
can open multiple perspective windows and configure each for a different shot. You can
also freeze a perspective window on a particular frame, then use the key accelerators to
switch frame as needed.)
You can now click the trackers in the camera(02) view, and alt-click the
matching (01) tracker in the perspective window, establishing the links.
This approach is a bit easier in that you only have one kind of view to worry
about, but a bit less flexible because the perspective view has some additional tricks to
help you find the right match, such as changing the frame number or unlocking to
examine the 3D view.
Matching Plan D: Cross Link by Name
This plan is probably more trouble than it is worth for calibration, but can be an
excellent choice for the actual shots. You assign names to each of the pom-poms, so
that the names differ only by the first character, then use the Track/Cross-Link by Name
menu item to establish links.
It is a bit of a pain to come up with different names for the pom-poms, and do it
identically for the two views, but this might be more reasonable for other calibration
scenarios where it is more obvious which point is which.
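The matching rule, names differing only in their first character, is easy to picture in code. A sketch of that pairing (not SynthEyes's implementation):

```python
def cross_link_by_name(cam1_trackers, cam2_trackers):
    """Pair trackers whose names differ only in the first character,
    e.g. 'Anose' <-> 'Bnose'. Returns (cam2_name, cam1_name) link pairs.
    A sketch of the rule behind Track/Cross-Link by Name."""
    by_suffix = {name[1:]: name for name in cam1_trackers}
    return [(name, by_suffix[name[1:]])
            for name in cam2_trackers if name[1:] in by_suffix]
```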
For body tracking, a typical approach is to put the performer in a black outfit
(such as UnderArmour), and attach table-tennis balls as tracking features onto the
joints. To achieve enough visibility, placing balls on both the top and bottom of the
elbow may be necessary. Because the markers must be placed on the outside of the
body, away from the true joint locations, character rigging will have to take this into
account.
Completing the Calibration
We’re now ready to complete the calibration process. Change Camera02 to …
calibration scene file. Open the 3-D panel. For each camera, select the camera in
the select-by-name dropdown list. Then hit Blast and answer yes to store the field of
view data as well. Then, hit Reset twice, answering yes to remove keys from the field of
view track also. The result of this little dance is to take the solved camera paths (as
modified by the script), and make them the initial position and orientation for each
camera, with no animation (since they aren’t actually moving).
Next, replace the shot for each camera with LeftFaceSeq and RightFaceSeq.
Again, these shots have been cropped based on the light flashes, which would normally
be removed completely. Set the End Frame for each shot to its maximum possible. If
necessary, use an animated ROI on the Imaging Preprocessing panel so that you can
keep both shots in RAM simultaneously. Hit Control-A and delete to delete all the old
trackers. Set each Lens to Known to lock the field of view, and set the solving mode of
each camera to Disabled, since the cameras are fixed at their calibrated locations.
We need a placeholder object to hold all the individual trackers. Create a moving
object, Object01, for Camera01, then a moving object, Object02, for Camera02. On the
Solving Panel, set Object01 and Object02 to the Individual mocap solving mode, and
set the synchronization mode right below that.
Two-Dimensional Tracking
You can now track both shots, creating the trackers into Object01 and Object02
for the respective shots. If you don’t track all the markers, at least be sure to track a
given marker either in both shots, or none, as a half-tracked marker will not help. The
Hand-Held: Use Others mode may be helpful here for the rapid facial motions.
Frequent keying will be necessary when the motion causes motion blur to appear and
disappear (a lot of uniform light and short shutter time will minimize this).
Solving
You’re ready to solve, and the Solve step should be very routine, producing
paths for each of the linked trackers. The final file is facetrk.sni.
Fine Point! Normally SynthEyes will produce a position for each tracker that
has an equal amount of error as seen from each camera. That's the best
choice for general motion capture. However, if you have one primary camera
where you want an exact match, and several secondary cameras to produce
3-D, you can adjust the Overall Weight of the reference camera, on the Solver
panel, to be 10.0 or a similarly large value. Adjust the Overall Weight of the
secondary cameras down to 0.1, for example. The resulting solution will be
much more accurate for Camera01, and less so for the secondary cameras.
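The effect of the weights can be illustrated with a one-dimensional toy: a weighted least-squares estimate is the weight-weighted mean of what each camera alone would report, so a 100:1 weight ratio makes the answer hug the reference camera. This is an illustration of the principle, not SynthEyes's solver:

```python
def weighted_solution(preferred, weights):
    """1-D toy: each camera 'prefers' a position; the weighted
    least-squares solution is the weight-weighted mean. With weights
    10.0 vs 0.1 the result lands very near the reference camera's answer."""
    return sum(w * p for w, p in zip(weights, preferred)) / sum(weights)
```

With equal weights, the errors split evenly between the cameras, which is the default behavior described above.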
Afterwards, you can start checking on the trackers. You can scrub through the
shot in the perspective window, orbiting around the face. You can check the error
curves and XYZ paths in the graph editor. By switching to Sort by Error mode,
you can sequence through the trackers starting from those with the highest error.
Modeling
You can use the calculated point locations to build models. However, the
animation of the vertices will not be carried forward into the meshes you build. Instead,
when you do a Convert to Mesh or Assemble Mesh operation in the perspective
window, the current tracker locations are frozen on that frame.
If desired, you can repeat the object-building process on different frames to build
up a collection of morph-target meshes.
Since the tracker/vertex linkage information is stored, you can use the MDD
Motion Designer Mesh Animation export script to export animated meshes. You must
export the mesh itself, typically in obj format. Then in your target application, you will
read the mesh and apply the MDD data as an animated deformer. Note that it is crucial
to use the same exact OBJ mesh model as was used to generate the MDD file: use the
OBJ exported from SynthEyes, since the vertex numbering of other versions of the
mesh will not match exactly. SynthEyes does renumber and adjust vertices as needed
when it reads OBJs, to match its internal vertex processing pipeline, especially if there
is normal or texture coordinate data.
You may also find it helpful to export the 3D path of a tracker by itself, using the
Tracker 3D on Mesh exporter. You can use that data to drive bones or other rigging if
desired.
Light Illumination
The basic approach in light illumination tracking is to put one or more trackers
into the scene, then have the average intensity level calculated frame by frame and
stored. This process is performed by the Set Illumination from Trackers script.
The calculated illumination data can be stored on a light, so it can illuminate all
the objects in a downstream scene; on a mesh, so the mesh's color can directly match
the measured color; or on the tracker itself, which is most useful as a way to transfer a
number of measured colors to downstream applications.
Trackers can be placed anywhere in the scene, but a flat white wall is a good
choice and corresponds to typical white-balancing techniques. If a substantially-colored
location is desired, you can have a monochrome light intensity calculated. Since a flat
wall provides nothing to track, you can use offset tracking, or alternatively if there is a
mesh there, use Drop on Mesh to put a 3D seed point there, and that point can be used
to determine the location to be examined. The tracker's size is the area measured (and
can be animated).
If you are calculating color for a mesh, you should put the tracker(s) on a
relatively flat area of the mesh. SynthEyes can calculate the average interior color of a
planar tracker as affected by in-plane masks, so to get fine control over the measured
area, you can track a larger planar tracker, then before running the script, add a specific
in-plane mask for that area. (Lock the tracker and don't retrack without deleting that
mask.)
The selected trackers are averaged to create the illumination track. When the 2D
tracker location is examined, a tracker can be used only on frames where it has a valid
2D location. When the 3D location is used, it can be used throughout the shot (except
when it goes offscreen), which might be undesirable if the tracker location is occluded
by something else. Accordingly, a "3D masked by enable" option lets the tracker's
Enable track control whether the tracker's position is examined on each frame, even
on frames where the tracker has not been 2D-tracked, since its 3D location is being
used to determine where to analyze the lighting.
Note that while trackers do not have to last for the duration of the shot, there will
be consistency issues when trackers appear or disappear. If there aren't any usable
trackers on a given frame, no illumination key will be generated on that frame.
The script provides a variety of options to offset and re-scale the data, to remove
black levels and to be able to increase the illumination level contrast.
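The offset and re-scale options amount to a simple levels adjustment; a sketch with hypothetical parameter names (the script's actual controls may differ):

```python
def adjust_level(raw, black=0.05, gain=1.5):
    """Remove an assumed black level and re-scale to increase
    illumination contrast, clamping at zero. Parameter names and
    defaults are illustrative, not the script's actual options."""
    return max(0.0, (raw - black) * gain)
```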
You can control whether illumination keys are generated for the entire shot, the
playback range, or a single frame, and can have keys generated on every frame, every
other frame, every fifth frame, etc, depending on how rapidly the light levels change.
You can use the graph editor or the color swatch on the 3-D or lighting panels to
edit the generated animated illumination level curves, including to change from linear to
spline interpolation. Note that there is a separate static color for a light, mesh, or tracker
that is shown on swatches elsewhere in SynthEyes, such as the graph editor and
hierarchy view.
Once the illumination curve has been created, it can be exported via selected
exports, most notably Filmbox (FBX), which is generally the most capable and
modern export, and Export/Illumination as text to obtain a plain text file.
One tracks the spout of a teacup, the other tracks the spout’s shadow on the
table. After solving the scene, we have the 3-D position of both. The procedure to locate
the light in this situation is as follows.
Switch to the Lighting Control Panel. Click the New Light button, then the
New Ray button. In the camera view, click on the spout tracker, then on the tracker for
the spout’s shadow.
We could turn on the Far-away light checkbox, if the light was the sun, so that the
direction of the light is the same everywhere in the scene.
Hint: if the sun is visible (typical for 360 VR shots), create a single ray with a
far tracker as the Source tracker, to solve for the sun, configured as a Far-
away light.
Instead, we’ll leave the checkbox off, and instead set the distance spinner to 100,
moving the light away that distance from the target.
The light will now be positioned so that it would cast a shadow from the one
tracker to the next; you can see it in the 3-D views. The lighting on any mesh objects in
the scene changes to reflect this light position, and you see the shadows in the
perspective view. You can repeat this process for the second light, since the spout casts
two shadows. This scene is Teacup.sni.
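The geometry of the single-ray placement can be sketched as follows: the light sits a chosen distance from the shadow point, along the ray through the casting point. A hypothetical helper, not SynthEyes's code:

```python
import math

def place_light(source, target, distance):
    """Position a light 'distance' units from the shadow point (target),
    along the ray from the target through the casting point (source),
    so the light would cast that shadow. Geometry sketch of the
    single-ray case with a Distance value."""
    d = [s - t for s, t in zip(source, target)]
    n = math.sqrt(sum(x * x for x in d))
    return tuple(t + distance * x / n for t, x in zip(target, d))
```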
If the scene contained two different teapot-type setups due to the same single
light, you can place two rays on one light, and the 3-D position of the light will be
triangulated, without any need for a distance.
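Triangulating from two rays amounts to finding the point closest to both 3-D lines; the midpoint of their common perpendicular is the least-squares answer. A geometry sketch (illustrative, not SynthEyes's internal solver):

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]

def triangulate_light(p1, d1, p2, d2):
    """Least-squares light position from two shadow rays, each given as a
    point and a direction. Uses the midpoint of the common perpendicular
    between the two lines (standard closest-approach formulas)."""
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = [p + s * x for p, x in zip(p1, d1)]
    q2 = [p + t * x for p, x in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]
```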
SynthEyes handles another important case, where you have walls, fences, or
other linear features casting shadows, but you can not say that a single point casts a
shadow at another single point. Instead, you may know that a point casts a shadow
somewhere on a line, or a line casts a shadow onto a point. This is tantamount to
knowing that the light falls somewhere in a particular 3-D plane. With two such planes,
you can identify the light’s direction; with four you may be able to locate it in 3-D.
To tell SynthEyes about a planar constraint, you must set up two different rays,
one with the common tracker and one point on the wall/fence/etc., and the other ray
containing the common tracker and the other point on the wall/fence/etc.
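Each such ray pair defines a plane containing the light, spanned by the two rays from the common tracker; the cross product of two such plane normals gives the light's direction, as for a far-away light. A sketch of that geometry (not SynthEyes's code):

```python
def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def plane_normal(common, a, b):
    """Normal of the plane through 'common' and shadow points a and b,
    i.e. the plane spanned by the two rays from the common tracker."""
    return cross(sub(a, common), sub(b, common))

def light_direction(n1, n2):
    """Two planes that both contain the light intersect in a line; its
    direction (the cross product of the normals) is the far-away
    light's direction."""
    return cross(n1, n2)
```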
On the lighting control panel, add a new light, click the New Ray button,
then click one of the two highlight trackers twice in succession, setting that tracker as
both the Source and Target. The target button will change to read “(highlight)”. Raise
the Distance spinner to 48”, which is an estimated value (not needed for Far-away
lights).
From the quad view, you’ll see the light hanging in the air above the ball, as in reality.
Add a second light for the second highlight tracker.
If you scrub through the shot, you’ll see the lights moving slightly as the camera
moves. This reflects the small errors in tracking and mesh positioning. You can get a
single average position for the light as follows: select the light, select the first ray if it
isn’t already by clicking “>”, then click the “All” button. This will load up your CPU a bit
as the light position is being repeatedly averaged over all the frames. This can be
helpful if you want to adjust the mesh or tracker, but you can avoid further calculations
by hitting the Lock button. If you later change some things, you can hit the Lock button
again to cause a recalculation.
Note: The flex room is not part of the normal default set. To use the flex
panel, use the room bar's Add Room to create a Flex room that uses the Flex
panel.
Terminology
There’s a bit of new terminology to define here, since there are both 2-D and 3-D
curves being considered.
Curve. This refers to a spline-like 2-D curve. It will always live on one particular
shot’s images, and is animated with a different location on each frame.
Flex. A spline-like 3-D curve. A flex resides in 3-D, though it may be attached to
a moving object. One or more curves will be attached to the flex; those curves will be
analyzed to determine the 3-D shape of the flex.
Rough-in. Placing control-point keys periodically and approximately.
Tuning a curve. Adjusting a curve so it matches edges exactly.
Overview
Here’s the overall process for using the curve and flex system to determine a 3-D
curve. The quick synopsis is that we will get the 2-D curves positioned exactly on each
frame throughout the shot, then run a 3-D solving stage. Note that the ordering of the
steps can be changed around a bit, and additional wrinkles added, once you know what
you are doing — this is the simplest and easiest to explain.
1. Open the shot in SynthEyes.
2. Obtain a 3-D camera solution, using automatic or supervised tracking.
3. At the beginning of the shot, create a (2-D) curve corresponding to the flex-to-
be.
4. “Rough-in” the path of the curve, with control-point animation keys throughout
the shot. There is a tool that can help do this, using the existing point
trackers.
5. Tune the curve to precisely match the underlying edges (manual or
automatic).
6. Draw a new flex in an approximate location. Assign the curve to it.
7. Configure the handling of the ends of the flex.
8. Solve the flex.
9. Export the flex or convert it to a series of trackers.
Flexes and curves are not closed like the letter O — they are open like the letter
U or C. Also, they do not contain corners, like a V. Nor do they contain tangency
handles, since the curvature is controlled by SynthEyes.
Generally, the curve will be set up to track a fairly visible edge in the image. Very
marginal edges can still be used and solved to produce a flex, if you are willing to do the
tracking by hand.
flex, you should open the Flex Control Panel, which contains both flex and curve
controls, and select the camera view.
Click the New Curve button, then, in the Camera View, click along the section of
curve to be tracked, creating control points as you go. Place additional control points in
areas of rapid curvature, and at extremal points of the curve. Avoid areas where there
is no trackable edge, if possible.
When you have finished with the last control point, right-click to exit the curve
creation mode.
The first field asks how many trackers must be valid for the roughing process to
continue. In this case, 5 trackers were selected to start. As shown, it will continue even
if only one is valid. If the value is raised to 5, the process will stop once any tracker
becomes invalid. If only a few trackers are valid (especially fewer than 4), less useful
predictions of the curve shape can be made.
The Key every N frames setting controls how often the curve is keyed. At the
default setting of 1, a key will be placed at every frame, which is suitable for a hand-held
shot, but less convenient to subsequently refine. For a smooth shot, a value of 10-20
might be more appropriate.
The Rough Curve Importer will start at the current frame, and begin creating keys
every so often as specified. It will stop if it reaches the end of the shot, if there are too
few trackers still valid, or if it passes by any existing key on the curve. You can take
advantage of this last point to “fill in” keys selectively as needed, using different sets of
trackers at different times, for example.
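One way such a prediction could work, purely as an illustration, is to fit a 2-D similarity transform (rotation, scale, translation) to the valid trackers' motion between frames and apply it to the curve's control points; complex arithmetic makes the least-squares fit compact. The actual Rough Curve tool's math is not specified here, so treat this as a sketch:

```python
def rough_in_step(prev_pts, cur_pts, curve_pts):
    """Predict new 2-D curve control points from tracker motion by
    fitting a least-squares similarity transform between the trackers'
    previous and current positions, using complex numbers. Sketch only;
    the actual tool's prediction may differ."""
    zp = [complex(x, y) for x, y in prev_pts]
    zc = [complex(x, y) for x, y in cur_pts]
    mp = sum(zp) / len(zp)
    mc = sum(zc) / len(zc)
    # least-squares a in (zc - mc) ~= a * (zp - mp)
    num = sum((c - mc) * (p - mp).conjugate() for c, p in zip(zc, zp))
    den = sum(abs(p - mp) ** 2 for p in zp)
    a = num / den
    moved = [a * (complex(x, y) - mp) + mc for x, y in curve_pts]
    return [(w.real, w.imag) for w in moved]
```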
After you’ve used the Rough Curve Import tool, you should scrub through the
shot to look for any places where additional manual tweaking is required.
The curve may go offscreen or be obscured. If this happens, you can use the
curve Enable checkbox to disable the curve. Note that it is OK if the curve goes partly
offscreen, as long as there is enough information to locate it while it is onscreen.
Curve Tuning
Once the curve has been roughed into place, you’re ready to “tune” it to place it
more accurately along the edge. Of course, you can do this all by hand, and in adverse
conditions, that may be necessary. But it is much better to use the automated Tune
tool.
You can tune either a single frame, with the Tune button, or all of the frames
using, of course, the All button. When a curve is tuned on a frame, the curve control
points will latch onto the nearby edge.
For this reason, before you begin tuning, you may wish to create additional
control points along the curve, by shift-clicking it.
The All button will bring up a control panel that controls both the single- and
multi-frame tuning. If you want to adjust the parameters without tuning all the frames,
simply close the dialog instead of hitting its Go button.
You can adjust to edges of different widths, control the distance within which the
edge is searched, and alter the trade-off between a large distant edge, and a smaller
nearby one. Clearly, it is going to be easier to track edges with no nearby edges of
similar magnitude.
The control panel allows you to tune all frames (potentially just those within the
animation playback range), only the frames that already have keys (to tune your
roughed-in frames), or only the frames that do not have keys (to preserve your
previously-keyed frames).
You can also tell the tracking dialog to use the tuned locations as it estimates
(using splining) where the curve is in subsequent frames, by turning on the Continuous
Update checkbox. If you have a simple curve well-separated from confounding factors,
you can use this feature to track a curve through a shot without roughing it in first. The
drawback of doing this is that if the curve does get off course, you can wind up with
many bad keys that must be repaired or replaced. [You can remove erroneous keys
using Truncate.] With the Continuous Update box off, the tuning process is more
predictable, relying solely on your roughed-in animation.
Flex Creation
With your curve(s) complete, you can now create a flex, which is the 3-D splined
curve that will be made to match the curve animation. The flex will be created in 3-D in a
position that approximately matches its actual position and shape. It is usually most
convenient to open the Quad view, so that you can see the camera view at the same
time you create the flex in one of the 3-D views (such as the Top view).
Click the New Flex button, then begin clicking in the chosen 3-D view to lay out a
succession of control points. Right-click to end the mode. You can now adjust the flex
control points as needed to better match the curve. You should keep the flex somewhat
shorter than the curve.
To attach the curve to the flex, select the curve in the camera view, then, on the
flex control panel, change the parent-flex list box for the curve to be your flex. (Note: if
you create a flex, then a curve while the flex is still selected, the curve is automatically
connected to the flex.)
Flex Endpoints
The flex’s endpoints must be “nailed down” so that the flex cannot simply shrivel up
along the length of the curve, or pour off the end. The ends are controlled by one of
several different means:
1. the end of the flex can stay even with its initial position,
2. the end of the flex can stay even with a specific tracker, or
3. the end of the flex can exactly match the position of a tracker.
The first method is the default. The last method is possible only if there is a
tracker at the desired location; this arises most often when several lines intersect. You
can track the intersection, then force all of the flexes to meet at the same 3-D location.
To set the starting or ending tracker location for a flex, click the Start Pt or End Pt
button, then click on the desired tracker. Note that the current 3-D location of the tracker
will be saved, so if you re-track or re-solve, you will need to reset the endpoint.
The flex will end “even” with the specified point, meaning that the point lies in the
plane perpendicular to the flex at its end. To match the position exactly, turn on the Exact
button.
Flex Solving
Now that you’ve got the curve and flex set up, you are ready to solve. This is very
easy — click the Solve button (or Solve All if you have several flexes ready to be
solved).
After you solve a flex, the control points will no longer be visible—they are
replaced by a more densely sampled sequence of non-editable points. If you want to get
back to the original control points to adjust the initial configuration, you can click Clear.
Flex Exports
Once you have solved the flex, you can export it. At present, there are two
principal export paths. The flexes are not currently exported as part of regular tracker
exports.
First, you can convert the flex into a sequence of trackers with the Convert Flex
to Trackers script on the Script menu. The trackers can be exported directly, or, more
usefully, you can use them in the Perspective window to create a mesh containing those
trackers. For example, on a building project where the flex is the edge of the road, you
can create a ground mesh to be landscaped, and still have it connect smoothly with the
road, even if the road is not planar.
Second, you can export the coordinates of the points along the flex into a text file
using the Flex Vertex Coordinates exporter. Using that file is up to you, though it should
be possible to use it to create paths in most packages.
Image Storage
First, you want to get the shot into RAM. Clearly, having a lot of RAM will help.
If your shot does not fit, you have two primary options: use the small playback-
range markers on the SynthEyes time bar to play back a limited range of the shot at a
time, or reduce the amount of memory used by down-sampling the images in the
SynthEyes image preprocessor (or maybe drop to black/white). If you have 4K film or
RED scans and are playing back on a 2K monitor, you might as well down-sample by 2x
anyway.
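The savings from down-sampling are easy to quantify; here is a rough sketch (assuming 8-bit RGB at 3 bytes per pixel, which is an illustrative simplification, not SynthEyes's actual internal format):

```python
# Rough RAM estimate for caching a shot, assuming 8-bit RGB (3 bytes/pixel).
def shot_ram_bytes(width, height, frames, bytes_per_pixel=3):
    return width * height * frames * bytes_per_pixel

full_4k = shot_ram_bytes(4096, 2160, 500)   # 4K scan, 500 frames
half_2k = shot_ram_bytes(2048, 1080, 500)   # same shot down-sampled by 2x

# Down-sampling by 2x in each dimension cuts the footprint to one quarter.
print(full_4k / 2**30, "GiB vs", half_2k / 2**30, "GiB")
```

As the numbers show, a single 2x down-sample turns an unmanageable cache into a quarter-size one, which is why it is a good first resort for 4K material on a 2K monitor.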
If you have a RAID array on your computer, SynthEyes’s sophisticated image
prefetch system should let you pull large sequences rapidly from disk.
Actual-Speed Playback
Once you have your shot playing back as rapidly as possible, you probably want
it to play at the desired rate, typically 24, 25, or 29.97 fps.
You can tell SynthEyes to play back at full speed, half speed, quarter speed, or
double actual speed using the items on the View menu.
SynthEyes does not change your monitor display rate. It achieves your desired
frame rate by playing frames as rapidly as possible, duplicating or dropping frames as
appropriate (much like a film projector double-exposes frames). The faster the display
rate, the more accurately the target frame rate can be achieved, with less jitter.
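The duplicate-or-drop scheme can be sketched as follows (a simplified model of the idea, not SynthEyes's actual implementation): each display refresh shows whichever source frame the target rate calls for at that moment.

```python
# Map display refreshes to source frame numbers, duplicating source frames
# when the display is faster than the target rate and dropping them when
# it is slower (simplified model of playback rate conversion).
def playback_schedule(display_hz, target_fps, n_display_frames):
    return [int(i * target_fps / display_hz) for i in range(n_display_frames)]

# 60 Hz display, 24 fps target: source frames repeat in a 3, 2, 3, 2 pattern.
print(playback_schedule(60.0, 24.0, 10))
```

The faster the display refresh relative to the target rate, the finer this schedule's timing resolution, which is why a faster display rate yields less jitter.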
With the control panel hidden, you should use the space bar to start and stop
playback, and shift-A to rewind to the beginning of the shot.
Safe Areas
You can enable one or more safe-area overlays from the safe area submenu of
the View menu.
sometimes an incomplete set of rotation constraints. You might also consider flipping on
the Slow but sure box, or give a hint for a specific camera motion, such as Left or Up.
Eliminate inconsistent constraints as a possibility by turning off the Constrain checkbox.
Object Mode Track Looks Good, but Path is Huge. If you’ve got an object
mode track that looks good---the tracker points are right on the tracker boxes---but the
object path is very large and flying all over the place, usually you haven’t set up the
object’s coordinate system, so by default it is the camera position, far from the object
itself. Select one tracker to be the object origin, and use two or more additional ones to
set up a coordinate system, as if it was a normal camera track.
Master Reset Does Not Work. By design, the master reset does not affect
objects or cameras in Refine or Refine Tripod mode: they will have to be set back to
their primary mode anyway, and this prevents inadvertent resets.
Can’t open an image file or movie. Image file formats leave room for
interpretation, and from time to time a particular program may output an image in a way
that SynthEyes is not prepared to read. SynthEyes is intended for RGB formats with 8
or more bits per channel. Legacy or black-and-white formats will probably not be readable. If
you find a file you think should read, but does not, please forward it to SynthEyes
support. Such problems are generally quick to rectify, once the problematic file can be
examined in detail. In the meantime, try a different file format, or different save options,
in the originating program, if possible, or use a file format converter if available. Also,
make sure you can read the image in a different program, preferably not the one that
created it: some images that SynthEyes “couldn’t read” have turned out to be corrupted
previously.
Can’t delete a key on a tracker (i.e. by right-clicking in the tracker view window,
or right-clicking the Now button). If the tracker is set to automatically key every 12
frames, and this is one of those keys, deleting it will work, but SynthEyes will
immediately add a new key! Usually you want to back up a few frames and add a
correct key; then you can delete or correct the original one. Or, increase the auto-key
setting. Also, you cannot delete a key if the tracker is locked.
Crashes
We work hard to make sure that SynthEyes and Synthia are as reliable as
possible. If there is a problem, we want not only to minimize lost work, but also very
much appreciate your providing as much information as possible, so that we can
isolate and eliminate the cause of the problem.
For more information, see https://www.ssontech.com/faqs/crash.html
Catastrophe: Out of Memory!
With modern 64-bit processors and operating systems, actually running out of
memory is difficult, since the operating system simply assigns more virtual memory, and
the application runs slower and slower while the disk runs faster and faster
("thrashing").
If you get an out-of-memory error, it probably means one of the following:
Your machine doesn't have enough free space on your system disk for all
the virtual memory required by SynthEyes with other apps.
You've asked for something requiring a very large amount of memory, such as a
huge image.
There's a SynthEyes bug.
The first item is not widely understood. You'll need to free up space on the drive, or
reduce the amount of RAM cache that SynthEyes uses.
Recovering Crash Dumps
Dump files can be packaged up and emailed to support@ssontech.com along
with a step-by-step description of what you were doing immediately before the crash. If
the file is more than 5 MB or so, please use Dropbox etc.
Windows: Crash dump files for the last 3 crashes can be retrieved by opening
SynthEyes and selecting the File/User Data Folder menu item. Go UP two levels to
the AppData folder, then DOWN into the Local, SynthEyes, and CrashDumps
folders, in that order.
macOS: When the Mac crash dialog appears, click the See Report button. Then
command-A and command-C to copy all of the report onto the clipboard. Paste it
into the email message.
Linux: The location or existence of the core dump file depends on your particular
system and its settings. You can package up core dumps if you can generate them.
SNI File Recovery
In the event that SynthEyes detects an internal error, it will usually pop up an
Imminent Crash dialog box asking you if you wish to save a crash sni file. You should
take a screen capture with Print Screen on your keyboard, then respond Yes.
SynthEyes will save the current file, then pop up another dialog box that tells you the
location.
You should then open a paint program such as Photoshop, Microsoft Paint, Paint
Shop Pro, etc, and paste in the screen capture. Save the image to a file, then e-mail the
screen capture, the crash save file, and a short description of what you were doing right
before the crash, to SynthEyes technical support for diagnosis, so that the problem can
be fixed in future releases. If you have Microsoft’s Dr. Watson turned on, forwarding that
file would also be helpful.
Crash files will be stored in the same folder as the sni file itself, with "_recovered"
appended. For example, if you've saved the scene as shot27B.sni, the file
shot27B_recovered.sni will be created (or overwritten) in the event of a crash. If you've
not yet saved the scene, the name of the shot imagery will be used; for example
shot9take3.mov becomes shot9take3_recovered.sni in the folder with the imagery. If
there is no imagery, or the folder can't be written, the file crash.sni will be created in
your File/User Data Folder.
The crash save file (crash.sni) is your SynthEyes scene, right before it began the
operation that resulted in the crash. You can restart SynthEyes then open the crash file.
You should often be able to continue using this file, especially if the crash occurred
during solving. It is conceivable that the file might be corrupted, so if you recently had
saved the file, you may wish to go back to that file for safety.
File/Merge
After you start File/Merge and select a file to merge, you will be asked whether or
not to rename the trackers as necessary, to make them unique. If the current scene has
Camera01 with trackers Tracker01 to Tracker05, and the scene being merged also has
Camera01 with trackers Tracker01 to Tracker05, then answering yes will result in
Camera01 with Tracker01 to Tracker05 and Camera02 with Tracker06 to Tracker10. If
you answer no, Camera01 will have Tracker01 to Tracker05 and Camera02 will also
have (different) Tracker01 to Tracker05, which is more confusing to people than
machines.
As that example shows indirectly, cameras, objects, meshes, and lights are
always renamed to be unique. Renaming is always done by appending a number: if the
incoming and current scenes both have a TrashCan, the incoming one will be renamed
to TrashCan1.
If you are combining a shot with a previously-tracked reference, you will probably
want to keep the existing tracker names, to make it easiest to find matching ones.
Otherwise, renaming them with yes is probably the least confusing unless you have a
particular knowledge of the TrackerNN assignments (in which case, giving them actual
names such as Scuff1 is probably best).
You might occasionally track one portion of a shot in one scene file, and track a
different portion of the same shot in a separate file. You can combine the scene files
onto a single camera as follows:
1. Open the first shot
2. File/Merge the second shot.
3. Answer yes to make tracker names unique (important!)
4. Select Camera02 from the Shot menu.
5. Hit control-A to select all its trackers.
The first three fields control the range of frames to be exported, in this case,
frames 10 through 15. The offset allows the frame number in the file to be somewhat
different, for example, -10 would make the first exported frame appear to be frame
zero, as if frame 10 was the start of the shot.
The next four fields, two scales and two offsets, manipulate the horizontal (U)
and vertical (V) coordinates. SynthEyes defines these to range from -1 to +1, running
from left to right and from top to bottom. Each coordinate is multiplied by its scale, and then the
offset added. The normal defaults are scale=1 and offset=0. The values of 0.5 and 0.5
shown rework the ranges to go from 0 to 1, as may be used by other programs. A scale
of -0.5 would change the vertical coordinate to run from bottom to top, for example.
The scales and offsets can be used for a variety of fixes, including changes in the
source imagery. You’ll have to cook up the scale and offset on your own, though. Note
that if you are writing a tracker file on SynthEyes and will then read it back in with a
transform, it is easiest to write it with scale=1 and offset=0, then make changes as you
read in, since if you need to try again you can retry the import, without having to
reexport.
Continuing with the controls, Even when missing causes a line to be output
even if the tracker was not found in that frame. This permits a more accurate import,
though other programs are less likely to understand the file. Similarly, the Include
Outcome Codes checkbox controls whether or not a small numeric code appears on
each line that indicates what was found; it permits a more accurate import, though it is
less likely to be understood elsewhere.
The 2-D tracks box controls whether or not the raw 2-D tracking data is output;
this is not mandatory, as you’ll see.
The 3-D tracks box controls whether or not the 3-D path of each tracker is
included―this will be the 2-D path of the solved 3-D position, and is quite smooth. In
the example, 3-D paths are exported and 2-D paths are not, which is the reverse of the
default. When the 3-D paths are exported, an extra Suffix for 3-D can be added to the
tracker names; usually this is _3D, so that if both are output, you can tell which is which.
Finally, the Extra Points box controls whether or not the 2-D paths of any extra
helper points in the scene are output.
Importing
The File/Import/Import 2-D Tracker Paths import can be used to read the output
of the 2-D exporter, or from other programs as well. The import script offers a similar set
of controls to the exporter:
The import runs roughly in reverse of the export. The frame offset is applied to
the frame numbers in the file, and only those within the selected first and last frames are
stored.
The scale and offset can be adjusted; by default they are 1 and 0 respectively.
The values of 2 and -1 shown undo the effect of the 0.5/0.5 in the example export panel.
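To see why 2 and -1 undo the 0.5/0.5 export, compose the two linear transforms (function names are illustrative):

```python
def export_uv(u):
    # Export with scale=0.5, offset=0.5: maps -1..+1 to 0..1.
    return u * 0.5 + 0.5

def import_uv(v):
    # Import with scale=2, offset=-1: maps 0..1 back to -1..+1.
    return v * 2.0 - 1.0

# The round trip restores the original coordinate.
for u in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert import_uv(export_uv(u)) == u
```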
If you are importing several different tracker data files into a single moving object
or camera, you may have several different trackers all named Tracker1, for example,
and after combining the files, this would be undesirable. Instead, by turning on Force
unique names, each would be assigned a new unique name. Of course, if you have
done supervised tracking in some different files to combine, you might well leave it off,
to combine the paths together.
If the input data file contains data only for frames where a tracker has been
found, the tracker will still be enabled past the last valid frame. By turning on Truncate
enables after last, the enable will be turned off after the last valid frame.
After each tracker is read, it is locked. You can unlock and modify it as
necessary. The tracking data file contains only the basic path data, so you will probably
want to adjust the tracker size, search size, etc.
If you will be writing your own tracker data file for this script to import, note that
the lines must be sorted so that the lines for each specific tracker are contiguous, and
sorted in order of ascending frame number. This convention makes everyone’s scripts
easier. Also, note that the tracker names in the file never contain spaces; any spaces
will have been changed to underscores.
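If you generate such a file with your own script, the grouping, sorting, and underscore substitution can be done in one pass; a minimal sketch (the four-column name/frame/U/V layout here is an assumption for illustration, not a specification of the exact file format):

```python
# Prepare tracker-file lines: group lines per tracker contiguously, sort by
# ascending frame within each tracker, and replace spaces in names with
# underscores.
def prepare_lines(rows):
    # rows: iterable of (tracker_name, frame, u, v) tuples
    cleaned = [(name.replace(" ", "_"), frame, u, v)
               for name, frame, u, v in rows]
    cleaned.sort(key=lambda r: (r[0], r[1]))   # tracker name first, then frame
    return ["%s %d %g %g" % r for r in cleaned]

lines = prepare_lines([("Scuff 1", 12, 0.1, 0.2),
                       ("Door", 11, 0.3, 0.4),
                       ("Scuff 1", 11, 0.05, 0.15)])
for line in lines:
    print(line)
```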
Like the 2-D exporter, the File/Export/Plain Camera Path exporter provides a
variety of options:
First Frame. First frame to export
Last Frame. Last frame to export.
Frame Offset. Add this value to the frame number before storing it in the file.
World Scaling. Multiplies the X,Y, Z coordinates, making the path bigger or smaller.
Axis Mode. Radio-buttons for Z Up; Y Up, Right; Y Up, Left. Adjust to select the desired
output alignment, overriding the current SynthEyes scene setting.
Rotation Order. Radio buttons: XYZ or ZXY. Controls the interpretation of the 3 rotation
angles in the file.
Zoom Channel. Radio buttons: None, Field of View, Vertical Field of View, Focal
Length. Controls the 7th data channel, namely what kind of field of view data is
output, if any.
Look the other way. SynthEyes camera looks along the –Z axis; some systems have
the camera look along +Z. Select this checkbox for those other systems.
The 3-D path importer, File/Import/Camera/Object Path, has the same set of
options. Though this seems redundant, it lets the importer read flexibly from other
packages. If you are writing from SynthEyes and then reading the same data back in,
you can leave the settings at their defaults on both export and import (unless you want
to time-shift too, for example). If you are changing something, usually it is best to do it
on the import, rather than the export.
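The axis-mode option amounts to an axis permutation. As an illustration of the general idea (my sketch of one common Z-up to Y-up mapping, not necessarily SynthEyes's exact convention):

```python
# Convert a Z-up point (X right, Y forward, Z up) to a right-handed Y-up
# frame (X right, Y up, Z toward the viewer): the new Y is the old Z, and
# the new Z is the negated old Y.
def z_up_to_y_up(p):
    x, y, z = p
    return (x, z, -y)

print(z_up_to_y_up((1.0, 2.0, 3.0)))   # (1.0, 3.0, -2.0)
```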
3 4 12 LightPole
When importing trackers, the coordinates are automatically set up as a seed
position on the tracker. You may want to change it to a Lock constraint as well. If a
tracker of the given name does not exist, a new tracker will be created.
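A line such as "3 4 12 LightPole" above appears to carry the X, Y, and Z coordinates followed by the tracker name; parsing it is straightforward (the column interpretation is an assumption based on the example line):

```python
# Parse an "X Y Z name" tracker coordinate line, e.g. "3 4 12 LightPole".
def parse_seed_line(line):
    x, y, z, name = line.split(None, 3)
    return name, (float(x), float(y), float(z))

name, pos = parse_seed_line("3 4 12 LightPole")
print(name, pos)   # LightPole (3.0, 4.0, 12.0)
```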
Details
SynthEyes uses two folders for batch file processing: an input folder and an
output folder. Submit for ... places them into the input folder; exports are written to the
exports folder, completed files are written to the output folder, and the input file
removed. You can open those folders via items in the Batch Processing submenu. You
can set the location of the input, export, and output folders from the Preferences panel.
If you want to examine it specifically, without exporting, enter the scene's
export type into Synthia, and to set it, something like make the scene's export
type `Filmbox FBX`. Be sure to use the back quotes (indicating a literal string)
around the export name. You can make the scene's export type `None` to
prevent any export.
Thanks for reading this far!
System Requirements
Installation and Registration
Customer Care Features and Automatic Update
Menu Reference
Control Panel Reference
Additional Dialogs Reference
Viewport Features Reference
Perspective Window Reference
Overview of Standard Tool Scripts
Preferences and Scene Settings Reference
Keyboard Reference
Viewport Layout Manager
Support
Mac OS X
macOS 10.13 (High Sierra), 10.12 (Sierra), 10.11 (El Capitan), 10.10 (Yosemite),
or 10.9 (Mavericks).
Intel Mac (64-bit).
2 GB RAM minimum. 4+ GB are strongly suggested. 8-32+ GB are suggested for
pro, 360VR, and film users.
3 button mouse with middle scroll wheel/button. See the viewport reference
section for help using a trackball or Microsoft Intellipoint mouse driver.
1024x768 or larger display, 32 bit color, with OpenGL support. Large multi-head
configurations require graphics cards with sufficient memory.
Approximately 50 MB disk space to install. Tutorial and other learning materials
are separate downloads.
A supported 3-D animation or compositing package to export paths and points to.
Can be on a different machine, even a different operating system, depending on
the target package.
A user familiar with general 3-D animation techniques such as key-framing.
Linux
Redhat/CentOS 6.4+ or Ubuntu 12.04 and 14.04 LTS. Other versions are likely to
work but are not officially supported. See the website's Linux page for the latest
information.
Interchange
The different platforms can read each other's files in general, though there may
be differences due to character encoding (iso-latin-1 vs UTF-8). Note that Windows,
Linux, and macOS licenses must be purchased separately; normal licenses are not
cross-platform.
Windows Installation
Please uninstall SynthEyes Demo before installing the actual product.
Run the installer such as Syn####Pro64Setup.exe (with variations such as
Syn####Intro32Setup.exe etc), where #### is a version number, such as 1604.
You can install to the default location, or any convenient location. The installer
will create shortcuts on the desktop for the SynthEyes program and HTML
documentation.
If you have a trackball or tablet, you may wish to turn on the No middle-mouse
preference setting to make alternate mouse modes available. See the viewport
reference section. You should turn on Enhance Tablet Response if you have trouble
stopping playback or tracking (Wacom appears to have fixed the underlying issue in
recent drivers, so getting a new tablet driver may be another option.)
Proceed to the Registration section below.
Windows - QuickTime
If you have shots contained in QuickTime™ (Apple) movies (ie .mov files), you
must have Apple’s QuickTime installed on your computer. If you use a capture card that
produces QuickTime movies, you will already have QuickTime installed. SynthEyes can
also produce preview movies in QuickTime format.
You can download QuickTime from
http://www.apple.com/quicktime/download/
QuickTime Pro is not required for SynthEyes to read or write files.
Note that at present Apple does not offer a 64-bit version of QuickTime, so
64-bit SynthEyes reads QuickTime files through a separate 32-bit server process.
Mac OS X Installation
1. Download the Syn####Pro.dmg or Syn####Intro.dmg file (where #### is the
version number, such as 1504) to a convenient location on the Mac.
2. Double-click it to open it and expose the SynthEyes installation package.
3. Double-click the installer to run it.
4. Proceed through a normal install; you will need root permissions.
5. Eject the .dmg file from the finder; it will be deleted.
6. Start SynthEyes from your Applications folder. You can create a shortcut on your
desktop if you wish.
7. Proceed to the Registration directions below.
Note that pictures throughout this document are based on the Windows version;
the Mac version will be very similar. In places where an ALT-click is called for on
Windows, a Command-click should be used on the Mac, though these should be
indicated in this manual.
If you have a trackball or Microsoft’s Intellipoint mouse driver, you may wish to
turn on the No middle-mouse preference setting to make alternate mouse modes
available. See the viewport reference section. You should turn on Enhance Tablet
Response if you have trouble stopping playback or tracking (Wacom appears to have
fixed the underlying issue in recent drivers, so getting a new tablet driver may be
another option.)
Linux Installation
You can double-check the current installation instructions and notes on the
website's Linux page.
1. Unpack the .gz file into any convenient folder. The details may vary with your
browser and system; for example, you might double-click the .gz file, then drag the
whole SynthEyes folder inside it onto your desktop. While it's possible to run from
there for quick testing, the following installation steps are necessary for SynthEyes
to appear on the menus, have file icons, be available to other users, etc.
2. Open the Terminal window.
3. Use "cd" to go to the unpacked folder.
4. If you are using Kubuntu, edit SynthEyes.sh to uncomment the
UBUNTU_MENU_PROXY line. You can also do this if you don't want to use Unity's
global menu for SynthEyes.
5. Type "sudo ./install.sh" and hit enter. You'll need to type in your password before the
script will run. If you aren't on your system's sudoer list, you'll need someone who
is, or you'll need the superuser password. If your system does not use sudo, type
"su" and hit enter to become superuser instead.
Registration
When you first start SynthEyes, a form will appear for you to enter registration
information. (Floating licenses: see the SyFlo documentation for alternate directions.) If
you’ve entered the temporary authorization data already, you can access the
registration dialog from the Help/Register menu item.
Important: if you have renewed your existing license, you still need to
register. Click "Register" on the Help menu to access this panel.
Proceed as follows:
1. Use copy and paste to transfer the entire serial number (looks like S6-
1601-12345-6789X, starting with SN- for Windows Intro, S6- for Windows
Pro, IM- for macOS Intro, M6- for macOS Pro, L6- for Linux Pro, or C6- for
cross-platform Pro) from the email confirmation of your purchase to the
form.
2. Fill out the remainder of the form. Sorry if this seems redundant to the
original order form, but it is necessary. This data should correspond to the
user of the software. If the user has no clear relationship to the purchaser
(a freelancer, say), please have the purchaser email us to let us know, so
we don’t have to check to see who to issue the license to.
3. Hit Register, and SynthEyes will place a block of data onto the clipboard.
Be sure to hit Register, not the other button; this is a frequent cause of
confusion, simple though it may seem. If you receive a message about
filling out all the fields, be sure to check them and supply full, accurate,
answers, paying special attention to those marked with an asterisk (*).
4. Create an email message entitled “SynthEyes Registration” addressed to
register@ssontech.com. Please use your normal email address and
program and send plain-text emails. Some web mail programs can recode
mails in ways that are difficult to read. Click inside the new-message
window’s text area, then hit control-V (command-V on Mac) to paste the
information from SynthEyes into the message. If the email is blank, or
contains temporary authorization information, be 300% sure you clicked
Register after filling out the registration form.
5. If you are re-registering, after getting a new workstation, say, or are not
the person originally purchasing the software, please add a remark to that
effect to the mail.
6. Send the e-mail. Please use an email address for the organization owning
the license, not a personal gmail, hotmail, etc address. We cannot send
confidential company authorization data to your personal email; use of
personal emails frequently causes problems for license owners.
7. You will receive an e-mail reply, typically the next business day, containing
the authorization data. Be sure to save the mail for future reference.
Authorization
You'll use this authorization procedure twice: once to install the temporary license
that you receive initially, so you can get tracking right away; and a second time to install
the permanent license.
1. View the email containing the authorization data.
2. Highlight the authorization information — everything from the left
parenthesis “(” to the right parenthesis “)”, including both
parentheses — in your e-mail program, and select Edit/Copy in your mail
program. Note: the serial number (SN-, IM-, etc) is not part of the
authorization data but is included above it only for reference, especially for
multiple licenses.
3. Start SynthEyes. If the registration dialog box appears, click the Install
License from Clipboard button. If your temporary registration is still
active, the registration dialog will not appear, so click Help/Authorize
instead.
4. Windows: the User Account Control popup will appear; click Yes to allow
SynthEyes to work.
5. A “Customer Care Login Information” dialog will appear. (If you have a
temporary license, just hit Cancel on this panel.) It will be pre-filled
with your serial number and the support login and password that came in the
email with the authorization data. (The user ID looks like jan02, and the
password looks like jm323kx—these two will not work, use the ones from
your mail.) We just point these out so you know what they are, and where
they go—they are also used for logging into the customer-only area of the
website. You can change the update check frequency if desired, then click
OK.
6. SynthEyes will acknowledge the data, then exit. When you restart it, you
should see your permanent information listed on the splash screen, and
you’re ready to go.
Windows Uninstallation
Like other Windows-compatible programs, use the Add/Remove Programs tool
from the Windows Control Panel to uninstall SynthEyes.
Mac Uninstallation
Delete the folders /Applications/SynthEyes and (if desired) your preferences etc. in
/Users/YourName/Library/Application Support/SynthEyes
Linux Uninstallation
Delete the directories /opt/SynthEyes and optionally ~/.SynthEyes
SynthEyes automatically checks for updates when it starts up, each time in “on
startup” mode, but only the first time each day in “daily” mode. The check is performed
in the background, so that it does not slow you down.
You can easily check for updates manually, especially if you are in “never” mode.
Click the D/L button on the main toolbar, or Help/Check for updates.
Automatic Downloads
SynthEyes checks to determine the latest available build on the web site. If the
latest build is more current than its own build, SynthEyes begins a download of the new
version. You'll see the new build number displayed in the button. The download takes
place in the background as you use SynthEyes. The D/L button will be Yellow during the
download.
Once the download is complete, the D/L button will turn green. When you have
reached a convenient time to install the new version, click the D/L button or select the
Help/Install Updated menu item. After making sure your work is saved, and that you are
ready to proceed, SynthEyes closes and opens the folder containing the new installer.
Depending on your system and security settings, the installer may or may not
start automatically. If it does not start automatically, click it to begin installation.
If there isn't usable update information configured via Help/Set Update Data, or
there is some other connection problem (firewall, no Wifi etc), the D/L button will turn
red.
The D/L button will turn blue with a build number displayed if a new build is
available, but you need to renew support to be able to download and use it.
The same process occurs when you check for updates manually by clicking the
D/L button, with a few more explanatory messages.
Suggestions
We maintain a feature-suggestion system to help bring you the most useful and
best-performing software possible. Click the Sug button on the toolbar, or Help/Suggest
a Feature menu item.
This miniature forum not only lets you submit requests, but comment and vote on
existing feature suggestions. (This is not the place for technical support questions,
however; please don’t clog it up with them.)
Demo version customers: this area is not available. Send email to support
instead. Past experience has shown that most suggestions from demo customers are
already in SynthEyes; please be sure to check the manual first!
Web Links
The Help menu contains a number of items that bring up web pages from the
https://www.ssontech.com web site for your convenience, including the main home
page, the tutorials page, and the forum.
E-Mail Links
The Help/Tech Support Mail item brings up an email composition window
preaddressed to technical support. Please investigate matters thoroughly before
resorting to this, consulting the manual, tutorials, support site, and forum.
If you do have to send mail, please include the following:
Your name and organization
An accurate subject line summarizing the issue
A detailed description of your question or problem, including information
necessary to duplicate it, preferably from File/New
Screen captures, if possible, showing all of SynthEyes.
A .sni scene file, after Clear All Blips, and ZIPped up (not RAR).
The better you describe what is happening, the quicker your issue can be
resolved.
Help/Report a Credit brings up a preaddressed email composition window so
that you can let us know about projects that you have tracked using SynthEyes, so we
can add them to our “As Seen On” web page. If you were wondering why your great
new project isn’t listed there… this is the cure.
Find New Scripts. Causes SynthEyes to locate any new scripts that have been placed
in the script folder since SynthEyes started, making them available to be run.
Tidy Up Scripts. Opens the Tidy Up Scripts dialog to look for redundant older scripts in
the user or system script area.
Export Again. Redoes the last export, saving time when you are exporting repeatedly
to your CG application.
Export Multiple. Exports the current scene to all the exporters currently configured for
multiple export.
Configure Multi-export. Brings up the Multiple Export Configuration dialog to configure
multiple exports.
File Info. Shows the full file name of the current file, its creation and last-written times,
full file names for all loaded shots, file names for all imported meshes (and the
time they were imported), and file names for mesh textures (being extracted or
only displayed). Plus, allows you to add your own descriptive information to be
stored in the file.
User Data Folder. Opens the folder containing preferences, the batch, script,
downloads, and preview movie folders, etc.
Make New Language Template. Creates a new XML file for user editing to customize
SynthEyes's dialogs and menus for non-English display. Enter the name of the
language, as it is to appear on the preferences panel, then select a file in which
to store the XML data (to be visible to SynthEyes, it must be in the system or
user script folders).
Submit for Batch. The current scene is submitted for batch processing by writing it into
the queue area. It will not be processed until the Batch Processor is running, and
there are no jobs before it.
Submit for Render. The current scene is submitted for batch processing: the Save
Sequence process will be run on the active shot to write out the re-processed
image sequence to disk as a batch task. Use the Save Sequence dialog to set up
the output file and compression settings first, close it without saving, then use
Submit for Render. You will be asked whether or not to output both image sequences
simultaneously for stereo. Other multiple-shot renderings can be obtained by
Sizzle scripting, or by submitting the same file several times with different shots
active.
Batch Process. SynthEyes opens the batch processing window and begins processing
any jobs in the queue.
Batch Input Queue. Opens a Windows Explorer to the batch input queue folder, so that
the queue can be examined, and possibly jobs removed or added.
Batch Output Queue. Opens a Windows Explorer to the batch output queue folder,
where completed jobs can be examined or moved to their final destinations.
Exporter Outputs. Opens a Windows Explorer to the default exporter folder.
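The batch items above describe a watch-folder pattern: jobs are scene files dropped into an input queue folder, the batch processor consumes them in order, and finished jobs land in an output queue folder. As a rough sketch of the pattern only (the folder layout and processing step here are hypothetical, not SynthEyes's actual implementation):

```python
import shutil
from pathlib import Path

def run_batch_pass(in_dir: Path, out_dir: Path) -> int:
    """Process every queued job file, oldest first, moving each to out_dir.

    Returns the number of jobs handled. This is only a stand-in for the
    idea behind the batch queues; the real processor does the solving work
    before moving the job to the output queue.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    jobs = sorted(in_dir.glob("*.sni"), key=lambda p: p.stat().st_mtime)
    for job in jobs:
        # ... real tracking/solving work would happen here ...
        shutil.move(str(job), str(out_dir / job.name))
    return len(jobs)
```

Because the queues are plain folders, jobs can be examined, removed, or added by hand, which is exactly what the Batch Input Queue and Batch Output Queue menu items expose.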
Edit Menu
Undo. Undo the last operation; the menu item changes to show what will be undone,
such as “Undo Select Tracker.” See the Undo button, which can be right-clicked
to open a menu allowing multiple operations to be undone at once.
Redo. Re-do an operation previously performed, then undone. See the Redo button,
which can be right-clicked to open a menu allowing multiple operations to be
redone at once.
Select same color. Select all the (un-hidden) trackers with the same color as the
one(s) already selected.
Select All, etc., affect the tracker selections, not objects in the 3-D viewports.
Invert Selection. Select unselected trackers, unselect selected trackers.
Clear Selection. Unselect all trackers.
Lock Selection. Lock the selection so it cannot be changed.
Delete. Delete selected objects and trackers.
Hide unselected. Hide the unselected trackers.
Hide selected. Hide the selected trackers.
Reveal selected. Reveal (un-hide) the selected trackers (typically from the lifetimes
panel).
Reveal nnn trackers. Reveal (un-hide) all the trackers currently hidden, i.e., nnn of them.
Flash selected. Flashes all selected trackers in the viewports, making them easier to
find.
Polygonal Selections. Lassos follow the mouse motion to create irregular shapes.
Rectangular Selections. Lassos sweep out a rectangular area.
Lasso Trackers. The lasso selects trackers.
Lasso Meshes Instead. The lasso selects meshes.
Edit Pivots. When checked, the pivot points of meshes and GeoH objects can be
moved in the 3-D and perspective windows. Hit-testing against the meshes is
disabled while active; only the pivot points or handles can be selected. A mesh's
pivot point is snapped to the center, sides, and corners of its bounding box, or to
any vertex if it is sufficiently close to that vertex in all three dimensions (i.e., not
just in a single 3-D viewport). You can adjust the snap distance in the Mesh area
of the preferences. Equivalent to menu items on the 3-D viewport and
perspective view's right-click menus, and the GeoH perspective toolbar.
Lock Children. Turn this on if you want to move a pivot without moving the pivots of its
children. Normally, when you move the parent, the children are carried along
automatically as a consequence of their hierarchy. If the child pivots are correct,
and only the parent needs adjustment, turning this on lets you do that, by
creating opposite motions for the child pivots, so they stay put. This setting is
shared system wide; you can see it on the main Edit menu and the perspective
and 3D views' right-click menu.
Adjust Rig. Normally, if you drag a GeoH object's handles in the perspective or 3D
views, only the unlocked joints move and receive keys. That prevents you from
messing up your carefully-constructed hierarchy. Turning on Adjust Rig lets you
move the rig in the perspective view, setting keys on all joints' lock values. This is
quite dangerous. This setting is shared system wide; you can see it on the main
Edit menu and the perspective and 3D views' right-click menu.
Update Textures Now. All mesh textures will be re-extracted from the shot.
Redo Textures at Solve. An enable control; when on, texture calculations will re-run
whenever the scene is solved.
Add Notes. Creates a new note in the camera view and brings up the Notes Editor to
configure it.
Spinal aligning. Sets the spinal adjustment mode to alignment.
Spinal solving. Sets the spinal adjustment mode to solving.
Edit Scene Settings affects the current scene only.
Edit Preferences contains some of the same settings; these do not affect the current
scene, but are used only when new scenes are created.
Reset Preferences. Set all preferences back to the initial factory values. Gives you a
choice of presets for a light- or dark-colored user interface, appropriate for office
or studio use, respectively.
Edit Keyboard Map. Brings up a dialog allowing key assignments to be altered.
View Menu
Reset View. Resets the camera view so the image fills its viewport.
Expand to Fit. Same as Reset View.
Reset Time Bar. Makes the active frame range exactly fill the displayable area.
Rewind. Set the current time to the first active frame.
To End. Set the current time to the last active frame.
Play in Reverse. When set, replay or tracking proceeds from the current frame towards
the beginning.
Frame by Frame. Displays each frame, then the next, as rapidly as possible.
Quarter Speed. Play back at one quarter of normal speed.
Half Speed. Play back at one half of normal speed.
Normal Speed. Play back at normal speed (i.e., the rated frames-per-second value),
dropping frames if necessary. Note: when the Tracker panel is selected,
playback is always frame-by-frame, to avoid skipping frames in the track.
Double Speed. Play back at twice normal speed, dropping frames if necessary.
Show Image. Turns the main image’s display in the camera view on and off.
Show Trackers. Turns on or off the tracker rectangles in the camera view.
Only Camera01’s trackers. Show only the trackers of the currently-selected camera or
object. When checked, trackers from other objects/cameras are hidden. The
camera/object name changes each time you change the currently-selected
object/camera on the Shot menu.
Only selected trackers. Shows only selected trackers and their stereo spouses; all
others are hidden. Along with the shift-O key, this gives a quick way to isolate
specific trackers during tracking. Note that you can still lasso unselected trackers,
which is an easy way to switch to other trackers without having to exit and re-
enter this mode.
The next items occur in the Tracker Appearance submenu.
Show All Tracker Names. When turned on, tracker names will be displayed for all
trackers in the camera, perspective, and 3D viewports.
Show Supervised Names. When turned on, tracker names will be displayed for (only)
the supervised trackers, in the camera, perspective, and 3D viewports.
Show Selected Names. The tracker names are displayed for each selected tracker.
Show Names in Viewport. The tracker names (as controlled by the above) are also
displayed in the 3D viewports.
Use alternate colors. Each tracker has two different colors. The alternate set is
displayed and editable when this menu item is checked, generally under control
of the Set Color by RMS Error script.
With Central Dot. Trackers are shown with a central dot in the camera view, even for
auto-trackers, offset trackers, and locked trackers, where the dot would not
normally be shown.
Show Only as Dots. All locked trackers are shown as solely a dot in the camera view,
reducing clutter.
Show as Dots in 3D Views. Trackers are shown as dots, not X's, in the 3D viewports.
Show as Dots in Perspective. Trackers are shown as dots, not X's, in the Perspective
view.
Show Tracker Trails. When on, trackers show a trail into the future (red) and past (blue).
Show 3-D Points. Controls the display of the solved position marks (X’s).
Show 3-D only if visible. When turned on, the 3-D points will be displayed (if enabled)
ONLY if the tracker is visible on the frame as well. This is helpful in decluttering
the camera view on long shots.
Show 3-D Seeds. Controls the display of the seed position marks (+’s).
Show axes for stereo links. When checked, the cyan axis mark indicating that a
tracker has a coordinate-system lock is shown for stereo trackers. Normally,
these axis marks are suppressed to reduce clutter. You might turn this on to help
identify unpaired trackers on a stereo camera.
Show Tracker Radar. A visualization tool that shows a circle at each tracker, reflecting
the tracker's error on the current frame.
Show Reference Crosshairs. Enables display of animatable (per-tracker) crosshairs
for use as a reference for the selected tracker. Default accelerator key: shift-+.
Show Planar 3D Pyramids. Enables the 3-D pyramid visualization display for 3-D
planar trackers.
Show Object Paths Submenu:
Show no paths. Paths aren't shown for any cameras or moving objects.
Show all paths. Paths are shown for all cameras and moving objects.
Show selected object. The path is shown for the selected camera or moving object, if
any.
Show selected and children. The paths are shown for the selected camera or moving
object, plus its GeoH children.
The following items resume in the main View menu.
Show Seed Paths. When on, values for the ‘seed’ path and field of view/focal length of
the camera and moving objects will be shown and edited. These are used for
“Use Seed Paths” mode and for camera constraints. When off, the solved values
are displayed.
Show Meshes. Controls display of object meshes in the camera viewport. Meshes are
always displayed in the 3-D viewports.
Solid Meshes. When on, meshes are solid in the camera viewport; when off, wireframe.
Meshes are always wireframe in the 3-D viewports.
Outline Solid Meshes. Solid meshes have the wireframe drawn on top, to better show
facet locations.
Cartoon Wireframe Meshes. A special wireframe mode where only the outer boundary
and any internal creases are visible, intended for helping align set and object
models.
Only texture alpha. When on, meshes will display the alpha channel of their texture,
instead of the texture itself, simplifying over-painting.
Shadows. Show ground plane or on-object shadows in perspective window. This
setting is sticky from SynthEyes run to run.
Show Lens Grid. Controls the display of the lens distortion grid (only when the Lens
control panel is open).
Show Notes. Enables or disables the display of notes in all camera views.
Timebar background. Submenu with the following two entries. There is a preference in
the User Interface section to control which is selected at startup.
Show cache status. The color of each frame of the timebar background depends on
whether the frame is in-cache or not (pink).
Show tracker count. The color of each frame of the timebar background depends on
the number of trackers active on that frame for the active object, following a
sequence corresponding to the graph editor background. Can reduce
performance if there are many trackers and frames.
OpenGL Camera View. When enabled, the camera view is drawn using OpenGL.
When off, built-in graphics are used, possibly with an assist from the Software
mesh render below. The fastest option will depend on your scene and mesh.
OpenGL 3-D Viewports. When enabled, the 3D viewports are drawn using OpenGL.
When off, built-in graphics are used, possibly with an assist from the Software
mesh render below. The fastest option will depend on your scene and mesh.
Software mesh render. Applies only to camera and 3D viewports that are not using
OpenGL. When on, 3D meshes are rendered using a SynthEyes-specific internal
software renderer. For contemporary multi-core machines, this will be much
faster than the operating system's drawing routines, and can be faster than
OpenGL. Takes effect at startup; after that, see the Software mesh render item
on the View menu.
Double Buffer. Slightly slower but non-flickery graphics. Turn off only when maximal
playback speed is required.
Sort Alphabetic. Trackers are sorted alphabetically, mainly for the up/down arrow keys.
Updated when you change the setting in the graph editor.
Sort by Error. Trackers are sorted from high error to low error.
Sort by Time. Trackers are sorted from early in the shot to later in the shot.
Sort by Lifetime. Trackers are sorted from shortest-lived to longest-lived.
Group by Color. In the sort order, all trackers with colors assigned will come first, with
each color grouped together, each sorted by the specified order, followed by
trackers at the default color. When this is off, trackers are not grouped together;
the order is determined solely by the sort order.
Only Selected Splines. When checked, the selected spline, and only the selected
spline, will be shown, regardless of its Show This Spline status.
Safe Areas. This is a submenu with checkboxes for a variety of safe areas you can turn
on and off individually (you can turn on both 90% and 80% at once, for example).
Safe areas are defined in the file safe14.ini in the main SynthEyes folder; you
can add your own safe14.ini to add your own personal safe area definitions.
Change the color via the preferences.
Track Menu
Add Many Trackers. After a shot is auto-tracked and solved, use Add Many Trackers
to efficiently add additional trackers.
Clean Up Trackers. The Clean Up Trackers dialog finds bad trackers or frames and
deletes them.
Coalesce Nearby Trackers. Brings up a dialog that searches for, and coalesces,
multiple trackers that are tracking the same feature at different times in the shot.
Combine Trackers. Combine all the selected trackers into a single tracker, and delete
the originals.
Cross Link by Name. The selected trackers are linked to trackers with the same name,
except for the first character, on other objects. If the tracker’s object is solved
Indirectly, it will not link to another Indirectly-solved object. It also will not link to a
disabled object.
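The Cross Link by Name matching rule (identical names apart from the first character) can be pictured with a small sketch; the "L"/"R" naming convention here is merely an assumed example, not a requirement:

```python
def cross_link_pairs(left_names, right_names):
    """Pair tracker names that match after dropping the first character,
    mirroring the 'same name except for the first character' rule."""
    by_suffix = {name[1:]: name for name in right_names}
    return [(ln, by_suffix[ln[1:]]) for ln in left_names if ln[1:] in by_suffix]
```

For example, "LTracker1" would link to "RTracker1" on another object, while a tracker with no counterpart is simply left unlinked.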
Drop onto mesh. If a mesh is positioned appropriately in the camera viewport, drops all
selected trackers onto the mesh, setting their seed coordinates. Similar to Place
mode of Perspective window.
Fine-tune Trackers. Brings up the fine-tune trackers dialog to automatically re-track
automatic trackers using supervised tracking. Reduces jitter on some scenes.
Selected Only. When checked, only selected trackers are run while tracking. Normally,
any tracker which is not Locked is processed.
Stop on auto-key. Causes tracking to stop whenever a key is added as a result of the
Key spinner, making it easy to manually tweak the added key locations.
Preroll by Key Smooth. When tracking starts from a frame with a tracker key,
SynthEyes backs up by the number of Key Smooth frames, and retracks those
frames to smooth out any jump caused by the key.
Do not auto-generate keys. Do not auto-generate keys (see next two entries).
Auto-generate for ZWTs. Auto-generate keys only for zero-weighted trackers (ZWTs).
(See next entry.)
Auto-generate for all. Auto-generate keys every "Key (every)" frames (from the tracker
control panel), based on a rough 3-D location, once the second key is added to a
tracker, if the camera/object is already solved.
Smooth after keying. When a key is added or changed on a supervised tracker,
update the relevant adjacent non-keyed frames, based on the Key Smooth
parameter. Defaults to OFF in versions after 1502, unlike 1502 and earlier.
Pan to Follow. The camera view pans automatically to keep selected trackers
centered. This makes it easy to see the broader context of a tracker.
Pan to Follow 3D. This variant keeps the solved 3-D point of the tracker centered,
which can be better for looking for systematic solve biases. It also allows you to
follow a moving object or (the pivot point of) a mesh.
ZWT auto-calculation. The 3-D position of each zero-weighted tracker is recomputed
whenever it may have changed. With many ZWTs and long tracks, this might
slow interactive response; use this item to temporarily disable recalculation if
desired.
Lock Z-Drop on. Mimics holding down the 'Z' key in the camera view, so that the Z-
Drop feature is engaged: a selected tracker is immediately dropped at a clicked-
on location, rather than having to be dragged there. Saves wear and tear on your
pinky finger. For convenience, meshes will not be selected if you click on them
when this control is engaged. The status bar will show if this control is on.
Steady Camera. Predicts the next location of the tracker based on the last several
frames. Use for smooth and steady shots from cranes, dollies, and Steadicams.
Hand-Held: Sticky. Use for very irregular features poorly correlated to the other
trackers. The tracker is looked for at its previous location. With both hand-held
modes off, trackers are assumed to follow fairly smooth paths.
Hand-Held: Use others. Uses previously-tracked trackers as a guide to predict where a
tracker will next appear, facilitating tracking of jittery hand-held shots.
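The tracking modes above differ in how they predict where to search next. SynthEyes's actual predictors aren't specified here; a minimal constant-velocity sketch of the Steady Camera idea, with the previous-location fallback of Sticky mode, would be:

```python
def predict_next(positions):
    """Predict the next 2-D tracker position by constant-velocity
    extrapolation from the last two observed positions. With fewer than
    two samples, fall back to the most recent position (Sticky-style)."""
    if len(positions) < 2:
        return positions[-1]
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (2 * x1 - x0, 2 * y1 - y0)
```

Smooth crane or dolly motion makes such extrapolation accurate, while jittery hand-held footage does not, which is why the hand-held modes instead search at the previous location or lean on already-tracked trackers.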
Re-track at existing. Use this mode to re-track an already-tracked tracker. The search
will be centered at the previously-determined location, preventing large jumps in
position. Used for fine-tuning trackers, for example.
Search from solved. When enabled, the search for an already-solved tracker will begin
at its prediction location, as long as the camera or object is also solved on that
frame. Useful for extending tracks after a solve, or for rapidly tracking ZWTs. Not
available for trackers with offsets. On by default, it is a sticky preference.
No resampling. Supervised tracking works at the original image resolution.
Linear x 4. Supervised tracking runs at 4 times the original image resolution, with linear
interpolation between pixels. Default setting, suitable for usual DV and prosumer
cameras.
Mitchell2 x 4. Tracking runs at 4x resolution with B=C=1/3 Mitchell-Netravali filtering,
which produces sharper images than bilinear but less than Lanczos: an
intermediate setting if there are too many noise artifacts with Lanczos2.
Lanczos2 x 4. Tracking runs at 4x resolution with N=2 Lanczos filtering, which
produces sharper results, sharpening both the image and the noise. Suitable
primarily for clean uncompressed source footage. Takes longer than Linear x 4.
Lanczos3 x 4. Tracks at 4x with N=3 Lanczos, which is even sharper, but takes longer
too.
Linear x 8. Supervised tracking runs at 8x the original resolution. Not necessarily any
better than running at 4x.
Mitchell2 x 8. Tracking runs at 8x resolution with B=C=1/3 Mitchell-Netravali filtering,
which produces sharper images than bilinear but less than Lanczos: an
intermediate setting if there are too many noise artifacts with Lanczos2 x 8.
Lanczos2 x 8. Tracks at 8x with N=2 Lanczos.
Lanczos3 x 8. Tracks at 8x with N=3 Lanczos.
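The Mitchell-Netravali and Lanczos options above are standard reconstruction kernels. As a sketch of the kernel functions the modes are named after (the kernels only, not SynthEyes's resampling code):

```python
import math

def mitchell_netravali(x: float, B: float = 1/3, C: float = 1/3) -> float:
    """Mitchell-Netravali cubic kernel; B = C = 1/3 is the variant named
    in the Mitchell2 menu items."""
    x = abs(x)
    if x < 1.0:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2 + (6 - 2*B)) / 6.0
    if x < 2.0:
        return ((-B - 6*C) * x**3 + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x + (8*B + 24*C)) / 6.0
    return 0.0

def lanczos(x: float, a: int = 2) -> float:
    """Lanczos kernel sinc(x) * sinc(x/a), zero outside |x| < a;
    a = 2 or 3 corresponds to the Lanczos2/Lanczos3 menu items."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```

The larger the kernel support (Lanczos3 over Lanczos2 over linear), the sharper the interpolated result, and the longer the tracking takes, matching the trade-offs described above.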
(Tool Scripts). Tool scripts were listed at the end of the track menu in earlier versions
of SynthEyes. They now have their own Script menu.
Shot Menu
Add Shot. Adds a new shot and camera to the current workspace. This is different than
File/New, which deletes the old workspace and starts a new one! SynthEyes will
solve all the shots at the same time when you later hit Go, taking links between
trackers into account. Use the camera and object list at the end of the Shot menu
to switch between shots.
Edit Shot. Brings up the shot settings dialog box (same as when adding a shot) so that
you can modify settings. Switching from interlaced to noninterlaced or vice versa
will require retracking the trackers.
Change Shot Images. Allows you to select a new movie or image sequence to replace
the one already set up for the present shot. Useful for bringing in a higher- or
lower-resolution version, or one with color or exposure adjustments. Warning: changes
to the shot length or aspect ratio will adversely affect previously-done work.
Add Separate Alpha. Brings up the file open dialog so that you can attach a separate
sequence of images as an alpha channel to the shot. (Not in the Intro version.)
Save Sequence. Brings up the dialog to save the image sequence being output from
the image preparation dialog (stabilized, cropped, undistorted, etc) to disk as a
sequence or movie. Equivalent to buttons of the same name on the Summary
panel and the Output tab of the image preprocessor (preparation dialog).
Image Preparation. Brings up the image preparation dialog (also accessed from the
shot setup dialog), for image preparation adjustments, such as region-of-interest
control, as well as image stabilization.
Enable Prefetch. Turns the image prefetch on and off. When off, the cache status in
the timebar will not be updated as accurately.
Read 1f at a time. Preference! Tells SynthEyes to read only one frame at a time, but
continue to pre-process frames in parallel. This option can improve performance
when images are coming from a disk or network that performs poorly when given
many tasks at once.
Raw Caching. When on (experimental), SynthEyes caches a lightly preprocessed
version of the shot, i.e., mainly with resolution preprocessing performed. That
allows you to change the other image preprocessor settings without forcing a
cache flush of the entire shot. The drawback is that the image preprocessing
must be repeated frequently, reducing playback update rates. When off (the
usual setting), SynthEyes caches the result of all preprocessor operations. The
checkbox thus trades off when you want the work done: while you are changing
settings frequently, you may want it on; once you're done, you want it off.
Prefer DirectShow. Preference! Windows only. Tells SynthEyes to use the Windows
DirectShow movie-reading subsystem to read AVIs, instead of the older but
simpler and more reliable AVI subsystem. DirectShow is required to read AVI
files >2GB.
Activate other eye. When the camera view is showing one of the views from a stereo
pair, switches to the other eye. Additionally, if there is a perspective window
locked to the other (now-displayed) eye, it is switched to show the original
camera view, swapping the two views.
Stereo Geometry. Brings up the Stereo Geometry control panel. Same as
Window/Stereo Locking.
Add Moving Object. Adds a new moving object for the current shot. Add trackers to
this object and SynthEyes will solve for its trajectory. The moving object shows
as a diamond-shaped null in the 3-D workspace.
Remove Moving Object. Removes the current object and trackers attached to it. If a
camera, the whole shot goes with it.
Create Lens Grid Trackers. Part of the lens calibration workflow. The images should
be a big grid of spots; this creates a regular grid of trackers.
Lens Master Calibration. Runs the new fancy lens calibration system, based on dot
grids, checkerboard grids, or random dot patterns.
Process Lens Grid. Largely replaced by Lens Master Calibration, available for
backwards compatibility. Runs the lens calibration calculation, once trackers are
set up and fully tracked.
Write Distortion Maps. Creates forward and inverse lens distortion map images for the
current lens distortion configuration. You'll be prompted to set the file location
and type (from the list of 16-bit/float exporters). The out-of-bounds handling and
extension for the inverse map are set from preferences.
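A distortion map stores, per pixel, where in the source image that pixel should be sampled, so a forward map applies the lens distortion and the inverse map removes it. SynthEyes's real lens model is richer than this; the sketch below uses a hypothetical single-coefficient radial model purely to show the forward/inverse relationship:

```python
def distort(u, v, k=0.1):
    """Forward radial distortion of normalized, centered coordinates
    (illustrative one-coefficient model, not SynthEyes's actual model)."""
    r2 = u*u + v*v
    s = 1.0 + k * r2
    return u * s, v * s

def undistort(ud, vd, k=0.1, iters=20):
    """Invert the radial model by fixed-point iteration, since the
    inverse has no convenient closed form."""
    u, v = ud, vd
    for _ in range(iters):
        s = 1.0 + k * (u*u + v*v)
        u, v = ud / s, vd / s
    return u, v
```

Evaluating one of these functions at every pixel and writing the resulting coordinates into an image's channels yields the forward or inverse map; 16-bit or floating-point output (as the exporter list suggests) is needed to store the coordinates accurately.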
(Camera and Object List). This list of cameras and objects appears at the end of the
shot menu, showing the current object or camera, and allowing you to switch to a
different object or camera. Selecting an object here is different than selecting an
object in a 3-D viewport.
Script Menu
User Script Folder. Opens your personal folder of custom scripts in the Explorer or
Finder. Handy for making or modifying your own. SynthEyes will mirror the
subfolder structure to produce a submenu tree, so you can keep yours separate,
for example.
System Script Folder. Opens SynthEyes’s folder of factory scripts. Helpful for quickly
installing new script releases. SynthEyes will mirror the subfolder structure to
produce a submenu tree, so you can put all the unused scripts into a common
folder to simplify the view, for example.
Run Script. Runs a Sizzle, Python, or Synthia script, supplying it with information to
connect with this SynthEyes instance.
Script bars. Sub-menu allowing you to open script bars, which give you buttons to run
whatever scripts you want.
Script Bar Manager. The Script Bar Manager allows you to create, edit, and delete
script bars, which contain buttons that run scripts.
(Most-recent scripts area.) Shows the last few scripts you've run, for quick access
including a keyboard accelerator. You can control the number from the
Save/Export section of the preferences.
(Tool Scripts). Any tool scripts will appear here; selecting one will execute it. Such
scripts can reach into the current scene to act as scripted importers, gather
statistics, produce output files, or make changes. Standard scripts include Filter
Lens F.O.V., Invert Perspective, Select by type, Motion capture calibrate, Shift
constraints, etc. Scripts in the user's area are listed with an asterisk (*) as a
prefix. If a user's script has the same name as a system script, thus replacing it,
the user's script will be used, and it is prefixed with three asterisks (***) to note
this situation. Note that importers and exporters have their own submenus on the
File menu. See the Sizzle reference manual for information on writing scripts.
Window Menu
(Control Panel List). Allows you to change the control panel using standard Windows
menu accelerator keystrokes.
No floating panels. The current active panel is docked on the left edge of the main
application window.
Float One Panel. The active panel floats in a small carrier window and can be
repositioned. If the active panel is changed, the carrier switches to the new
panel. This may make better use of your screen space, especially with larger
images or multiple-monitor configurations.
Many Floating Panels. Each panel can be floated individually and simultaneously.
Clicking each panel’s button either makes it open, or if it is already open, closes
it. Only one panel is the official active panel. Important note: mouse, display, and
keyboard operations can depend on which panels are open, or which panel is
active. These combinations may not make sense, or may interact in undesirable
ways without warning. If in doubt, keep only a single panel open.
No Panel. Closes all open floating panels, or removes the fixed panel. Note that one
panel is still active for control purposes, even though it is not visible. Useful to get
the most display space, and minimize redraw time, when using SynthEyes for
RAM playback.
Hold Region Tracker Prep. Launch the Hold Tracker Preparation dialog, used to
handle shots with a mix of translation and tripod-type nodal pans.
Solver Locking. Launch the solver’s lock control dialog, used to constrain the camera
path directly.
Stereo Locking. Same as the Stereo Geometry item. Brings up the Stereo Geometry
control panel.
Path Filtering. Launch the solver's object path (and FOV) filtering dialog, which controls
any post-solve filtering.
Advanced Lens Controls. Launch the Advanced Lens Control dialog, also accessed
from the Lens panel.
Spinal Editing. Launch the spinal editing control dialog, for real-time updates of solves.
Texture panel. Opens the texture extraction control panel.
Notes Editor. Brings up the Notes Editor.
Floating Camera. Click to create a new floating camera view. If the "Only one floating
camera, persp. view" preference (User Interface section) is on, this menu item
will alternately open and close a single camera view.
Floating Constrained Points. Opens a new constrained points (axis control) view,
showing coordinate setup information. If the "Only one floating (other)"
preference (User Interface section) is on, this menu item will alternately open and
close a single constrained points view.
Floating Graph editor. Opens a new graph editor. If the "Only one floating (other)"
preference (User Interface section) is on, this menu item will alternately open and
close a single graph editor.
Floating Hierarchy View. Click to open a new floating Hierarchy View. If the "Only one
floating (other)" preference (User Interface section) is on, this menu item will
alternately open and close a single Hierarchy View.
Floating Perspective. Click to open a new floating perspective window. If the "Only one
floating camera, persp. view " preference (User Interface section) is on, this
menu item will alternately open and close a single perspective view.
Floating Phase View. Opens a new phase view. If the "Only one floating (other)"
preference (User Interface section) is on, this menu item will alternately open and
close a single phase view.
Floating SimulTrack. Click to open a new floating SimulTrack window. If the "Only one
floating (other)" preference (User Interface section) is on, this menu item will
alternately open and close a single SimulTrack view.
Floating Solver Output. Opens a new solver output view, showing the results of the
last solve. If the "Only one floating (other)" preference (User Interface section) is
on, this menu item will alternately open and close a single solver output view.
Synthia. Opens the Synthia instructible assistant; see Help/Synthia PDF.
Synthia Helpers. These menu items are also found on the right-click menu of the IA
button, and are here mainly to make them usable with keyboard accelerators.
Abracadabra. You make your own magic here.
Legilimens. You make your own magic here.
Listen. Enables speech recognition on Windows.
Don't listen. Turns off speech recognition on Windows.
Talk. Allows Synthia to use speech synthesis.
Don't talk. Turns off speech synthesis.
Use the cloud. See the Synthia manual for information and possible implications.
Don't use the cloud.
Forget you heard that. Purges the cloud buffer, in case inadvertent speech or
proprietary rules have been captured.
Minimize Synthia. Sends Synthia down to the taskbar/dock.
Exit Synthia. Causes Synthia to stop running. Use Minimize in most cases, as
starting and stopping Synthia is fairly intensive.
Show Time Bar. Turns the time-bar of the main window on or off. For example, if you
are using a graph editor’s time bar on a second monitor, you can turn off the time
bar on the main display.
Viewport Layout Manager. Starts the viewport layout manager, which allows you to
change and add viewport configurations to match your working style and display
system geometry.
Click-on/Click-off. Quick toggle for the click-on/click-off ergonomic mode; see the
discussion in the Preferences panel.
Help Menu
Commands labeled with an asterisk (*) require a working internet connection;
those with a plus sign (+) require a properly-configured support login as well. An
internet connection is not required for normal SynthEyes operation, only for acquiring
updates, support, etc.
User Manual PDF. Opens the main SynthEyes User Manual PDF file. Be sure to use
the PDF’s bookmarks as an extended table of contents, and the search function
to help find things.
Planar Tracking PDF. Opens the 3-D Planar Tracking Manual.
Phase PDF. Opens the Phase Reference manual.
Synthia PDF. Opens the manual for the Synthia Instructible Assistant. (Start Synthia
with the IA button.)
Sizzle PDF. Opens the Sizzle scripting language manual.
SyPy Python PDF. Opens the manual for the SyPy python interface to SynthEyes.
Recent Change List PDF. Opens the roster of changes to this version and prior
versions (as also found in the support area of the website).
License Agreement. Opens up the SynthEyes license agreement in a separate viewer
for your reference.
Read Messages+. Opens the web browser to a special message page containing
current support information, such as the availability of new scripts, updates, etc.
This page is monitored automatically; this is equivalent to the Msg button on the
toolbar.
Suggest Features+. Opens the Feature-Suggestion page for SynthEyes, allowing you
to submit suggestions, as well as read other suggestions and comment and vote
on them. (Not available on the demo version: send mail to support with
questions/comments/suggestions.)
Tech Support Site*. Opens the technical support page of the web site.
Tech Support Mail*. Opens an email to technical support. Be sure to include a good
Subject line! (Email support is available for one year after purchase.)
Report a credit*. Hey, we all want to know! Drop us a line to let us know what projects
SynthEyes has been used in.
Website/Home*. Opens the SynthEyes home page for current SynthEyes news.
Website/Tutorials*. Opens the tutorials page.
Website/Forum*. Opens the SynthEyes forum.
Purchase or Renew*. Takes you to web pages to purchase SynthEyes (demo) or
renew the license if it is nearing expiration. Convenience options that are no
different from heading to the ssontech.com website yourself.
Register. Launches a form to enter information required to request SynthEyes
authorization. Information is placed on the clipboard. See the registration and
authorization tutorial on the web site.
Authorize. After receiving new authorization information, copy it to the clipboard, then
select Authorize to load the new information.
Set Update Info. Allows you to update your support-site login, and control how often
SynthEyes checks for new builds and messages.
Check for Updates+. Manually tells SynthEyes to go look for new builds and
messages. Use this periodically if you have dialup and set the automatic-check
strategy to never. Similar to the D/L button on the toolbar.
Install Updated. If SynthEyes has successfully downloaded an updated build (D/L
button is green), this item will launch the installation.
About. Current version information.
The Graph Editor icon appears in the toolbar area to indicate a nominal
workflow, but it launches a floating window.
Additional panels are described below:
Add Many Trackers Dialog
Advanced Features
Clean Up Trackers
Coalesce Nearby Trackers
Curve tracking control
Finalize Trackers
Fine-Tuning Panel
Green-screen control
Hard and Soft Lock Controls
Hold Tracker Preparation Tool
Image Preparation
Spinal Editing Control
The shot-setup dialog is described in the section Opening the Shot.
Spinners
SynthEyes uses spinners, the stacked triangles at the right of its numeric fields,
to permit easy adjustment of those fields on the control panels.
The spinner control provides the following features:
Click either angle arrow (<, >) to increase or decrease the value in steps.
Drag up or right within the control to smoothly increase the value, or down or left
to decrease it.
Shows a red underline on key frames.
Right-click to remove a key, or if there is none, to reset to a predefined default value.
Shift-drag or -click to change the value much more rapidly.
Control-drag or -click to change the value slowly for fine-tuning.
Tool Bar
New, Open, Save, Undo, Redo. Buttons. Right-clicking the Undo/Redo buttons
opens a menu listing all the available undo or redo operations, so you can click
once to easily undo the most recent 5 operations, for example. A preference in
the User Interface area can request that the list always be shown, which can be
easier than thinking about left versus right click. Tooltips show the topmost undo
or redo operation, which is especially helpful in the legacy mode. In aggregate,
the "new" list-based approach will likely be more effective.
(Control Panel buttons). Changes the active control panel.
Forward/Backward ( / ). Button. Changes the current playback and tracking
direction.
Reset Time. Button. Resets the timebar so that the entire shot is visible.
Fill. Button. Resets the camera viewport so that the entire image becomes visible.
Shift-fill sets the zoom to 1:1 horizontally. Control-click fills the image, but centers
it instead of putting it at top-left. Control-shift-home resets all camera and 3-D
views. The default key for this button is HOME, with the same combinations of
shift and control keys.
Active Tracker Host: Camera01. Dropdown. Active camera/object.
IA. Button. Starts the Synthia instructible assistant, see Help/Synthia PDF. This button
also has a right-click menu, see the manual for Window/Synthia Helpers for
details.
Play Bar
Summary Panel
AUTO. (the big green one!) Runs the entire match-move process: create features (blips),
generate trackers, and solve. Optionally runs tracker cleanup and auto-place. If
no shot has been set up yet, you will be prompted for that first, so this is truly a
one-stop button. See also Submit for Batch.
Motion Profile. Select one of several profiles reflecting the kinds of motion the image
makes. Use Crash Pan when the camera spins quickly, for example, so that
tracking can keep up. Or use Gentle Motion for faster processing when the
camera/image moves only slightly each frame.
Green Screen. Brings up the green-screen control dialog.
Zoom Lens. Check this box if the camera zooms.
On Tripod. Check this box if the camera was on a tripod.
Hold. Animated Button. Use to create hold regions to handle shots with a mix of normal
and tripod-mode sections.
Corners. When on, the corner detector is run when auto-tracking. This checkbox is a
sticky preference.
Fine-tune. Performs an extra stage of re-tracking between the initial feature tracking
and the solve. This fine-tuning pass can improve the sub-pixel stability of the
trackers on some shots.
Settings. Launches the settings panel for fine-tuning.
Run Auto-tracker. Runs the automatic tracking stage, then stops.
Master Solution Reset. Clears any existing solution: points and object paths.
Solve. Runs the solver.
Not solved. This field will show the overall scene error, in horizontal pixels, after
solving.
Run tracker cleanup. When checked, SynthEyes will run the Edit/Clean up trackers
process automatically when you use AUTO to track and solve a shot. Experts
only! Only use this when the solve will already be very clean—if there are bad
trackers (say due to actors being tracked and not matted out), then keep this off,
or many trackers will be killed unnecessarily and incorrectly. When in doubt, keep
this off and verify tracking and clean up trackers manually. Initialized from a
preference.
Run auto-place. When checked, SynthEyes will run the auto-place algorithm
automatically at the end of an AUTO track and solve, setting up a good-guess
coordinate system. Initialized from a preference.
Place. Automatically place the scene by creating a set of position coordinate system
constraints. The placement is a "good guess;" clicking it repeatedly will generate
different possible placements for your consideration. If you hold down SHIFT,
only the currently-selected trackers will be considered to potentially form the
ground plane or wall. The Place algorithm runs automatically during the AUTO
button sequence if the Run auto-place checkbox is set.
Coords. Initiates a mode where 3 trackers can be clicked to define a coordinate
system. After the third, you will have the opportunity to re-solve the scene to
apply the new settings. Same as *3 on the Coordinate System panel.
Lens Workflow. Button. Starts a script to help implement either of the two main lens-
distortion workflows, adjusting tracker data and camera field of view to match
distortion.
Save Sequence. Button. Launches the dialog to save the image preprocessor’s output
sequence, typically to render new images without distortion. Same as save
sequence on the Output tab of the Image Preprocessor.
Spline/Object List. An ordered list of splines and the camera or object they are
assigned to. The default Spline1 is a rectangle containing the entire image. A
feature is automatically assigned to the camera/object of the first spline in the list
that contains the feature: splines at the top of the list are on top in the image.
Double-click a spline to rename it as desired.
Camera/Object Selector. Drop-down list. Use to set the camera/object of the spline
selected in the Spline/Object List. You can also select Garbage to set the spline
as a garbage matte.
Show this spline. Checkbox. Turn on and off to show or hide the selected spline. Also
see the View/Only Selected Splines menu item.
Lock this spline. Checkbox. Turn on to lock this spline in the camera view, even when
the roto panel is open, so that you don't inadvertently change one spline while
working on another.
Key all CPs if any. Checkbox. When on, moving any control point will place a key on all
control points for that frame. This can help make keyframing more predictable for
some splines.
Enable. Button. Animatable spline enable. Right-click, shift-right, and control-right
delete a key, truncate, or delete all keys, respectively.
Create Circle. Lets you drag out circular splines.
Create Box. Lets you drag out rectangular splines.
Magic Wand. Lets you click out arbitrarily-shaped splines with many control points.
Right-click or hit the ESCape key to stop adding points.
Color. Swatch.
Move Up. Push button. Moves the selected spline up in the Spline/Object List,
making it higher priority.
Move Down. Push button. Moves the selected spline down in the Spline/Object List,
making it lower priority.
Delete. Deletes the currently-selected spline.
Shot Alpha Levels. Integer spinner. Sets the number of levels in the alpha channel for
the shot. For example, select 2 for an alpha channel containing only 0 or 1 (255),
which you can then assign to a camera or moving object.
Object Alpha Level. Spinner. Sets the alpha level assigned to the current camera or
object. For example, with 2 alpha levels, you might assign level 0 to the camera,
and 1 to a moving object. The alpha channel is used to assign a feature only if it
is not contained in any of the splines.
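As an illustration of the alpha-level idea above, here is a minimal sketch (not SynthEyes code; the function name and rounding behavior are assumptions) of how an 8-bit alpha sample might be quantized into the configured number of levels:

```python
# Hypothetical sketch, not the SynthEyes implementation: quantize an
# 8-bit alpha sample (0-255) into one of `levels` discrete levels,
# e.g. 2 levels yields only level 0 or level 1 (255).
def alpha_level(alpha8: int, levels: int) -> int:
    return round(alpha8 / 255 * (levels - 1))

# With 2 levels: alpha 0 maps to level 0 (say, the camera),
# alpha 255 maps to level 1 (say, a moving object).
```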
Import Tracker to CP. Button. When activated, select a tracker then click on a spline
control point. You'll be asked whether you want to import the tracker's relative
motion (the CP won't change on the current frame), or the tracker's actual
position (the CP will leap to the tracker's position). The control point will be keyed
on each frame where the tracker is valid.
Motion Profile. Select one of several profiles reflecting the kinds of motion the image
makes. Use Crash Pan when the camera spins quickly, for example, so that
tracking can keep up. Or use Gentle Motion for faster processing when the
camera/image moves only slightly each frame.
Clear all blips. Clears the blips from all frames. Use to save disk space after blips have
been peeled to trackers.
Blips this frame. Push button. Calculates features (blips) for this frame.
Blips playback range. Push button. Calculates features for the playback range of
frames.
Blips all frames. Push button. Calculates features for the entire shot. Displays the
frame number while calculating.
Delete. Button. Clears the skip frame channel from this frame to the end of the
shot, or the entire shot if Shift is down when clicked.
Skip Frame. Checkbox. When set, this frame will be ignored during automatic tracking
and solving. Use (sparingly) for occasional bad frames during explosions or
actors blocking the entire view. Camera paths are spline interpolated on skipped
frames.
Advanced. Push button. Brings up a panel with additional control parameters.
Link frames. Push button. Blips from each frame in the shot are linked to those on the
prior frame (depending on tracking direction). Useful after changes in splines or
alpha channels.
Peel. Mode button. When on, clicking on a blip adds a matching tracker, which will be
utilized by the solving process. Use on needed features that were not selected by
the automatic tracking system.
Peel All. Push button. Causes all features to be examined and possibly converted to
trackers.
To Golden. Push button. Marks the currently-selected trackers as “golden,” so that they
won’t be deleted by the Delete Leaden button.
Delete Leaden. Push button. Deletes all trackers, except those marked as “golden.” All
manually-added trackers are automatically golden, plus any automatically-added
ones you previously converted to golden. This button lets you strip out
automatically-added trackers.
Blip Display Preferences:
Show Trails. (sticky preference) When checked, the trail of a blip is shown, allowing a
quick assessment of its length and quality. The maximum displayed length of the
trail is controlled by the Trail length preference.
Only Unassigned. (sticky preference) When checked, only blips and trails that are not
already assigned to an existing tracker are shown in the viewport, reducing
clutter.
Show type. Dropdown allows you to show all blips, or only corners or only spots. Most
useful when you only want to add additional corner trackers.
Min. Trail. Sets the minimum required length of a blip's trail in order for it to be shown.
Start with a large minimum to get the longest potential trackers, then reduce it
until you have the number you desire.
Tracker Mini-View. Shows the tracker's interior: the inner box of the tracker. Left
Mouse: Drag the tracker location. Middle Scroll: Advance the current frame,
tracking as you go. Right Mouse: Add or remove a position key at the current
frame, or cancel a drag in progress. A small crosshair shows the offset position, if
present and offset not enabled.
Create. Mode Button. When turned on, depressing the left mouse button in the
camera view creates new trackers. When off, the left mouse button selects and
moves trackers.
Delete. Button (also Delete key). Deletes the selected tracker(s) or other objects. If
nothing is selected when the button is clicked, then delete-tracker-mode is
entered or left. In this mode, simply clicking or lassoing a tracker deletes it, which
is helpful when cleaning up autotracks before they have been solved. Delete-
tracker-mode is automatically exited when this tracker panel is closed (even if a
new one opens immediately thereafter).
Channel. Button. Selects the channel to be used for this particular tracker: normal
RGB, red, green, blue, luminance, or alpha. Alpha is usable only for planar
trackers and may not be selected for other tracker types. If all trackers will use
the same channel, make the channel selection on the image preprocessor's Rez
tab, which will greatly reduce RAM storage requirements for the shot (and enable
Alpha to be tracked if necessary).
Finish. Button. Brings up the finalize dialog box, allowing final filtering and gap
filling as a tracker is locked down.
Lock. Button. Locks or unlocks the tracker. Turn on when the tracker is complete. Not
animated.
Tracker Type. Button. Flyout to change the tracker type to normal match-mode, dark
spot, bright spot, or symmetric spot.
Direction. Button. Configures the tracker for forwards or backwards tracking: it will
only track when playing or stepping in the specified direction.
Enable. Button. Animated control turns tracker on or off. Turn off when tracker gets
blocked by something or goes offscreen, turn back on when it becomes visible
again. Right-click, shift-right, and control-right delete a key, truncate, or delete all
keys, respectively.
Contrast. Number-less spinner. Enhances contrast in the Tracker Mini-View window.
Bright. Number-less spinner. Turns up the Tracker Mini-View brightness.
Color. Rectangular swatch. Sets the display color of the tracker for the camera,
perspective, and 3-D views.
Now. Button. Adds a tracker position key at the present location and frame. Right-click
to remove a position key. Shift-right-click to truncate, removing all following keys.
Key. Spinner. Tells SynthEyes to automatically add a key after this many frames, to
keep the tracker on track (relevant only for pattern-matching tracking).
Key Smooth. Spinner. Tracker’s path will be smoothed for this many frames before
each key, so there is no glitch due to re-setting a key. Warning: this can force
your machine to do a LOT of work, and respond slowly, if you make this value
too large!
Pos. H and V spinners. Tracker’s horizontal and vertical position, from –1 to +1. You
can delete a key (border is red) by right-clicking. Shift-right-clicking will truncate
the tracker after this frame.
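For orientation, here is a hedged sketch of converting the Pos spinners' –1 to +1 coordinates into pixel coordinates. The exact SynthEyes axis conventions (direction of +v, aspect handling) are assumptions here, not documented behavior:

```python
# Hypothetical conversion, assuming -1..+1 spans each image axis and
# +v points up; the real SynthEyes convention may differ.
def pos_to_pixels(u: float, v: float, width: int, height: int):
    x = (u + 1.0) / 2.0 * width
    y = (1.0 - (v + 1.0) / 2.0) * height
    return x, y
```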
By Hand. (Hand animation) Button. Animated. Used to suspend tracking and hand-
animate the tracker over a range of frames, typically to handle occlusion. Right-
click, shift-right, and control-right delete a key, truncate, or delete all keys,
respectively.
Cliff. (Cliffhanger) Button. Normally (with the button off), trackers are automatically shut
off when they reach the edge of the image. That is inconvenient if the tracker
hovers on the edge, so mark it as a cliffhanger, and the automatic shutoff will be
disabled. Turns on automatically when you re-enable a tracker that was just
automatically shut off.
Size. Size and aspect spinners. Animated. Size and aspect ratio (horizontal divided by
vertical size) of the interior portion of the tracker.
Search. H and V spinners. Animated. Horizontal and vertical size of the region
(excluding the actual interior) that SynthEyes will search for the tracker around its
position in the prior frame. "Prior frame" means the adjacent lower-numbered
frame for forward tracking, the adjacent higher-numbered frame for backward
tracking. Shift-right to truncate, control-right to clear.
E (ie Edit Offset). Button. When (normally) off, Nudge moves the tracker itself, and the
time bar shows keys on all the tracker's channels. When E/Edit offset is turned
on, Nudge keys will move the offset of the tracker, and the time bar will show the
keys on the offset channel. Both are contingent on the tracker having an offset.
Edit offset facilitates using the offset channel to overlay correcting offsets over
top of a slowly drifting tracker, if you desire (vs correcting the tracker itself).
Offset. Button. Animated. When on, the offset channels will be added to the tracked
location to determine the final tracker location. When off, the offset position is not
added in. Offset tracking is typically used for occlusion or to handle nearby
trackers. Right-click, shift-right, and control-right delete a key, truncate, or delete
all keys, respectively.
Offset. H and V spinners. Animated. These give the desired final position of the tracker,
when the Offset button is on, relative to the 2-D tracking position. Shift-right to
truncate, control-right to clear.
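The interaction between the Offset button and the Offset spinners can be summarized in a minimal sketch (names are illustrative, not SynthEyes code): when Offset is on, the offset channel is added to the raw tracked location to produce the final tracker position.

```python
# Illustrative only: final tracker position is the raw tracked location,
# plus the offset channel when the Offset button is enabled.
def final_position(tracked, offset, offset_on):
    tx, ty = tracked
    if not offset_on:
        return (tx, ty)
    ox, oy = offset
    return (tx + ox, ty + oy)
```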
New+. Button. Clones the currently-selected tracker(s) to form new trackers, typically to
be used for fine-detail offset trackers. If a tracker being cloned has an offset
channel, you will be asked whether you wish to keep it, or clear it after baking in
the offsets as position keys (making it much easier to add additional animated
offsets).
Weight. Spinner. Animated. Defaults to 1.0. Multiplier that helps determine the weight
given to the 2-D data for each frame from this tracker. Higher values cause a
closer match, lower values allow a sloppier match. You can reduce weight in
areas of marginal accuracy for a particular tracker. Adjust the key at the first
frame to affect the entire shot. Shift-right to truncate, control-right to clear.
WARNING: This control is for experts and should be used judiciously and
infrequently. It is easy to use it to mathematically destabilize the solving process,
so that you will not get a valid solution at all. Keep near 1. Also see ZWTs below.
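To see why extreme weights can destabilize a solve, consider a hedged sketch of how a per-tracker weight typically scales a squared residual in a least-squares objective. This is a generic illustration of weighted least squares, not the actual SynthEyes solver internals:

```python
# Illustrative weighted least-squares objective: each squared residual
# (in horizontal pixels) is scaled by its weight. A weight far from 1
# lets one tracker dominate the total, or vanish from it entirely.
def weighted_objective(residuals, weights):
    return sum(w * r * r for r, w in zip(residuals, weights))
```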
Exact. For use after a scene has already been solved: sets the tracker’s 2-D position to
the exact re-projected location of the tracker’s 3-D position. A quick fix for
spurious or missing data points; do not overuse. See the section on filtering and
filling gaps. Note: when applied to a zero-weighted tracker, the error will not
become zero, because the ZWT will be re-calculated using the new 2-D position,
yielding a different 3-D and then 2-D position.
F: n.nnn hpix. (display field, right of Exact button) Shows the distance, in horizontal
pixels, between the 2-D tracker location and the re-projected 3-D tracker location.
Valid only if the tracker has been solved.
ZWT. When on, the tracker’s weight is internally set to zero—it is a zero-weighted-
tracker (ZWT), which does not affect the camera or object’s path at all. As a
consequence, its 3-D position will be continually calculated as you update the 2-
D track or change the camera or object path, or field of view. The Weight spinner
of a ZWT will be disabled, because the weight is internally forced to zero and
special processing engaged. The grayed-out displayed value will be the original
weight, which will be restored if ZWT mode is turned off.
T: n.nnn hpix. (display field, right of ZWT button) Shows the total error, in horizontal
pixels, for the solved tracker. This is the same error as from the Coordinate
System panel. It updates dynamically during tracking of a zero-weighted tracker.
Tip: If you don't see the Planar room, see I Don't See the Planar Room in the
Planar Tracking Manual.
The solver panel affects the camera or moving object currently selected in the
viewports, or if nothing or something else is selected, it affects the camera or moving
object listed as the Active Tracker Host on the toolbar.
Field of View. Spinner. Field of view, in degrees, on this frame. Right-clicking clears 1
key, shift-right truncates, control-right clears totally.
Focal Length. Spinner. Focal length, computed using the current Back Plate Width on
Scene Settings. Provided for illustration only.
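The displayed value reflects the standard pinhole-camera relationship between horizontal field of view and focal length. A small sketch of that relationship (units and rounding in SynthEyes itself are not shown; the function names are illustrative):

```python
import math

# Standard pinhole relationship: focal length from horizontal FOV and
# back plate width, using the same length units for both.
def focal_length(fov_deg: float, plate_width: float) -> float:
    return plate_width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Inverse: horizontal FOV in degrees from focal length and plate width.
def fov_degrees(focal: float, plate_width: float) -> float:
    return math.degrees(2.0 * math.atan(plate_width / (2.0 * focal)))
```

For example, a 36 mm plate width with a 36 mm focal length gives roughly a 53.1 degree horizontal field of view.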
Add/Remove Key. Button. Add or remove a key to the field of view (focal
length) track at this frame.
Known. Radio Button. Field of view is already known (typically from an earlier run) and
is taken from the field of view seed track. May be fixed or zooming. You will be
asked if you want to copy the solved FOV track to the seed FOV track—do that if
you want to lock down the solved FOV.
Fixed, Unknown. Radio Button. Field of view is unknown, but did not zoom during the
shot.
Fixed, with Estimate. Radio Button. Camera did not zoom, and a reasonable estimate
of the field of view is available and has been set into the beginning of the lens
seed track. This mode can make solving slightly faster and more robust.
Important: verify that you know, and have entered, the correct plate size before
using any on-set focal length values. A correct on-set focal length with an
incorrect plate size makes the focal length useless, and this setting harmful.
The solver panel affects the camera or moving object currently selected (in the
viewports), or if nothing or something else is selected, it affects the camera or moving
object listed as the Active Tracker Host (on the toolbar).
Go! Button. Starts the solving process, after tracking is complete.
Master Reset. Button. Resets all cameras/objects and the trackers on them,
though all Disabled cameras/objects are left untouched. If shift is held, all world
sizes are also restored to the default. Control-click is completely different: it
clears (only) the current object's seed path, and optionally the seed FOV (after
confirmation).
more. Button (next to Go!). Brings up the Advanced Solver Settings panel. Lights up if
any of the settings are different than the current default values in the Solver
section of the preferences.
Error. Number display. Root-mean-square error, in horizontal pixels, of all trackers
associated with this object or tracker.
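A hedged sketch of what "root-mean-square error in horizontal pixels" means; this is illustrative, not the solver's actual code:

```python
import math

# RMS of per-frame reprojection distances, in horizontal pixels: the
# distance between each tracker's 2-D position and its re-projected
# 3-D position, squared, averaged over frames, then square-rooted.
def rms_error_hpix(distances):
    if not distances:
        return 0.0
    return math.sqrt(sum(d * d for d in distances) / len(distances))
```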
Seeding Method. Upper drop-down list controlling the way the solver begins its solving
process, chosen from the following methods:
Auto. List Item. Selects the automatic seeding (initial estimation) process, for a
camera that physically moves during the shot.
Refine. List item. Resumes a previous solving cycle, generally after changes in
trackers or coordinate systems.
Tripod. List Item. Use when the camera pans, tilts, and zooms, but does not
move.
Refine Tripod. List item. Resumes a previous solving cycle, but indicates that
the camera was mounted on a tripod.
Indirectly. List Item. Use for camera/objects which will be seeded from links to
other camera/objects, for example, an HD shot indirectly seeded from
digital camera stills. Use this mode on the camera/object that has links to
a (single) other camera/object. That camera/object should use a regular
solving mode, such as Automatic or Tripod.
GeoHTrack. List Item. Geometric Hierarchy tracking using meshes and/or
hierarchies of moving objects. See separate manual.
Individual. List Item. Use for motion capture. The object’s trackers are solved
individually to determine their path, using the same feature on other
“Individual” objects; the corresponding trackers are linked in one direction.
Points. List Item. Seed from seed points, set up from the 3-D trackers panel. Use
with on-set measurement data, or after Set All on the Coordinate Panel.
You should still configure coordinate system constraints with this mode:
some hard locks and/or distance constraints.
Path. List Item. Uses the camera/object’s seed path as a seed, for example, from
a previous solution or a motion-controlled camera.
Disabled. List Item. This camera/object is disabled and will not be solved for.
Directional Hint. Second drop-down list. Gives a hint to speed the initial estimation
process, or to help select the correct solution, or to specify camera timing for
“Individual” objects. Chosen from the following for Automatic objects:
Undirected (previously Automatic). List Item. With the Undirected setting
selected, SynthEyes determines the motion.
Left. List Item. The camera moved generally to its left.
Right. List Item. The camera moved generally to its right.
Up. List Item. The camera moved generally upwards.
Down. List Item. The camera moved generally downwards.
Push In. List Item. The camera moved forward (different than zooming in!).
Pull Back. List Item. The camera moved backwards (different than zooming
out!).
Camera Timing Setting. The following items are displayed when “Individual” is
selected as the object solving mode. They actually apply to the entire shot, not
just the particular object.
Sync Locked. List Item. The shot is either the main timing reference, or is locked
to it (ie, gen-locked video camera).
Crystal Sync. List Item. The camera has a crystal-controlled frame rate (ie a
video camera at exactly 29.97 Hz), but it may be up to a frame out of
synchronization because it is not actually locked.
Loosely Synced. List item. The camera’s frame rate may vary somewhat from
nominal, and will be determined relative to the reference; notably, a
mechanical film camera.
Slow but sure. Checkbox. When checked, SynthEyes looks especially hard (and
longer) for the best initial solution.
Constrain. Checkbox for experts. When on, constraints set up using the coordinate
system panel are applied rigorously, modifying the tracker positions. When off,
constraints are used to position, size, and orient the solution, without deforming
it. See alignment vs constraints.
Independent. Checkbox for experts. When set for a camera, this camera and its
objects will be solved independently of other cameras and objects. When set for
a moving object, the object will be solved independent of the rest of the scene
and its camera too. Importantly, independently-solved objects do not affect the
field of view determined from the camera.
Hold. Animated Button. Use to create hold regions to handle shots with a mix of normal
and tripod-mode sections. Right-click, shift-right, and control-right delete a key,
truncate, or delete all keys, respectively.
Begin. Spinner and checkbox. Numeric display shows an initial frame used by
SynthEyes during automatic estimation. With the checkbox checked, you can
override the begin frame solution. Either manually or automatically, the camera
should have panned or tilted only about 30 degrees between the begin and end
frames, with as many trackers as possible that are simultaneously active on both
these two frames. If the camera does something wild between the automatically-
selected frames, or if their data is particularly unreliable for some reason, you
can manually select the frames instead. The frame will be selected as
you adjust this spinner, and the number of frames in common is shown on the status line.
End. Spinner and checkbox. Numeric display shows a final frame used by SynthEyes
during automatic estimation. With the checkbox checked, you can override the
end frame solution.
World size. Spinner. Rough estimate of the size of the scene, including the trackers
and motion of the camera. The world size is used to automatically size objects in
the viewports, and to provide mathematical context when solving the scene. The
spinner is underlined in red (key mark) when the world sizes are not all the same.
(The world size does not animate). Right-click to reset it, and optionally its
children too, to the default 100.0.
(All). Checkbox with no text, immediately right of "World Size", left of the actual spinner.
When set, changing the spinner changes the world size of all cameras and
objects. This is on by default as it is most useful. When off, you can change the
world sizes individually, typically to try to get a different solution on very different
solves.
Transition Frms. Spinner. MOVED TO ADVANCED SOLVER SETTINGS. When
trackers first become usable or are about to become unusable, SynthEyes
gradually increases or decreases their impact on the solution, to maintain an
undetectable transition. The value specifies how many frames to spread the
transition over.
Overall Weight. Spinner. Defaults to 1.0. MOVED TO ADVANCED SOLVER
SETTINGS. Multiplier that helps determine the weight given to the data for each
frame from this object’s trackers. Lower values allow a sloppier match, higher
values cause a closer match, for example, on a high-resolution calibration
sequence consisting of only a few frames. WARNING: This control is for experts
and should be used judiciously and infrequently. It is easy to use it to
mathematically destabilize the solving process, so that you will not get a valid
solution at all. Keep near 1.
Filtering control. Launches the Path Filtering control dialog to configure post-solve
filtering. The button is lit up if any filtering (applied on each solve) is present.
Axis Locks. 7 Buttons. When enabled, the corresponding axis of the current camera or
object is constrained to match the corresponding value from the seed path.
These constraints are enforced either loosely after solving, with Constrain off, or
tightly during solving, with Constrain on. See the section on Constraining Camera
or Object Position. Animated. Right-click, shift-right, and control-right delete a
key, truncate, or delete all keys, respectively.
L/R. Left/right axis (ie X)
F/B. Front/back axis (Y or Z)
U/D. Up/down axis (Z in Z-up or Y in Y-up)
FOV. Camera field of view (available/relevant only for Zoom cameras)
Pan. Pan angle around ground plane
Tilt. Tilt angle up or down from ground plane
Roll. Roll angle from vertical
more. Button (next to Axis Locks). Brings up or takes down the Hard and Soft Lock
Controls dialog. Lights up if path, distance, or field of view locks are present on
any frame.
Never convert to Far. Now the "If looks far" dropdown on ADVANCED SOLVER
SETTINGS. Normally, SynthEyes monitors trackers during 3-D solves, and
automatically converts trackers to Far if they are found to be too far away. This
strategy backfires if the shot has very little perspective to start with, as most
trackers can be converted to far. Use this setting if you wish to try obtaining a
3-D solve for your nearly-a-tripod shot.
The Phase Panel has only a few basic elements; its contents are primarily
determined by the phase selected in the phase viewport, here a Camera Height phase
named Phase2. In the capture, you'll notice that the bottom edge has been cut off,
because the Camera Height phase has few parameters and there is nothing else below.
(Phase name), ie Phase2. Editable selector. Shows the name of the selected phase, if
exactly one is selected. Double-click on the actual name to change it. Use the
drop-down to select a different phase, or use the scroll wheel to quickly look
through many phase user interfaces.
(Swatch), ie blue. Sets the color of the selected phase. Since phases are red when
selected, you will not see this color until the phase is unselected.
(Phase kind name), ie Camera Height. Shows what kind of phase is selected. Note
that a phase cannot be changed to a different kind; create a replacement instead.
(Phase parameters), ie Camera, Frame, etc. When exactly one phase is selected, its
parameters are shown and changeable in this area. The meaning of the
parameters depends on the kind of phase, see the writeup for that kind of phase,
and the tooltips obtained by hovering the mouse over them.
Camera/Object. Drop-down list. Shows what object or camera the tracker is associated
with; change it to move the tracker to a different object or camera on the same
shot (or, you can clone it there for special situations). Entries beginning with
asterisk(*) are on a different shot with the same aspect and length; trackers may
be moved there, though this may adversely affect constraints, lights, etc.
*3. Button. Starts and controls three-point coordinate setup mode. Click it once to begin,
then click on origin, on-axis, and on-plane trackers in the camera view, 3-D
viewports, or perspective window. The button will sequence through Or, LR, FB,
and Pl to indicate which tracker should be clicked next. Click this button to skip
from LR (left/right) to FB (front/back), or to skip setting other trackers. After the
third tracker, you will have the opportunity to re-solve the scene to apply the new
settings.
Seed & Lock Group
X, Y, Z. Buttons. Multi-choice buttons flip between X, X+, X-; Y, Y+, Y-; and Z, Z+, Z-,
respectively. These buttons control which possible coordinate-system solution is
selected when there are several possibilities and can be set for any tracker, ie if
X+ is selected, then only solutions where X is positive will be allowed. If the
tracker has a target and constraints, then the polarity refers to the difference
between this tracker and the target, for example with X+, this tracker's X
coordinate must be greater than the target's X coordinate (ie the difference must
be positive).
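The polarity rule above can be pictured as a small sketch; the function name and the convention that an unconstrained tracker is compared against zero are illustrative assumptions, not SynthEyes internals.

```python
def axis_polarity_ok(coord, target_coord, mode):
    """Hypothetical sketch of the X/X+/X- polarity test described above.
    With a target, the sign of (tracker - target) is checked; without one,
    pass target_coord as 0.0 so the tracker's own sign is checked."""
    diff = coord - target_coord
    if mode.endswith("+"):
        return diff > 0   # e.g. X+: this coordinate must exceed the target's
    if mode.endswith("-"):
        return diff < 0
    return True           # plain X/Y/Z: no restriction on this axis
```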
X, Y, Z. Spinners. An initial position used as a guess at the start of solving (if seed
checkbox on), and/or a position to which the tracker is locked, depending on the
Lock Type list.
Seed. Mode button. When on, the X/Y/Z location will be used to help estimate
camera/object position at the start of solving, if Points seeding mode is selected.
+/- Nudge Tool. Spinner. Adjusting this spinner moves the seed/lock points of all
selected trackers closer or further from the camera. Right-clicking the spinner will
snap each tracker's seed onto the line of sight through the tracker's 2D location
on the current frame. (This works for Far trackers also, which can be handy for
setting up tripod coordinate systems.) Use control-drag for fine tuning, or adjust
the Nudge Sensitivity preference.
Peg. Mode button. If on, and the Solver panel’s Constrain checkbox is on, the tracker
will be pegged exactly, as selected by the Lock Type. Otherwise, the solver may
modify the constraints to minimize overall error. See documentation for details
and limitations.
Far. Mode button. Turn on if the tracker is far from the camera. Example: If the camera
moved 10 feet during the shot, turn on for any point 10,000 feet or more away.
Far points are on the horizon, and their distance cannot be estimated. This
button states your intent; SynthEyes may solve a tracker as Far anyway, if it is
determined to have too little perspective.
Lock Type. Drop-down list. Has no effect if Unconstrained. The other settings tell
SynthEyes to force one or more tracker position coordinates to 0 or the
corresponding seed axis value. Use to lock the tracker to the origin, the floor, a
wall, a known measured position, etc. See the section on Lock Mode Details. If
you select several trackers, some with targets, some without, this list will be
empty—right-click the Target Point button to clear it.
Target Point. Button. Use to set up links between trackers. Select one tracker, click the
Target Point button to select the target tracker by name. Or, ALT-click (Mac:
Command-Left-Click) the target tracker in the camera view or 3-D viewport. If the
trackers are on the same camera/object, the Distance spinner activates to control
the desired distance between the trackers. You can also lock one or more of their
coordinates to be identical, forcing them parallel to the same axis or plane. If the
trackers are on different camera/objects, you have created a link: the two
trackers will be forced to the same location during solving. If two trackers track
the same feature, but one tracker is on a DV shot, the other on digital camera
stills, use the link to make them have the same location. Right-click to remove an
existing target tracker.
Dist. Spinner. Sets the desired distance between two trackers on the same object.
Solved. X, Y, Z numbers. After solving, the final tracker location.
Error. Number. After solving, the root-mean-square error between this tracker’s
predicted and actual positions. If the error exceeds 1 pixel, look for tracking
problems using the Tracker Graph window.
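As a rough sketch (not SynthEyes' internal code), the root-mean-square error for one tracker can be computed from its per-frame predicted and tracked 2-D positions, both expressed in horizontal pixels:

```python
import math

def rms_error_hpix(predicted_2d, tracked_2d):
    """Root-mean-square 2-D distance, in horizontal pixels, between a
    tracker's predicted (reprojected 3-D) positions and its actually-tracked
    2-D positions over all valid frames. Illustrative sketch only."""
    assert len(predicted_2d) == len(tracked_2d) and predicted_2d
    total = 0.0
    for (px, py), (tx, ty) in zip(predicted_2d, tracked_2d):
        total += (px - tx) ** 2 + (py - ty) ** 2
    return math.sqrt(total / len(predicted_2d))
```

A result above 1 pixel on typical footage is the cue, as noted above, to examine that tracker in the Tracker Graph window.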
[FAR]. This will show up after the error value, if the tracker has been solved as far.
Set Seed. Button. After solving, sets the computed location up as the seed location for
later solver passes using Points mode.
All. Button. Sets up all solved trackers as seeds for subsequent passes.
Exportable. Checkbox. Uncheck this box to tell savvy export scripts not to export this
tracker. For example, when exporting to a compositor, you may want only a half
dozen of a hundred or two automatically-generated trackers to be exported, each
creating a new layer in the compositor. Non-exportable points are shown in a different color,
somewhat closer to that of the background. See also the Exportable checkbox on
the 3-D Panel, which operates not only on trackers, but cameras, objects,
meshes, and lights as well.
Creation Mesh Type. Drop-down. Selects the type of object created by the Create
Tool. Note that the Earthling is scaled so that 1.0 is the top of his head, not his
hand. If you want a 6 ft tall earthling, set the scale to 6.
Create Tool. Mode button. Clicking in a 3-D viewport creates the mesh object listed
on the creation mesh type list, such as a pyramid or Earthling. Most mesh objects
require two drag sequences to set the position, size, and scale. Note that mesh
objects are different from objects created with the Shot Menu’s Add Moving
Object button. Moving objects can have trackers associated with them, but are
themselves null objects. Mesh objects have a mesh, but no trackers. Often you
will create a moving object and its trackers, then add a mesh object(s) to it after
solving to check the track.
Delete. Button. Deletes the selected object.
Lock Selection. Mode button. Locks the selection in the 3-D viewport to prevent
inadvertent reselection when moving objects. This enables additional
functionality to rotate or scale an object about any arbitrary point, inside or
outside the object. Lock Selection turns off automatically when you exit the 3D
panel, except if you had turned it on prior to opening the 3D Panel, using the
right-click menu of the 3D or perspective viewports.
World/Object. Mode button. Switches between the usual world coordinate system, and
the object coordinate system where everything else is displayed relative to the
current object or camera, as selected by the shot menu. Lets you add a mesh
aligned to an object easily.
Far. Mode button. Mesh is very far from the camera; to simulate that, its translation is
parented to the camera, similar to Far trackers. Permits reasonable-sized meshes
to be used for distant backdrops more easily. May not be transmitted to all export
targets.
#. Mode button. The XYZ spinners switch to control the number of segments along
width, depth, and height of the selected primitive object(s). Used to create higher
or lower-resolution versions of builtin primitive meshes for editing and texture
extraction.
Move Tool. Mode button. Dragging an object in the 3-D viewport moves it.
Rotate Tool. Mode button. Dragging an object in the 3-D viewport rotates it about
the axis coming up out of the screen.
Scale Tool. Mode button. Dragging an object in the 3-D viewport scales it
uniformly. Use the spinners to change each axis individually.
Make/Remove Key. Button. Adds or removes a key at the current frame for
the currently-selected object.
Show/Hide. Button. Show or hide the selected mesh object.
Object color. Color Swatch. Object color, click to change. Note that lights, meshes, and
trackers have both a static color and possibly an animated illumination color (if
the Set Illumination... script has been run). The swatch shows the illuminated
color, if present, and the static color if not. To access the static color when an
animated color is present, see the swatches in the Hierarchy View or Graph
Editor.
X/Y/Z Values. Spinners. Display X, Y, or Z position, rotation or scale values, depending
on the currently-selected tool. Note that when a camera/moving object is
selected, the pan/tilt/roll rotation angles can be animated outside the usual 360
degree range for rotation angles, and those angles will be preserved as-is. With a
camera or moving object selected, right-clicking clears 1 key, shift-right
truncates, control-right clears totally.
Size/Distance. Spinner. This is an overall size spinner, use it when the Scale Tool is
selected to change all three axis scales in lockstep. Fancy feature: if you hold
down the ALT/command key while changing this spinner, the mesh's (only)
position will also be scaled along the line of sight from the camera, so that the
overall visual size and position of the mesh in the image doesn't change. This is
helpful after a Pinning operation, for example. Additional nifty feature: When
the Move Tool is active, this spinner shows the distance from camera to the
selected entity (not just meshes). You can change the spinner to move the object
closer or further from the camera (it does not change size with this tool!).
Whole. Button. When moving a solved object, normally it moves only for the current
frame, allowing you to tweak particular frames. If you turn on Whole, moving the
object moves the entire path, so you can adjust your coordinate system without
using locks. For convenience, turning on Whole selects the active camera or
object and turns on Lock Selection. Turning Whole off turns off Lock Selection.
Whole turns off automatically when you exit the 3D panel, except if you had
turned it on prior to opening the 3D Panel, using the right-click menu of the 3D or
perspective viewports. If you use Whole to align your coordinate system, you
should set up some locks so that the coordinate system will be re-established if
you solve again (done automatically by Auto-place). Hint: Whole mode has some
rules to decide whether or not to affect meshes. Toggle "Whole affects meshes"
on the 3-D viewport and perspective window’s right-click menus. There's also a
preference in the Meshes area to turn "Whole affects meshes" on or off at
startup.
Blast. Button. Writes the entire solved history onto the object’s seed path, so it can be
used for path seeding mode.
Reset. Button. Clears the object’s solved path, exposing the seed path.
Cast Shadows. (Mesh) Object should cast a shadow in the perspective window.
Catch Shadows. (Mesh) Object should catch shadows in the perspective window.
Back Faces. Draw both sides of faces, not only the front.
Invert Normals. Make the mesh normals point the other way from their imported
values.
Opacity. Spinner 0-1. Controls the opacity of the mesh in the perspective view and the
OpenGL version of the camera view (see the View menu and preferences to
enable OpenGL camera view). Note that opacity rendering is an inexact surface-
based approximation and, to allow interactive performance, is not equivalent to
changing the object into a semitransparent 3-D aero-gel.
Exportable. Checkbox. When set, the selected item (mesh, tracker, camera, object,
light) is marked to be exported. When cleared, it is not. (It is up to the exporter to
decide whether or not to export any individual item.)
Vtx Cache. Button. Allows you to set up an .ABC (Alembic), .MCX (Maya), .MDD
(Lightwave), or .PC2 (3ds max) vertex cache file for the selected mesh, ie
allowing it to display stored vertex animation (including mesh animation exported
from SynthEyes itself). Follow-up question allows you to select Y-Up (typical) or
Z-Up (less so) coordinate mode. Button is blue when a vertex cache file is
present. To remove the vertex cache file, just hit Cancel on the file selection
dialog, then confirm that you'd like to do so. (Note: don't delete a vertex cache file
that is used by a SynthEyes scene—the files are large and are not embedded in
the .sni file.) Not available in the Intro version.
Reload Mesh. Reloads the selected mesh, if any. If the original file is no longer
accessible, allows a new location to be selected.
(texture filename). Non-editable Text field. Shows the name of the texture displayed on,
or computed for, this mesh.
Create Texture. Checkbox. When on, a texture will be computed for this mesh on
demand (see the texture panel).
Texture Panel. Button. Brings up the texture control panel (also available from the
Window menu).
Next Ray (>). Button. Switch to the next higher ray on the selected light.
Selected Ray
Source. Mode button. When lit up, click a tracker in the camera view or any 3-D view to
mark it as one point on the ray. For Far-away lights, the ray can consist of a
single Far tracker as the Source.
Target. Mode button. When lit up, click a tracker in the camera view or any 3-D view to
mark it as one point on the ray. If the source and target trackers are the same, it
is a reflected-highlight tracking setup, and the Target button will show
“(highlight).” For highlight tracking to be functional, there must be a mesh object
for the tracker to reflect from.
Distance. Spinner. When only a single ray to a nearby light is available, use this
spinner to adjust the distance to the light. Leave at zero the rest of the time.
Hierarchy Panel
The Hierarchy Panel is a pseudo-panel intended for use as a secondary panel on
other panels, most notably the GeoH Tracking panel. It contains a single fixed-size
Hierarchy View; see the documentation for that view for more details.
Note: The flex room is not part of the normal default set. To use the flex
panel, use the room bar's Add Room to create a Flex room that uses the Flex
panel.
The flex/curve control panel handles both object types, which are used to determine the
3-D position/shape of a curve in 3-D, even if it has no discernible point features. If you
select a curve, the parameters of its parent flex (if any) will be shown in the flex section
of the dialog.
New Flex. Creates and selects a new flex. Left-click successively in a 3-D view or the
perspective view to lay down a series of control points. Right-click to end.
Delete Flex. Deletes the selected flex (even if it was a curve that was initially clicked).
Flex Name List. Lists all the flexes in the scene, allowing you to select a flex, or change
its name.
Moving Object List. If the flex is parented to a moving object, it is shown here.
Normally, “(world)” will be listed.
Show this 3-D flex. Controls whether the flex is seen in the viewports or not.
Clear. Clears any existing 3-D solution for the flex, so that the flex’s initial seed control
points may be seen and changed.
Solve. Solves for the 3-D position and shape of the flex. The control points disappear,
and the solved shape becomes visible.
All. Causes all the flexes to be solved simultaneously.
Pixel error. Root-mean-square (~average) error in the solved flex, in horizontal pixels.
Count. The number of points that will be solved for along the length of the flex.
Stiffness. Controls the relative importance of keeping the flex stiff and straight versus
reproducing each detail in the curves.
Stretch. Relative importance of (not) being stretchy.
Endiness. (yes, made this up) Relative importance of exactly meeting the end-point
specification.
New Curve. Begins creating a new curve—click on a series of points in the camera
view.
Delete. Deletes the curve.
Curve Name List. Shows the currently-selected curve's name among a list of all the
curves attached to the current flex, or all the unconnected curves if this one is not
connected.
Parent Flex List. Shows the parent flex of this curve, among all of the flexes.
Show. Controls whether or not the curve is shown in the viewport.
Enable. Animated checkbox indicating whether the curve should be enabled or not on
the current frame. For example, turn it off after the curve goes off-screen, or if the
curve is occluded by something that prevents its correct position from being
determined.
Key all. When on, changing one control point will add a key on all of them.
Rough. Select several trackers, turn this button on, then click a curve to use the
trackers to roughly position the curve throughout the length of the shot.
Truncate. Kills all the keys of the curve from the current frame to the end of the shot.
Tune. Snaps the curve exactly onto the edge underneath it, on the current frame.
All. Brings up the Curve Tracking Control dialog, which allows this curve, or all the
curves, to be tracked throughout an entire range of frames.
This dialog, launched from the Trackers menu, allows you to add many more
trackers—after you have successfully auto-tracked and solved the shot. Use it to
improve accuracy in a problematic area of the shot, or to produce additional trackers to
use as vertices for a tracker mesh. This dialog can work directly on (already-solved)
360 VR shots.
Note: it may take several seconds between launching the dialog and its
appearance. During this time your processors will be very busy.
Tracker Requirements
Min #Frames. Spinner. The minimum number of valid frames for any tracker added.
Min Amplitude. Spinner. The minimum average amplitude of the blip path, between
zero and one. A larger value will require a more visible tracker.
Max Avg Err. Spinner. The maximum allowable average error, in horizontal pixels, of
the prospective tracker. The error is measured in 2-D, between the tracker's 2-D
position and the projection of the prospective tracker's 3-D position.
Max Peak Err. Spinner. The maximum allowable error, in horizontal pixels, on any
single frame. Whereas the average error above measures the overall noisiness,
the peak error reflects whether or not there are any major glitches in the path.
Only within last Lasso. Checkbox. When on, trackers will only be created within the
region swept out by the last “lasso” operation in the main camera view, allowing
control over positioning.
Spots. Only spot trackers may be added.
Corners. Only corner trackers may be added. Corners must have been enabled on the
Summary panel when the autotrack was performed, in order for there to be
eligible corner blips to be turned into trackers.
Either Kind. Both spot and corner trackers may be added.
Frame-Range Controls
Start Region. Spinner. The first frame of a region of frames in which you wish to add
additional trackers. When dragging the spinner, the main timeline will follow
along.
End Region. Spinner. The final frame of the region of interest. When dragging the
spinner, the main timeline will follow along.
Min Overlap. The minimum required number of frames that a prospective tracker must
be active within the region of interest. With a 30-frame region of interest, you
might require 25 valid frames, for example.
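The frame-range test above amounts to a simple count; the frame numbering and function names in this sketch are illustrative assumptions:

```python
def frames_in_region(valid_frames, start, end):
    """Count how many of a prospective tracker's valid frames fall inside
    the region of interest [start, end], inclusive. Illustrative sketch."""
    return sum(1 for f in valid_frames if start <= f <= end)

def meets_min_overlap(valid_frames, start, end, min_overlap):
    # E.g. with a 30-frame region of interest, you might require 25
    # valid frames, per the example above.
    return frames_in_region(valid_frames, start, end) >= min_overlap
```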
Number of Trackers
Available. Text display field. Shows the number of prospective trackers satisfying the
current requirements.
Desired. Spinner. The maximum number of trackers to be added: the actual number
added will be the least of the Available and Desired values.
New Tracker Properties
Regular, not ZWT. Checkbox. When off, zero-weighted trackers (ZWTs) are created,
so further solves will not be bogged down. When on, regular (auto) trackers will
be created.
Selected. Checkbox. When checked, the newly-added trackers will be selected,
facilitating easy further modification.
Set Color. Checkbox. When checked, the new trackers will be assigned the color
specified by the swatch. When off, they will have the standard default color.
Color. Swatch. Color assigned to trackers when Set Color is on.
Others
Max Lostness. Spinner. Prospective trackers are compared to the other trackers to
make sure they are not “lost in space.” The spinner controls this test: the
threshold is this specified multiple of the object’s world size. For example, with a
lostness of 3 and a world size of 100, trackers more than 300 units from the
center of gravity of the others will be dropped.
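The lostness test, as described, can be sketched like this; the exact centroid computation inside SynthEyes is an assumption:

```python
def is_lost(point, others, lostness, world_size):
    """Drop a prospective tracker if it lies more than
    lostness * world_size units from the center of gravity
    of the other trackers. Illustrative sketch."""
    n = len(others)
    cx = sum(p[0] for p in others) / n
    cy = sum(p[1] for p in others) / n
    cz = sum(p[2] for p in others) / n
    d = ((point[0] - cx) ** 2 + (point[1] - cy) ** 2
         + (point[2] - cz) ** 2) ** 0.5
    return d > lostness * world_size
```

With the manual's example values (lostness 3, world size 100), the threshold is 300 units from the centroid.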
Re-fetch possibles. Button. Push this after changes in Max Lostness.
Add. Button. Adds the trackers into the scene and closes the dialog. Will take a little
while to complete, depending on the number of trackers and length of the shot.
Cancel. Button. Close the dialog without adding any trackers.
Defaults. Button. Changes all the controls to the standard default values.
Advanced Features
This floating panel can be launched from the Feature control panel, affecting the
details of how blips are placed and accumulated to form trackers.
Feature Size (small). Spinner. Size in pixels for smaller blips.
Feature Size (big). Spinner. Size in pixels for larger blips, which are used for alignment
as well as tracking.
Density/1K. Spinner for each of big and small. Gives a suggested blip density in terms
of blips per thousand pixels.
Minimum Track Length. Spinner. The path of a given blip must be at least this many
frames to have a chance to become a tracker.
Minimum Trackers/Frame. Spinner. SynthEyes will try to promote blips until there are
at least this many trackers on each frame, including pre-existing guide trackers.
Maximum Tracker Count. Spinner. Only this many trackers will be produced for the
object, unless even more are required to meet the minimum trackers/frame.
Camera View Type. Drop-down list. Shows black and white filtered versions of the
image, so the effect of the feature sizes can be assessed. Can also show the
image’s alpha channel, and the blue/green-screen check image, even if the
screen control dialog is not displayed. The edges and corners displays show
intermediate and final results from the corner detector.
Auto Re-blip. Checkbox. When checked, new blips will be calculated whenever any of
the controls on the advanced features panel are changed. Keep off for large
images/slow computers.
(See tooltips for Edge/corner controls at present)
Reset. Turns off all the Calculate checkboxes and clears all the parameter values back
to their normal values, ie assuming the camera is perfect.
Quadratic distortion. Checkbox and spinner. This is the simple traditional quadratic
distortion value, also found on the main Lens panel. When the checkbox is on,
the solver will determine the best quadratic distortion value, starting from the
spinner's initial value, placing the optimal value into the spinner at the completion
of solving. When Zooming Distortion is enabled, the value shown reflects the
effect of (changes with) the zoomed field of view on the current frame, ie it is
apparently animated, although there is still only a single underlying value.
Cubic distortion. Checkbox and spinner. This is a cubic distortion value, which affects
the image corners more than the interior, compared to quadratic distortion. When
the checkbox is on, the solver will determine the best cubic distortion value,
starting from the spinner's initial value, returning the optimal value to the spinner
at the completion of solving. When Zooming Distortion is enabled, the value
shown reflects the effect of (changes with) the zoomed field of view on the
current frame, ie it is apparently animated, although there is still only a single
underlying value.
Quartic distortion. Checkbox and spinner. This is a quartic distortion value, which
affects the image corners even more than the interior, compared to both cubic
and quadratic distortion. When the checkbox is on, the solver will determine the
best quartic distortion value, starting from the spinner's initial value, returning the
optimal value to the spinner at the completion of solving.
Zooming Distortion. Turning this value on and off affects the display of the quadratic,
cubic, and quartic distortion values: when Zooming Distortion is on, those values
are always the values for the current frame, ie as they are affected by the
(zooming) field of view.
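To picture why the higher-order terms matter most in the corners, here is a generic radial-polynomial sketch. It is NOT SynthEyes' exact internal distortion formula, only an illustration of how quadratic, cubic, and quartic terms scale with radius:

```python
def radial_scale(r, k2=0.0, k3=0.0, k4=0.0):
    """Generic radial distortion factor at normalized radius r
    (r = 1 at the image edge). Higher powers of r grow fastest
    toward the corners, so the quartic term barely touches the
    image center but strongly affects the corners."""
    return 1.0 + k2 * r ** 2 + k3 * r ** 3 + k4 * r ** 4

# Same coefficient, very different effect at center vs. corner:
center = radial_scale(0.1, k4=0.1)   # 1.00001
corner = radial_scale(1.0, k4=0.1)   # 1.1
```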
Unsolve frames. Button. Clicking this brings up a (scripted) user interface allowing you
to clear the solution (unsolve) from a selected number of solved frames at the
beginning or end of the shot. This allows you to recover from situations where a
long solve develops problems: you can unsolve the recent section, correct or
remove bad trackers, then resume the solve in Refine mode.
Decimate Frames. Spinner. Normally SynthEyes solves every frame of the shot. This
spinner allows you to solve only every second frame, every third, etc., to accelerate initial
solves on long shots. The value must stay small compared to average tracker
lifetime, or trackers will only be valid on a single examined frame, thus
contributing nothing, and the solve will fail. This control affects camera and all
moving objects on the shot; individual moving objects can't be set differently.
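Conceptually, decimation just examines every Nth frame. The sketch below (frame indexing is an assumption) also shows why the value must stay small relative to tracker lifetime:

```python
def decimated_frames(num_frames, decimate):
    """Frames examined during the initial solve with
    Decimate Frames = decimate. Illustrative sketch."""
    return list(range(0, num_frames, decimate))

# A tracker alive for `lifetime` frames is only seen on roughly
# lifetime // decimate of the examined frames; if that drops to 1,
# the tracker contributes nothing and the solve can fail.
```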
Max frames/pass. Spinner. The maximum number of frames added per solver pass.
On high-noise shots, typically 360VR with rolling shutter and alignment errors,
you can reduce this to limit the amount of conflict added to the solution at a time,
minimizing the chances of the existing good solution being corrupted. This
is a trade-off: smaller numbers will typically result in fewer iterations, and thus a
faster solve with the added frames, though less progress is made by doing so.
Max points/pass. Spinner. Only if non-zero, this limits the number of points added per
frame, for the same reasons as Max frames/pass.
Max iterations/pass. Spinner. Per scene. Limits the number of iterations per pass:
many iterations typically indicate trouble. The Stop on iterations checkbox
determines what happens next.
Stop on iterations. Checkbox. Per scene. If set, then the solve stops if the number of
iterations hits the Max iterations/pass value. If not set, the solve continues as if
this solve pass has successfully converged.
Stop if error exceeds. Spinner. Per scene. If the value is non-zero, solving stops if the
hpix error exceeds this value. A low value, ie just a few pixels, can stop a solve
for user interaction before bad tracking data has corrupted it much.
Solving trouble mode. Dropdown. Per scene. Determines what happens if the solve
encounters numerical issues, which typically involve bad tracking data but might
be more subtle. The choices are to Stop (the default); to Advance and hope, in
hopes that adding more trackers or frames will resolve the issue; or to Proceed
Slowly, which is the pre-2018 behavior, using a much slower algorithm that is
able to tolerate the numerical issues. That can allow the solve to proceed, but
may wind up producing a non-useful result. The action is always Proceed Slowly
during the first 10 seconds of a solve, to handle startup transients that can arise
as a solve gets established.
If looks far: Dropdown. Normally, in Convert to Far mode, SynthEyes monitors
trackers during 3-D (camera) solves, and automatically converts trackers to Far if
they are found to be too far away. This strategy backfires if the shot has very little
perspective to start with, as most trackers can be converted to far. To avoid that,
use Never convert to Far if you wish to try obtaining a 3-D solve for your nearly-
a-tripod shot. Very long shots, especially 360VR shots, can have a different
problem, where trackers that initially appear very far away, with no 3D solution
possible, later become close enough that they are no longer far, and any initial
Far solution would destroy the solve. Instead, select Make ZWT or
Disable it so a bad tracker won't cause trouble if it later gets close. The Make
ZWT option is recommended, as it prevents zombie trackers from reviving and
destroying a later Refine solve, which can happen if they are only Disabled.
Allow glitch mode. Dropdown. Very technical control. 'Allowing a glitch' can result in
more reliable solve startups, but might cause a small glitch on the first solved
frame. The First setting allows the glitch initially as the solve starts, but switches
to the deglitched method as additional trackers and frames are added. For
maximal deglitching, maybe on stereo shots if a glitch persists, switch to Never
mode.
Point fuzz. Spinner. Point randomization, percentage of world size.
First path fuzz. Spinner. First-pass path randomization for non-tripod shots, percentage
of world size.
Tripod fuzz. Spinner. First-pass randomization for tripod shots (only), degrees.
Rest path fuzz. Spinner. Remaining-pass path randomization, percentage of world
size.
Point-path fuzz. Spinner. Point-path randomization, percentage of world size.
Refine fuzz. Spinner. Refine mode randomization, percentage of world size. Use this if
you want to force repeated Refine solves to produce different results, in hopes of
getting a different, more globally optimal solution.
Normal Startup Threshold. Spinner. Affects start/stop frame numbers for normal
Automatic solves: smaller value allows them to be closer together, larger makes
them further apart.
Tripod Startup Threshold. Spinner. Affects start/stop frame numbers for tripod
Automatic solves: smaller value allows them to be closer together, larger makes
them further apart.
360VR Startup Threshold. Spinner. Affects start/stop frame numbers for 360VR
Automatic solves: smaller value allows them to be closer together, larger makes
them further apart.
Overall Weight. Spinner. Defaults to 1.0. Multiplier that helps determine the weight
given to the data for each frame from this object’s trackers. Lower values allow a
sloppier match, higher values cause a closer match, for example, on a high-
resolution calibration sequence consisting of only a few frames. This only has
meaning in comparison to other objects or cameras in the same solve.
WARNING: This control is for experts and should be used judiciously and
infrequently. It is easy to use it to mathematically destabilize the solving process,
so that you will not get a valid solution at all. Keep near 1.
Transition Frms. Spinner. When trackers first become usable or are about to become
unusable, SynthEyes gradually increases or decreases their impact on the
solution, to maintain an undetectable transition. This value specifies how many
frames to spread the transition over.
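The Transition Frms ramp can be sketched as follows. This is a hypothetical illustration (the function name and the linear ramp shape are assumptions; the manual does not specify the exact curve SynthEyes uses):

```python
def transition_weight(frame, start, end, transition_frames):
    """Ramp a tracker's influence from 0 up to 1 over its first
    `transition_frames` frames, and back down over its last ones,
    so trackers appear and disappear smoothly. Hypothetical sketch;
    a linear ramp is assumed."""
    if frame < start or frame > end:
        return 0.0
    ramp_in = (frame - start + 1) / transition_frames
    ramp_out = (end - frame + 1) / transition_frames
    return min(1.0, ramp_in, ramp_out)
```

For example, with a 4-frame transition, a tracker valid on frames 0..100 contributes only a quarter of its full weight on its first and last frames.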
Align Mesh to Tracker Positions. The mesh will move to meet the trackers.
Align World to Mesh Position. The entire solve, camera path and trackers, will move
to meet the mesh, which will not move.
Allow Uniform scaling, all axes the same. The mesh will be stretched the same along
each axis to match the trackers as well as possible.
Allow Non-uniform scaling, each axis separate. The mesh can be stretched
separately along each axis to match, most usually for boxes where the exact
dimensions are not known.
Store resulting locations as tracker constraints. After alignment, the locations of the
vertices will be burned into the trackers as Locks, so that the solve will reproduce
this match again later, particularly for Align World to Mesh Position.
TIP: See also Find Erratic Trackers for a tool that works before solving,
identifying problematic trackers.
Defaults. Button. When clicked, resets all the parameter settings to their factory-default
values. Does not change your preferences for these values.
Get Prefs. Button. All the controls are reset to the preference values that you have set
previously, or to the factory values if no preferences are set.
Set Prefs. Button. The current control settings are saved for future use as preferences,
for when this panel is first opened in a new scene.
(Delete) Bad Frames. Checkbox. When checked, bad frames are deleted when the Fix
button is clicked. Note that the number of trackers in the category is shown in
parentheses.
Show. Toggle button. Bad frames are shown in the user interface, by temporarily
invalidating them. The graph editor should be open in Squish mode to see them.
Threshold. Spinner. This is the threshold for a frame to be bad, as determined by
comparing its 2-D location on a frame to its predicted 3-D location. The value is
either a percentage of the total number of frames (ie the worst 2%), or a value in
horizontal pixels, as controlled by the radio buttons below.
%. Radio button. The bad-frame threshold is measured in percentage; the worst N% of
the frames are considered to be bad.
Hpix. Radio button. The bad-frame threshold is a horizontal-pixel value.
Disable. Radio button. When “fixed,” bad frames are disabled by adjusting the tracker’s
enable track.
Clear. Radio button. Bad frames are fixed by clearing the tracking results from that
frame; the tracker is still enabled and can be easily re-tracked on that frame.
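The %/Hpix choice above can be sketched as two selection rules over each tracker's per-frame reprojection errors. This is a hypothetical illustration of the rule as described (the function name and tie-breaking are assumptions):

```python
def find_bad_frames(errors_hpix, threshold, use_percent):
    """Select 'bad' frames from per-frame reprojection errors measured
    in horizontal pixels. With use_percent=True, the worst `threshold`
    percent of frames are flagged; otherwise, frames whose error
    exceeds `threshold` hpix are flagged. Hypothetical sketch of the
    %/Hpix radio-button behavior."""
    n = len(errors_hpix)
    if use_percent:
        count = int(n * threshold / 100.0)
        worst = sorted(range(n), key=lambda i: errors_hpix[i], reverse=True)
        return sorted(worst[:count])
    return [i for i, e in enumerate(errors_hpix) if e > threshold]
```

Either rule yields the frames that Fix will then disable or clear, per the radio buttons above.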
(Delete) Far-ish Trackers. Checkbox. When on, trackers that are too far-ish (have too
little perspective) are deleted.
Threshold. Spinner. Controls how much or little perspective is required for a tracker to
be considered far-ish. Measured in horizontal pixels.
Delete. Radio button. Far-ish trackers will be deleted when fixed.
Make Far. Radio button. Far-ish trackers will be changed to be solved as Far trackers
(direction only, no distance).
(Delete) Short-Lived Trackers. Checkbox. Short-lived trackers will be deleted.
Threshold. Spinner. Number of frames a tracker must be valid to avoid being too
short-lived.
(Delete) High-error Trackers. Checkbox. Trackers with too many bad frames will be
deleted.
Threshold. Spinner. A tracker is considered high-error if the percentage of its frames
that are bad (as defined above by the bad-frame threshold) is higher than this
first percentage threshold, or if its average rms error in hpix is more than the
second threshold below (next to “Unsolved”). For example, if more than 30% of
a tracker’s frames are bad, or its average error is more than 2 hpix, it is a high-
error tracker.
Unsolved/Behind. Checkbox. Some trackers may not have been solved, or may have
been solved so that they are behind the camera. If checked, these trackers will
be deleted.
Threshold. Spinner. This is the average hpix error threshold for a tracker to be high
error. Though it is next to the Unsolved category, it is part of the definition of a
high-error tracker.
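The two-part high-error test described above (bad-frame percentage, or average error next to "Unsolved") can be written directly; the function name and defaults below simply restate the 30%/2-hpix example from the text:

```python
def is_high_error(bad_frame_pct, avg_rms_hpix,
                  pct_threshold=30.0, hpix_threshold=2.0):
    """A tracker is high-error if too many of its frames are bad, OR
    its average rms error (in horizontal pixels) is too large.
    Defaults match the example in the text: >30% bad frames, or
    average error >2 hpix."""
    return bad_frame_pct > pct_threshold or avg_rms_hpix > hpix_threshold
```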
Clear All Blips. Checkbox. When checked, Fix will clear all the blips. This is a way to
remember to do this and cut the final .sni file size.
Unlock UI. Button. A tricky button that changes this dialog from modal (meaning the
rest of the SynthEyes user interface is locked up) to modeless, so that you can
go fix or rearrange something without having to close and reopen the panel.
Note: keyboard accelerators do not work when the user interface is unlocked.
Frame. Spinner. The current frame number in SynthEyes; use it to scrub through the shot
without closing the dialog or even having to unlock the user interface.
Fix. Button. Applies the selected fixes, then closes the panel.
Close. Button. Closes the panel, without applying the fixes. Parameter settings will be
saved for next time. The clean-up panel can be a quick way to examine the
trackers, even if you do not use it to fix anything itself.
Trackers, especially automatic trackers, can wind up tracking the same feature in
different parts of the shot. This panel finds them and coalesces them together into a
single overall tracker.
Coalesce. Button. Runs the algorithm and coalesces trackers, closing the panel.
Cancel. Button. Removes any tracker selection done by Examine, then closes the
dialog without saving the current parameter settings.
Close. Button on title bar. The close button on the title bar will close the dialog, saving
the tracker selection and parameter settings, making it easy to examine the
trackers and then re-do and complete the coalesce.
Examine. Button. Examines the scene with the current parameter settings to determine
which trackers will be coalesced and how many trackers will be eliminated. The
trackers to be coalesced will be selected in the viewports.
# to be eliminated. Display area with text. Shows how many trackers will be eliminated
by the current settings. Example: SynthEyes found two pairs of trackers to be
coalesced. Four trackers are involved, two will be eliminated, two will be saved
(and enlarged). The display will show 2 trackers to be eliminated.
Defaults. Button. Restores all controls to their factory default settings.
Distance (hpix). Spinner. Sets the maximum consistent distance between two trackers
to be coalesced. Measured in horizontal pixels.
Sharpness. Spinner. Sets the sensitivity within the allowable distance. If zero, trackers
at the maximum distance are as likely to be coalesced as trackers at the same
location. If one, trackers at the maximum distance are considered unlikely.
Consistency. Spinner. The fraction of the frames two trackers must be nearby to be
merged.
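One plausible reading of how Distance, Sharpness, and Consistency interact is sketched below. This is a hypothetical interpretation, not SynthEyes' actual algorithm: the scoring function, the 0.5 cutoff, and the function names are all assumptions made for illustration.

```python
def should_coalesce(dists_hpix, max_dist, sharpness, consistency):
    """Decide whether two trackers coalesce, given their distances
    (in horizontal pixels) on each overlapping frame. With sharpness
    0, any frame within max_dist counts as 'nearby'; with sharpness 1,
    frames near max_dist are heavily discounted. The trackers merge
    if the nearby fraction reaches `consistency`. Hypothetical sketch."""
    def frame_score(d):
        if d > max_dist:
            return 0.0
        # sharpness=0: full credit anywhere inside max_dist;
        # sharpness=1: credit falls linearly to 0 at max_dist.
        return 1.0 - sharpness * (d / max_dist)
    nearby = sum(1 for d in dists_hpix if frame_score(d) >= 0.5)
    return nearby / len(dists_hpix) >= consistency
```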
Only selected trackers. Checkbox. When checked, only pre-selected trackers might be
coalesced. Normally, all trackers on the current camera/object are eligible to be
coalesced.
Include supervised non-ZWT trackers. Checkbox. When off, supervised (golden)
trackers that are not zero-weighted-trackers (ZWTs) are not eligible for
coalescing, so that you do not inadvertently affect hand-tuned trackers. When the
checkbox is on, all trackers, including these, are eligible.
Only with non-overlapping frame ranges. Checkbox. When checked, trackers that
are valid at the same time will not be coalesced, to avoid coalescing closely-
spaced but different trackers. When off, there is no such restriction.
Adjacency Rejection. 0..1. The worst weight an edge far from the roughed-in location
can receive.
Do all curves. When checked all curves will be tuned, not just the selected one.
Animation range only. When checked, tuning will occur over the animation playback
range, rather than the entire playback range.
Continuous Update. Normally, as a range of frames is tuned, the tuning result from
any frame does not affect where any other frame is searched for—the searched-
for location is based solely on the earlier curve animation that was roughed in.
With this box checked, the tuning result for each frame immediately updates the
curve control points, and the next frame will be looked for based on the prior
search result. This can allow you to tune a curve without previously roughing it
in.
Do keyed or not. All frames will be keyed, whether or not they have a key already.
Do only keyed. Add keys only to frames that already have keys, typically to tune up
a few roughed in keys.
Do only unkeyed. Only frames without keys will be tuned. Use to tune without
adversely affecting frames that have already been carefully manually keyed.
Room Name. The name of the room displayed on the row of tabs.
Panel. Select the panel that should be selected when entering the room. This selector is
grayed out for pre-defined rooms. For user-added rooms, you can also select Do
not change or No panel.
Secondary Panel. Select the second panel that should be selected when entering the
room (if space permits), or none. Used to stack a lens panel under the solve
panel, for example.
Layout. Selects which layout should be selected upon entering the room, or Do not
change.
Show Timebar. Whether or not the time bar should be displayed for this room.
Launch Dialog. Select one of these dialogs to be opened when you enter the room.
Tooltip. Tooltip displayed for the room when the mouse hovers over its tab.
With one or more trackers selected, launch this panel with the Finalize Button on
the Tracker control panel, then adjust it to automatically close gaps in a tracker (where
an actor briefly obscures a tracker, say), and to filter (smooth) the trajectory of the
selected trackers.
The Finalize dialog affects only trackers which are not Locked (ie their Lock
button is unlocked). When the dialog is closed via OK, affected trackers are Locked. If
you need to later change a Finalized tracker, you should unlock it, then rerun the tracker
from start to finish (this is generally fairly quick, since you’ve already got all the
necessary keys in place).
Filter Frames. The number of frames that are considered to produce the filtered version
of a particular frame.
Filter Strength. A zero to one value controlling how strongly the filter is applied. At the
default value of one, the filter is applied fully.
Max Gap Frames. The number of missing frames (gap) that can be filled in by the gap-
filling process.
Gap Window. The number of frames before the gap, and after the gap, used to fill
frames inside the gap.
Begin. The first frame to which filtering is applied.
End. The last frame to which filtering is applied.
Entire Shot. Causes the current frame range to be set into the Begin and End spinners.
Playback Range. Causes the current temporary playback range to be set into the
Begin and End spinners.
Live Update. When checked, filtering and gap filling are applied immediately, allowing
its effect to be assessed if the tracker graph viewport is open.
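The Filter Frames / Filter Strength behavior can be sketched as a centered moving average blended with the original trajectory. This is a hypothetical illustration (a simple box average is assumed; the actual filter shape is not documented here):

```python
def filter_track(positions, filter_frames, strength):
    """Smooth a tracker trajectory, given as a list of (u, v) tuples,
    using a centered moving average over `filter_frames` frames,
    blended with the original by `strength` (0 = unchanged,
    1 = fully filtered). Hypothetical sketch of Filter Frames /
    Filter Strength; the window is clamped at the track's ends."""
    half = filter_frames // 2
    n = len(positions)
    out = []
    for i, (u, v) in enumerate(positions):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        window = positions[lo:hi]
        au = sum(p[0] for p in window) / len(window)
        av = sum(p[1] for p in window) / len(window)
        out.append((u + strength * (au - u), v + strength * (av - v)))
    return out
```

At strength 0 the track passes through untouched; at strength 1 each frame is replaced by its windowed average.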
The Find Erratic Trackers tool has a number of parameters, each discussed below. The
default values are intended to be satisfactory in many cases. The output of the Find
Erratic Trackers tool presents extra information that can allow some important
parameters to be more thoughtfully adjusted, as discussed further below.
Frame Step. A number of frames. The tool examines only every Nth frame (for
example, every 5 frames). This saves some time. To be checked, trackers must
be valid for at least twice the sum of the frame step and dead zone.
Dead Zone. A number of frames. Automatically-generated trackers frequently have little
tails at the beginning or end of their lifetime, as features emerge or go behind
foreground features. This control eliminates this many frames at the beginning or
end of its life from consideration. (Intended primarily for automatic trackers, it
does not affect the interior lifetime of shots, only the ends.) See the Frame Step
for a discussion of how long trackers must be valid for consideration.
Kernel size. A number of trackers. This is the minimum number of trackers (at least 10)
required to form each kernel, which is used to evaluate other trackers. Larger
kernels do a better job of suppressing noise in tracker positions in order to make
more accurate predictions, but larger kernels require that more trackers be valid
on pairs of frames, reducing the number of frames and thus trackers that can be
checked.
Kernel count. A number of kernels. The number of kernels that will be used to
evaluate each pair of frames considered. More kernels better accommodate
situations featuring many noisy trackers, where individual kernels may be bad.
More kernels take more time, but more significantly, the need to have additional
novel kernels increases the number of trackers required on any pair of frames for
the evaluation to be performed.
Kernel accuracy. Horizontal pixels. Kernels must be able to predict their own tracker
locations within this many pixels in order to be considered accurate enough to be
useful (ie, that the kernel itself doesn't contain bad trackers).
Tracker accuracy. Horizontal pixels. Non-kernel trackers that have an error exceeding
this value are considered to be bad on this kernel. If a tracker is bad on too many
kernels, it will be considered to be erratic and tagged for repair or deletion.
Larger and smaller values affect the tradeoff between more false positives and
negatives.
Bad Kernel %. Percentage. A tracker that is bad on more than this percentage of the
kernels will be considered to be erratic.
Bad tracker color. RGB color values. These are the 0-255 RGB values for the color
any bad trackers are set to. If you empty this field, the tracker colors will not be
changed. See also View/Tracker Appearance/Use alternate colors.
Discussion of Output
Fundamentally, the Find Erratic Trackers tool's only necessary output is the
selected (bad) trackers. However, it also produces a textual output that can assist
in understanding and refining the input parameters. Here's an example output and discussion:
14 bad trackers out of 600: 2.3%. Pretty self-explanatory, can be confirmed by the
selection dropdown at top left of the main SynthEyes window.
Required 15 trackers in common between frames. This is how many trackers two
frames must have in common before the tool can determine whether anything is
the matter with those trackers; it is a result of both the Kernel Size and Kernel
Count values.
Average travel 444.6 frames. Indicates how far apart, on average, it is able to look at
pairs of frames, before the number in common drops below the required trackers
in common value. This is a function of the lifetime of the trackers in your shot. If
the shot is long but has too few trackers, this average travel will be short, and the
tool will not be able to detect much. You might reduce the Kernel Size or Count,
but it is probably better to rerun the autotracking, if you haven't done too much
yet, with a much higher value for Minimum Trackers Per Frame on the Advanced
tab of the Features panel.
1413/2960 bad kernels: 4.9%. Tells you how many of the kernels were too inconsistent
to be usable, according to your Kernel Accuracy setting (and to a lesser extent,
kernel size). The 5% value here is probably on the high side; it is better to keep it
down towards 2-3% by increasing the Kernel Accuracy threshold. If there are few
or no bad kernels, you may want to decrease the Kernel Accuracy value, as you're
probably failing to exclude some kernels that you should.
Histogram for adjusting 'Bad kernels%' (50% => 10): The 50% is your current Bad
kernels % setting; this tells you that it corresponds to 10 or more bad kernels out
of your kernel count of 20. The histogram is all the lines that follow.
Bad Histogram[1]: 69. There were 69 occasions where a tracker appeared in only one
bad kernel.
Bad Histogram[2]: 18.... 18 occasions where it was in two bad kernels, etc.
Bad Histogram[10*]: 5. The asterisk (*) means that this number of bad kernels is over
your bad kernels % setting, ie the tracker will be declared to be erratic.
Only non-zero values are shown. Note that the histogram does not necessarily
add up to exactly your number of trackers.
The implication of the histogram is that if there are too many trackers that have
been flagged, you should adjust the settings upwards, either the Tracker Accuracy or
the Bad Kernel %. If there were too few, they can be adjusted downwards. Either way,
you can then rerun the tool.
A proper histogram will have most counts at the beginning and end of the list,
with a low area in between. Here you see a count of only one each on 7, 8, and 9. The
threshold should be placed in or right above that low area. When a shot has too many
issues to predict where trackers should be, the histogram values will be largely
constant or steadily decreasing, with no hole in the middle.
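Picking the threshold "in or right above that low area" can be mechanized as finding the valley floor of the histogram. This helper is hypothetical (the tool itself only prints the histogram, and a dense count list with no zero gaps is assumed):

```python
def suggest_threshold(histogram):
    """Given histogram[k-1] = number of trackers that were bad on
    exactly k kernels, suggest an erratic threshold at the first
    minimal count (the valley floor) between the bulk of good
    trackers at low k and the clearly-bad cluster at high k.
    Hypothetical helper; assumes a dense list of counts."""
    valley = histogram.index(min(histogram))
    return valley + 1  # threshold expressed as a bad-kernel count
```

With counts resembling the example in the text (large at the start, a run of ones in the middle, a bump at the end), the suggestion lands in the low area: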
Keep in mind that this tool is statistical in nature. It will flag some trackers it
shouldn't, and not flag some that it should. Nevertheless, it can very easily and rapidly
identify trackers for further investigation.
Fine-Tuning Panel
Launched from the Track menu.
Fine-tune during auto-track. Checkbox. If checked, the fine-tuning process will run
automatically after auto-tracking.
Key Spacing. Spinner. Requests that there be a key every this many frames after fine-
tuning.
Tracker Size. Spinner. The size of the trackers during and after fine tuning. The tracker
size and search values are the same as on the Tracker panel.
Tracker Aspect. Spinner. The aspect ratio of the trackers during and after fine tuning.
U Search Size. Spinner. U (horizontal) search area size. Note that because the fine-
tuning starts from the previously-tracked location, the search sizes can be very
small, equivalent to a few pixels.
V Search Size. Spinner. V (vertical) search area size.
Reset. Button. Restore the current settings of the panel to factory values (not the
preferences). Does not change the preferences; to reset the preferences to the
factory values click Reset then Set Prefs.
Get Prefs. Button. Set the current settings to the values stored as preferences.
Set Prefs. Button. Save the current settings as the new preferences.
HiRez. Drop-down. Sets the high-resolution resampling mode used for supervised
tracking (this is the same setting as displayed and controlled on the Track menu).
All auto-trackers. Radio button. The Run button will work on all auto-trackers.
Selected trackers. Radio button. The Run button will work on only the selected
trackers, typically for testing the parameters.
Make Golden. Checkbox. When on, fine-tuned trackers become ‘golden’ as if they had
been supervised-tracked initially. When off, they are left as automatic trackers.
Run. Button. Causes all the trackers, or the selected trackers, to be fine-tuned
immediately, according to the selected parameters. The other way for fine-tuning
to occur is during automatic tracking, if the Fine-tune during auto-track
checkbox is turned on. If run automatically, the top set of parameters (in the
“Overall Parameters” group) apply during the automatic fine-tune cycle.
Green-Screen Control
Launched from the Summary Control Panel, causes auto-tracking to look only
within the keyed area for trackers. The key can also be written as an alpha channel or
RGB image by the image preprocessor.
Enable Green Screen Mode. Turns on or off the green screen mode. Turns on
automatically when the dialog is first launched.
Reset to Defaults. Resets the dialog to the initial default values.
Average Key Color. Shows an average value for the key color being looked for. When
the allowable brightness is fairly low, this color may appear darker than the actual
typical key color, for example.
Auto. Sets the hue of the key color automatically by analyzing the current camera
image.
Brightness. The minimum brightness (0..1) of the key color.
Chrominance. The minimum chrominance (0..1) of the key color.
Hue. The center hue of the key color, -180 to +180 degrees.
Hue Tolerance. The tolerance on the matchable hue, in degrees. With a hue of -135
and a tolerance of 10, hues from -145 to -125 will be matched, for example.
Radius. Radius, in pixels, around a potential feature that will be analyzed to see if it is
within the keyed region (screen).
Coverage. Within the specified radius around the potential feature, this many percent of
the pixels must match the keyed color for the feature to be accepted.
Scrub Frame. This frame value lets you quickly scrub through the shot to verify the key
settings over the entire shot.
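The Brightness/Chrominance/Hue/Coverage tests above can be sketched as a per-pixel key match followed by a coverage vote. This is a hypothetical illustration (the function names, the pixel representation, and the exact comparison order are assumptions):

```python
def hue_in_range(hue, center, tolerance):
    """True if `hue` (degrees, -180..+180) is within `tolerance`
    degrees of `center`, with wrap-around at +/-180."""
    diff = (hue - center + 180.0) % 360.0 - 180.0
    return abs(diff) <= tolerance

def feature_on_screen(pixels, min_brightness, min_chroma,
                      hue_center, hue_tol, coverage_pct):
    """Accept a candidate feature if enough of the pixels sampled
    within the analysis radius match the key color. `pixels` is a
    list of (brightness, chrominance, hue) tuples. Hypothetical
    sketch of the green-screen acceptance test."""
    matches = sum(1 for (b, c, h) in pixels
                  if b >= min_brightness and c >= min_chroma
                  and hue_in_range(h, hue_center, hue_tol))
    return 100.0 * matches / len(pixels) >= coverage_pct
```

For instance, with the hue -135 and tolerance 10 from the Hue Tolerance description, hues from -145 to -125 match.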
Master Controls
All. Button. Turn on or off all of the position and rotation locks. Shift-right-click to
truncate keys past the current frame. Control-right-click to clear all keys leaving
the object un-locked.
Master weight. Spinner. Set keys on all position and rotation soft-lock weights. Shift-
right-click to truncate keys past the current frame. Control-right-click to clear all
keys (any locked frames will be hard locks).
Back Key. Button. Skip backwards to the previous frame with a lock enable or
weight key (but not seed path key).
Forward Key. Button. Skip forward to the next frame with a lock enable or weight
key (but not seed path key).
Show. Button. When on, the seed path is shown in the main viewports, not the solved
path. Also, the seed field of view/focal length is shown on the Lens Control panel,
instead of the solved value.
Zero-weighted frame. Animated checkbox. The indicated frames will result in a
camera/object solution on that frame, but the frame will not affect the 3D position
of the trackers. For use on low-quality frames, eg subject to motion blur or
requiring much locking.
Translation Weights
Pos. Button. Turn on or off all position lock enables.
Position Weight. Spinner. Set all position weights.
L/R. Button. Left/right lock enable.
L/R Weight. Spinner. Left/right weight.
F/B. Button. Front/back lock enable.
F/B Weight. Spinner. Front/back weight.
U/D. Button. Up/down lock enable.
U/D Weight. Spinner. Up/down weight.
X Value. Spinner. X value of the seed path at the current frame (regardless of the
Show button).
Y Value. Spinner. Y value of the seed path.
Z Value. Spinner. Z value of the seed path.
Get 1f. Button. Create a position and rotation key on the seed path at the current frame,
based on the solved path.
Get PB. Button. Create position and rotation keys for all frames in the playback range of
the timebar, by copying from the solved path to the seed path.
Get. Button. Copy the entire solved path to the seed path, for all frames (equivalent to
the Blast button on the 3-D panel, except that FOV is never copied).
Rotation Weights
Rot. Button. Turn on or off all rotation lock enables.
Rot Weight. Spinner. Set all rotation weights.
Pan. Button. Pan angle lock enable.
Pan Weight. Spinner. Pan axis soft-lock weight.
Minimum Length. Spinner. Prevents the creation of tracker fragments smaller than
this threshold. Default=6.
Far Overlap. Spinner. The range of created Far trackers is allowed to extend out of the
hold region, into the adjacent translating-camera region, by this amount to
improve continuity.
Like the main SynthEyes user interface, the image preparation dialog has several
tabs, each bringing up a different set of controls. The Stabilize tab is active above. With
the left button pushed, you can review all the tabs quickly.
For more information on this panel, see the Image Preparation and Stabilization
sections.
Warning: you should be sure to set up the cropping and distortion/scale values
before beginning tracking or creating rotosplines. The splines and trackers do not
automatically update to adapt to these changes in the underlying image structure, which
can be complex. Use the Apply/Remove Lens Distortion script on the main Script menu
to adapt to late changes in the distortion value.
Shared Controls
OK. Button. Closes the image preprocessing dialog and flushes no-longer-valid frames
from the RAM buffer to make way for the new version of the shot images. You can use
SynthEyes’s main undo button to undo all the effects of the Image Preprocessing
dialog as a unit, and then redo them if desired.
Cancel. Button. Undoes the changes made using the image preprocessing dialog, then
closes it.
Undo. Button. Undo the latest change made using the image preprocessing panel.
You cannot undo changes made before the panel was opened.
Redo. Button. Redo the last change undone.
Add (checkline). Button. When on, drag in the view to create checklines.
Delete (checkline). Button. Delete the selected checkline.
Final/Padded. Button. Reads either Final or Padded: the two display modes of the
viewport. The final view shows the final image coming from the image
preparation subsection. The padded view shows the image after padding and
lens undistortion, but before stabilization or resampling.
Both. Button. Reads either Both, Neither, or ImgPrep, indicating whether the image
prep and/or main SynthEyes display window are updated simultaneously as you
change the image prep controls. Neither mode saves time if you do not need to
see what you are doing. Both mode allows you to show the Padded view and
Final view (in the main camera view) simultaneously.
Margin. Spinner. Creates an extra off-screen border around the image in the image
prep view. Makes it easier to see and understand what the stabilizer is doing, in
particular.
Show. Button. When enabled, trackers are shown in the image prep view.
Image Prep View. Central image display. Shows either the final image produced by the
image prep subsystem (Final mode), or the image obtained after padding the
image and undistorting it (Padded mode). You can drag the Region-of-interest
(ROI) and Point-of-interest (POI) around, plus you can click to select trackers, or
lasso-select by dragging.
Playbar (at bottom)
Preset Manager. Drop-down. Lets you create and control presets for the image prep
system, for example, different presets for the entire shot and for each moving
object in the shot. You can control which settings the preset affects, so that for
example you have a few presets that control only the color settings. If you do
that, you should probably make your other presets not affect the color settings,
so that the two kinds of presets can be mixed freely.
Preset Mgr. Disconnect from the current preset; further changes on the
panel will not affect the preset.
New preset. Create and attach to a new preset. You will be prompted for
the name of the new preset and what settings it controls.
Reset. Resets the current preset to the initial settings, which do nothing to
the image.
Rename/Alter. Prompt for a new name and settings configuration for the
current preset.
Delete. Delete the current preset.
Your presets. Selecting your preset will switch to it. Any changes you
then make will affect that preset, unless you later select the Preset
Mgr. item before switching to a different preset.
+Alpha. Button. Click this to add a separate alpha image sequence to the shot. To
remove the alpha sequence, click the button, cancel the file selector, then answer
Yes. The button is blue when a sequence has been explicitly selected. Note
that sequences can also be attached implicitly, if their names match the
Separate-alpha suffix preference.
Keep Alpha. Checkbox. Requests that SynthEyes read and store the alpha channel
(always 8-bit) even if SynthEyes will not use it itself—typically so that it can be
saved with the pre-processed version of the sequence. Alpha data can be in the
RGB files, ie RGBA, or in separate alpha-channel files, see Separate Alpha
Channels.
Mirror Left/Right. Checkbox. Mirror-reverse the image left and right (for some stereo
rigs).
Mirror Top/Bottom. Checkbox. Mirror-reverse the image top and bottom (for some
stereo rigs).
360 VR. Dropdown. This is the same control as on the Shot Settings panel; it controls
360 VR processing, with modes indicating that the shot is 360 VR or not, and
also for changing 360 VR to non-VR and back.
Filtering Tab
Blur. Spinner. Causes a Gaussian blur with the specified radius, typically to minimize
the effect of grain in film. Applied before down-sampling, so it can eliminate
artifacts.
Hi-Pass. Spinner. When non-zero, creates a high-pass filter using a Gaussian blur of
this radius. Use it to handle footage with highly variable lighting, such as explosions
and strobes. Radius is usually much larger than typical blur compensations.
Applied before down-sampling.
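The Hi-Pass behavior amounts to subtracting a heavily blurred copy of the image from the original, leaving only local detail. The sketch below works on a single scanline for brevity (the real filter is 2-D); the sigma choice, edge clamping, and mid-grey re-centering are assumptions:

```python
import math

def gaussian_kernel(radius, sigma=None):
    """Normalized 1-D Gaussian kernel; sigma defaults to radius/2
    (an assumption for this sketch)."""
    sigma = sigma or max(radius / 2.0, 1e-6)
    taps = [math.exp(-(x * x) / (2 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    s = sum(taps)
    return [t / s for t in taps]

def high_pass(signal, radius):
    """High-pass one scanline (values 0..1) by subtracting its
    Gaussian blur and re-centering on mid-grey. Slow lighting
    changes (explosions, strobes) flatten out; local detail
    survives. Edges are clamp-extended."""
    k = gaussian_kernel(radius)
    n = len(signal)
    blurred = [sum(k[j + radius] * signal[min(max(i + j, 0), n - 1)]
                   for j in range(-radius, radius + 1))
               for i in range(n)]
    return [0.5 + s - b for s, b in zip(signal, blurred)]
```

A flat region of any brightness comes out as uniform mid-grey, which is exactly why variable overall lighting stops disturbing the trackers.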
Noise Reduce. Spinner. Controls a noise-reduction algorithm intended to reduce noise
with a bit less blur than the regular blur. The value has no physical meaning;
typically values in the 1-5 range are useful. This noise reduction algorithm is for
helping tracking, not producing finished images. It avoids thresholding and
median filtering, which can shift feature locations. Unclear if this is more helpful
than a similar small Blur value.
Luma Blur. Spinner. Blurs just the luminance portion of the image, typically for
processing DNG images. Artifacts can result. Use Blur for normal degraining.
Chroma Blur. Spinner. Blurs just the chrominance portion of the image, typically for
processing DNG images. Artifacts can result. Use Blur for normal degraining.
Levels Tab
3-D Color Map. Drop-down selector. Select a 3-D Color Look-Up-Table (LUT) to use to
process the images. Note that when a LUT is present, the Level Adjustments are
ignored, but Hue, Saturation, and Exposure are still active (especially for the
case of 1D LUTs, and also so a 3D LUT can have its exposure changed to make
features more visible).
Open. Button. Brings up a file selector to use a specific color map file anywhere on disk
(not just in the user or system preset areas).
Reload. Button. Forces an immediate reload of the selected color map. Note that
File/Find New Scripts also does a reload of any color maps that have changed.
Either way, reloading a color map will invalidate the image cache.
Save. Button. Writes a .cube 1D or 3D (as needed) color map LUT that corresponds to
the current settings of the Level Adjustments, Hue, Saturation, and Exposure.
The resolution is set by a FILE EXPORT preference (1D maps are 8x the
specified 3D LUT resolution). Allows you to reuse those settings in different
scene files, or other applications. Note that while those controls can be animated,
a color map cannot. The color map is written using the settings of the current
frame. You can write the color map into your LensPresets folder so it will be
listed as a color preset for easy use, or write it to any other location on disk.
When reapplying the color map, you will likely want to clear the Hue, Saturation,
and Exposure tracks, at least to start with, so that they are not double-applied.
High. Spinner. Incoming level that will be mapped to full white in RAM. Changing the
level values will create a key on the current frame if the Make Keys checkbox is
on, so you can dynamically adjust to changes in shot image levels. Use right-
click to delete a key, shift-right-click to truncate keys past the current frame, and
control-right-click to kill all keys. High, Mid, and Low are all keyed together.
Mid. Spinner. Incoming level that will be mapped to 50% white in RAM. (Controls the
effective gamma.)
Low. Spinner. Incoming level that will be mapped to 0% black in RAM.
Gamma. Spinner. A gamma level corresponding to the relationship between High, Mid,
and Low.
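As a rough illustration of how Low, Mid, and High interact with gamma, consider a normalize-then-gamma curve. This is an assumed form for illustration only; the function name and the exact transfer function are not part of SynthEyes, but the common approach is to pick the gamma that sends the Mid level to 50%:

```python
import math

def map_level(v, low, mid, high):
    """Map an incoming level to 0..1 through the Low/Mid/High points.
    Illustrative only: a normalize-then-gamma form chosen so that Mid
    lands at 50%; SynthEyes's exact curve is not documented here."""
    t = (v - low) / (high - low)          # Low -> 0, High -> 1
    t = min(max(t, 0.0), 1.0)             # clamp out-of-range input
    m = (mid - low) / (high - low)        # where Mid falls after normalizing
    g = math.log(0.5) / math.log(m)       # gamma that sends m to 0.5
    return t ** g
```

With Mid exactly halfway between Low and High, the gamma is 1 and the mapping is a straight line.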
Hue. Spinner. Rotates the hue angle +/- 180 degrees. Might be used to line up a color
axis a bit better in advance of selecting a single-channel output.
Saturation. Spinner. Controls the saturation (color gain) of the images, without affecting
overall brightness.
Exposure. Spinner. Controls the brightness, up or down in F-stops (1 stop = a factor of
two). This exposure control affects images written to disk, unlike the range
adjustment on the shot setup panel. This one can be animated, that one cannot.
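The stop-to-gain arithmetic can be sketched as follows, assuming the standard photographic convention that each stop doubles the light (the function name is illustrative, not a SynthEyes API):

```python
def exposure_gain(stops):
    # Standard photographic convention (assumed here): each stop
    # doubles the light, so the gain is 2 raised to the stop count.
    return 2.0 ** stops
```

For example, +1 stop doubles the brightness and -2 stops quarters it.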
Cropping Tab
Left Crop. Spinner. The amount of image that was cropped from the left side of the film.
Width Used. Spinner. The amount of film actually scanned for the image. This value is
not stored permanently; it multiplies the left and right cropping values. Normally it
is 1, so that the left and right crop are the fraction of the image width that was
cropped on that side. But if you have film measurements in mm, say, you can
enter all the measurements in mm and they will eventually be converted to
relative values.
Right Crop. Spinner. The relative amount of the width that was cropped from the right.
Top Crop. Spinner. The relative amount of the height that was cropped.
Height Used. Spinner. The actual height of the scanned portion of the image, though
this is an arbitrary value.
Bottom Crop. Spinner. The relative amount of the height that was cropped along the
bottom.
Effective Center. 2 Spinners. The optic center falls, by definition, at the center of the
padded-up (uncropped) image. These values show the location of the optic
center in the U and V coordinates of the original image. You can also change
them to achieve a specified center, and corresponding cropping values will be
created.
Maintain original aspect. Checkbox. When checked, changing the effective image
center will be done in a way that maintains the original image aspect ratio, which
minimizes user confusion and workflow impact.
Stabilize Tab
For more information, see the Stabilization section of the manual.
Get Tracks. Button. Acquires the path of all selected trackers and computes a weighted
average of them together to get a single net point-of-interest track.
Stabilize Axes:
Translation. Dropdown list: None/Filter/Peg. Controls stabilization of the left/right and
up/down axes of the stabilizer, if any. The Filter setting uses the cut frequency
spinner, and is typically used for traveling shots such as a car driving down a
highway, where features come and go. The Pegged setting causes the initial
position of the point of interest on the first frame to be kept throughout the shot
(subject to alteration by the Adjust tracks). This is typical for shots orbiting a
target.
Rotation. Dropdown list: None/Filter/Peg. Controls the stabilization of the rotation of the
image around the point of interest.
Cut Freq(Hz). Spinner. This is the cutoff frequency (cycles/second) for low-pass filtering
when the peg checkbox(es) are off. Any higher frequencies are attenuated, and
the higher they are, the less they will be seen. Higher values are suitable for
removing interlacing or residual vibration from a car mount, say. Lower values
under 1 Hz are needed for hand-held shots. Note that below a certain frequency,
depending on the length of the shot, further reducing this value will have no
effect.
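The lower limit mentioned above follows from the shot length: a filter cannot act on components slower than roughly one cycle over the whole shot. A rough rule-of-thumb sketch (the function name is illustrative, not part of SynthEyes):

```python
def min_effective_cutoff_hz(frame_count, fps):
    """Rough lower bound: a low-pass filter has little effect on
    components slower than about one cycle over the whole shot."""
    shot_seconds = frame_count / fps
    return 1.0 / shot_seconds
```

So for a 240-frame shot at 24 fps (10 seconds), cutoff values much below 0.1 Hz change little.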
Auto-Scale. Button. Creates a Delta-Zoom track that is sufficient to ensure that there
are no empty regions in the stabilized image, subject to the maximum auto-zoom.
Can also animate the zoom and create Delta U and V pans depending on the
Animate setting.
Animate. Dropdown list: Neither/Translate/Zoom/Both. Controls whether or not Auto-
Scale is permitted to animate the zoom or delta U/V pan tracks to stay under the
Maximum auto-zoom value. This can help you achieve stabilization with a
smaller zoom value. But, if it is creating an animated zoom, be sure you set the
main SynthEyes lens setting to Zoom.
Maximum auto-zoom. Spinner. The auto-scale will not create a zoom larger than this.
If the zoom is larger, the delta U/V and zoom tracks may be animated, depending
on the Animate setting.
Clear Tracks. Button. Clears the saved point-of-interest track and reference track,
turning off the stabilizer.
Lens Tab
Get Solver FOV. Button. Imports the field of view determined by a SynthEyes solve
cycle, or previously hand-animated on the main SynthEyes lens panel, placing
these field of view values into the stabilizer’s FOV track.
Field of View. Spinner. Horizontal angular field of view in degrees. Animatable.
Separate from the solver’s FOV track, as found on the main Lens panel.
Focal Length. Spinner. Camera focal length, based on the field of view and back plate
width shown below it. Since plate size is rarely accurately known, use the field of
view value wherever possible.
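The focal length shown here follows from the standard pinhole relation between horizontal field of view and back plate width; the example numbers below are assumptions for illustration:

```python
import math

def focal_length_mm(plate_width_mm, hfov_deg):
    # Standard pinhole relation: FL = (plate width / 2) / tan(FOV / 2).
    return plate_width_mm / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
```

For instance, an (assumed) 36 mm plate with a 54 degree horizontal field of view implies roughly a 35 mm focal length.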
Plate. Text display. Shows the effective plate size in millimeters and inches. To change
it, close the Image Prep dialog, and select the Shot/Edit Shot menu item.
Get Solver Distort. Button. Brings the distortion coefficient from the main Lens panel
into the image prep system’s distortion track. Note that while the main lens
distortion cannot be animated, this image prep distortion can be. This button
imports the single value, clearing any other keys. You will be asked if you want to
remove the distortion from the main lens panel; you should usually answer yes to
avoid double-distortion.
Distortion. Spinner. Removes this much distortion from the image. You can determine
this coefficient from the alignment lines on the SynthEyes Lens panel, then
transfer it to this Image Preparation spinner. Do this BEFORE beginning tracking.
Can be animated.
Cubic Distort. Spinner. Adjusts more-complex (higher-order) distortion in the image.
Use to fine-tune the corners after adjusting the main distortion at the middle of
the top, bottom, left, and right edges. Can be animated. (Note: Nuke supports
quartic but not cubic distortion terms.)
Quartic Distort. Spinner. Adjusts more-complex (higher-order) distortion in the image.
Use to fine-tune the corners after adjusting the main distortion at the middle of
the top, bottom, left, and right edges. Can be animated.
Scale. Spinner. Enlarges or reduces the image to compensate for the effect of the
distortion correction. Can be animated.
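The Distortion, Cubic Distort, Quartic Distort, and Scale spinners control a family of radial corrections. The polynomial below is an illustrative model of that kind of correction, NOT SynthEyes's exact internal formula:

```python
def corrected_radius(r, k2, k3=0.0, k4=0.0, scale=1.0):
    """Illustrative radial polynomial of the kind these spinners control;
    not SynthEyes's exact internal formula. k2 is the main distortion
    term, k3 cubic, k4 quartic; scale compensates for the size change."""
    return scale * r * (1.0 + k2 * r**2 + k3 * r**3 + k4 * r**4)
```

With all coefficients at zero and scale at 1, the image is unchanged; the higher-order terms matter most at large radius, which is why they are used to fine-tune the corners.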
Lens Selection. Dropdown. Select a lens distortion profile or image map to apply
(instead of the Distortion/Cubic/etc Distort values), or none at all. These curve
selections can help you solve fisheye and other complex wide-angle shots, with
proper advance calibration.
Open. Button. Brings up a file selector to use a specific lens distortion profile or image
map file anywhere on disk (not just in the user or system preset areas).
Reload. Button. Reloads the currently-selected lens profile from disk. This will flush all
the frames that use the old version of the profile when the image preprocessor
panel is closed. File/Find New Scripts will reload any lens profile that has
changed.
Nominal BPW. Text field, invisible when empty. A nominal back-plate width supplied
by the lens profile; use at your discretion.
Nominal FL. Text field, invisible when empty. A nominal focal length supplied by the
lens profile; use at your discretion.
Apply distortion. Checkbox. Normally the distortion, scale, and cropping specified are
removed from the shot in preparation for tracking. When this checkbox is turned
on, the distortion, scale, and cropping are applied instead, typically to reapply
distortion to externally-rendered shots to be written to disk for later compositing.
When being turned on, the Output Tab's New Width, Height, and Aspect settings
should be configured to those of the original footage, and the main aspect ratio
(on the Shot Settings) should be set to that of the images to which the distortion
is being applied, ie what the image preprocessor was previously outputting. Also
if the VR Mode was Remove, it should be changed to Apply. These settings
changes are all done automatically by using Shot/Change Shot Images and
selecting the Re-distort CGI---change to Apply option.
Adjust Tab
Delta U. Spinner. Shifts the view horizontally during stabilization, allowing the point-of-
interest to be moved. Animated. Allows the stabilization to be “directed,” either to
avoid higher zoom factors, or for pan/scan operations. Note that the shift is in 3-
D, and depends on the lens field of view.
Delta V. Spinner. Shifts the view vertically during stabilization. Animated.
Delta Rot. Spinner. Degrees. Rotates the view during stabilization. Animated.
Delta Zoom. Spinner. Zooms in and out of the image. At a value of 1.0, pixels are the
same size coming in and going out. At a value of 2.0, pixels are twice the size,
reducing the field of view and image quality. This value should stay down in the
1.10-1.20 range (10-20% zoom) to minimize impact on image quality. Animated.
Note that the Auto-Scale button overwrites this track.
Output Tab
Resample. Checkbox. When turned on, the image prep output can be at a different
resolution and aspect than the source. For example, a 3K 4:3 film scan might be
padded up to restore the image center, then panned and scanned in 3-D and
resampled to produce a 16:9 1080p HD image.
New Width. Spinner. When resampling is enabled, the new width of the output image.
New Height. Spinner. The new height of the resampled image.
New Aspect. Spinner. The new aspect ratio of the resampled image. The resampled
width is always the full width of the zoomed image being used, so this aspect
ratio winds up controlling the height of the region of the original being used. Try it
in “Padded” mode and you’ll see.
4:3. Button. A convenience button, sets the new aspect ratio spinner to 1.333.
16:9. Button. More convenience, sets the new aspect ratio to 1.778.
Save Sequence. Button. Brings up a dialog which allows the entire modified image
sequence to be saved back to disk.
Apply to Trkers. Button. Applies the effect of the selected padding, distortion, or
stabilization to all the tracking data, so that tracking data originally created on the
raw image will be updated to correspond to the present image preprocessor
output. Used to avoid retracking after padding, changing distortion, or stabilizing a
shot. Do not hit more than once!
Padding. Checkbox. Apply/remove the effect of the cropping/padding.
… (ellipsis, dot dot dot) Button. Click this to set the output file name to write the
sequence to. Make sure to select the desired file type as you do this. When
writing an image sequence, include the number of zeroes you wish in the
resulting sequence file names. For example, seq0000 will be a four-digit image
number, starting at zero, while seq1 will have a varying number of digits, starting
from 1.
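The numbering rule described above might be sketched like this (an assumed reconstruction for illustration, not the exact SynthEyes implementation):

```python
import re

def sequence_name(base, frame):
    """Trailing digits in the base name give the starting number and,
    by their count, the zero padding. Illustrative sketch only."""
    m = re.search(r'(\d+)$', base)
    if not m:
        return base + str(frame)
    start, width = int(m.group(1)), len(m.group(1))
    return base[:m.start()] + str(start + frame).zfill(width)
```

So "seq0000" plus frame 5 yields "seq0005", while "seq1" plus frame 12 yields "seq13", with a varying digit count.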
Compression Settings. Button. Click to set the desired compression settings, after
setting up the file name and type. Subtle non-SynthEyes Quicktime “feature”: the
H.264 codec requires that the Key Frame every … frames checkbox in its
compression settings be turned off; otherwise the codec produces only a
single frame! Also, be sure to set compression for Quicktime movies;
Quicktime sets no default compression.
RGB Included. Checkbox. Include the RGB channels in the files produced (should
usually be on).
Alpha Included. Checkbox. Include the alpha channel in the output. Can be turned on
only if the output format permits it. If the input images do not contain alpha data,
it will be generated from the roto-splines and/or green-screen key. Or, if (only
after) you turn off the RGB Included checkbox, you can turn on the Alpha
Included checkbox, and alpha channel data will be produced from the roto-
spline/green-screen and converted to RGB data that is written. This feature
allows a normal black/white alpha-channel image to be produced even for
formats that do not support alpha information, or for other applications that
require separate alpha data.
Meshes Included. Checkbox. When set, the meshes will be (software) rendered over
top of the shot, for quick previewing only, especially for 360 VR shots. When
checked, only 8-bit/channel output will be produced; keep this off normally,
so that 16-bit and floating-point channel images can be produced. (Use the
Perspective Window's Preview Movie for normal previews; it includes motion
blur and additional antialiasing options.)
Start. Button. Get going…
Close/Cancel. Button. Close: saves the filename and settings, then closes. When
running, changes to Cancel: stops when next convenient. For image sequences
on multi-core processors, this can be several frames later because frames are
being generated in parallel.
Reset from Prefs. Button. Reloads the current scene's list of multiple exports from the
set you've stored as your preferences.
Save as Prefs. Button. Store the current list of exporters as your preferences.
(List of exports). Listbox. Each export will be run when you do a File/Export Multiple.
You can add an extra filename suffix to the scene's basic name for any or all of
the exports; use that to produce different-named files when you have several of
the same type, ie several Python files, so you can tell which is which. Double-
click the entry to bring up a dialog to set the suffix. After setting it, it will be
included on that line after a semi-colon, for example the C4D suffix for Cinema
4D in the example above. To remove an export from the list, select it, then press
the Delete key.
(Add exporters by ...). Dropdown List. As the name says, you can add exporters to the
export list just by selecting them on the dropdown list.
Notes Editor
Used to edit one or more notes, which appear in the camera and perspective
views.
WARNING: If you place a pinned note off the edge of the (zoomed out) shot
image, then change the note to stationary, you won't be able to see it; it is
now off the edge of the window. (Change it back and fix.)
Color swatch. The color of the note's background (pastel preferred). By default, it is a
special marker color that defers to a color set in the preferences. There's also a
preference for the text color.
Creation wand. Button. Creates a new note top-left in the active camera view.
Delete. Button. Deletes this note.
Go back. Button. Goes to the prior note, as sorted by begin and end frame numbers. If
the prior note isn't visible on the current frame, the current frame is changed to
the prior note's begin frame.
Go forward. Button. Goes to the next note, as sorted by begin and end frame
numbers. If the next note isn't visible on the current frame, the current frame is
changed to the next note's begin frame.
Camera selector. Dropdown. What camera (shot) the note is attached to. Select All to
appear on all cameras.
Begin Frame. Spinner. Sets the first frame that the note is visible on.
Here. Button. Sets the begin frame to the current frame.
Go. Button. Changes the main user interface to the begin frame.
End Frame. Spinner. Sets the last frame that the note is visible on.
Here. Button. Sets the end frame to the current frame.
Text Field. One or more lines of text that are the content of the note.
the Filtering Control button on the solver panel or the Window/Path filtering menu
item. It displays values for the selected camera or object (in the viewports), or more
normally the camera or object listed as the active tracker host (on the toolbar).
Warning: Path and FOV filtering CAUSE sliding, because they move the
camera path AWAY from the position that produces the best, locked-on,
results.
The filtering is best used as part of a workflow where you only filter a few axes,
lock them after filtering, then refine the solution to accommodate the effect of the
filtering.
The selection of solve or seed path and whole shot or playback range are
available only interactively; the solving process always filters the whole shot into the
solve path (if enabled).
Frequency. Animated spinner. Cutoff frequency controlling how quickly the parameter
is allowed to change, in cycles per second (Hz), ranging up to at most 1/2 the
frame rate.
Strength. Animated spinner. Controls how strongly the filtering is applied, ranging from
0 (none) to 1 (completely filtered at the given frequency).
X/Y/Z. Checkboxes. One checkbox for each translational axis, controlling whether or
not it will be filtered.
Rotation. Checkbox. Controls whether rotation is filtered. Note that there are no
separate channels to filter or not for rotation.
Distance. Checkbox. Controls whether the camera/origin (camera tracks) or
object/camera (object tracks) distance is filtered or not. Primarily intended for
difficult object tracks where most error is in the direction towards or away from
the camera.
FOV. Checkbox. Controls whether or not the field of view is filtered.
To Solve Path. The filtered path (and/or FOV) is written into the solve tracks (normal
default).
To Seed Path. The filtered path (and/or FOV) is written into the seed tracks, which is
available only interactively and is intended to generate data for hard or soft
locking the axes.
Whole Shot. The entire shot is filtered (normal default).
Playback range. Only the portion of the shot between the green and red playback
range markers on the timebar will be filtered. For interactive use (only), this can
be quicker and easier than setting up an animated strength value to adjust a
portion of a shot. The filtering will blend in and out at each end of the playback
range.
In this example of a 4-way branch, there are three earlier branches ("Slow but
sure", "Set Constraints/Alignments", and "Set solve-independently + 2 more"); the
current branch is "Set solve-independently". This is the result of clicking the Slow but
sure checkbox, Undo, Constrain, Undo, Independent, then right-clicking Undo and
selecting the 3-way branch item.
The first three choices would restore the respective state, ie Slow but sure for the
first, or Constrain for the second. The third choice starts out with solve-independently
but contains two additional undo items, ie settings that you've changed earlier.
Tip 1: You can double-click an entry to give it a better name, especially if the
first undo entry (as used for the name) isn't sufficiently indicative of what that
branch is all about.
Tip 2: If you hover the mouse over one of the choices, you'll see a tooltip
listing the undo items it contains (up to a limit set by some preferences). Be
careful to move from outside the listbox directly to the item of interest: once
the tooltip appears, moving the mouse further won't list another entry.
Tip 3: You can select a branch and hit the Delete key to delete that branch
and all the undo records associated with it. You can't undo that, and you can't
delete the current branch.
The fourth list item would restore the Independently branch—here we would be
undoing it, then redoing it, accomplishing nothing in total in this example.
The fifth item would keep us on the same branch, but undo everything back to
the branch. Typically we'd use this right before we did something else, ie to create a
fourth branch!
When selecting any branch, all the associated items are redone (in order), which
includes any additional branch points. So after you select a branch, you may be
presented with additional copies of the dialog, to select which of the subsequent
branches to follow. (That's how you get entire trees of branches.)
The "To leaf without questions" checkbox suppresses those additional
questions; if there would be additional dialogs, the "Redo current" option is
automatically selected.
Start Frame, End Frame: the range of frames to be examined. You can adjust this from
this panel, or by shift-dragging the end of the frame range in the time bar.
Stereo Off/Left/Right. Sequences through the three choices to control the setup for
stereo shots. Leave at Off for normal (monocular) shots, change to Left when
opening the first, left, shot of a stereo pair. See the section on Stereoscopic shots
for more information.
360 VR Mode. Dropdown. Controls processing of spherical 360 degree VR footage,
with four modes: None, for normal non-VR footage; Present, when the footage
is 360 degree footage being processed as that; Remove, for 360 degree footage
that is being dynamically converted to linear footage; or Apply, for linear footage
that is being converted to 360 VR footage.
Time-Shift Animation: enabled only after a Shot/Change Shot Imagery menu item,
this spinner lets you indicate that additional frames have been added or removed
from the beginning of the shot, and that all the trackers, object paths, splines, etc,
for this shot should be shifted earlier or later in the shot.
Frame rate: Usually 24, 23.976, or 29.97 frames per second. NTSC is used in the US &
Japan, PAL in Europe. Film is generally 24 fps, but you can use the spinner for
over- or under-cranked shots or multimedia projects at other rates. Some
software may have generated or require the rounded 25 or 30 fps, SynthEyes
does not care whether you use the exact or approximate values, but it may be
crucial for downstream applications.
Interlacing: No for film or progressive-scan DV. Yes to stay with 25/30 fps, skipping
every other field. Minimizes the amount of tracking required, with some loss of
ability to track rapid jitter. Use Yes, But for the same thing, but to keep only the
other (odd) field. Use Starting Odd or Starting Even for interlaced video,
depending on the correct first field. Guessing is fine: once you have opened
the shot, step through a few frames. If they go two steps forward, one back,
select the Shot/Edit Shot menu item and correct the setting. Use Yes or No for
source video compressed with a non-field-savvy codec such as sequenced
JPEG.
Channel Depths: Process. 8-bit/16-bit/Float. Radio buttons. Selects the bit depth
used while processing images in the image preprocessor. Note that Half is
intentionally omitted because it is slow to process, use Float for processing, then
store as Half. Same controls as on Rez tab of the Image Preprocessor.
Channel Depths: Store. 8-bit/16-bit/Half/Float. Radio buttons. Selects the bit depth
used to store images, after pre-processing. You may wish to process as floats
then store as Halfs, for example. A Half is a 16-bit floating-point number, so it has
enhanced range (not as much as a float) but takes only half the storage of a
float.
Apply Preset: Click to drop down a list of different film formats; selecting one of them
will set the frame rate, image aspect, back plate width, squeeze factor, interlace
setting, rolling shutter, and indirectly, most of the other aspect and image size
parameters. You can make, change, and delete your own local set of presets
using the Save As and Delete entries at the end of the preset list.
Image Aspect: overall image width divided by height. Equals 1.333 for video, 1.778 for
HDTV, 2.35 or other values for film. Click Square Pix to base it on the image
width divided by image height, assuming the pixels are square (most of the time
these days). Note: this is the aspect ratio input to the image preprocessor. The
“final aspect” shown at lower right is the aspect ratio coming out of the image
preprocessor.
Pixel Aspect: width to height ratio of each pixel in the overall image. (The pixel aspect
is for the final image, not the skinnier width of the pixel on an anamorphic
negative.)
Back Plate Width: Sets the width of the “film” of the virtual camera, which determines
the interpretation of the focal length. Note that the real values of focal length and
back plate width are always slightly different than the “book values” for a given
camera. Note: Maya is very picky about this value, use what it uses for your shot.
Back Plate Height: the height of the film, calculated from the width, image aspect, and
squeeze.
Back Plate Units. Shows in for inches or mm for millimeters; click to change the
desired display units.
Anamorphic Squeeze: when an anamorphic lens is used on a film camera, it squeezes
a wide-screen image down to a narrower negative. The squeeze factor reflects
how much squeezing is involved: a value of 2 means that the final image is twice
as wide as the negative. The squeeze is provided for convenience; it is not
needed in the overall SynthEyes scene.
Negative’s Aspect: aspect ratio of the negative, which is the same as the final image,
unless an anamorphic squeeze is present. Calculated from the image aspect and
squeeze factor.
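That calculation is simply the image aspect divided by the squeeze factor; a minimal sketch (the function name is illustrative, not a SynthEyes API):

```python
def negatives_aspect(image_aspect, squeeze):
    # The negative is narrower than the final image by the squeeze factor.
    return image_aspect / squeeze
```

For example, a 2.35:1 final image shot with a 2x anamorphic lens sits on a 1.175:1 negative; with no squeeze (factor 1), the two aspects are identical.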
Rolling Shutter Enable. Checkbox. Enables rolling-shutter compensation during
solving for the tracker data of the camera and any objects attached to this shot.
CMOS cameras are subject to rolling shutter; it causes intrinsic image artifacts.
Rolling Shutter Fraction. Spinner. This is the fraction of the frame time that it takes to
read out the image data from the CMOS chip. For an old NTSC TV camera, there
are 486 active lines, and 525 total (including vertical blanking) for a fraction of
0.9257. For a camera running at 60 fps, the frame time is 16.667 msec
(1sec/60frames*1000msec/sec). If it takes 15 msec to read out the image, the
rolling shutter fraction should be set to 0.9, ie 15/16.667. You will have to obtain
these values from the camera manufacturer or by experimentation.
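The worked example above can be condensed into a small helper (the function name is illustrative, not a SynthEyes API):

```python
def rolling_shutter_fraction(readout_msec, fps):
    # Fraction of the frame time spent reading out the sensor.
    frame_time_msec = 1000.0 / fps
    return readout_msec / frame_time_msec
```

A 15 msec readout at 60 fps gives 15/16.667 = 0.9, matching the text; the line-count approach for old NTSC (486 active of 525 total lines) gives 486/525 = 0.9257.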
+Alpha. Button. Click this to add a separate alpha image sequence to the shot. Click
the button, cancel the file selector, then answer Yes to remove the alpha
sequence. Button is blue when a sequence has been explicitly selected. Note
that sequences can also be attached implicitly, if their names match the
Separate-alpha suffix preference.
Keep Alpha: when checked, SynthEyes will keep the alpha channel when opening files,
even if there does not appear to be a use for it at present (ie for rotoscoping).
Alpha data can be in the RGB files, ie RGBA, or in separate alpha-channel files,
see Separate Alpha Channels. Turn on when you want to feed images through
the image preprocessor for lens distortion or stabilization and then write them,
and want the alpha channel to be processed and written also.
F.-P. Range Adjustment: adjusts the shot to compensate for the range of floating-point
image types (OpenEXR, TIFF, DPX). Values should range from 0 to 1; if not, use this
control to increase or decrease the apparent shot exposure by this many f-stops
as it is read in. Different from the image preprocessor exposure adjust, because
this affects the display and tracking but not images written back to disk from
the image preprocessor.
HiRez: For your supervised trackers, sets the amount of image re-sampling and the
interpolation method between pixels. Larger values and fancier types will give
sharper images and possibly better supervised tracking data, at a cost of
somewhat slower tracking. The default Linear x 4 setting should be suitable
most of the time. The fancier types can be considered for high-quality
uncompressed source footage.
Image Preprocessing: brings up the image preprocessing (preparation) dialog,
allowing various image-level adjustments to make tracking easier (usually more
so for the human than the machine). Includes color, gamma, etc, but also
memory-saving options such as single-channel and region-of-interest processing.
This dialog also accesses SynthEyes’ image stabilization features.
Memory Status: shows the image resolution, image size in RAM in megabytes, shot
length in frames, and an estimated total amount of memory required for the
sequence compared to the total still available on the machine. Note that the last
number is only a rough current estimate that will change depending on what else
you are doing on the machine. The memory required per frame is for the first
Off/Align/Solve. Button. Controls the mode in which the spinal editing features run, if at
all. In align mode, the scene is re-aligned after a change. In solve mode, a
“refine” solve cycle is run after a change.
Finish. Button. Used to finish a refine solve cycle that was truncated to maintain
response time. Equivalent to the Go button on the solver control panel.
Lock Weight. Spinner. This weight is applied to create a soft-lock key on each
applicable channel when the camera or object is moved or rotated. When this
spinner is dragged, the solver will run in Solve mode, so you can interactively
adjust the key weight.
Drag time (sec). Spinner. (Solve mode only.) Refine cycles will automatically be
stopped after this duration, to maintain an interactive response rate. If zero, there
will be no refine cycles during drag.
Time at release. Spinner. (Solve mode only.) An additional refine operation will run at
the completion of a drag, lasting for up to this duration. If zero, there will not be a
solve cycle at the completion of dragging (ie if the drag time is long enough for a
complete solve already).
Update ZWTs, lights, etc on drag. Checkbox. If enabled, ZWTs, lights, etc will be
updated as the camera is dragged, instead of only at the end.
Message area. Text. A text area displays the results of a solve cycle, including the
number of iterations, whether it completed or was stopped, and the RMS error. In
align mode, a total figure of merit is shown reflecting the extent to which the
constraints could be satisfied—the value will be very small, unless the constraints
are contradictory.
Preferences Controls
The spinal settings are stored in the scene file. When a new scene is created, the
spinal settings are initialized from a set of preferences. These preferences are
controlled directly from this panel, not from the preferences panel.
Set Prefs. Button. Stores the current settings as the preferences to use for new scenes.
Get from Prefs. Button. Reloads the current scene’s settings from the preferences.
Restore Defaults. Button. Resets the current scene’s settings to factory default values.
They are not necessarily the same as the current preferences, nor are these
values automatically saved as the preferences: you can hit Set Prefs if that is
your intent.
Image List. List box. Shows the list of files in the image list, in frame-by-frame order. To
delete an image, click on it to select it, then hit the delete key. If you are editing
an existing file list, the selected frame will be shown in the camera view for your
inspection.
Add. Button. Launches the file-selection dialog, so you can select one or more images
to add to the list. Files are inserted after the currently-selected image, or at the
end of the list if none. Note that files selected in the operating system's file-
selection dialog are added in an order the operating system determines, not
the order you click on them. To control ordering, add one file at a time or
rearrange them. Warning: if you have already created trackers, you must always
add images at the end!
Move Up. Button. Moves the selected image up one spot. Don't do this if you've already
created trackers!
Move Down. Button. Moves the selected image down one spot. Don't do this if you've
already created trackers!
Make Keys. Checkbox. When on, keys are created and shown at the current frame. When
off, the value and status on the first frame in the shot are shown—for non-
animated fixed parameters.
Back to Key. Button. Moves back to the next previous frame with any stereo-related
key.
Forward to Key. Button. Moves forward to the next following frame with any stereo-
related key.
Dominant Camera. Drop-down list. Select which camera, left or right, should be taken
to be the dominant stationary camera; the stereo parameters will reflect the
position of the secondary camera moving relative to the dominant camera. The
Left and Right settings are for rigs where only one camera toes in to produce
vergence; the Center-Left and Center-Right settings are for rigs where both
cameras toe in equally to produce vergence. When you change dominance, you
will be asked if you wish to switch the direction of the links on the trackers (and
solver modes).
Show Actuals. Radio button. When selected, the Actual Values column shows the
actual value of the corresponding parameter on the current frame.
Show Differences. Radio button. When selected, the Actual Values column shows the
difference between the Lock-To Value and the actual value on the frame.
The following sections describe each of the parameters specifying the
relationship between the two cameras, ie one for each row in the stereo geometry
panel. Note that the parameters are all relative, ie they do not depend on the overall
position or orientation within the 3-D environment. If you move the two cameras as a
unit, they can be anywhere in 3-D without changing these parameters. Each parameter has a number of columns, which are described in the following section.
Distance. Parameter row. This is the inter-ocular distance between the nodal points of the two cameras. Note that this value is unit-less; like the rest of SynthEyes, its units are the same as the rest of the 3-D environment. So if you want the main 3-D environment to be in feet, you should enter the inter-ocular distance in feet also.
Direction. Parameter row. Degrees. Direction of the secondary camera relative to the
primary camera, in the coordinate system of the primary camera. Zero means the
secondary is directly beside the primary, a positive value moves it forward until at
90 degrees it is directly in front of the primary (though see Elevation, next).
However: in Center-Left or Center-Right mode, the zero direction changes as a result of vergence, to maintain symmetric toe-in; see Vergence, below.
Elevation. Parameter row. Degrees. Elevation of the secondary camera relative to the
primary camera, in the coordinate system of the primary camera. At an elevation
of zero degrees, it is at the same relative elevation. At an elevation of 90
degrees, it would be directly over top of the primary.
Vergence. Parameter row. Degrees. Relative in/out look direction of the two cameras. At zero, the camera axes are parallel (subject to Tilt and Roll below), and positive values toe in the secondary camera. In Center-Left or Center-Right mode, the direction to the secondary camera changes to achieve symmetric toe-in.
Tilt. Parameter Row. Degrees. Relative up/down look direction of the secondary camera relative to the primary. At zero, they are even; as the value increases, the secondary camera twists to look upwards relative to the primary camera.
Roll. Parameter Row. Degrees. Relative roll of the secondary camera relative to the
primary. At zero, they have no relative roll. Positive values twist the secondary
camera counter-clockwise, as seen from the back.
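Taken together, the Distance, Direction, and Elevation rows place the secondary camera relative to the primary. The following Python sketch illustrates that placement under the assumption that the descriptions above map onto a simple spherical layout; the function name and axis conventions are illustrative, not SynthEyes's actual internals:

```python
import math

def secondary_offset(distance, direction_deg, elevation_deg):
    """Offset of the secondary camera in the primary camera's frame,
    returned as (right, forward, up). Conventions assumed from the text:
    direction 0 = directly beside, 90 = directly in front;
    elevation 0 = level, 90 = directly overhead."""
    d = math.radians(direction_deg)
    e = math.radians(elevation_deg)
    return (distance * math.cos(e) * math.cos(d),   # right
            distance * math.cos(e) * math.sin(d),   # forward
            distance * math.sin(e))                 # up
```

With the default direction and elevation of zero, the secondary camera sits directly beside the primary at the inter-ocular distance.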
Move Left Camera Set 1f. Button. The left camera is moved to a position determined
by the right camera and stereo parameters (excluding any that are As-Is). The
seed path, solve path, or both are affected, see the checkboxes at bottom. Unlike
Live, this is a one-shot event each time you click the button.
Move Left Camera Set PB. Button. Same as Set 1f, but for the playback range.
Move Left Camera Set All. Button. Same as Set 1f, but updates the left camera for the entire shot. For example, you might want to track the right camera of a shot by itself; if you have known stereo parameters, you can use this button to instantly generate the left camera path for the entire shot.
Move Both from Center Live/Set 1f/Set PB/Set All. Same as the Left version, except that the position of the two cameras is averaged to find a center point, then each camera is offset outwards by half in each direction (including tilt, roll, etc) to form the new positions.
Move Right Camera Live/Set 1f/Set PB/Set All. Same as the Left version, except the
right camera is moved based on the left position and stereo parameters.
Write seed path. Checkbox. Controls whether or not the Move buttons affect the seed
path. You will need this on if you wish to create hard or soft camera position
locks for a later solve. You can keep it off if you wish to make temporary fixes. If
you write the solve path but not seed path, anything you do will be erased by the
next solve (except in refine mode).
Write solve path. Checkbox. Controls whether or not the Move buttons affect the solve
path. Normally should be on if the camera has already been solved; keep off if
you are generating seed paths. If both Write boxes are off, the Move buttons will
do nothing. If Write seed path is on, Write solve path is off, and the camera is
solved, the Move buttons will be updating the seed path, but you will not be able
to see anything happening—you will be seeing the solve path unless you select
View/Show seed path.
(file name). Static text field. Shows the base file name of the texture on the
selected mesh; this file is either read, if Create Texture is off, or written, if Create
Texture is on.
Set. Button. Brings up the file browser to set the file name. Important: be sure to
set Create Texture appropriately before clicking Set, so that the correct File Open or
File Save dialog can be displayed.
Clear. Button. Removes the texture file name.
Options. Button. Allows the compression options for the selected file type to be
changed.
Save. Button. All selected meshes with extracted textures are re-saved to disk.
Use after painting in the alpha channel, for example.
Create Texture. Checkbox. When set, this mesh will have a texture computed
for it on demand (via Run, Run All, or after a solve), which will then be written to the
designated file. When the checkbox is clear, the specified texture will just be shown in
the viewport.
Enable. Stoplight button. Animated control over which frames are used in the texture calculation, ie to avoid any with explosions, objects passing in front, etc. Right-click, shift-right-click, and control-right-click delete a key, truncate, or delete all keys, respectively.
Orientation Control. Drop-down. Shows as "None" here. Allows a texture on
disk to be oriented differently from the SynthEyes default, for more convenient
interaction with other programs.
XRes. Editable drop-down. Shown here as 1024. Sets the X (horizontal) resolution of the created texture. Note that you can type in any value; it is not limited to any particular maximum size.
YRes. Editable drop-down. Shown here as 512. Sets the Y (vertical) resolution of the created texture. Note that you can type in any value; it is not limited to any particular maximum size.
Channel Depth. Drop-down. Shown here as 8-bit. Selects the pixel bit depth of the created texture on disk, ie 8-bit, 16-bit, half-float, or float. Note that textures are always computed as floats.
Filter type. Drop-down. Shown here as 2-Lanczos. Selects the interpolation filter type for texture extraction.
Lit texture. Checkbox. When checked, the textured mesh will be lighted in the
perspective view. Unchecked, it will not. Lighting is desirable for meshes with painted
textures such as a soda can, but undesirable when the texture is generated from the
pixels of the shot's image: in that case you want the pixels used exactly as is. Has no
effect on untextured objects.
Run. Button. Runs the texture extraction process for (only) the selected
texture/mesh right now.
Run All. Button. Runs all runnable texture extractions immediately.
Run all after solve. Checkbox. When set, all texture extractions will be re-run
automatically after each solve cycle, including refines.
Show only texture alpha. Checkbox. When on, the texture's alpha channel will
be displayed as the texture, instead of the RGB texture. This can make alpha-painting
easier.
Hide mesh selection. Checkbox. When on, selected meshes are drawn
normally, without the red highlighting. This makes painting and texture panel
manipulations easier to see, though it can be confusing, so this option turns off
automatically when the Texture Control Panel is closed.
Extraction Mode. Dropdown. Best pixel: the highest-weighted pixel is produced,
ie considering tilt and distance. Average: the (weighted) average pixel intensity is
produced. Average w/alpha: the extracted texture includes the (weighted) average pixel
plus an alpha channel reflecting its degree of consistency (very repeatable pixels will be
opaque, and very variable pixels transparent). Alpha creation is subject to the additional
controls in the Alpha Creation Controls section below.
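The Best pixel and Average selection rules can be sketched as follows, assuming each candidate sample for a texel carries a combined tilt/distance weight (the data layout here is hypothetical; it only illustrates the stated rules):

```python
def extract_texel(samples, mode):
    """samples: list of (weight, value) pairs for one texel, one per
    contributing frame. Returns the extracted value for that texel."""
    if mode == "Best pixel":
        # the highest-weighted sample wins
        return max(samples, key=lambda s: s[0])[1]
    if mode == "Average":
        # weighted average over all contributing samples
        total = sum(w for w, _ in samples)
        return sum(w * v for w, v in samples) / total
    raise ValueError("unsupported mode: " + mode)
```

Average w/alpha would additionally record the per-texel variability as an alpha channel, as described under Alpha Creation Controls.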
Tilt Limit. Spinner. Texture extraction suffers from reduced accuracy as triangles are seen edge-on, as the pixels squish together. This control limits how edge-on triangles used for texture extraction may be: zero is no limit at all (all triangles are used); one means only perfectly camera-facing triangles are used. Increase this value when the camera moves extensively around a cylinder or sphere; be sure to increase the segment count, since the control applies on a triangle-by-triangle basis. Literal example: multiplying the default 0.1 by 90 degrees says triangles within 9 degrees of edge-on are rejected, ie only triangles within 81 degrees of head-on are used.
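That literal example amounts to the following acceptance test, a minimal sketch in which the angle measure is an assumption based on the linear scaling by 90 degrees described above:

```python
def triangle_accepted(angle_from_head_on_deg, tilt_limit):
    """True if a triangle may contribute to texture extraction.
    angle_from_head_on_deg: 0 = perfectly camera-facing, 90 = edge-on."""
    degrees_from_edge_on = 90.0 - angle_from_head_on_deg
    return degrees_from_edge_on >= tilt_limit * 90.0

# Default limit 0.1: anything within 9 degrees of edge-on is rejected.
```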
Blocking control. Drop-down. Selects whether this mesh is considered opaque
for the texture extraction operations of other meshes, ie if this mesh is between the
camera and a mesh whose texture is being extracted, the blocked pixels will not be
used, ie as this mesh passes in front. This control can be set regardless of whether this
mesh is having its own texture extracted. Note that each mesh that is blocking imposes
a non-trivial performance penalty during extraction.
Blocked by Garbage Splines. Checkbox. When set, any garbage splines will be
masked out of the texture extraction, ie, you can set up a garbage spline around a
person moving in front of a wall, for example, so that the person will not affect the
extracted wall texture.
Alpha Creation Controls
Low. Spinner. Sets the (smaller) RMS level that corresponds to an opaque pixel.
High. Spinner. Sets the (larger) RMS level that corresponds to a transparent
pixel.
Sharpness. Spinner. Gamma-like control that affects what happens to the alpha
for RMS levels between the low and high limits.
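Taken together, Low, High, and Sharpness define a mapping from per-pixel RMS variability to alpha. A plausible sketch follows; the exact curve between the limits is an assumption, since the text only calls it gamma-like:

```python
def alpha_from_rms(rms, low, high, sharpness):
    """Map per-pixel RMS variability to alpha: opaque (1.0) at or below
    Low, transparent (0.0) at or above High, with an assumed gamma-like
    Sharpness curve across the transition band."""
    if rms <= low:
        return 1.0
    if rms >= high:
        return 0.0
    t = (rms - low) / (high - low)   # 0..1 across the transition band
    return (1.0 - t) ** sharpness    # gamma-like falloff
```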
Shadow Maker. Button. Brings up the Shadow Map Maker script. From a light
and mesh, it creates a texture map that is the shadow cast on the mesh by the light,
from all shadow-casting meshes in the scene. The UV texture coordinates of the mesh
are also set (to coordinates "as seen by" the light). To save the texture, turn Create
Texture on (even though it is already created), then Set (filename), then Save, and
finally turn Create Texture back off (so a texture extraction won't be run later).
Tidy Up Scripts
This dialog appears at startup if any problematic scripts are detected, or in
response to the File/Tidy Up Scripts menu item. It focuses on detecting and deleting
older misplaced copies of scripts that may interfere with the new current versions. For
more general information on this process, see Script Tidy Up.
Note: "Script" should be taken loosely: it includes any file in the script folders.
The top list area (showing geomshrepl) contains a description of problems in the
system script folder, while the bottom list area (containing filmbox.szl) shows problems
in the user script folder. You can open either of those two folders in your file explorer (Finder on macOS) with the respective buttons above each list area.
System Script Files. Opens the system script folder, ie Script/System Script
Folder menu item.
System Script List (geomshrepl etc above). You can double-click a file to get
more information about it, or select it then hit delete to delete the individual file. Script
files that you, the user, have added, that aren't part of the distribution, and that don't conflict with anything, are listed only if the "Include novel (user-supplied) files" checkbox is checked at the bottom of the panel.
Copy Novel, Copy Mis-placed, Copy All (top set of buttons). Each button
copies the list of respective system scripts onto the clipboard as a textual list of files,
one per line. You can use this list to create batch files to move them around, for
example.
Delete Novel, Delete Mis-placed, Delete All (top set of buttons). Each button
deletes the kind of problematic files from the system folder. (You'll be asked to confirm.)
User Script Files. Opens the user script folder, ie the Script/User Script Folder
menu item.
User Script List (filmbox etc above). You can double-click a file to get more
information about it, or select it then hit delete to delete the individual file. Script files that you, the user, have added, that aren't part of the distribution, and that don't conflict with anything, are listed only if the "Include novel (user-supplied) files" checkbox is checked at the bottom of the panel.
Copy Novel, Copy Mis-placed, Copy Overrides, Copy All (bottom set of
buttons). Each button copies the list of respective user scripts onto the clipboard as a
textual list of files, one per line. You can use this list to create batch files to move them
around, for example.
Delete Novel, Delete Mis-placed, Delete Overrides, Delete All (bottom set of
buttons). Each button deletes the kind of problematic files from the user script folder.
(You'll be asked to confirm.)
Include novel (user-supplied) files in both system and user areas.
Checkbox. If checked, the system and user file areas will include files that aren't part of
SynthEyes, ie that you have added. Usually such files should only be in the user area!
Re-Scan. Button. Click this to force SynthEyes to rescan the system and user
script areas, ie after you've made some changes using your file explorer.
Proceed As-Is. Button. Click this or close the dialog to start working in
SynthEyes. Ideally, the system and user file list areas should be empty, ie with no
conflicts. (Novel user-supplied files aren't an issue.) Any mis-placed files will be
blocked and ignored in SynthEyes, to prevent confusion.
ViewShift Dialog
This floating panel is the user interface for SynthEyes's ViewShift system, which helps with 3D object removals and combining split takes. You can open as many of these panels at once as you like, initially via Shot/ViewShift or via the Details button on the Phase panel if you have previously created a ViewShift phase. At any point a
panel is displaying one of the ViewShift phases, which can be seen in the Phase view. It
displays the currently-selected phase, or sticks on a certain one if you've clicked the
Lock/unlock to this phase button. If no phase is selected, or the sticky phase
has been deleted, the controls are grayed out.
A condensed summary version of this panel appears on the Phase panel; use
the Details button to open the full version here.
NOTE: ViewShift is available to the Intro version via Shot/ViewShift, but not
through the Phase view, which is not available in the Intro Version. The Intro
version has a few additional controls to be able to manage multiple ViewShifts.
They are described below but not shown in the image capture.
View cam(obj). Drop-down. Selects the viewing camera of the ViewShift engine;
generated images match the format and viewpoint of this camera. Can also be
any "moving object" attached to the desired camera; in this case only splines are
used that match this particular moving object, though behavior is otherwise as if
the camera had been selected.
Output to: (unpost.mp4 here). Display only. This shows the base filename of the output
file of this ViewShift phase. It can be a movie file for preview results, but more
typically output is an image sequence with alpha data. In that case the filename
here must be the first file name for the sequence including any leading zeroes.
allowed to stray past them to use possible fill-in areas in some compositors. If
clipping is turned off, you must be sure to use a floating-point image format.
Interpolation filter. Drop-down. Selects the interpolation filter to use when pulling
pixels from the source image. Same as in the image preprocessor.
Edge Antialiasing. Drop-down. Select an amount of antialiasing to use for the alpha
channel to get non-jaggy edges on the insert. This antialiasing is subject to
further refinement due to the Pull in and Push out settings.
Lock/unlock to this phase. Button. Once clicked, a panel is locked to the current
phase, even if a different phase is subsequently selected. You can use this to
open several panels, each examining a different ViewShift phase. You can see
the lock status on the panel's title bar. If locked, clicking this button again unlocks
the panel.
Duplicate phase. Button. Creates a new duplicate phase with the same settings. The
new phase is selected for editing even if the panel is locked (the lock is shifted
to the new phase). Be sure to change the output file!
Source cam. Drop-down. This selects the source camera, from which camera/shot
pixels will be pulled. This can be the same as the viewing camera (for time
shifting), or a different one (for combining split takes). Can also be any "moving
object" attached to the desired camera; in this case only splines are used that
match this particular moving object, though behavior is otherwise as if the
camera had been selected.
Source mode. Drop-down. Sets the algorithm used to determine the source frame used
for each output frame.
1:1. The first frame of the output comes from the first frame of the source, the
second from the second, etc. Source frames may have to be repeated at the end,
or ignored.
Scaled. Frames are pulled from the source shot proportionately, ie the beginning
of the source for the beginning of the output, the end for the end, the exact
middle for the exact middle, etc.
Absolute. The animated Frame control spinner, below, directly specifies the
frame number.
Relative. The output frame number is offset by the animated Frame control spinner to give the source frame number. The offset can be positive or negative.
Distance. Uses the (animated) Distance spinner below to control where the
selected source camera is relative to the view camera. If the distance value is
zero, selects the frame where the source camera is physically closest in XYZ to
the output camera. If the value is positive, looks for a source frame where the
source camera is "out in front" by at least the distance. If the value is negative,
looks for a source frame where the source camera trails the viewing camera by
at least that amount. Clearly, not for tripod shots!
Same direction. Intended for tripod shots, pick the frame where the source
camera's view most overlaps the output viewing camera's view on the frame of
interest.
Disjoint leading. Pick the first source frame where the source splines and mesh
outlines don't intersect with the viewing camera splines and mesh outlines, ie
viewing pixels can be replaced without seeing anything bad (ie themselves) from
the source. This one keeps the source camera "out in front" at higher frame
numbers, unless such a frame can't be found, in which case a search is made in
earlier frames.
Disjoint following. Pick the first source frame where the source splines and
mesh outlines don't intersect with the viewing camera splines and mesh outlines,
ie viewing pixels can be replaced without seeing anything bad (ie themselves)
from the source. This one keeps the source camera "behind" at lower frame
numbers, unless such a frame can't be found, in which case a search is made in
later frames.
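The simpler source modes reduce to straightforward frame arithmetic. The sketch below covers only the stated rules for 1:1, Scaled, Absolute, and Relative; the Distance, Same direction, and Disjoint modes need camera paths and splines and are omitted. Clamping at the ends is an assumption matching the "repeated at the end, or ignored" behavior:

```python
def source_frame(mode, out_frame, out_length, src_length, frame_control=0):
    """Pick the source frame for a given output frame (frames are 0-based)."""
    if mode == "1:1":
        f = out_frame
    elif mode == "Scaled":
        # proportional: beginning->beginning, end->end, middle->middle
        f = round(out_frame * (src_length - 1) / max(out_length - 1, 1))
    elif mode == "Absolute":
        f = frame_control          # offset from true start of source media
    elif mode == "Relative":
        f = out_frame + frame_control
    else:
        raise ValueError("mode needs camera/spline data: " + mode)
    return min(max(f, 0), src_length - 1)   # repeat end frames if needed
```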
Limited source frames? Checkbox. When set, frames will only be pulled from the
given Start/End frame range. When not set, frames can be pulled from the entire
source shot.
Set Start frame. Button. Sets the Start Frame to the current frame. Right-click to go to
this frame.
Start Frame (right column). Spinner. The first frame that can be used from the source.
End Frame (right column). Spinner. The last frame that can be used from the source.
Set End frame. Button. Sets the End Frame to the current frame. Right-click to go to
this frame.
Distance. Spinner (animated). This value supplies the distance for the Distance source mode's leading and trailing behavior. It can be animated to vary the leading/trailing distance over the course of the shot, as necessary. That can be done with the panel or the graph editor.
Frame control. Spinner (animated). Supplies the frame value for Absolute and Relative
frame modes. Note: in Absolute mode, this value starts at zero, ie is an offset
from the true beginning of the source shot media, regardless of any other setting.
Important: many of the source modes output animation to this track when they
are run, providing feedback on what source frame was used for each destination
frame, especially via the graph editor. You can then switch to Absolute mode to
further refine what frames are used for subsequent runs.
Use Source Splines. Checkbox. When checked, output pixels will be produced only if
they are drawn from within splines assigned to the listed source camera or
object. Used to pull a specific portion of the source shot (typically not the same
as the viewing shot), eg to pull a moving person from a split take into a
destination shot.
Use Viewing Splines. Checkbox. When checked, output pixels will be produced only
within splines assigned to the viewing camera or object. Used to restrict the
ViewShift to a particular region, eg to follow a moving person who is being
removed.
Use Garbage Splines. Checkbox. When checked, no pixels are transferred that fall
within any garbage spline of the source or viewing shot.
Removal meshes. Text string of comma-separated mesh names. The outline of these
meshes are treated as if they are (additional) viewing splines, so that pixels will
be generated only within these outlines, ie as part of setups to remove these
meshes from the output footage.
Set removal meshes. Button. Takes the currently-selected meshes, and puts the
comma-separated list of mesh names into the Removal Meshes field. Right-click
to select these meshes.
Ignore meshes. Text string of comma-separated mesh names. The listed meshes will be ignored by this ViewShift, neither used as reflectors nor as removal meshes.
Set ignore meshes. Button. Takes the currently-selected meshes, and puts the
comma-separated list of mesh names into the Ignore Meshes field. Right-click to
select these meshes.
Compensate Illumination. Checkbox. When set, ViewShift looks for a light on the
source shot and the destination shot that have an Illumination track set up (ie via
the Set Illumination from Trackers script). If the destination and source shots are
the same, that will be the same light. The ratio of the destination illumination
divided by the source illumination is used to adjust pixels being transferred from
the source to destination, to compensate for illumination variations. Note: there
should be only a single illuminated light on the/each shot for this mode. You can
hide unneeded lights temporarily.
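The compensation itself is just the stated ratio applied to each transferred pixel. A minimal sketch, in which per-channel scaling is an assumption:

```python
def compensate_pixel(pixel_rgb, dest_illumination, src_illumination):
    """Scale a pixel pulled from the source shot so its brightness matches
    the destination shot's illumination track on the current frame."""
    ratio = dest_illumination / src_illumination
    return tuple(channel * ratio for channel in pixel_rgb)
```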
Play when done. Checkbox. When checked, output movies (not sequences) will be
played, ie by Windows Media Player or Quicktime player, after they are
produced.
Run Now. Button. Push this to cause the ViewShift operation to be performed
immediately (a progress bar will appear). The ViewShift can also be initiated by
normal Phase processing, ie after a phase-based solve, or using Run This or
Run All in the phase view window. The full phase processing permits multiple
ViewShifts to be set up and run as a group.
Previous phase. Button. Display the ViewShift immediately previous to the currently-
displayed one (in the order they were created).
Next phase. Button. Display the ViewShift immediately following the currently-displayed
one (in the order they were created).
Delete phase. Button. Delete the current ViewShift.
Timing Bar
The timing bar shows valid regions and keys for trackers, roto masks, etc,
depending on what is currently selected, and the active panel. Shows hold regions with
magenta bars at the top of the frames.
There are two possible backgrounds: one showing whether frames are cached or not, the other whether there are a reasonable number of trackers or not (for the given solving mode). Select between them via the View/Timebar background menu item.
There is an extensive and largely self-explanatory right-click menu for the Timing
bar; it lets you jump to various frames, set various time-related values, and even save
and recall multiple playback ranges.
Camera Window
There can be multiple camera views; in the viewport layout manager or the
viewport select tab at top-left of any view pane you can find Camera, Camera B,
LCamera, and RCamera. These differ only in the way they initially attach to an active
stereo pair: Camera B attaches to the other camera of the pair, LCamera to the left
camera, and RCamera to the right camera. This allows you to set up complex viewport
configurations for stereo tracking that configure themselves automatically. You can also
change the object a camera view displays from the bottom of the right-click menu. (Note that the right-click menu largely duplicates the items on the main menu that are related to the camera view.)
Note: There is a very extensive additional set of mouse operations for planar trackers; please see Planar Trackers in the Camera View in the Planar Tracking Manual.
The camera view can be floated with the Window/Floating camera menu item.
Left Mouse: Click to select and drag a tracker, or create a tracker if the Tracker panel’s
create button is lit. Shift-drag to move faster, control-drag to move a tracker
slower (more precisely). While dragging, spot or symmetry trackers will snap to
the best nearby location (within two pixels). Hold down ALT/Command to
suppress snapping. Shift-click to include or exclude a tracker from the existing
selection set. If there is only one selected tracker and it has an offset, shift-drag
drags the tracker, leaving the offset marker (final position) unchanged. Drag in
empty space to lasso 2-D trackers, control-drag to lasso both the 2-D trackers
and any 3-D points, shift-drag to lasso additional trackers. Lasso meshes instead
if "Edit/Lasso meshes instead" is turned on. Control-drag in empty space for RGB
color readout on the status line. ALT-Left-Click (Mac: Command-Left-Click) to link
to a tracker, when the Tracker 3-D panel is displayed. Click the marker for a
tracker on a different object, to switch to that object. Drag a Lens panel alignment
line. Click on nothing to clear the selection set. If a single tracker is selected, and
the Z or apostrophe/double-quote key is pressed, pushing the left mouse button
will place the tracker at the mouse location and allow it to be dragged to be fine-
tuned (called the Z-drop feature). Or, drag a tracker’s size or search region
handles.
Middle Mouse Scroll: Zoom in and out about the cursor. (See mouse preferences
discussion above.)
Right Mouse: Drag vertically to zoom. Click to bring up the right-click menu. Or, cancel
a left or middle button action in progress.
Note: The mouse operations are different for planar trackers, please see
Planar Trackers in the Tracker Mini-View in the Planar Tracking Manual.
Left Mouse: Drag the tracker location. The control key will reduce sensitivity for more
accurate placement. Spot or symmetry trackers will snap to the best nearby
location (within two pixels). Hold down ALT/Command to suppress snapping.
Also, drag a tracker offset marker. On offset trackers, shift-drag to move the
tracker, leaving the final (offset) position stationary.
Middle Scroll: Advance the current frame, tracking as you go.
Right Mouse: Add or remove a position key at the current frame. Or, cancel a drag in
progress.
3-D Viewport
Left Mouse. There's a variety of functionality here:
Click and Drag repeatedly to create an object, when the 3-D Panel’s Create
button is lit. Shift-drag to create "square" objects.
Click something to select it.
Shift-click to multi-select or unselect trackers or meshes.
Drag a lasso to select multiple trackers, shift-drag to lasso additional trackers,
control-drag to un-lasso trackers.
Lasso etc meshes when "Edit/Lasso meshes instead" is selected.
Move, rotate, or scale an object, depending on the tool selected on the 3-D Panel
(even if it has since been closed). Use the control key when rotating or scaling for
very fine adjustment. (See additional discussion below.)
ALT-Left-Click (Mac: Command-Left-Click) to link to a tracker, when the Tracker
3-D panel is displayed.
Normally rotation and scaling are around the pivot point. The pivot of
meshes or GeoH objects can be moved, see Edit Pivots on the right-click or main
menu, described on the main menu.
To rotate or scale around any position in the viewport, turn on Lock on the
3D panel, which makes sure the object doesn't un-select when you try to rotate
or scale around some position that isn't on the object!
Middle Mouse: Drag to pan the viewport. (See mouse preferences discussion above.)
Middle Scroll: Zoom the viewport.
Right Mouse: Drag vertically to zoom the viewport. Or, cancel an ongoing left or
middle-mouse operation.
Hierarchy View
The Hierarchy view allows you to see a representation of the parenting of various
objects within SynthEyes, especially cameras, moving objects, trackers, and meshes.
You can also make various changes in parenting, as permitted.
NOTE: the hierarchy shown is that of GeoH tracking, ie where moving objects
may be parented to one another. There's a simpler underlying hierarchy
where all moving objects are parented equally to their camera, not to one
another.
Note: "error" means the distance between the 2-D position of the tracker's
solved XYZ coordinates, projected into the image (ie the yellow X), and the
tracker's actual position in the image.
If no trackers are selected, the per-frame error, averaged (root-mean-square, RMS) over all trackers, is shown for each frame. The object's overall error value (from the Solver panel) is shown, labeled in upper case as "HPIX."
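In sketch form, that per-frame RMS average (with errors measured in horizontal pixels, "hpix") is:

```python
import math

def per_frame_error(tracker_errors_hpix):
    """Root-mean-square of the individual trackers' reprojection errors
    (distance between each tracker's projected solved 3-D point and its
    tracked 2-D position) on one frame."""
    n = len(tracker_errors_hpix)
    return math.sqrt(sum(e * e for e in tracker_errors_hpix) / n)
```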
Bonus! If the camera/object isn't solved yet, the error view shows the count
of how many trackers are tracked on both the frame and the following frame.
This helps you identify areas requiring more trackers. (The simple tracker
count alone does not suffice; trackers must be valid on a frame and the
following frame to let the camera motion from one frame to the next be determined).
Note: in the error curve mini-view, errors are not weighted according to the
tracker weights on the tracker panel, or the transition weighting from the
solver panel, so that you can see the actual errors.
If a single unsolved tracker is selected, its tracking figure of merit (FOM) curve is shown in blue, which is helpful during supervised tracking.
If the error curve view has nothing to display, it is not shown.
A dashed vertical line shows the current frame.
Left Mouse: Click and optionally drag to go to a different current frame within the
playback frame range (small green/red triangles at top of timebar).
Right Mouse: Click to cancel an ongoing left-drag.
Right Double-click: Resets the playback range to the entire shot.
The graph editor can be launched from the Graphs room on the main
toolbar, the Window/Graph Editor menu item, or the F7 key. It can also appear as a
viewport in a layout. The graph editor contains many buttons; they have extensive
tooltips to help you identify the function and features.
All graph editors share a single clipboard for key data, so you can move keys
from one channel to another, object to another, or one editor to another. The clipboard
can be modified from Sizzle scripts to achieve special effects.
The graph editor has two major modes, graphs and tracks, as these examples
show:
Tracks Mode:
Tracker 7 is unlocked and selected in the main user interface, and a selection of
keys from trackers 6, 7, and 9 are selected in the graph editor. While the other trackers are automatic, trackers 1, 2, 3, and 7 are now supervised and track in the forward direction (note the directionality in the key markers). The current frame is off to the left, before frame 35.
Graphs Mode:
The capture shows a graph display of Camera01. The red, green, and blue
traces are solved camera X,Y, and Z velocities, though you would have to expose the
solved velocity node if you did not know. The magenta trace with key marks every frame
is a field-of-view curve from a zoom shot. The time area is in scroll mode, the graph
shows frames 62 to 143, and we are on frame 117.
Hint. This panel does a lot of different things. If you only read this, you will
probably not understand exactly what everything does or why. Rather than go
on and on trying to describe everything exactly, keep alert for what
SynthEyes can do, and give it a try inside SynthEyes—you will understand a
lot better.
Squish Tracks. . [Only in tracks mode.] When on, all the tracks are squished
vertically to fit into the visible canvas area. This is a great way to quickly see an
overview. You can see the display with or without individual keys: it has three
states: off, with keys, and without keys. Clicking the button cycles through the
three states; right-clicking cycles in the reverse direction.
Draw Selected. . [Only in graphs mode.] When on (normally), the curves of all
selected nodes or open nodes are drawn. When off, only open nodes are drawn.
Time Slider
The graph editor time slider has two modes, controlled by the selector icon at left
in the images below.
the keys not shown (see Shared Features, above). With no keys shown, the key
manipulation modes do not make sense. Instead the following mode, modified from the
hierarchy’s name area, is in effect:
click or drag to select and flash one node,
control-click or drag to toggle the selection,
control-shift-drag to clear a range of selections,
shift-click to select an additional tracker,
shift-click an already-selected tracker to select the range of trackers from
this one to the nearest selected one.
Visibility. . Show or do not show the node (tracker or mesh) in the viewports.
Color (node). . Has the following modes for trackers; only the last applies to other
node types:
shift-click to add trackers with this color to the selection set,
control-click on the color square of an unselected tracker to select all
trackers of this color,
control-click on the color square of a selected tracker to unselect all
trackers of this color, or
double-click to set the color of the node (tracker or mesh).
Lock. . Lock or unlock the tracker.
Enable. . Tracker or spline enable or disable.
Node/Channel Name. Selected nodes have a white background. Only some types of
nodes can be selected, corresponding to what can be selected in SynthEyes’s
viewports. The channel name is underlined when it is keyed on the current frame.
In the following list, keep in mind that only one of most objects can be selected at a
time; only trackers can be multi-selected.
click or drag to select one node (updating all the other views),
control-click or drag to toggle the selection,
control-shift-drag to clear a range of selections,
shift-click to select an additional tracker,
shift-click an already-selected tracker to select the range of trackers from
this one to the nearest selected one, or
double-click to change the name of a node (if allowed).
Show Channel(s). . When on (as shown), the channel’s graph is drawn in the
canvas area. On a node, controls all the channels of the node, and the control may
have the on state shown, a partially-shown state (fainter with no middle dot), or may be
off (hollow, no green or dot).
Zoom Channel. . Controls the vertical zoom of this channel, and all others of the
same type: they are always zoomed the same to keep the values comparable.
Left-click to see all related channels (their zoom icons will light up) and
see the zero level of the channel in the canvas area, and see the range of values
displayed on the status line.
Left-drag to change the scale. It will change the offset to keep the data
visible—hold the ALT key to keep the data visible over the entire length of the
shot.
Right-click to reset the zoom and offsets to their initial values.
Double-click to auto-zoom each channel in the same group so that they
have the same scale and same offsets. Compare to double-clicking the pan
icon.
Shift-double-click auto-zooms all displayed channels, not just this group.
Alt-double-click auto-zooms over the entire length of the shot, not just the
currently-displayed portion. Can be combined with shift.
Pan Channel. . Pans all channels of this type vertically.
Left-click to see the zero level of the channel in the canvas, and to show
the minimum/maximum values displayed on the status line.
Left-drag to pan the channels vertically.
Right-click to reset the offset to zero.
Double-click to auto-zoom each channel in the same group so that they
have the same scale but different offsets. Compare to double-clicking the
zoom icon.
Shift-double-click auto-zooms all displayed channels, not just this group.
Alt-double-click auto-zooms over the entire length of the shot, not just the
currently-displayed portion. Can be combined with shift.
Color (channel). . Controls the color of this channel, as drawn in the canvas:
double-click to change the color for this exact node and channel only, for
example, only for Tracker23,
shift-double-click to change the preference for all channels of this type, or
right-click to change the color back to its preference setting.
Mouse Modes
The mouse mode buttons at the bottom center control what the mouse buttons do in the
canvas area. Common operations shared by all modes:
Middle-mouse pan,
Middle-scroll to change the current frame and pan if needed.
Shift-middle-scroll to zoom the time axis
Right-drag to zoom or pan the time axis (like the main timebar)
Right-click to bring up the canvas menu.
Double-left-click a number in the Number Zone to bring up the dialog to
change the value, if it is changeable.
Isolate. . Intended to be used when all trackers are selected and displayed. The
shared operations at top plus:
Left-click or -drag on a curve or key to isolate only that tracker, by
selecting it and unselecting all the others. Keep the left mouse button down and
roam around to quickly look at different tracker curves.
Right-clicking the isolate button at any time selects all the trackers, even
if isolate mode is not active.
This dialog is activated by double-clicking a key from the graph or tracks views, or the number
itself in the Number Zone, to change one or more keys to new values, specified
numerically.
If multiple keys are selected when the dialog is activated, the values can all be
set to the same value, or they can all be offset by the same amount, as selected by the
radio buttons at the bottom of the panel.
The value is controlled by the spinner, but also by up and down buttons for each
digit. You can add 0.1 to the value by clicking the ‘+’ button immediately to the right and
below the decimal point. The buttons add or subtract from the overall value, not from
only a specific digit.
Right-clicking an up or down button clears that digit and all lower digits to zero,
rounding the overall value.
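Assuming "clears that digit and all lower digits" means truncating those digit places to zero, the behavior can be sketched as:

```python
def clear_digits(value, place):
    """Clear the digit at 10**place and all lower digits to zero (here
    interpreted as truncation toward zero; e.g. place=-2 clears the
    hundredths digit and everything below it)."""
    step = 10.0 ** (place + 1)        # smallest surviving digit's place
    return int(value / step) * step

print(clear_digits(3.867, -2))  # ~3.8, within float precision
```

This matches a right-click on the hundredths up/down button reducing 3.867 to 3.8.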
The values update into the rest of the scene as you adjust them. When you are
finished, click OK to keep the new values, or Cancel to undo the change.
Approximate Keys Dialog
This dialog is launched by right-clicking in the canvas area of the graph editor,
when it is in graphs mode, then selecting the Approximate Keys menu item.
Approximate Keys does what the name suggests, examining the collection of
selected keys, and replacing them with a smaller number that produces a curve
approximating the original. This feature is typically used on camera or moving object
paths, and zooming field of view curves.
Fine Print: SynthEyes approximates all keys between the first-selected and
the last-selected, including any in the middle even if they are not selected. All
channels in the shared-key channel group will be approximated: if you have
selected keys on the X channel of the camera, the Y and Z channels and
rotation angles will all be approximated because they all share key positions.
You can select the maximum number of keys permitted in the approximated
curve, and the desired error. SynthEyes will keep adding keys until it reaches the
allowed number, or the error becomes less than specified, whichever comes first.
The error value is per mil (‰), meaning a part in a thousand of the nominal
range for the value, as displayed in the SynthEyes status line when you left-click the
zoom control for a channel. For example, the nominal range of field of view is 0 to 90,
so 1 per mil is 0.09 degrees. In practice the exact value should rarely matter much.
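Converting the per-mil setting to channel units is simple arithmetic; a small sketch matching the field-of-view example above:

```python
def permil_to_units(permil, nominal_range):
    """Convert a per-mil (one part in a thousand) tolerance to channel
    units, given the channel's nominal range."""
    return permil * nominal_range / 1000.0

# Field of view has a nominal range of 0..90 degrees,
# so a 1 per-mil tolerance is 0.09 degrees
print(permil_to_units(1, 90.0))  # 0.09
```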
At the bottom of the display, the error and number of keys will be listed. You can
dynamically change the number of keys and error values, and watch the curves in the
viewport and the approximation report to decide how to set the approximation controls.
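The keep-adding-keys loop described above can be sketched as greedy refinement: start with the endpoint keys, repeatedly add a key at the frame of worst error, and stop when the key budget is reached or the error drops below the tolerance. A simplified linear-interpolation version follows; SynthEyes' actual interpolation and error metric are not specified here, so treat this purely as an illustration:

```python
def approximate_keys(samples, max_keys, tol):
    """Greedy key selection sketch. samples: list of (frame, value) pairs,
    sorted by frame. Returns the indices of the chosen keys; the endpoint
    keys are always kept."""
    keys = {0, len(samples) - 1}

    def interp(f):
        # piecewise-linear evaluation through the current keys
        ks = sorted(keys)
        for a, b in zip(ks, ks[1:]):
            (fa, va), (fb, vb) = samples[a], samples[b]
            if fa <= f <= fb:
                t = 0.0 if fb == fa else (f - fa) / (fb - fa)
                return va + t * (vb - va)
        return samples[-1][1]

    while len(keys) < min(max_keys, len(samples)):
        # add a key at the sample with the worst current error
        worst_err, worst_i = max(
            (abs(v - interp(f)), i)
            for i, (f, v) in enumerate(samples) if i not in keys)
        if worst_err <= tol:
            break
        keys.add(worst_i)
    return sorted(keys)

# A V-shaped curve: the endpoint keys alone cannot fit it, but one more
# key at the corner reproduces it exactly
samples = [(f, abs(f - 5)) for f in range(11)]
print(approximate_keys(samples, 10, 0.01))  # [0, 5, 10]
```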
Tip: You can turn on the Maya-style navigation preference; when enabled,
ALT-left will Orbit, ALT-middle will Pan, ALT-right will Dolly. On Macs, you can
use either the Maya-equivalent Opt key, or use the command key. These are
available at all times within the perspective view, independent of the mode,
and you can switch between them while dragging. On Linux, you may have to
adjust your modifier key so that ALT is not taken by your window manager, as
you do for Maya itself.
Tip: For increased Maya compatibility, you can turn on the preference "In
Maya-style mode, tumble not orbit" though we believe that lowers typical
productivity.
The middle-mouse scroll wheel moves forward and back through time if the view
is locked to the camera (may result in GeoH tracking), zooms in and out in 2D if the
view is already zoomed or panned in 2D, and dollies in and out in 3D when the camera
is not locked.
The N key will switch to Navigate mode from any other mode.
If you hold down the Z key or apostrophe/double-quote when you click the left
mouse button in any mode, the perspective window will switch temporarily to the
Navigate mode, allowing you to use the left button to navigate. The original mode will be
restored when you release the mouse button.
showing the object the view becomes locked to when you re-lock the view. When
Stay Locked to Host is on and the view is not locked, no camera/object will be
checked, because it will be determined by the Active Tracker Host at the time
that it becomes locked.
View Submenu
Local coordinate handles. The handles on cameras, objects, or meshes can be
oriented along either the global coordinate axes or the axes of the item itself;
this menu check item controls which is displayed.
Path-relative handles. The handles are positioned using the camera path: slide the
camera along the path, inwards with respect to the curvature, or upwards from
the curvature. This option applies only to cameras and objects.
Stereo Display. If in a stereo shot, selects a stereo display from both cameras. See
Perspective View Settings to configure.
Treat wireframes as solid. Affects hit-testing of the mouse on wireframe meshes.
When off, the mouse must be over one of the wires to hit, whereas if on the
mouse can be anywhere inside a facet (triangle), as if the mesh were being drawn
solid. Controls the preference of the same name as well.
Affect whole path. Moves a camera or object and its trackers simultaneously. See 3-D
Control Panel.
Whole affects meshes. Controls whether or not the Whole button affects meshes as it
moves a scene. Keep it on if you have already placed the meshes; turn it off if
you are moving the scene relative to the meshes to align it.
Reset 2D zoom. Any 2-D zoom into the perspective viewport image is removed, so that
the entire field of view and image are visible.
Reset FOV. Reset the field of view to 45 degrees.
Lock position only. When on, modifies the normal Lock mode so that the perspective
view follows only the position of the camera, not the orientation and field of view
as it normally would. This permits the perspective view to be used as a 360 VR
viewer, in conjunction with the Create Spherical Screen script. Don't leave this on
for general use, as it may adversely affect operations that expect the regular
lock.
Perspective View Settings. Brings up the Scene Settings dialog, which has many
sizing controls for the perspective view: clip planes, tracker size, etc.
Show selection handles. The selection handles are shown in the viewport; you can
use this control to hide them to declutter when painting etc.
Show bones. The GeoH bone objects are shown in the viewport; you can use this
control to hide them to declutter when painting etc.
Isolate object layer. When a single GeoH object is selected and this mode is on, the
weight map of this object only is shown, rather than the composite map normally
shown.
Lock Selection. Prevents the selection from being changed when clicking in the
viewport, good for dense work areas.
Freeze on this frame. Locks this perspective view at the current frame; you can use it
to look at the scene from a certain view or frame while you work on it on a
different frame in other viewports. Handy for working with reference shots.
Keyboard commands ‘A’, ‘s’, ‘d’, ‘F’, ‘.’, ‘,’ allow you to quickly change the frozen
frame (with the default keyboard map).
Unfreeze. Releases a freeze, so the perspective view tracks the main UI time.
Show Only Locked. When the perspective view window is locked to a particular object
(and image), only the trackers for that particular object will be shown.
Show as Dots. Trackers are shown as fixed-size dots instead of 3-D triangle markers.
This reduces clutter at the expense of less ability to assess depth.
Solid Meshes. Shows meshes as solids; otherwise, wire frames. This control is
independent of the main-menu setting, which is used for the camera view. The
solid mesh mode can be set separately for each perspective window.
Outline Meshes. When solid meshes are shown, outline meshes causes the wire frame
to be overlaid on top as well, making the triangulation visible.
Cartoon Wireframe Meshes. A special wireframe mode where only the outer boundary
and any internal creases are visible, intended for helping align set and object
models.
Lit wireframes. When on, wireframes are lit in the perspective view (for this perspective
view). When off, they are the flat solid color. A preference in the Appearance
area is used as a default for new perspective views.
Occluded wireframes. When on, meshes occlude the wireframes as if they are solid,
even though only the wireframe is drawn. This is a more complex and time-
consuming type of draw. NOTE: in this mode, you can counter-intuitively click on
occluded portions of wireframe, because the hit-testing does not know that the
occluded portions have not been shown. Also note that the wireframes are
drawn a pixel wider in this mode to maintain their visibility.
Horizon Line. Shows the (infinitely far away) horizon line in the perspective window.
Sticky preference-type item.
Camera Frustum. Toggles the display of camera viewing frustums—the visible area of
the camera, which depends on field of view, aspect, and world size.
Show Object Paths Submenu:
Show no paths. Paths aren't shown for any cameras or moving objects.
Show all paths. Paths are shown for all cameras and moving objects.
Show selected object. The path is shown for the selected camera or moving object, if
any.
Show selected and children. The paths are shown for the selected camera or moving
object, plus its GeoH children.
View/Reload mesh. Reloads the selected (imported) mesh, if any, from its file on disk.
If the original file is no longer accessible, allows a new location to be selected.
Other “show” controls in the View menu are described on the main window’s view
menu.
Other Modes Submenu
Place on mesh. Slide a tracker’s seed position, an extra helper point, or a mesh around
on the surface of a mesh, or place onto a tracker, seed position, or extra point, or
the vertex of a lidar mesh. Use to place seed points on reference head meshes,
for example, or a mesh onto a tracker. With control key pushed, position snaps
only onto vertices, not anywhere on mesh. Shift-click to select a different object
to move, or use control/command-D to unselect everything, then click the
different object.
Field of View. Adjust the perspective view’s field of view (zoom). Normally you should
drive forward to get a closer view.
Lasso Trackers. Lasso-select trackers. Shift-select trackers to add to the selection, and
control-select to unselect them.
Lasso Vertices. Lasso-select vertices of the current edit mesh. Or click directly on the
vertices. Shift-click/lasso to add to the current set, control-click/lasso to remove.
Pays attention to faces, treating the object as solid. SynthEyes planes ("clipping
planes") can be positioned to prevent any vertices behind them from being
selected—useful during triangulation.
Lasso Entire Meshes. Lasso-select entire meshes, which become selected or not.
Add Card. Create a 3D plane by fitting to the trackers within the lasso region, or using
the existing trackers. See Creating the Card. Use control-drag to use pre-selected
trackers, control+shift-drag to add to that set. (Note: earlier versions used
ALT/Command-drag for these operations, not control.) The bounding box of the
swept region defines the plane size/position in any case.
Add Vertices. Add vertices to the edit mesh, placing them on the current grid. Use the
shift key to move up or down normal to the grid. If control is down, build a facet
out of this vertex and the two previously added.
Move Vertices. Move the selected vertices around parallel to the current grid, or if shift
is down, perpendicular to it. Use control to slow the movement. If clicking on a
vertex, shift will add it to the selection set, control-shift will remove it from the
selection set.
Scrub Timebar. Scrub through the shot quickly by dragging in the perspective view.
Zoom 2D. Zoom the perspective viewport in and out around the clicked-on point in the
view. Use control and shift to speed up or slow down the zooming.
Paint Alpha. Paint on the alpha channel of an extracted mesh texture to adjust its
coverage. Use the Paint toolbar overlay to control the paintbrush parameters.
Paint Loop. Paint a filled loop on the alpha channel.
Pen Z Alpha. Click repeatedly to create a zig-zag-type non-splined painted alpha path,
for example to soften a straight edge in the extracted texture.
Pen S Alpha. Click repeatedly to create a smooth splined painted alpha path, for
example to soften a circular edge in the texture.
Add Stereo 2nd. Creates a matching stereo tracker in the perspective viewport for the
selected tracker (ie just created in the camera view). See Supervised Setup in
Camera+Perspective Views.
Lightsaber Deletion. Lasso selects in the edit mesh, or if there is none, the single
selected mesh, deleting facets and vertices completely through the object (not
just on the visible surface). This is handy for carving up primitives into convenient
shapes. Shift-click a different mesh to select that mesh instead.
Mesh Operations Submenu
Assemble Mesh. (Mode) Use to quickly build triangular meshes from trackers. As you
click on each additional tracker, it is converted to a vertex and a new triangle
made, extended from the previous triangle (selected vertices). Click a selected
vertex to deselect or re-select it, to specifically control which vertices will be used
to build a triangle for the next converted tracker. Hold down control as you click a
selected vertex to deselect all, to begin working in a different area. New vertices
and triangles are added to the current edit mesh; if there is none, one is created.
Convert to Mesh. Converts the selected trackers, or all of them, and adds them to the
edit mesh as vertices, with no facets. If there is no current edit mesh, a new one
is created.
Triangulate. Adds facets to the selected vertices of the edit mesh. Position the view to
observe the collection from above, not from the side, before triangulating. Use
the clipping plane feature of Lasso Vertices to quickly isolate vertices to
triangulate.
Punch in Trackers. The selected trackers must fall inside the edit mesh, as seen from
the camera. Each triangle containing a tracker is removed, then the hole filled
with new triangles that connect to the new tracker. Allows higher-resolution
trackers to be brought into an existing lower-resolution tracker mesh.
Remove and Repair. The selected vertices are removed from the mesh, and the
resulting hole triangulated to paper it over without those vertices.
Subdivide Edges. The selected edges are bisected by new vertices, and selected
facets replaced with four new ones. This is most useful for creating
non-watertight meshes without skinny triangles. A watertight version is available
through Sizzle and Synthia, though it can contain some skinny triangles around
the edges.
Subdivide Facets. Selected facets have a new vertex added at their center, and each
facet replaced with three new ones surrounding the new vertex.
Delete selected faces. Selected facets are deleted from the edit mesh. Vertices are left
in place for later deletion or so new facets can be added.
Delete unused vertices. Deletes any vertices of the edit mesh that are not part of any
facet.
Add Many Trackers. Brings up the "Add Many Trackers" dialog from the main menu,
for convenience.
GeoHTracking Submenu
Contains items for geometric hierarchy tracking. See the Geometric Hierarchy
Tracking manual for more information. Contains an Edit Pivots item, though the one
on the main menu bar is likely more useful; see the main Edit menu description.
Texturing Submenu
Open Texture Panel. Opens the texture control panel, so you can apply an existing
texture to a mesh, or calculate a new one.
Texture Mesh from Shot. Applies the current shot to the edit or selected mesh, using
its existing UV texture coordinates. Used to apply animated textures to a mesh,
from a sequence or movie opened with Shot/Add Shot.
Texture from Shot with Alpha. Applies the current shot to the edit or selected mesh,
using its existing UV texture coordinates. Used to apply animated textures to a
mesh, from a sequence or movie opened with Shot/Add Shot. With this option,
the images' alpha channel is used during texturing. (Be sure to have Keep Alpha
turned on in the shot's Edit Shot panel.)
Texture from Shot Greenscreen. Applies the current shot to the edit or selected mesh,
using its existing UV texture coordinates. Used to apply animated textures to a
mesh, from a sequence or movie opened with Shot/Add Shot. With this option,
Green Screen processing is applied to the texture to create an alpha channel.
Green Screen processing is set up from the button on the Summary panel.
Frozen Front Projection. The current frame is frozen to form a texture map for every
other frame in the shot. The object disappears in this frame; in other frames you
can see geometric distortion as the mesh (with this image applied) is viewed from
other directions.
Rolling Front Projection. The edit mesh will have the shot applied to it as a texture,
but the image applied will always be the current one.
Remove Front Projection. Texture-mapping front projection is removed from the edit
mesh.
Assign Texture Coordinates. Assigns UV texture coordinates using camera mapping,
then crops them to use the entire range.
Crop Texture Coords. Adjust the UV coordinates of the edit mesh so that they use the
entire 0..1 range. Use this after a camera map or heavy edit of a mesh, to utilize
more of the possible texture map's pixels.
Clear Texture Coords. Any UV texture coordinates are cleared from the edit mesh,
whether they are due to front projection or importing.
Create Smooth Normals. Creates a normal vector at each vertex of the edit mesh,
averaging over the attached facets. The smooth normals are used to provide a
smooth perspective display of the mesh.
Clear Normals. The per-vertex normals are cleared, so face normals will be used
subsequently.
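As a rough illustration of what Create Smooth Normals computes, here is a minimal per-vertex normal averaging sketch. SynthEyes' exact weighting is not specified here; in this sketch the un-normalized cross products give an implicit area weighting:

```python
def smooth_normals(verts, faces):
    """Per-vertex normals: sum the (un-normalized) normals of the facets
    attached to each vertex, then normalize the result."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b):
        return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    acc = [[0.0, 0.0, 0.0] for _ in verts]
    for i, j, k in faces:
        n = cross(sub(verts[j], verts[i]), sub(verts[k], verts[i]))
        for v in (i, j, k):
            for c in range(3):
                acc[v][c] += n[c]
    result = []
    for n in acc:
        ln = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5 or 1.0
        result.append((n[0]/ln, n[1]/ln, n[2]/ln))
    return result

# Two triangles in the Z=0 plane sharing an edge: every normal is (0,0,1)
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
faces = [(0, 1, 2), (1, 3, 2)]
print(smooth_normals(verts, faces)[0])  # (0.0, 0.0, 1.0)
```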
Linking Submenu
Align via Links dialog. Brings up a dialog that uses existing links to either align a mesh
to the location of the trackers it is linked to, or align the entire world (shot) to
match the mesh. The latter option is useful when you have a mesh model and
want the matchmove to match your existing model; it is a form of Coordinate
System Alignment.
Update mesh using links. Using the links for the shot to which the perspective view is
linked, update the 3-D coordinates of each linked vertex to exactly match the
current solved 3-D location of the tracker to which the vertex is linked. Do this for
the current edit mesh, each selected mesh if there is no edit mesh, or all meshes
if there is no edit mesh or selected meshes.
Show trackers with links. Trackers are flashed that have links, either those on the edit
mesh, or on all selected meshes, or on all meshes if none are selected. On the
edit mesh (only), if there are vertices selected on the edit mesh, then only the
trackers that are linked to selected vertices are flashed, allowing you to locate the
trackers linked to specific vertices.
Add link and align mesh. Small tool to help align vertices on a mesh to trackers,
typically to align a plane mesh to some trackers. Each time you select this item,
you should have one tracker and one vertex selected. The first time you select
this item, a link will be created, and the mesh will be translated so that the vertex
matches the tracker position. The second time you select this item, a link will be
created, and the mesh will be translated, scaled, and rotated so that both links
are satisfied. The third time you select this item, the mesh will be spun around
the axis of the two prior links so that the vertex and tracker fall in the same
plane—usually they will not be able to be matched exactly—and a special kind of
link will be created.
Add links to selected. Add links in the first of three possible situations. #1: If there is
one tracker and one or more vertices selected, set up link(s). #2: if there is one or
more selected vertices, for each, create a link to any tracker that contains the
vertex, as seen in 2-D from the camera viewpoint, and update the vertex
location to match the solved tracker location. #3: for each selected tracker,
create a link to any vertex at the same 2-D location.
Remove links from selected. Deletes links to selected vertices on the edit mesh to the
current shot, if the view is locked, or to all shots, if the view is not locked. If there
is no edit mesh, then all links are deleted from any selected trackers.
Remove all links from mesh. Delete all tracker/vertex links for the edit mesh, if any, all
selected meshes, if any, or all meshes, if none. The links are deleted for the shot
to which the perspective view is linked, if any, or for all shots, if the view is not
locked to any shot.
Grid Submenu
Show Grid. Toggle. Turns grid display on and off in this perspective window. Keyboard:
‘G’ key.
Move Grid. Mouse mode. Left-dragging will slide the grid along its normal, for
example allowing you to raise or lower a floor grid.
Floor Grid, Back Grid, Left Side Grid, Ceiling Grid, Front Grid, Right Side Grid.
Puts the grid on the corresponding wall of a virtual room (stage), normally viewed
from the front. The grids are described this way so that they are not affected by
the current coordinate system selection.
To Facet/Verts/Trkrs. Aligns the grid using an edit-mesh facet, 1 to 3 edit-mesh
vertices, if a mesh is open for editing, or 1 to 3 trackers otherwise. This is a very
important operation for detail work. With 3 points selected, the grid is the plane
that contains those 3 points, centered between them, aligned to preserve the
global upwards direction. With 2 points selected, the current grid is spun to make
its “sideways” axis aligned with the two points (in Z up mode, the X axis is made
parallel to the two points). With 1 point selected, the grid is moved to put its
center at that point. Often it will be useful to use this item 3 times in a row, first
with 3 then with 2 and finally 1 vertex or tracker selected.
Return to custom grid. Use a custom grid set up earlier by To Facet/Verts/Trkrs. The
custom grid is shared between perspective windows, so you can define it in one
window, and use it in one or more others as well.
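The three-point case of To Facet/Verts/Trkrs above amounts to constructing a plane through the selected points. A minimal sketch of that construction follows; the centering-between-points and up-axis alignment conventions SynthEyes applies are omitted, so this is illustration only:

```python
def grid_from_three(p1, p2, p3):
    """Plane through three points: the center here is their centroid, and
    the normal is the cross product of two edge vectors (sign and
    up-alignment not handled)."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    center = tuple((a + b + c) / 3.0 for a, b, c in zip(p1, p2, p3))
    n = cross(sub(p2, p1), sub(p3, p1))
    ln = sum(c * c for c in n) ** 0.5
    return center, tuple(c / ln for c in n)

# Three trackers on a floor: the grid normal comes out vertical
center, normal = grid_from_three((0, 0, 0), (2, 0, 0), (0, 2, 0))
print(center, normal)  # (0.666..., 0.666..., 0.0) (0.0, 0.0, 1.0)
```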
File name/… Select the output file name to which the movie should be written.
ASF(Win), AVI(Win), BMP, Cineon, DPX, JPEG, MP4(Win), MOV (Quicktime),
OpenEXR, PNG, SGI, Targa, TIFF, or WMV(Win). Only image sequences are
available on Linux. For image sequences, the file name given is that of the first
frame; this is your chance to specify how many digits are needed and the starting
value, for example, prev1.bmp or prevu0030.exr.
Clear. Clears the file name. In some circumstances when a file is moved from one
operating system to another, your current operating system may not be able to
display its file picker at all if the existing file is from a different OS. If that
happens, use the Clear button, and then the ... picker.
Compression Settings. Set the compression settings for Quicktime and various image
formats. Since the compression settings depend on the type of file being
produced, you must set the file name first. Note that different codecs can have
their own quirks!
Show All Viewport Items. . When checked, produces a literal rendition of the
perspective view. Includes all the trackers, handles, etc, shown in the viewport as
part of the preview movie.
Show Grid. When checked, includes the main perspective-view grid, ie typically the
ground plane. This checkbox is irrelevant if the Show All Viewport Items
checkbox is checked, as in that case whether the grid is shown or not depends
on the normal grid settings in the perspective view's right-click menu.
Square-Pixel Output. When off, the preview movie will be produced at the same
resolution as the input shot. When on, the resolution will be adjusted so that the
pixel aspect ratio is 1.0, for undistorted display on computer monitors by standard
playback programs.
RGB Included. Must be on to see the normal RGB images. See below.
Alpha Included. Currently supported for EXR, PNG, SGI, TGA, TIF. Turn off right-
click/View/Show Image so the shot doesn't cover the entire image.
Depth Included. Output a monochrome depth map. See below.
Anti-aliasing and motion blur. Select None, Low, Medium, High, or Moblur Low,
Moblur Medium, Moblur High, or Moblur Max to determine the amount of
antialiasing and optionally motion blur.
The allowable output channels depend on the output format. Quicktime accepts
only RGB. Bitmap can take RGB or depth, but not both at once. OpenEXR can have
either or both.
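The Square-Pixel Output adjustment described above is a pixel-aspect rescale; a sketch follows, where which dimension is adjusted and the rounding policy are assumptions, not SynthEyes' documented behavior:

```python
def square_pixel_size(width, height, pixel_aspect):
    """Rescale the width so the output has square pixels (PAR 1.0).
    Adjusting width (not height) and rounding to nearest are assumptions."""
    return round(width * pixel_aspect), height

# e.g. a 720x480 shot with 0.9 pixel aspect plays back square at 648x480
print(square_pixel_size(720, 480, 0.9))  # (648, 480)
```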
You can tell if a phase has been solved yet by looking at the triangular tab at top
right corner of the phase. It is either red (unsolved) or green (solved). You can double-
click to solve the phase (and any phases its inputs require).
There can be multiple independent collections of phases in the phase view. One
of the phases is special: the 'root.' During a solve, the root phase is the one that is
queried to produce the new solve. The root has an extra-wide right or bottom edge, ie
Phase2 above. Any phase downstream of the root is ignored, as are any independent
phase collections. Phases that will be unused are darkened and marked with a red X, ie
Phase3. Phase1 is selected.
When a scene is solved that has no root, the phase subsystem, and all your
phases, are intentionally ignored. The scene is solved 'as-is.' This is an error only if it is
not what you had in mind, otherwise, it is a feature!
New phases are added using the right-click menu; the phases are grouped into
categories alphabetically at the bottom of the menu.
When you add a new phase, it is placed at the location you (right) clicked, and it
is wired in after the currently-selected phase, ie the selected phase's output is wired to
the new phase's input, and the selected phase's outputs are connected to the new phase
instead. If the selected phase was the root, the new phase becomes the root instead.
Note that you cannot create circular loops in the wiring.
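The insert-after-selected wiring rule can be sketched as a simple graph edit; this is an illustration of the behavior described above, not SynthEyes code, and all names are made up:

```python
# Sketch of inserting a new phase after the selected one: downstream
# phases are re-routed to read from the new phase, and the selected
# phase now feeds the new phase.
class Phase:
    def __init__(self, name):
        self.name = name
        self.inputs = []    # upstream phases
        self.outputs = []   # downstream phases

def insert_after(selected, new):
    for down in selected.outputs:
        # Downstream inputs now point at the new phase instead.
        down.inputs = [new if p is selected else p for p in down.inputs]
        new.outputs.append(down)
    selected.outputs = [new]
    new.inputs = [selected]

p1, p2, p3 = Phase("Phase1"), Phase("Phase2"), Phase("NewPhase")
p1.outputs, p2.inputs = [p2], [p1]     # Phase1 -> Phase2
insert_after(p1, p3)                   # Phase1 -> NewPhase -> Phase2
print([p.name for p in p1.outputs], [p.name for p in p2.inputs])
```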
Selected phases can be copied to the clipboard, producing textual XML. They
can be pasted back into SynthEyes, pre-wired and pre-configured. The clipboard text
can be 'seen' by other apps.
The phase configuration can be written to a file on disk as well, and later
retrieved, typically for insertion into a different shot. To support the creation of libraries
of phases, there is a Folder Preference (Phases) set up so you can easily save and
reopen phase configurations. In enterprises, you can move that folder preference to a
shared location.
Mouse Operations
Left-mouse operations:
click on a phase to select it. Shift-click to toggle its selection state, for adding or
removing phases from an existing set of selected phases.
drag in empty space to sweep out a rectangle to select phases.
drag the lower-right corner of a phase to resize it.
drag a phase to reposition it (and other also-selected phases)
drag from an output pin to an input pin to create a wire
left-double-click the solved/unsolved marker at the top right of a phase to cause a
solve to be started on it immediately.
Middle-mouse operations:
drag to pan the workspace
scroll wheel to zoom the workspace
should already be solved. If not, you will be asked whether you wish to solve it or
cancel.
Un-solve this. The right-clicked phase is marked as not solved.
Un-solve. All selected phases, or all phases if none, are marked as not solved.
If the click was NOT on a phase
Clear root. The scene is reset to have NO root. If a solve is run, the phase subsystem
will be ignored.
Run all. Runs the scene, starting at the root.
Un-solve. All phases are marked as not solved.
This tile corresponds to frame 142 of Tracker88. Clicking on the frame number
will send the SynthEyes user interface to frame 142.
The tracker name is listed at the bottom of the pane; clicking the name will select
(only) this tracker. Shift-clicking the name will un-select the tracker, removing it from the
SimulTrack display (useful when many are selected). Either way, clicking on the tracker
name will also flash the tracker in the other viewports, to make it easier to find
elsewhere.
The parentheses "()" around the tracker name indicate that it is locked; use the
right-click menu to change that. The underline below the tracker name shows the
specific color that has been assigned to this tracker, if any.
The wide rim indicates that there is a tracker position key on this frame, and the
blue color means that this frame (142) is the current active frame in the main user
interface.
Normally only frames with keys are shown in the SimulTrack view (this can be a
lot for auto-tracked trackers), so that the user-created keys can quickly be examined
and modified during supervised tracking. The space between keys can be expanded to
show intervening unkeyed frames by clicking on the gutter, or by using various right-
click menu commands.
Tip: clicking in the gutter or using a right-click expand menu operation makes
a difference only on keyed tiles.
The light and dark blue curves overlaid on the tile show the figure-of-merit (FOM)
and 3-D error curves of the tracker between this key and the next. The curves can be
enabled or disabled from the right-click menu.
Dragging the interior of a key, or dragging the offset marker, has the same effect
as it does within the tracker mini-view of the Tracker panel, setting a position or offset
key. Use control to slow down the movement of the tracker for more accurate
repositioning. Spot or symmetry trackers will snap to the best nearby location (within
two pixels). Hold down ALT/Command to suppress snapping.
Similar to, but not identical to, the tracker mini-view: shift-right-click within a tile to
add or remove a position key on that frame. Right-clicking brings up the right-click
menu, so shift-right-click is needed in SimulTrack, whereas a plain right-click is used in the
tracker mini-view, which has no right-click menu.
Clicking on the "S" at the upper-right of a tile will turn on or off the strobe setting
for that frame. When strobing is enabled, the tile will sequence rapidly between the
image of the prior key, the current frame, and the following key.
Hovering over a tile will bring up a tooltip with statistics on the tracker.
Display Modes
The overall SimulTrack window shows many tiles simultaneously, in one of three
different configurations, depending on the number of trackers selected in the SynthEyes
user interface. Or, use the "Force row mode" option to stay in row mode the entire time.
Use the middle-mouse button or the scroll bar to pan the entire SimulTrack view.
Use ALT-left (Command-Left) to scrub through the shot from inside the
SimulTrack view.
The middle-mouse scroll wheel will step through the frames of the shot. Use
shift-scroll (command-scroll on Macs) to scroll the tiles instead.
Grid Mode
The SimulTrack view is in Grid mode whenever there are more selected trackers
than can be shown in Rows mode (unless the Force row mode menu item is on). Only a
single tile is shown for each tracker, the tile for the current frame.
In the view above, notice that trackers 27R, 11R, 23R, 26R, 39R, and 44R are
disabled on the current frame. All the other trackers are valid and keyed (they are auto-
trackers and keyed on each frame).
The sort order of the trackers in the SimulTrack view is determined by the Sort
settings on the main View menu.
There are 6 pink and 4 blue trackers at the beginning of the list, grouped together
because the overall View/Group by Color option is turned on.
Row Mode
Here, the SimulTrack view is in Rows mode, with exactly 5 trackers selected.
Row mode is used when there is more than one tracker selected, but when there are
few enough that all rows can be shown without scrolling. Or, use the Force row mode
setting to stay in row mode at all times (potentially with scrolling).
The light-blue background shows where the current tile is being displayed;
panning the view can move that blue region.
Trackers 41 and 88 are valid on the current frame, which is frame 142 as
indicated by the blue rim on Tracker88 at center. Trackers 129, 159, and 198 have tiles
in the middle section but they end or begin after the current frame. These trackers have
been fine-tuned with keys every 8 frames, as can be seen.
Single Mode
Select only valid. Starting from the set of selected and displayed trackers, unselect
those that are not valid on the current frame, to reduce clutter.
Select same color. Select all other trackers on the same camera/object that have the
same color as the clicked-on tracker.
Stereo spouses. Instead of showing the selected trackers, the SimulTrack view shows
the matching tracker on the other camera of the stereo pair. Open two
SimulTrack views simultaneously, and see both sides at once.
Stereo lefts. The SimulTrack view shows the selected trackers on the left camera of the
stereo pair. (If the left camera is not the active object, the matching trackers of
selected trackers in the right camera are shown.)
Stereo rights. The SimulTrack view shows the selected trackers on the right camera of
the stereo pair. (If the right camera is not the active object, the matching trackers
of selected trackers in the left camera are shown.)
(Locked). Shows whether the clicked-on tracker is locked or not (though you can
already tell if its name is enclosed in parentheses, i.e., "(Tracker1)"), and allows
you to unlock or relock it.
Lock All. Locks all currently-selected trackers.
Unlock All. Unlocks all currently-selected trackers.
Is ZWT. Shows whether the clicked-on tracker is a zero-weighted tracker, and toggles
that status.
Exactify. Sets a key on this frame of the clicked-on tracker, exactly at its solved 3-D
location (as seen in the image).
Generate autokeys. Fills out additional keys at a spacing determined by the Key
(every) setting of the tracker panel, based on the 3-D location, computed as if the
tracker was a zero-weighted tracker (which it may be). Adjust these keys to refine
the track.
Strobing this. Show and toggle whether or not the clicked-on tracker is strobing at this
frame.
Unstrobe all. Stop all trackers from strobing, on all frames.
Expanded this. Shows and toggles whether or not the clicked-on tracker is expanded
(showing all the intervening non-keyed, tracked, frames between this key and the
next).
Close all. Closes (un-expands) all key frames on all trackers.
Force row mode. When checked, always uses the row-style display, regardless of the
number of trackers selected.
Remove menu ghosts. Some OpenGL cards do not redraw correctly after a pop-up
menu has appeared; this control forces a delayed redraw to remove the ghost.
On by default and harmless, but this lets you disable it. This setting is shared
throughout SynthEyes and saved as a preference.
Strobe Submenu
Strobe all frames. Begins strobing on all displayed frames of the clicked-on tracker.
Unstrobe all frames. Stops strobing on all frames of the clicked-on tracker.
Strobe all on this frame. Begins strobing on this frame of all selected trackers.
Unstrobe all on this frame. Stops strobing on this frame of all selected trackers.
Expand Submenu
Expand All. Expand all frames on all selected trackers.
Expand all frames. Expand all frames on the clicked-on tracker.
Close all frames. Close all frames on the clicked-on tracker.
Expand all on this frame. Expand this frame on all selected trackers.
Close all on this frame. Close this frame on all selected trackers.
texture-extraction geometry on moving objects where the exact 3-D path cannot
be determined. The object can face the camera exactly, or spin solely about
its vertical axis.
Mark Seeds as Solved. You can create seed trackers at different locations, possibly on
a mesh, then make them appear to be solved at those coordinates.
Mesh Information. Shows the number of vertices and facets of the current selected
mesh. Note that if normals or texture coordinates are present, there is always
one value for each position vertex in SynthEyes, so that information would be
redundant.
Motion Capture Camera Calibrate. See motion capture writeup.
Perspective Projection Screen Adjust. Use this script to adjust some per-shot/camera
behind-the-scenes controls for the perspective viewport's built-in projection
screen, such as the distance from the camera to the screen and the grid
resolution.
Preferences Listing. (exporter!) Creates a list of preferences available for SynthEyes
scripting, along with tooltip reference text.
Projection Screen Creator. Creates a "projection screen" mesh in the 3-D
environment, textured by the current shot. Allows you to see the shot imagery in
place, even when you are not locked to it. You can matte out the chroma key that
you have set up on the Green Screen Panel, or use an existing alpha channel.
Rename Selected Trackers. Manipulates the names of the selected trackers. They can
be renamed with a shared basic name and new numbers assigned, for example
Fave1, Fave2, Fave3. Or, new prefixes or suffixes can be inserted, e.g., Tracker1
becomes LeftTracker1 or TrackerL1 (or, less desirably, Tracker1L, if "Keep
tracker# at end" is turned off). Portions of the names can be removed as well.
The script will automatically increment the numbering so that the resulting tracker
names are unique.
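The renumbering and suffix-insertion behavior described above can be sketched roughly as follows; these helper functions are hypothetical illustrations, not the script's actual code:

```python
# Hypothetical sketch of the renaming operations described above.
def renumber(trackers, base):
    """Shared base name plus fresh unique numbers: Fave1, Fave2, ..."""
    return [f"{base}{i}" for i, _ in enumerate(trackers, start=1)]

def add_suffix(name, suffix, keep_number_at_end=True):
    """Insert a suffix, optionally keeping the tracker # at the end."""
    if keep_number_at_end:
        stem = name.rstrip("0123456789")
        return stem + suffix + name[len(stem):]   # Tracker1 -> TrackerL1
    return name + suffix                          # Tracker1 -> Tracker1L

print(renumber(["a", "b", "c"], "Fave"))   # ['Fave1', 'Fave2', 'Fave3']
print(add_suffix("Tracker1", "L"))         # 'TrackerL1'
print(add_suffix("Tracker1", "L", False))  # 'Tracker1L'
```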
Reverse Shot/Sequence. Use to avoid re-tracking when you’re suddenly told to
reverse a shot. Reverses tracker data but not other animated data.
Select By Type. Use to select all Far trackers, all unsolved trackers, etc.
Set Color by RMS Error. Re-colors trackers based on their RMS error after solving, for
easier checking. Sets up 3 different colors, good/OK/bad aka green/yellow/red.
Changes a secondary color for each tracker. You can adjust the colors and RMS
error levels. Switch back and forth between the two sets of colors using the
View/Use alternate colors menu item.
Set Plane Aspect Ratio. Adjusts the width or height of the selected 3-D plane to a
specified value, usually so it can be used to hold a texture while maintaining the
proper image and pixel aspect ratio.
Set Tracker Color from Image. Sets the primary or alternate color of the tracker based
on the average image color inside the tracker on the current frame, or on the
tracker's first valid frame. Can be made to invert the color to enhance the tracker
visibility in the camera view, or not do so, to suggest the scene in the 3-D point
cloud in the perspective view.
Shift Constraints. Use this script, especially with GPS survey data, to adjust the data
to eliminate a common offset: if the X values are X=999.95, 999.975, 1000.012,
you can subtract 1000 from everything to improve accuracy.
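The accuracy benefit is easy to see numerically; large common offsets consume mantissa bits (especially in 32-bit floats), so removing them leaves the digits that actually vary. A minimal worked example of the shift:

```python
# Worked example of the offset removal above: subtract the common
# 1000 offset so only the varying digits remain.
xs = [999.95, 999.975, 1000.012]
offset = 1000.0
shifted = [x - offset for x in xs]
print([round(v, 6) for v in shifted])   # [-0.05, -0.025, 0.012]
```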
Splice Paths. Sometimes a shot has several different pieces that you can track
individually; this script can glue them together for a final track. Open the shot
repeatedly within the same scene file (or merge files), and adjust the start and
end ranges on the time bar so that they overlap at a single frame, e.g., shot1 goes
0..100, shot2 goes 100 to 200 (NOT 101 to 200). Have each section be solved,
then run Splice Paths.
Step by tracker auto-key. Steps the user-interface frame# forward or backward by the
auto-key setting of the single selected tracker. Use this to "rough-out" supervised
trackers, stepping into the as-yet-untracked portion and typically using z-drop
and z-drop-lock.
Preferences
Preferences apply to the user interface as a whole. Some preferences that are
also found on the scene settings dialog, such as the coordinate axis setting, take effect
only as a new scene is created; subsequently the setting can be adjusted for that scene
alone with the scene settings panel. Other preferences, especially those having to do
with layout or sizing, take effect only when the SynthEyes window is resized, or
SynthEyes is restarted. Still other preferences are set directly from the dialog that uses them,
for example, the spinal editing preferences.
Resetting Your Preferences
The Edit/Reset Preferences item resets all preferences to the factory values.
You can reset most of them (the ones on the preferences panel) using the Preferences
as Script export described in the next section.
If your machine crashes at an inopportune time, or suffers from some other
problem, you might manage to corrupt the preferences file, resulting in SynthEyes
crashing at startup. To manually clear the preferences, delete the following file:
Windows: C:\Users\YourNameHere\AppData\Roaming\SynthEyes\prefs14.dat
macOS: /Users/YourNameHere/Library/Application Support/SynthEyes/prefs14.dat
Linux: ~/.SynthEyes/prefs14.dat
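If you need to script this cleanup, the per-platform paths above can be located like so; this is an illustrative sketch only (verify the path before deleting anything):

```python
# Locates the SynthEyes preferences file at the paths listed above.
# Illustration only: double-check the path before deleting.
import pathlib
import sys

def prefs_path():
    home = pathlib.Path.home()
    if sys.platform.startswith("win"):
        return home / "AppData" / "Roaming" / "SynthEyes" / "prefs14.dat"
    if sys.platform == "darwin":
        return home / "Library" / "Application Support" / "SynthEyes" / "prefs14.dat"
    return home / ".SynthEyes" / "prefs14.dat"   # Linux and others

p = prefs_path()
print(p)                      # the file to delete to reset preferences
# p.unlink(missing_ok=True)   # uncomment to actually remove it
```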
Saving Your Preferences
You can save most preferences using the Save button on the preferences panel,
which launches the exporter "File/Export/Plain Text/Preferences as script". It produces a
.szl file that you can run later on the same machine, or on a different machine to transfer
your preferences there. Use Script/Run Script and select the exported file to run it.
This exporter has a variety of options, and can produce an explanatory listing as
well (which produces a .txt file that is then opened).
SynthEyes stores some information in separate files, not in the preferences, such
as keybd14.ini (key mapping), layout14.ini (viewport layouts), safe14.ini (safe areas),
shotpreset.txt, camtool14.xml, pertool14.xml, usertoken.dat, and winlayout14.xml. There
are standard system versions of these files in the SynthEyes install; if you have
changed the listed items, a custom copy of the entire file is stored in your user area (not
a list of changes, for example).
As you save the preferences, you are given the option to include those files in the
created script, so that you can carry only the single .szl file to the new system in order to
recreate those files. Only user-customized versions of the files will be saved; the system
versions are never transferred.
When you run the exported script, you'll have the option to allow or deny each
type of information to be changed, and each individual file to be overwritten or not.
If you've customized these additional files, you may want to Save the preferences
and reset the associated files from time to time to ensure you are picking up the latest
capabilities in new SynthEyes versions, especially so that you don't end up with
orphaned viewport configuration files.
Tip: Use the Listing button on the Edit/Edit Keyboard Map panel to see what
keyboard mappings you've changed versus the standard factory settings. If
you haven't changed anything, you can skip including them in the script.
If you want to be especially tricky, you can export different sets of preferences to
different files within your user scripts area, edit the first line to give them somewhat
different names, then switch between different sets of preferences for different tasks.
This is almost certainly overkill, however.
Preferences Panel
Apologies in advance: We concede that there are too many preferences! We
have listed some of them below in alphabetic order for your reference. The left portion
of the panel is a very long list of preferences. The main dropdown (Appearance below)
lets you jump through the list to different sections; the PgUp and PgDn buttons similarly
page through them. The right-hand side of the panel has some others with special
requirements that do not fit into the main list.
You can use the search field to find related preferences; the search field
searches the preference description as well as the preference name.
Right-Side Controls
Save. Button. Clicking this brings up the File/Export/Plain Text/Preferences as Script
exporter, which writes a Sizzle script that can restore your same preferences on
this machine or another machine. Use Script/Run Script to run the exported script
at a later time. Read the description above and the tooltips of the script's controls
for more details.
Search. Button and (search text) field. Scrolls through the preferences, looking at both
the preference names and their tooltips for the search text. Matches are shown
at the top of the list. Right-click to search backwards. Matching is exact but case-
independent.
UI Colors. (Drop-down and color swatch) Change the color of many user-interface
elements. Select an element with the drop-down menu, see the current color on
the swatch, and click the swatch to bring up a dialog box that lets you change the
color.
Default Back Plate Width. Width of the camera’s active image plane, such as the film
or imager.
Back Plate Units. Shows "in" for inches or "mm" for millimeters; click it to change the
display units for this panel, and the default for the shot setup panel.
Post-solve sound [hurrah]. Button. Shows the name of the sound to be played after
long calculations.
Default Export Type. Selects the export file type to be created by default.
UI Language. Drop-down. Selects one of the available XML language translation files
from the user and system script folders. Takes effect only when SynthEyes
starts. When blank (by default), no modifications are applied.
Folder Presets. Helps workflow by letting you set up default folders for various file
types: batch input files, batch output files, images, scene files, scripts, imports,
and exported files. Select the file type to adjust, then hit the Set button. To
prevent SynthEyes from automatically going to a certain directory for a given function,
hit the Clear button.
Main List (Partial)
16 bit/channel (if available). Store all 16 bits per channel from a file, producing a more
accurate image, but consuming more storage.
After … min. Spinner. The calculation-complete sound will be played if the calculation
takes longer than this number of minutes.
Anti-alias curves. Checkbox. Enables anti-aliasing and thicker lines for curves
displayed by the graph editor. Easier to read, but turn off if it is too slow for less-
powerful OpenGL cards.
Auto-switch to quad. Controls whether SynthEyes switches automatically to the quad
viewport configuration after solving. Switching is handy for beginners but can be
cumbersome in some situations for experts, so you can turn it off.
Axis Setting. Selects the coordinate system to be used.
Bits/channel: 8/16/Half/Float. Radio buttons. Sets the default processing and storage
bit depth.
Click-on/Click-off. Checkbox. When turned on, the camera view, tracker mini-view, 3-D
viewports, perspective view, and spinners are affected as follows: clicking the left
or middle mouse button turns the mouse button on, clicking again turns it off.
Instead of dragging, you will click, move, and click. This might help reduce strain
on your hand and wrist.
Compress .sni files. When turned on, SynthEyes scene files are compressed as they
are written. Compressed files occupy about half the disk space, but take
substantially longer to write, and somewhat longer to read.
Constrain by default (else align). If enabled, constraints are applied rigorously;
otherwise, they are applied by rotating/translating/scaling the scene without
modifying individual points. This is the default for the checkbox on the solver
panel, used when a new scene is created.
Enable cursor wrap. When the cursor reaches the edge of the screen, it is wrapped
back around onto the opposite edge, allowing continuous mouse motion. Disable
if using a tablet, or under Virtual PC. Enabled by default, except under Virtual
PC.
Enhanced Tablet Response. Some tablet drivers, such as Wacom, delay sending
tablet and keyboard commands when SynthEyes is playing shots. Turning on this
checkbox slows playback slightly to cause the tablet driver to forward data more
frequently.
Export Units. Selects the units (inches, meters, etc) in the exported files. Some units
may be unavailable in some file types, and some file types may not support units
at all.
Exposure Adjustment. Increases or decreases the shot exposure by this many f-stops
as it is read in. The main window updates as you change this. Supported only for
certain image formats, such as Cineon and DPX.
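The f-stop scale this preference uses is the standard photographic one, each stop doubling or halving the light. A quick sketch of the concept (illustration only, not SynthEyes's internal code):

```python
# Standard f-stop relation: +1 stop doubles the linear light value,
# -1 stop halves it.
def apply_exposure(value, stops):
    return value * (2.0 ** stops)

print(apply_exposure(0.25, 1))    # one stop up -> 0.5
print(apply_exposure(0.25, -2))   # two stops down -> 0.0625
```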
First Frame is 1 (otherwise 0). Turn on to cause frame numbers to start at 1 on the
first frame.
Maximum frames added per pass. During solving, limiting the number of frames
added prevents new tentative frames from overwhelming an existing solution.
You can reduce this value if the track is marginal, or expand it for long, reliable
tracks.
Maya Axis Ordering. Selects the axis ordering for Maya file exports.
Match image-sequence frame #’s. See Frame Numbering (Advanced).
Minutes per auto-save. Spinner. If non-zero, SynthEyes will automatically re-save the
file every few minutes, as set by this spinner. The value defaults to one, which
means auto-save is on by default. When auto-save is on, SynthEyes will
always save, rather than asking you if you want to save or discard a changed file.
To turn auto-save off, set the value to zero.
Multi-processing. Drop-down list. Enable or disable SynthEyes's use of multiple
processors, hyper-threading, or cores on your machine. The number in
parentheses for the Enable item shows the number of processors/cores/threads
on your machine. The Single item causes the multiprocessing algorithms to be
used, but only with a single thread, mainly for testing. The “Half” option will use
half of the available cores, which can be helpful when you have another major
task running, such as a render on an 8-core machine.
No middle-mouse button. For use with 2-button mice, trackballs, or Microsoft
Intellipoint software on Mac OSX. When turned on, ALT/Command-Left pans the
viewports and ALT/Command-Right links trackers.
Nudge size. Controls the size of the number-pad nudge operations. This value is in
pixels. Note that control-nudge selects a smaller nudge size; you should not have
to make this value too small—use a convenient value then control-nudge for the
most exacting tweaks.
Place after auto-solve. When checked, the Auto-place algorithm (see the Summary
panel) will run automatically to set up a coordinate system, but only after
you use the large green AUTO button on the solver panel.
Playbar on toolbar. When checked, the playbar (rewind, end, play, frame forward etc)
is moved from the command panel to a horizontal configuration along the main
toolbar. Usable only on wider monitors.
Prefetch enable. The default setting for whether or not image prefetch is enabled.
Disable if image prefetch overloads your processor, especially if shot imagery is
located on a slow network drive.
Put export filenames on clipboard. When checked (by default), whenever SynthEyes
exports, it puts the name of the output file onto the clipboard, to make it easier to
open in the target application.
Safe #trackers. Spinner. Used to configure a user-controlled desired number of
trackers in the lifetimes panel. If the number is above this limit, the lifetime color
will be white or gray, which is best. Below this limit, but still at an acceptable value,
the background is the Safe color, by default a shade of green: the number of
trackers is safe, but not at your desired level.
Shadow Level. Spinner. The shadow is dead black; this is an alpha that ranges from 0 to 1.
At 1, the shadow has been mixed all the way to black.
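The blend that alpha describes amounts to mixing the surface color toward black; a minimal sketch:

```python
# Sketch of the Shadow Level mix described above: alpha 0 leaves the
# surface color untouched, alpha 1 mixes it fully to dead black.
def shade(color, shadow_level):
    return tuple(c * (1.0 - shadow_level) for c in color)

print(shade((0.8, 0.6, 0.4), 0.0))   # unchanged
print(shade((0.8, 0.6, 0.4), 1.0))   # fully black
```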
Stay Alive. Spinner. Sets the number of frames the search box for a supervised tracker
is displayed after the tracker becomes lost or disabled. If the control is set to
zero, then the search box will never be removed. At larger values, the screen
may become cluttered with disabled trackers.
Start with OpenGL Camera View. When on, SynthEyes uses OpenGL rendering for
the camera view, which is faster on a Mac and when large meshes are loaded in
the scene. When off, SynthEyes uses simpler graphics that are often faster on
Windows, as long as there aren’t any complex meshes. This preference is
examined when you open SynthEyes or change scenes. You can change the
current setting from the View menu. When you change the preference, the
current setting is also changed.
Start with OpenGL 3-D Viewport. Same as for the camera view, but applies to the 3-D
viewports.
Thicker trackers. When checked, trackers will be 2 pixels wide (instead of 1) in the
camera, perspective, and 3-D views. Turned on by default for, and intended for
use with, higher-resolution displays.
Trails. The number of frames in each direction (earlier and later) shown in the camera
view for trackers and blips.
Undo Levels. The number of operations that are buffered and can be undone. If some
of the operations consume much memory (especially auto-tracking), the actual
limit may be much smaller.
Use software mesh render. This control makes a difference only for camera and 3D
viewports that are not using OpenGL. When on, 3D meshes are rendered using
a SynthEyes-specific internal software renderer. For contemporary multi-core
machines, this will be much faster than the operating system's drawing routines,
and can be faster than OpenGL. Takes effect at startup; after that, see the
Software mesh render item on the View menu.
Wider tracker-panel view. Checkbox. Selects which tracker panel layout is used. The
wider view makes it easier to see the interior contents of a tracker, especially on
high-resolution displays. The smaller view is more compact, especially for laptops.
Write .IFL files for sequences. When set, SynthEyes will write an industry- and 3ds
MAX-standard image file list (IFL) file whenever it opens an image sequence.
Subsequently it will refer to that IFL file instead of re-scanning the entire set of
images in order to open the shot. Saves time especially when the sequence is on
a network drive.
Scene Settings
The scene settings, accessed through Edit/Edit Scene Settings, apply to the
current scene (file).
The perspective-window sizing controls are found here. Normally, SynthEyes
bases the perspective-window sizes on the world size of the active camera or object.
The resulting actual value of the size will be shown in the spinner, and no "key" will be
indicated (a key is shown as a red frame around the spinner).
If you change the spinner, a key frame will be indicated (though it does not
animate). After you change a value, and the key frame marker appears, it will no longer
change with the world size. You can reset an individual control to the factory default by
right-clicking the spinner.
There are several buttons that transfer the sizing controls back and forth to the
preferences: there is no separate user interface for these controls on the Preferences
panel. If a value has not been changed, that value will be saved in the preferences, so
that when the preferences are applied (to a new scene, or recalled to the current
scene), unchanged values will be the default factory values, computed from the current
world size.
Important Note: the default sizes are dynamically computed from the current
world size. If you think you need to change the size controls here, especially tracker
size and far clip, this probably indicates you need to adjust your world size instead.
Ambient Color. Left Swatch. Ambient illumination for perspective views. Set initially
from the perspective ambient preference.
Shadow Color. Right Swatch. Color of shadows in the perspective views, if fully
blended. Set initially from the Shadow Color color preference.
Camera Size. 3-D size of the camera icon in the perspective view.
Far Clip. Far clip distance in the perspective view.
Inter-ocular. Spinner. Sets the inter-ocular distance (in the unitless numbers used in
SynthEyes). Used when the perspective view is not locked to the camera pair.
Key Mark Size. Size of the key marks on camera/object seed paths.
Light Size. Size of the light icon in the perspective view.
Load from Prefs. Loads the settings from the preferences (this is the same as what
happens when a new scene is created).
Mesh Vertex Size. Size of the vertex markers in the perspective view—in pixels, unlike
the other controls here.
Near Clip. Near clipping plane distance.
Object Size. Size of the moving-object icon in the perspective view.
Orbit Distance. The distance out in front of the camera about which the camera orbits,
on a camera rotation when no object or mesh is selected.
Reset to defaults. The perspective window settings are set to the factory defaults
(which vary with world size). The preferences are not affected.
Save to prefs. The current perspective-view settings are saved to the preferences,
where they will be used for new scenes. Note that unchanged values are flagged,
so that they continue to vary with world size in the new scene.
Stereo. Selector. Sets the desired color for each eye for anaglyph or interlaced stereo
display (as enabled by View/Stereo Display on the perspective view's right-click
menu). The Luma versions of the anaglyph display produce a black/white (gray-
scale) version of the image before colorizing it for anaglyph presentation; you
may prefer that for scrutinizing depths, though it is substantially slower to
produce. Note that the mouse is still sensitive based on the currently-active
camera/object, so you need to select the appropriate item in the display. Left-
over-right display is intended for display and preview movies, not interactive
operations, as you will not be able to click on anything in the right spot.
Tracker Size. Size of the tracker icon (triangle) in the perspective view.
Vergence Dist. Spinner. Sets the vergence distance for the stereo camera pair, when
it is not locked to any actual cameras.
The first list box shows a context (see the next section), the second a key, and
the third shows the action assigned to that key (there is a NONE entry also). The Shift,
Control, and Alt (Mac: Command) checkboxes are checked if the corresponding key
must also be down; the panel shown here indicates that a Select All operation will result from
Control-A in the “Main” context.
Because several keys can be mapped to the same action, if you want to change
Select All from Control-A to Control-T, say, you should set Control-A back to NONE;
then, when configuring Control-T, select the T, then the Control checkbox, and finally
change the action to Select All.
Time-Saving Hint: after opening any of the drop-down lists (for context, key,
or action), hit a key to move to that part of the list quickly.
The Change to button sets the current key combination to the action shown,
which is the last significant action performed before opening the keyboard manager. In
the example, it would be “Edit Scene Settings.”
Change to makes it easy to set up a key code: perform the action, open the
keyboard manager, select the desired key combination, then hit Change to. The
Change to button may not always pick up a desired action, especially if it is a button—
use the equivalent menu operation instead.
You can quickly remove the action for a key combination using the NONE button.
NOTE: Once you customize the keyboard map, your custom map will
continue to be used by future versions of SynthEyes. You might miss out on
new key mappings. You should review the key map Listing from time to time to
check for new mappings and either reset the keyboard map to the factory
settings and add yours, or add the new ones to your current settings, if you
like. You can use the Preferences as Script export to save and restore your
keyboard map file as needed.
Changes are temporary for this run of SynthEyes unless the Save button is
clicked. The Factory button resets the keyboard assignments to their factory defaults.
Listing shows the current key assignments; see the Default Key Assignments section
below.
Key Contexts
SynthEyes allows keys to have different functions in different places; they are
context-dependent. The contexts include:
The main window/menu
The camera view
Any perspective view
Any 3-D viewport
Any command panel
There is a separate context for each command panel.
Each context has its own set of applicable operations: for example, the
perspective window has its own navigation modes, whereas trackers can only be
created in the camera window. When you select a context on the keyboard manager
panel, only the available operations in that context will be listed.
Here comes the tricky part: when you hit any key, several different contexts
might apply. SynthEyes checks the different contexts in a particular order, and the first
context that provides an action for that key is the context and action that is applied. In
order, SynthEyes checks
The selected command panel context
The context of the window in which the key was struck
The main window/menu context
The context of the camera window for the active tracker host, if it is visible, even if
the cursor was not in the camera window.
This is a bit complex but should allow you to produce many useful effects. Note
that the 4th rule does have an “action at a distance” flavor that might surprise you on
occasion, though it is generally useful.
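The lookup order above can be sketched in a few lines of Python. This is an illustration only; the function and data names here are hypothetical, not SynthEyes APIs:

```python
# Sketch of the key-dispatch order described above. SynthEyes checks the
# selected command panel, the window where the key was struck, the main
# window/menu, and finally the active host's camera window; the first
# context that maps the key wins.

def dispatch_key(key, panel_ctx, window_ctx, camera_ctx, keymap):
    """Return (context, action) for the first context that maps `key`."""
    for ctx in (panel_ctx, window_ctx, "Main", camera_ctx):
        action = keymap.get((ctx, key))
        if action is not None and action != "NONE":
            return ctx, action
    return None, None

keymap = {("Main", "Ctrl+A"): "Select All",
          ("Camera", "Ctrl+A"): "Select All Trackers"}
print(dispatch_key("Ctrl+A", "Tracker", "Camera", "Camera", keymap))
# → ('Camera', 'Select All Trackers'): the window context beats Main
```

This also shows why the same key can do different things in different views, and why a mapping in the active command panel shadows one in the main context.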
You may notice that some operations appear in the main context and the
camera, viewport, or perspective contexts. This is because the operation appears on
the main menu and the corresponding right-click menu. Generally you will want the
main context.
Keys in the command-panel contexts can only be executed when that command-
panel is open. You cannot access a button on the solver panel when the tracker panel is
open, say. The solver panel’s context is not active, so the key will not even be detected;
the solver panel’s functionality is unavailable when it isn’t open, and changing settings on
hidden panels makes for tricky user interfaces (though there are some actions that do
essentially this).
Fine Print
Do not assign a function to plain Z or apostrophe/double-quote. These keys are
used as an extra click-to-place shift key in the camera view, and any Z or ’/” keyboard
operation will be performed over and over while the key is down for click-to-place.
The Reset Zoom action does three somewhat different things: with no shift key, it
resets the camera view so the image fills the view. When the shift key is depressed, it
resets the camera view so that the image and display pixels are 1:1 in the horizontal
direction, i.e., the image is “full size.” If the control key is pressed, the camera view is
centered within the area instead of being placed at top left. Consequently, you need to set up
your key assignments so that the fill operation is un-shifted, and the 1:1 operation is
shifted, etc.
The same thing applies to other buttons whose functionality depends on the
mouse button. If you shift-click a button to do something, then the function performed
will still depend on the shift setting of the keyboard accelerator key.
There may be other gotchas scattered through the possible actions; you should
be sure to verify their function in testing before trying them in your big important scene
file. You can check the undo button to verify the function performed, for example.
The “My Layout” action sets the viewport configuration to one named “My Layout”
so that you can quickly access your own favorite layout.
To add a new viewport configuration, do the following. Open the manager, and
select an existing similar configuration in the drop-down list. Hit the Duplicate button,
and give your new configuration a name.
If you created a new “Custom” layout in the main user interface by changing the
panes, and you’d like to keep that layout for future use, you can give it a name here, so
that it is not overwritten by your next “Custom” layout creation.
Tip: In the main user interface, the ‘7’ key automatically selects a layout
called “My Layout” so you can reach it quickly if you use that name.
Inside the view manager, you can resize the viewports as in the main display, by
dragging the borders (gutters). If you hold down shift while dragging a border, you
disconnect that section of the border from the other sections in the same row or column.
Try this on a quad viewport configuration and it will make sense.
If you double-click a viewport, you can change its type. You can split a viewport
into two, either horizontally or vertically, by clicking in it and then the appropriate button,
or delete a viewport. After you delete a viewport, you should usually rearrange the
remaining viewports to avoid leaving a hole in your screen.
When you are done, you can hit OK to return to the main window and use your
new configuration. It will be available whenever you re-open the same scene file.
If you wish to save a set of configurations as preferences, for use each time you create a
new SynthEyes file, reopen the Viewport Layout Manager and click the Save All button.
If you need to delete a configuration, you can do that, but you should not delete
the basic Camera, Perspective, etc. layouts.
If you would like to return a scene file to your personal preferences, or even back
to the factory defaults, click the Reset/Reload button and you can select which.
NOTE: Once you customize the viewport layouts, your custom layouts will
continue to be used by future versions of SynthEyes. You might miss out on
new layouts that use new viewport types. You might consider resetting the
viewport settings to the Factory defaults from time to time to check for new
views. You can use the Preferences as Script export to save and restore
your layout.ini file as needed. At this time, there's no tool for identifying your
specific changes. The layout.ini file is plain text, so you can use any text
editor to examine it or take pieces of your old file and insert them into the
latest factory-reset file.
The selector at top left selects the script bar being edited; its file name is shown
immediately below the script name, with the script buttons listed under that. Use
the New button to create a new script bar; you will enter a name for your script bar, then
select a file name for it within your personal scripts folder. You can also use the Save
As button to duplicate a script with a new name, use Chg. Name to change the human-
readable name (not the file name), or you can Delete a script bar, which will delete the
script bar’s file from disk (but not any of the scripts).
Each button has a short name, shown in the list, in addition to the longer full
script name and file name, both of which are shown when an individual button is
selected in the list. You can double-click a button to change the short name, use the
Move Up and Move Down buttons to change the order, or click Remove to remove a
button from the script bar (this does NOT delete the script from disk).
To add a button to a script bar, select the script name or menu command in the
selector at bottom, then click the add button. You will be able to select or adjust the
short name as the button is added.
Once you have created a script bar, or even while you are working on it, click
Launch to open the script bar.
SynthEyes saves the positions of script bars when it closes, and re-opens all open
script bars when it next starts. If you have changed monitor configurations, it is possible for
a script bar to be restored off-screen. If this should happen, click the Find button and the
script bar will appear right there.
Important: XML tag and attribute names are case-sensitive; Lens is not the
same as LENS or lens. Quotes are required for attribute values.
The Info block (like any other unrecognized block) is not read; here it is
used to record, in a standard way, details of the file’s creation by a script.
After that comes a block of data with samples of the distortion table. They must
be sorted by rin and will be spline-interpolated. The last line above says that pixels
2.128 mm from the center will be distorted to appear only 2.12 mm from the lens center.
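Applying the table amounts to interpolating a distorted radius from the (rin, rout) samples. SynthEyes spline-interpolates; the Python sketch below uses plain linear interpolation to keep it short, and the table values are hypothetical:

```python
# Illustrative sketch: look up the distorted radius for an input radius
# from a table of (rin, rout) pairs sorted by rin. SynthEyes uses spline
# interpolation; linear interpolation is used here for brevity.

def lookup(table, rin):
    """Interpolate the distorted radius for `rin` (table sorted by rin)."""
    for (r0, d0), (r1, d1) in zip(table, table[1:]):
        if r0 <= rin <= r1:
            t = (rin - r0) / (r1 - r0)
            return d0 + t * (d1 - d0)
    raise ValueError("radius outside table range")

# Hypothetical samples; the last pair matches the example in the text:
# a pixel 2.128 mm from the center appears at only 2.12 mm.
table = [(0.0, 0.0), (1.0, 0.999), (2.128, 2.12)]
print(lookup(table, 2.128))  # → 2.12
```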
This is an “absolute” file with radii measured in millimeters from the optic center:
the optional “mm” attribute on the root Lens tag makes it absolute (mm=”1”); by default it
is a “relative” file (mm=”0”).
Relative files measure radii in terms of a unit that goes from 0 at the center of the
image (when it is properly centered) vertically to 1.0 at the center of the top (or bottom)
edge of the image. Literally, that is the “V” coordinate of trackers; it is resolution- and
aspect-independent.
For both file types, there is the question of how far the table should go, i.e., what the
maximum radius value should be. For an absolute file, this is determined by the maximum
image size of the lens. For a relative file, the maximum value is determined from the
maximum aspect ratio: sqrt(max_aspect*max_aspect + 1). This value is required as an
input to relative-type lni generator scripts, but is essentially arbitrary as long as it is large
enough.
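The maximum relative radius is just the distance from the image center to a corner, in units where the center of the top edge is at radius 1.0. A quick check of the formula:

```python
import math

def max_relative_radius(max_aspect):
    """Largest relative radius a table must cover: the center-to-corner
    distance, where the top-edge center is at radius 1.0."""
    return math.sqrt(max_aspect * max_aspect + 1.0)

# For a 16:9 image, the corner is about twice as far from the center
# as the top edge is.
print(round(max_relative_radius(16 / 9), 4))  # → 2.0397
```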
With some distortion profiles, if large-enough radii are fed in, the radius will bend
back on itself, so that increasing the radius decreases the distorted radius. When this
happens, SynthEyes ignores the superfluous remainder of the file without error.
Two other tags can appear in the file at the top level with Info and DATA: a tag
BPW with value 8.1 would say that the recommended nominal back-plate width is 8.1
mm, and a tag of FLEN with value 10.5 would say that the nominal focal length is 10.5
mm. These values are presented for display only on the image processor’s Lens tab for
the user’s convenience, if the table was generated for a specific camcorder model.
<window
name="toptab|toolbar|unredo|support|selector|panel|playbar|timebar|content|status"
align="fill|left|right|center"
valign="fill|top|bottom|center"
commonmargins
/>
Window fills the current region with the specified kind of SynthEyes element.
<split
side="left|right|top|bottom"
pixels="(pixels)"
frac="(0-100%)"
widthof="(window name)"
heightof="(window name)"
adder="(pixels)"
gutter="(pixels)"
swap="yes|no"
rightpan="yes|no"
commonmargins
>
Split is followed by two child nodes, a left or top, then a right or bottom. The attributes of
the split specify how the overall region is divided into two subregions, which are then
filled with the two child nodes.
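As a hypothetical illustration, built only from the attributes listed above (the particular layout is invented, not a factory layout): carve a 300-pixel command-panel strip off the right side of the region, with the first child filling the left/top subregion and the second the right/bottom one, as the ordering rule requires.

```xml
<split side="right" pixels="300" gutter="4">
  <window name="content" align="fill" valign="fill"/>
  <window name="panel" align="fill" valign="top"/>
</split>
```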
<ifelse
layoutcriteria
commonmargins
>
Ifelse is followed by two child nodes: one that is used if the criteria are satisfied, and one
that is used if they are not. The corresponding node is used to fill the entire region handed to Ifelse.
<empty/>
The current region is left empty.
layoutcriteria:
widerthan="(window name)"
higherthan="(window name)"
by="(margin in pixels)"
minwidth="(minimum width)" --- width of the current region
minheight="(minimum height)"
panel="one|none"
playbar="yes|no" --- whether a playbar is desired
timebar="yes|no" --- whether a time bar is desired
unredo="yes|no" --- whether an undo/redo/save bar is desired
pref="rightpan|toptime" --- whether preferences call for the panel on
the right, or time bar at the top, respectively
room="(room name)"
commonmargins:
margin="(pixels)" --- these decrease the incoming region
top_margin="(pixels)"
bottom_margin="(pixels)"
left_margin="(pixels)"
right_margin="(pixels)"
tag="(tag name)"
hide="(window name)"
hide-tags:
<hide name="(window name)"/>
Acknowledgements
SynthEyes is based in part on various open- and closed- source libraries that
facilitate communication between different applications. All of the contributors’ efforts
are greatly appreciated.
Alembic
TM & © 2009-2015 Lucasfilm Entertainment Company Ltd. or Lucasfilm Ltd. All
rights reserved.
Industrial Light & Magic, ILM and the Bulb and Gear design logo are all
registered trademarks or service marks of Lucasfilm Ltd.
© 2009-2015 Sony Pictures Imageworks Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list
of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution.
* Neither the name of Industrial Light & Magic nor the names of its contributors
may be used to endorse or promote products derived from this software without specific
prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
ALEMBIC ATTACHMENT A - REQUIRED NOTICES FOR DISTRIBUTION
The Alembic Software is distributed along with certain third party components
licensed under various open source software licenses ("Open Source Components"). In
addition to the warranty disclaimers contained in the open source licenses found below,
Industrial Light & Magic, a division of Lucasfilm Entertainment Company Ltd. ("ILM")
makes the following disclaimers regarding the Open Source Components on behalf of
itself, the copyright holders, contributors, and licensors of such Open Source
Components:
MurmurHash3
(Current versions of Murmur3 read: "MurmurHash3 was written by Austin
Appleby, and is placed in the public domain. The author hereby disclaims copyright to
this source code.")
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in the
Software without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE
AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
DNG SDK
DNG SDK 1.4 and XMP SDK Copyright (c) 1999 - 2013, Adobe Systems
Incorporated. All rights reserved. Redistribution and use in source and binary forms,
with or without modification, are permitted provided that the following conditions are
met: * Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer. * Redistributions in binary form must reproduce
the above copyright notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution. * Neither the name
of Adobe Systems Incorporated, nor the names of its contributors may be used to
endorse or promote products derived from this software without specific prior written
permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOTLIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FORA PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER ORCONTRIBUTORS
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,PROCUREMENT
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, ORPROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OFLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THISSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Easy EXIF
Copyright (c) 2010-2015 Mayank Lahiri. All rights reserved (BSD License).
Redistribution and use in source and binary forms, with or without modification, are
permitted provided that the following conditions are met: redistributions of source code
must retain the above copyright notice, this list of conditions and the following
disclaimer; and redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY
THE COPYRIGHT HOLDERS "AS IS" AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE FREEBSD PROJECT OR CONTRIBUTORS
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
FBX SDK
This software contains Autodesk® FBX® code developed by Autodesk, Inc.
Copyright 2013 Autodesk, Inc. All rights, reserved. Such code is provided “as is” and
Autodesk, Inc. disclaims any and all warranties, whether express or implied, including
without limitation the implied warranties of merchantability, fitness for a particular
purpose or non-infringement of third party rights. In no event shall Autodesk, Inc. be
liable for any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services; loss of use,
data, or profits; or business interruption) however caused and on any theory of liability,
whether in contract, strict liability, or tort (including negligence or otherwise) arising in
any way out of such code.
JPEG
This software is based in part on the work of the Independent JPEG Group,
http://www.ijg.org.
OpenEXR
Copyright (c) 2006-17, Industrial Light & Magic, a division of Lucasfilm
Entertainment Company Ltd. Portions contributed and copyright held by others as
indicated. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
Neither the name of Industrial Light & Magic nor the names of any other
contributors to this software may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
OpenEXR/DWA Patent
Additional IP Rights Grant (Patents) "DreamWorks Lossy Compression" means
the copyrightable works distributed by DreamWorks Animation as part of the OpenEXR
Project. Within the OpenEXR project, DreamWorks Animation hereby grants to you a
perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as
stated in this section) patent license to make, have made, use, import, sell products
which incorporate this IP as part of the OpenEXR standard, transfer, and otherwise run,
modify and propagate the contents of this implementation of DreamWorks Lossy
Compression within the OpenEXR standard, where such license applies only to those
patent claims, both currently owned by DreamWorks Animation and acquired in the
future, licensable by DreamWorks Animation that are necessarily infringed by this
implementation of DreamWorks Lossy Compression. This grant does not include use of
DreamWorks Lossy Compression outside of the OpenEXR standard. This grant does
not include claims that would be infringed only as a consequence of further modification
of this implementation. If you or your agent or exclusive licensee institute or order or
agree to the institution of patent litigation against any entity (including a cross-claim or
counterclaim in a lawsuit) alleging that this implementation of DreamWorks Lossy
Compression or any code incorporated within this implementation of DreamWorks
Lossy Compression constitutes direct or contributory patent infringement, or inducement
of patent infringement, then any patent rights granted to you under this License for this
implementation of DreamWorks Lossy Compression shall terminate as of the date such
litigation is filed.
PNG
Based in part on the LibPNG library, Glenn Randers-Pehrson and various
contributing authors.
RED SDK
The R3D SDK and all included materials (including header files, libraries, sample
code & documentation) are Copyright (C) 2008-2018 RED Digital Cinema. All rights
reserved. All trademarks are the property of their respective owners. This software was
developed using KAKADU software.
TIFF
Based in part on TIFF library, http://www.libtiff.org, Copyright ©1988-1997 Sam
Leffler, and Copyright ©1991-1997 Silicon Graphics, Inc.
ZLIB
Copyright (C) 1995-2013 Jean-loup Gailly and Mark Adler
This software is provided 'as-is', without any express or implied warranty. In no
event will the authors be held liable for any damages arising from the use of this
software.
Permission is granted to anyone to use this software for any purpose, including
commercial applications, and to alter it and redistribute it freely, subject to the following
restrictions:
1. The origin of this software must not be misrepresented; you must not claim
that you wrote the original software. If you use this software in a product, an
acknowledgment in the product documentation would be appreciated but is not
required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.