If the image sequence you are tracking has multiple moving objects, you can perform object tracking to track these objects relative to the same camera. Since you perform camera tracking first, you can use the camera data generated from the camera tracking analysis. For example, you can perform a camera tracking analysis on the complete scene, then use masks or mattes to perform multiple object tracking passes focusing on various moving objects in the scene. Each result can be converted to separate point clouds or axes, but all results conform to the 3D camera synced to the original camera tracking.
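As a rough conceptual sketch of that workflow, you can picture each object pass as the same set of 2D feature tracks filtered by that object's matte before it is solved against the shared camera. The Python snippet below is only an illustration under that assumption; the track positions, matte rasters, and helper name are hypothetical and not part of the Flame or Smoke API.

```python
import numpy as np

def split_tracks_by_matte(tracks, mattes):
    """Group 2D feature tracks by the object matte they fall inside.

    tracks: {track_id: (x, y)} pixel positions on a reference frame.
    mattes: {object_name: 2D boolean array}, True where the object is.
    Returns {object_name: [track_id, ...]} for per-object tracking passes.
    """
    grouped = {name: [] for name in mattes}
    for track_id, (x, y) in tracks.items():
        for name, matte in mattes.items():
            if matte[int(round(y)), int(round(x))]:
                grouped[name].append(track_id)
    return grouped

# Hypothetical example: two mattes on a 10x10 frame and three tracks.
matte_a = np.zeros((10, 10), dtype=bool); matte_a[2:5, 2:5] = True
matte_b = np.zeros((10, 10), dtype=bool); matte_b[6:9, 6:9] = True
tracks = {0: (3.0, 3.0), 1: (7.0, 7.0), 2: (0.0, 0.0)}
print(split_tracks_by_matte(tracks, {"car": matte_a, "sign": matte_b}))
# {'car': [0], 'sign': [1]}
```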
To create a 3D track analysis based on object properties:
| Select | To |
| --- | --- |
| Free 3D Motion | Track an object moving independently from the camera. |
| Orbit Around Cam | Track an object rotating around the camera, or far away from the camera. |
| Auto Detect Motion | Automatically detect the motion type of the object and track accordingly. For small objects, Auto Detect may not be able to establish the proper motion. In this case, select Free 3D Motion or Orbit Around Cam. |
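Autodesk does not document how Auto Detect Motion decides between the two modes, but one way to picture the distinction is through parallax: trackers on a freely moving object shift relative to each other over time, while trackers on a very distant or camera-orbiting object move almost in lockstep. The snippet below is a rough illustrative heuristic only, not the actual Auto Detect algorithm, and all of its data and names are hypothetical.

```python
import numpy as np

def guess_motion_type(displacements, parallax_threshold=0.5):
    """Illustrative stand-in for a motion-type guess (NOT Autodesk's Auto Detect).

    displacements: (N, 2) per-tracker 2D motion vectors (pixels) between two
    frames, for trackers on the same object. Near-identical motion means
    little parallax, which is how a distant or camera-orbiting object behaves.
    """
    spread = np.linalg.norm(displacements - displacements.mean(axis=0), axis=1)
    return "Orbit Around Cam" if spread.max() < parallax_threshold else "Free 3D Motion"

# Hypothetical data: near-uniform motion vs. motion with visible parallax.
print(guess_motion_type(np.array([[5.0, 0.1], [5.1, 0.0], [4.9, 0.05]])))   # Orbit Around Cam
print(guess_motion_type(np.array([[5.0, 0.0], [8.0, 1.0], [2.0, -1.0]])))   # Free 3D Motion
```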
The GMask Option box is unavailable in Smoke unless a setup containing an Action GMask node is loaded in Action.
The tracking analysis uses an intersection of the constraints, so you may choose to hide or disconnect gmasks if you want to perform a separate object track for each one.
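To picture the intersection: only image areas that fall inside every active gmask constraint contribute trackers, which is why hiding or disconnecting the other gmasks isolates a single object. A minimal sketch, assuming the constraints are available as boolean rasters (they are not exposed this way in the application):

```python
import numpy as np

def tracking_region(gmask_rasters):
    """Intersect active gmask constraints into one tracking region.

    gmask_rasters: list of 2D boolean arrays, one per active constraint.
    Returns True only where every constraint is True.
    """
    region = np.ones_like(gmask_rasters[0], dtype=bool)
    for raster in gmask_rasters:
        region &= raster
    return region
```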
A progress indicator appears beside the Track button. You can interrupt the analysis and resume it by clicking Track again. After the tracking completes and you press Confirm, the Track button changes to Update, and the 2D tracks and 3D points appear in your image.
Use the Filter settings to delete lower-quality trackers.
To fine-tune the track analysis:
Lower-quality trackers may hinder the accuracy of the tracking analysis.
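In other words, filtering simply discards trackers whose quality falls below a threshold before you refine the solve. The quality scores and threshold below are hypothetical; in the application this is driven by the Filter settings, not code.

```python
def filter_trackers(tracker_quality, min_quality=0.7):
    """Keep only trackers whose quality meets the threshold.

    tracker_quality: {track_id: quality}, with quality between 0.0 and 1.0.
    Returns the IDs of the surviving trackers.
    """
    return [tid for tid, quality in tracker_quality.items() if quality >= min_quality]

# Hypothetical example: tracker 1 is dropped as low quality.
print(filter_trackers({0: 0.95, 1: 0.40, 2: 0.82}))  # [0, 2]
```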
After tracking has occurred, you can set the scale of the tracked object. Since you are tracking a specific object as part of an image, setting the relative scale of the object in relation to the image helps you to position objects in the reconstructed scene when you convert the 3D points into a point cloud or axes.
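One simple way to think about the scale setting is as a uniform factor applied to the reconstructed object points so their size matches the rest of the scene. The factor and points below are hypothetical, and the real operation is performed for you by the tracker.

```python
import numpy as np

def apply_relative_scale(points_3d, scale):
    """Uniformly scale reconstructed, camera-relative 3D points.

    points_3d: (N, 3) array of reconstructed object points.
    scale:     relative scale of the object with respect to the scene.
    """
    return np.asarray(points_3d, dtype=float) * scale

# Hypothetical example: halve the object's reconstructed size.
print(apply_relative_scale([[2.0, 0.0, 4.0], [0.0, 1.0, 6.0]], 0.5))
```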
Once you are satisfied with your fine-tuning changes, you can refine or update your Analyzer.
To refine or update the 3D track:
The track analysis uses the current results as a starting point and refines from there.
Click Refine again to stop the process once an acceptable pixel error value is reached. The pixel error value represents the distance between the 2D tracks and the computed 3D points as they are projected back into the image.
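The pixel error behaves like a reprojection error: each computed 3D point is projected back through the solved camera and compared with its 2D track, and the distances are averaged. A minimal sketch using a simple pinhole projection; the intrinsic matrix, points, and tracks are hypothetical.

```python
import numpy as np

def pixel_error(points_3d, tracks_2d, camera_matrix):
    """Mean pixel distance between 2D tracks and reprojected 3D points.

    points_3d:     (N, 3) computed 3D points in camera coordinates.
    tracks_2d:     (N, 2) observed 2D track positions in pixels.
    camera_matrix: (3, 3) pinhole intrinsic matrix.
    """
    projected = (camera_matrix @ points_3d.T).T       # homogeneous pixel coordinates
    projected = projected[:, :2] / projected[:, 2:3]  # perspective divide
    return float(np.linalg.norm(projected - tracks_2d, axis=1).mean())

# Hypothetical 1000 px focal length, HD-centered principal point.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.1, 0.2, 5.0], [-0.3, 0.1, 4.0]])
obs = np.array([[981.0, 579.0], [884.0, 566.0]])
print(pixel_error(pts, obs, K))  # roughly 1.4 pixels
```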
When you are satisfied with the results of the 3D object tracking analysis, you can convert the selected reconstructed points to a point locators object or actual axes in your scene. The point locators object is useful because you can easily snap objects to the locators. An image that does not deform is the best candidate for the point locators.
To create a point locators object or axes from the 3D object tracking results:
Selected points are converted to a point locators object with a parent axis. Double-click the newly created point locators object to access its menu, where you can change display settings and enable snapping. See Using the Point Locators Object.
Selected points are converted to axes with a parent axis. The axes synchronize to the results of your 3D camera tracking, and any further changes you make to the 3D track are reflected in these axes.
You can attach objects such as surfaces, 3D text, and 3D models to the new point locators or axes to help position them in 3D space.
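Conceptually, each selected point becomes a child axis whose position is that reconstructed 3D point, parented under a single axis; any surface, text, or model you attach then inherits that position. A minimal sketch of that idea with hypothetical data (this is not the Action object model):

```python
import numpy as np

def points_to_axes(points_3d):
    """Build one parent transform plus a child translation matrix per point.

    points_3d: (N, 3) reconstructed points from the object track.
    Returns (parent, children), each a 4x4 transform; geometry attached to a
    child would be positioned at the corresponding reconstructed point.
    """
    parent = np.eye(4)            # parent axis at the scene origin
    children = []
    for point in points_3d:
        child = np.eye(4)
        child[:3, 3] = point      # translation only; no rotation is recovered
        children.append(child)
    return parent, children
```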