Managing Clips

These workflows create, copy, and modify video and audio clip nodes. For the most part, Wiretap handles audio clips and video clips in the same way. The main difference is how the clip format is defined.

The following classes are important in these workflows:

  • WireTapNodeHandle
  • WireTapClipFormat
  • WireTapAudioClipFormat


About Clip Nodes and Formats

The following sections describe clip nodes and their formats.

Structure of an IFFFS Wiretap Server Clip Node

On an IFFFS Wiretap Server, a CLIP node is a container for child nodes of the following types.

  • HIRES – Represents high-resolution video media.
  • ALPHA_HIRES – Represents the alpha of the high-resolution video media.
  • LOWRES – Represents low-resolution video media. Wiretap supports the use of low-resolution proxy versions of video media to increase the speed at which video clips are transferred and displayed.
  • ALPHA_LOWRES – Represents the alpha of the low-resolution video media.
  • SLATE – Represents the lowest-resolution video media available. If a LOWRES node exists, the SLATE node is equivalent to the LOWRES node; otherwise it is equivalent to the HIRES node.
  • AUDIOSTREAM – Represents an audio clip.
  • VERSION – Represents a version, in the context of multi-version clips. There is one VERSION node per version.

For video, the parent CLIP node is normally a shortcut to the HIRES node. A video clip node has at least two child nodes: the HIRES and SLATE nodes. It may also have a LOWRES child node if the clip has proxies. The resolution of a video clip node is stored in its metadata. The proxy version of a clip is stored on the Wiretap server like the high-resolution original.

An audio clip consists of a CLIP node (with zero frames) with an AUDIOSTREAM child node.
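
As an illustration, the child nodes of a clip can be listed with the generic node-traversal calls of WireTapNodeHandle. The following is a minimal sketch, not a complete program: the module name follows the Wiretap Python samples, and the server name and clip node ID are placeholders.

Python

# Minimal sketch: list the child nodes (HIRES, SLATE, LOWRES, AUDIOSTREAM, ...)
# of a CLIP node. The server name and node ID below are placeholders.
from libwiretapPythonClientAPI import *

if not WireTapClientInit():
    raise RuntimeError("Unable to initialize the Wiretap client API.")
try:
    server = WireTapServerHandle("localhost:IFFFS")
    clip = WireTapNodeHandle(server, "/path/to/a/clip/node")  # placeholder node ID

    numChildren = WireTapInt(0)
    if not clip.getNumChildren(numChildren):
        raise RuntimeError("getNumChildren failed: %s" % clip.lastError())

    # WireTapInt is used directly in range(), as in the Wiretap Python samples.
    for i in range(0, numChildren):
        child = WireTapNodeHandle()
        clip.getChild(i, child)

        nodeType = WireTapStr()
        name = WireTapStr()
        child.getNodeTypeStr(nodeType)
        child.getDisplayName(name)
        print("%s (%s)" % (name.c_str(), nodeType.c_str()))
finally:
    WireTapClientUninit()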

Structure of a Wiretap Gateway Server Clip Node

On the Wiretap Gateway server, a CLIP node is a container for child nodes of the following types.

  • HIRES – Represents the highest-resolution or highest-quality video media available for the file type.
  • LOWRES – Represents low-resolution video media for file types, such as Red (.r3d), that support multiple qualities in the same file.
  • AUDIOSTREAM – Represents an audio clip.
  • CLIP – For particular media types, a CLIP node can act as a container of other CLIP nodes:
      • OpenEXR: Each channel is represented as a CLIP child node.
      • Red (.r3d) files: Each quality is a separate CLIP child node.

Clip Formats

Wiretap supports the same formats as the Flame Family applications.

The IFFFS Wiretap Server allows media files to be read either in their native format, with no conversion, by accessing the file paths directly, or as raw RGB by reading the frames through the server.

The WireTapClipFormat class is used to define the format of a video clip node. The WireTapAudioClipFormat class is used to define the format of an audio clip node. When instantiating a new clip node, a clip format object must be supplied as an input parameter. WireTapClipFormat also supplies a large number of constants for specifying industry-standard formats.

For more information on cached clip formats, see:

Frame IDs

Each frame in a clip node has a frame ID. Frame IDs are unique for a particular instance of a particular Wiretap server. The following classes and methods give access to the IDs of the frames on a Wiretap server (a minimal sketch follows the list):

  • WireTapFrameId.id() – Returns a string containing the persistent ID of a frame.

  • WireTapNodeHandle.getFrameId() – Returns the ID of a frame associated with a clip node. The frame is specified by its index in the set of frames associated with the clip node.
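
A minimal sketch of fetching frame IDs, assuming the Wiretap client API has been initialized and server is a WireTapServerHandle as in the previous sketch; the clip node ID is a placeholder.

Python

# Minimal sketch: print the persistent ID of each frame of a clip node.
# Assumes WireTapClientInit() has been called and "server" is a
# WireTapServerHandle; the node ID is a placeholder.
clip = WireTapNodeHandle(server, "/path/to/a/clip/node")

numFrames = WireTapInt(0)
if not clip.getNumFrames(numFrames):
    raise RuntimeError("getNumFrames failed: %s" % clip.lastError())

for i in range(0, numFrames):
    frameId = WireTapFrameId()
    if not clip.getFrameId(i, frameId):
        raise RuntimeError("getFrameId failed: %s" % clip.lastError())
    print("Frame %d: %s" % (i, frameId.id()))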

Limitations on Clip Nodes

The following limitations apply to clip nodes:

  • Like the Creative Finishing applications, Wiretap does not permit overwriting the frames of a clip. Your Wiretap client must create a new clip and write the required frames to it.

  • When creating or destroying a clip node, or setting its metadata, the library (to which the clip belongs) must not be in use or opened for reading and writing by a Creative Finishing application.

Limitations on Audio Clips

  • Multi-channel audio in a single track/stream is not supported on write. To write multi-channel audio, use a separate AUDIOSTREAM child node for each channel.

  • EDL metadata is not supported for audio clips.



Getting Frame ID Strings or the Paths to Frame Files

Sample programs (C++ / Python):

  • listFrames.C / listFrames.py – Example of fetching frame IDs from a clip node.

Command-line tools:

  • wiretap_get_frames – Fetch frame IDs from a clip node.
  • wiretap_get_num_frames – Fetch the number of frames available in a clip node.
  • wiretap_set_num_frames – Change the length of a clip node.
  • wiretap_is_clip – Validate whether a given node is a clip node.
  • wiretap_rw_frame – Read or write a frame buffer.

Getting Frames into Clips

Frames can be written or linked to a clip in three ways:

  • By writing the frames to the clip – Only cached frames can be written to a clip node. This involves the method writeFrame of WireTapNodeHandle. See Copying Clips.

  • By providing source metadata – This applies to frames that are not cached, and involves getting a source data definition from the Gateway and using it to create a source clip in the IFFFS Wiretap Server. See Soft-importing Clips.

  • By accessing frames directly on the storage device using file paths – This can be done for frames in any standard format. This technique does not actually involve the Wiretap server when writing frames. However, the Wiretap server does supply the paths to the frames. The sample program listFrames.C shows how to obtain frame paths. This technique can improve performance when reading and writing frames. See the FAQ How do I read standard-formatted frames from a network-mounted standard FS?.



Creating Clip Nodes

The method WireTapNodeHandle.createClipNode is used to create video and audio clip nodes.
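
A minimal sketch of creating an empty clip node, reusing the format of an existing clip so that no format parameters need to be guessed here; the node IDs are placeholders, and the samples below show how to build a WireTapClipFormat from scratch.

Python

# Minimal sketch: create an empty video clip node under a reel, reusing an
# existing clip's format. Assumes WireTapClientInit() has been called and
# "server" is a WireTapServerHandle; node IDs are placeholders.
existing = WireTapNodeHandle(server, "/path/to/an/existing/clip")
reel = WireTapNodeHandle(server, "/path/to/the/destination/reel")

fmt = WireTapClipFormat()
if not existing.getClipFormat(fmt):
    raise RuntimeError("getClipFormat failed: %s" % existing.lastError())

newClip = WireTapNodeHandle()
if not reel.createClipNode("my_new_clip", fmt, "CLIP", newClip):
    raise RuntimeError("createClipNode failed: %s" % reel.lastError())

# Reserve room for the frames that will then be written with writeFrame().
if not newClip.setNumFrames(10):
    raise RuntimeError("setNumFrames failed: %s" % newClip.lastError())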

Sample programs (C++ / Python):

  • createClip.C / createClip.py – Example of defining a video clip format, creating a new clip node, and writing frames to it.
  • createAudio.C / createAudio.py – Example of defining an audio clip format, creating a new clip node, and writing audio samples to it.

Command-line tools:

  • wiretap_create_clip – Create a video clip node.
  • wiretap_create_audio – Create an audio clip node.


Getting and Setting Clip Format Metadata

An instance of the WireTapClipFormat class can have metadata associated with it. The clip format metadata is used to describe the media in the clip. Its content is similar to the content of an image file header.

The IFFFS Wiretap Server expects clip node metadata to be in XML format. For detailed information on the format and content of the metadata, see Clip Format Metadata (XML).

Clip format metadata and its metadata tag can be set when constructing an instance of WireTapClipFormat or WireTapAudioClipFormat. The metadata tag must be XML.

The metadata can also be set and retrieved using the setMetaData and getMetaData methods on an instance of WireTapClipFormat or WireTapAudioClipFormat.

Setting metadata and the metadata tag is done as follows:

C++

WireTapStr metadata;
clipFormat.setMetaDataTag("XML");
clipFormat.setMetaData(metadata);

Python

metadata = WireTapStr()
clipFormat.setMetaDataTag("XML")
clipFormat.setMetaData(metadata)

where,

  • XML is a tag that specifies the format of the metadata stream.

  • metadata is a WireTapStr object that contains the XML stream. See Clip Format Metadata (XML).

Getting metadata is done as follows:

C++

clipFormat.metaData();    // this returns a string that contains the XML stream
clipFormat.metaDataTag(); // this returns "XML"

Python

clipFormat.metaData()    # this returns a string that contains the XML stream
clipFormat.metaDataTag() # this returns "XML"


Getting and Setting Metadata on a Clip Node

An instance of the WireTapNodeHandle class whose type is CLIP can have one or more metadata streams associated with it.

The metadata can be set and retrieved by calling the setMetaData and getMetaData methods on the WireTapNodeHandle object representing the clip node.

An Edit Decision List (EDL) is an example of metadata that can be set on a clip node. An EDL describes how the media in a clip are assembled. See Creating a Clip from an EDL Timeline.
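
A minimal sketch of reading and writing a node metadata stream follows. The getMetaData argument list (stream name, filter, depth, output string) is an assumption based on the usage in the Wiretap samples, so confirm it against the API reference for your version; the node ID is a placeholder.

Python

# Minimal sketch: read the XML metadata stream of a clip node and write it
# back. Assumes WireTapClientInit() has been called and "server" is a
# WireTapServerHandle; the getMetaData argument order is an assumption
# based on the Wiretap samples, and the node ID is a placeholder.
clip = WireTapNodeHandle(server, "/path/to/a/clip/node")

md = WireTapStr()
if not clip.getMetaData("XML", "", 1, md):
    raise RuntimeError("getMetaData failed: %s" % clip.lastError())
print(md.c_str())

# setMetaData takes the stream name and the new stream content.
if not clip.setMetaData("XML", md.c_str()):
    raise RuntimeError("setMetaData failed: %s" % clip.lastError())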

Command-line tools:

  • wiretap_get_metadata – Retrieve a metadata stream from a node.
  • wiretap_set_metadata – Set a metadata stream on a node.
  • wiretap_get_clip_format – Retrieve the storage format of a clip node.


Importing Clips

The IFFFS Wiretap Server allows media files in standard formats (for example, DPX or QuickTime) to be imported. Importing means referencing rather than copying media to the IFFFS Wiretap Server. Once created, the imported clip is treated like any other clip, as though it had been imported by a user from a Creative Finishing workstation.

Importing frames involves fetching a source data definition from a Wiretap Gateway server and forwarding it in a createClip call to the IFFFS Wiretap Server, as shown in the sample programs listed below.
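
A hedged sketch of one way to wire this together, assuming the source definition travels in the clip format metadata returned by the Gateway: read the format from a Gateway clip node and pass it to createClipNode on the IFFFS Wiretap Server. Host names and node IDs are placeholders; see the samples below for a complete workflow.

Python

# Hedged sketch: soft-import media by fetching its format (whose XML metadata
# is assumed to carry the source definition) from a Wiretap Gateway clip and
# creating a clip on the IFFFS Wiretap Server with it. Assumes
# WireTapClientInit() has been called; host names and node IDs are placeholders.
gateway = WireTapServerHandle("localhost:Gateway")
ifffs = WireTapServerHandle("localhost:IFFFS")

gatewayClip = WireTapNodeHandle(gateway, "/path/to/media/clip.mov")
fmt = WireTapClipFormat()
if not gatewayClip.getClipFormat(fmt):
    raise RuntimeError("getClipFormat failed: %s" % gatewayClip.lastError())

reel = WireTapNodeHandle(ifffs, "/path/to/the/destination/reel")
imported = WireTapNodeHandle()
if not reel.createClipNode("imported_clip", fmt, "CLIP", imported):
    raise RuntimeError("createClipNode failed: %s" % reel.lastError())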

Sample programs (C++ / Python):

  • importOpenClips.C / importOpenClips.py – Example of creating a CLIP node and using an Open Clip located within a directory as the XML data source.

Command-line tools:

  • wiretap_get_metadata – Retrieve a metadata stream from a node.
  • wiretap_create_node – Create a new node based on metadata.


Creating a Clip from an EDL Timeline

A timeline consists of video elements, audio elements, and transitions placed together chronologically on one or more parallel tracks. An Edit Decision List (EDL) is the standard format used for timelines. A Wiretap client can construct a new clip from frames in several existing clips based on an EDL that specifies frame IDs in those clips.

Wiretap treats an EDL as metadata associated with a clip node. EDL metadata can be retrieved and set on an instance of WireTapNodeHandle whose type is CLIP by calling the getMetaData and setMetaData methods. When setting or getting EDL metadata, DMXEDL must be specified as the metadata format tag.

For detailed information on EDL format, see Clip Node Metadata (EDL).

Sample programs (C++ / Python):

  • createTimeline.C / createTimeline.py – Example of creating a sequence from an EDL.

Setting EDL Metadata on a Clip Node (Assembling the Clip)

When you are creating a new clip based on a timeline, the call to setMetaData actually carries out the assembly of the frames in the new clip. The frames are soft-imported or linked to the new clip. The sample createTimeline.C shows how to prepare and set EDL metadata.
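
A minimal sketch of setting EDL metadata on a newly created, empty clip node; the EDL text and node IDs are placeholders, and the format of the EDL itself is covered by the createTimeline samples.

Python

# Minimal sketch: assemble a clip by setting DMXEDL metadata on an empty
# CLIP node. Assumes WireTapClientInit() has been called and "server" is a
# WireTapServerHandle; node IDs and the EDL text are placeholders.
sourceClip = WireTapNodeHandle(server, "/path/to/a/source/clip")
reel = WireTapNodeHandle(server, "/path/to/the/parent/reel")

fmt = WireTapClipFormat()
sourceClip.getClipFormat(fmt)  # reuse a source clip's format for the new clip

timeline = WireTapNodeHandle()
if not reel.createClipNode("assembled_clip", fmt, "CLIP", timeline):
    raise RuntimeError("createClipNode failed: %s" % reel.lastError())

edlText = "..."  # EDL referencing frame IDs in the source clips (placeholder)
if not timeline.setMetaData("DMXEDL", edlText):
    raise RuntimeError("setMetaData failed: %s" % timeline.lastError())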

Getting EDL Metadata on a Clip Node

When getting EDL metadata, an optional filter parameter can be used. The filter parameter specifies the resolution of the frames. It can be set to the following values (a short sketch follows the list):

  • high to fetch metadata about the high-resolution frames in the clip

  • low to fetch metadata about the low-resolution frames in the clip

  • all to fetch metadata about both high-resolution and low-resolution frames in the clip
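
A short sketch of fetching EDL metadata filtered to the high-resolution frames; as above, the getMetaData argument order is an assumption based on the Wiretap samples.

Python

# Short sketch: fetch the EDL metadata of an assembled clip, filtered to the
# high-resolution frames. Assumes "server" is a WireTapServerHandle; the
# argument order is an assumption and the node ID is a placeholder.
clip = WireTapNodeHandle(server, "/path/to/the/assembled/clip")

edl = WireTapStr()
if not clip.getMetaData("DMXEDL", "high", 1, edl):
    raise RuntimeError("getMetaData failed: %s" % clip.lastError())
print(edl.c_str())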

Limitations on Assembling Clips from Timelines

Assembling clips from timelines is limited in the following ways:

  • An EDL cannot be applied to a clip that already contains frames.

  • All source nodes and the resulting new node based on the timeline must be located in the same reel (the parent reel).

  • The tape names in the metadata of the source clip nodes must match the tape names in the EDL.

  • Only cut and dissolve timeline effects are supported.



Copying Clips

There are two methods that you can use to copy a clip.

The first method involves reading the frames of an existing clip and writing them to a new clip. The sample program readFrames.C shows how to read frames, but it does not show writing frames to a new clip.

Here are the steps involved in implementing the full workflow of copying a clip (a sketch follows the list):

  1. Identify the clip node to be copied (the source clip node).

  2. Identify the parent node of the new clip node to be created (the destination clip node).

  3. Copy the clip format of the source clip node.

  4. Create the destination clip node under the parent node using the copied clip format.

  5. Copy the frames from the source clip node to a buffer by calling readFrame on the source clip node.

  6. Copy the frames from the buffer to the destination clip node (by calling writeFrame on the destination clip node).
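
A minimal sketch of these steps; node IDs are placeholders, and frameBufferSize() is used to size one frame buffer as in the createClip and readFrames samples.

Python

# Minimal sketch of the read/write copy described above. Assumes
# WireTapClientInit() has been called and "server" is a WireTapServerHandle;
# node IDs are placeholders.
source = WireTapNodeHandle(server, "/path/to/the/source/clip")
parent = WireTapNodeHandle(server, "/path/to/the/destination/reel")

fmt = WireTapClipFormat()
if not source.getClipFormat(fmt):
    raise RuntimeError("getClipFormat failed: %s" % source.lastError())

dest = WireTapNodeHandle()
if not parent.createClipNode("copied_clip", fmt, "CLIP", dest):
    raise RuntimeError("createClipNode failed: %s" % parent.lastError())

numFrames = WireTapInt(0)
source.getNumFrames(numFrames)
if not dest.setNumFrames(int(numFrames)):
    raise RuntimeError("setNumFrames failed: %s" % dest.lastError())

# The buffer type accepted by readFrame/writeFrame may differ by binding
# version; see readFrames.py for the exact pattern used by the samples.
buf = bytearray(fmt.frameBufferSize())  # one frame's worth of pixel data
for i in range(0, numFrames):
    if not source.readFrame(i, buf, fmt.frameBufferSize()):
        raise RuntimeError("readFrame failed: %s" % source.lastError())
    if not dest.writeFrame(i, buf, fmt.frameBufferSize()):
        raise RuntimeError("writeFrame failed: %s" % dest.lastError())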

The second method consists of duplicating the node itself, and is shown in the sample programs duplicateNode.C and duplicateNode.py.

Here are the steps that would be involved in implementing the full workflow of duplicating a clip:

  1. Identify the clip node to be copied (the source clip node).

  2. Identify the parent node of the new clip node to be created (the destination clip node).

  3. Use the duplicateNode method to create the duplicated copy under the parent node.

See also:

Sample programs (C++ / Python):

  • readFrames.C / readFrames.py – Example of reading video frames from the server.
  • duplicateNode.C / duplicateNode.py – Example of creating a node based on another node's attributes.

Command-line tools:

  • wiretap_rw_frame – Read or write a given frame in a clip node.
  • wiretap_client_tool – Multi-purpose tool that can be used to copy one clip node to another.
