Camera information in relation to DirectX and OpenGL draw APIs

Conventions

DirectX and OpenGL have different camera conventions.

Viewport 2.0 remains consistent with the rest of Maya by using a right-handed coordinate system when setting the camera projection matrix. However, the near and far clip planes map to the [0,1] range in Z (the DirectX convention) rather than OpenGL's [-1,1] range.
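For example (a minimal sketch, assuming frameContext is a valid MHWRender::MFrameContext, such as an MDrawContext, which derives from it), the matrices queried from the frame context reflect these conventions:

#include <maya/MFrameContext.h>
#include <maya/MMatrix.h>

// Right-handed view matrix
MMatrix viewMtx = frameContext.getMatrix(MHWRender::MFrameContext::kViewMtx);
// Projection matrix with the near/far clip range mapped to [0,1] in Z
MMatrix projMtx = frameContext.getMatrix(MHWRender::MFrameContext::kProjectionMtx);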

Thus, the following conventions apply when querying camera information:

The world space camera vectors can be queried by passing the kViewDirection (direction), kViewUp (up), and kViewRight (right) enums to the MFrameContext::getTuple() method. These three vectors form the basis of a right-handed coordinate system.
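For example (a sketch, again assuming a valid frameContext), each query returns a 3-element MDoubleArray in world space:

#include <maya/MDoubleArray.h>

MDoubleArray viewDir   = frameContext.getTuple(MHWRender::MFrameContext::kViewDirection);
MDoubleArray viewUp    = frameContext.getTuple(MHWRender::MFrameContext::kViewUp);
MDoubleArray viewRight = frameContext.getTuple(MHWRender::MFrameContext::kViewRight);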

MDrawContext also provides access to the active camera. When its properties are queried via the MFnCamera interface, the same consistent camera vector values are returned. If object space vectors are requested, the values reflect a right-handed coordinate system with the view direction along the negative Z axis.

Some useful MFnCamera methods for querying include eyePoint(), viewDirection(), upDirection(), rightDirection(), unnormalizedNearClippingPlane(), and unnormalizedFarClippingPlane().
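For instance, querying the object space vectors directly shows the axis-aligned, right-handed basis (a sketch; cameraPath is assumed to be a valid MDagPath to a camera shape):

#include <maya/MDagPath.h>
#include <maya/MFnCamera.h>
#include <maya/MVector.h>

MFnCamera fnCamera(cameraPath);
MVector viewDir  = fnCamera.viewDirection(MSpace::kObject);  // (0, 0, -1)
MVector upDir    = fnCamera.upDirection(MSpace::kObject);    // (0, 1, 0)
MVector rightDir = fnCamera.rightDirection(MSpace::kObject); // (1, 0, 0)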

Clipping planes

With the Legacy Default Viewport (Viewport 1), it is possible to directly set supplemental clip planes in OpenGL when the active camera belongs to a camera set.

With Viewport 2.0, it is not advisable to query the device directly, as fixed-function user clip planes may not be available for all draw APIs. The Viewport 1 behavior is therefore disabled by default in Viewport 2.0, though it can be re-enabled when running OpenGL via the ENABLE_DEFAULT_VIEWPORT_CAMERA_SETS environment variable.

A draw-API-agnostic way to compute world space clip planes is shown below. The camera is taken from the MDrawContext, but the same approach works for any camera shape. Note the use of the unnormalized methods to query the clip plane distances.

#include <maya/MDrawContext.h>
#include <maya/MFnCamera.h>
#include <maya/MPoint.h>
#include <maya/MVector.h>

// 'context' is assumed to be the MDrawContext passed into a draw callback
// (e.g., MPxDrawOverride::draw()).

// Get the current camera from the context
MFnCamera activeCamera(context.getCurrentCameraPath());

// Get relative near and far clip values with respect to the camera position
// You should ignore any override values set when using a
// camera set; therefore, do not use the nearClippingPlane() and farClippingPlane() 
// methods on MFnCamera.
double nearD = activeCamera.unnormalizedNearClippingPlane();
double farD = activeCamera.unnormalizedFarClippingPlane();

// Get world space camera information
MPoint eyePoint = activeCamera.eyePoint(MSpace::kWorld);
MVector viewDirection = activeCamera.viewDirection(MSpace::kWorld); // unit vector from the eye into the scene

// Signed distance of the eye point along the view direction,
// used as the plane offset below
double dist = eyePoint[0]*viewDirection[0] + 
              eyePoint[1]*viewDirection[1] + 
              eyePoint[2]*viewDirection[2]; 

// Compute the near clip plane, with its normal facing away from the camera (for OpenGL).
// A plane is stored as (Nx, Ny, Nz, d), with plane equation N.X + d = 0.
double distNear = -1.0 * (dist + nearD); 
double OpenGL_NearClipPlaneVector[4] = {
       viewDirection[0], viewDirection[1], viewDirection[2], distNear };

// Compute the far clip plane, with its normal facing towards the camera (for OpenGL)
double distFar = dist + farD;
double OpenGL_FarClipPlaneVector[4] = {
       -viewDirection[0], -viewDirection[1], -viewDirection[2], distFar };
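These plane equations use the convention that a point X is kept (not clipped) when N.X + d >= 0; a small helper (illustrative only, not part of the Maya API) makes the test explicit:

// Returns true if X is on the kept (unclipped) side of the plane (Nx, Ny, Nz, d).
inline bool insideClipPlane(const double plane[4], const MPoint& X)
{
    return plane[0]*X.x + plane[1]*X.y + plane[2]*X.z + plane[3] >= 0.0;
}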

The equivalent code using only MDrawContext is as follows:

// Query the camera coordinate system
MDoubleArray vPos = context.getTuple(MHWRender::MFrameContext::kViewPosition);
MDoubleArray vDir = context.getTuple(MHWRender::MFrameContext::kViewDirection);
MDoubleArray vNear = context.getTuple(MHWRender::MFrameContext::kViewUnnormlizedNearClipValue);
MDoubleArray vFar = context.getTuple(MHWRender::MFrameContext::kViewUnnormalizedFarClipValue);

// Compute the world space planes for the near and far clip planes.
// The near plane's normal points away from the camera (along the view
// direction), while the far plane's normal points back towards the camera.
double distW = vPos[0]*vDir[0] + 
               vPos[1]*vDir[1] + 
               vPos[2]*vDir[2]; 

// Near clip plane faces away from the camera
double distNearW = -1.0 * (distW + vNear[0]); 
double OpenGL_NearPlane[4] = { vDir[0], vDir[1], vDir[2], distNearW };

// Far clip plane faces towards the camera
double distFarW = distW + vFar[0];
double OpenGL_FarPlane[4] = { -vDir[0], -vDir[1], -vDir[2], distFarW };

Depth Priority

Shaders may use depth priority to offset the geometric position along the view direction. Within the shader code, the computation is relative to the device's actual depth range, and thus differs between OpenGL and DirectX. Assuming a uniform parameter called depthPriority has been set appropriately, the following sample code performs the offset in DirectX and OpenGL:

// DX range is [0,1] or 1 in size.
float4 iPcPriority( float3 pm, float depthPriority, float4x4 worldViewProjectionC )
{ 
    float4 P = mul( float4(pm,1), worldViewProjectionC ); 
    P.z -= P.w * depthPriority; 
    return P; 
}
 
// OpenGL range is [-1,1] or 2 in size
vec4 iPcPriority( vec3 pm, float depthPriority, mat4 worldViewProjectionC )
{
    vec4 P = worldViewProjectionC * vec4(pm,1.0f);
    P.z -= P.w * 2.0 * depthPriority;
    return P;
}
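On the C++ side, such a uniform can be set through MHWRender::MShaderInstance. A minimal sketch, assuming shader points to a valid MShaderInstance (for example, one acquired via MRenderer::getShaderManager()) whose effect declares the depthPriority uniform used above:

#include <maya/MShaderManager.h>

float depthPriorityValue = 1.0f; // illustrative value
// The parameter name must match the uniform declared in the effect.
MStatus status = shader->setParameter("depthPriority", depthPriorityValue);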

Selection

Selection uses the same information as is used for drawing.

Screen Orientation

It is worth noting that, for fat lines and points (generated using geometry shaders), or any screen-aligned geometry, a counter-clockwise winding order indicates front-facing geometry. This is consistent for both DirectX and OpenGL.

For example, the following DirectX code defines the quad corner points:

static const float4 cQuadPts[4] = {  
    float4( -1.0,  1.0, 0, 0 ),  
    float4( -1.0, -1.0, 0, 0 ),  
    float4(  1.0,  1.0, 0, 0 ),  
    float4(  1.0, -1.0, 0, 0 )};

The geometry shader expansion traverses the points in order, producing a triangle strip whose triangles each have a counter-clockwise orientation:

void point2ScreenQuad( geometryInS inputs[1], float2 pointSz, float2 screenSize,
     float4x4 viewProjInverse, float depthPriorityUnit, bool orthographic, float DPThresholdInView, 
     inout TriangleStream<geometryInS> outStream  ) 
{
    geometryInS outS  = inputs[0]; 
    float size = max(0, max(pointSz.x, pointSz.y)); 
    float dpScale = orthographic ? 1.0f : -DPThresholdInView; 
    float dp = 2.0f * size * depthPriorityUnit * dpScale; 
    float4 sizeInZ = float4(pointSz.xy  / screenSize.xy, 0, 0) * outS.Pc.w;  
    [unroll] for( int i = 0; i < 4; ++i ) { 
        outS.Pc = inputs[0].Pc + sizeInZ * cQuadPts[i]; 
        outS.Pc.z = inputs[0].Pc.z - dp; 
        outStream.Append( outS ); 
    } 
    outStream.RestartStrip(); 
}
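As a sanity check, the winding can be verified on the CPU with a small helper (illustrative only, not part of the Maya API); a positive signed area means counter-clockwise in a Y-up screen space:

#include <cstdio>

// Twice the signed area of triangle (p0, p1, p2); positive => counter-clockwise.
static double signedArea2(const double p0[2], const double p1[2], const double p2[2])
{
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) -
           (p2[0] - p0[0]) * (p1[1] - p0[1]);
}

int main()
{
    const double q[4][2] = { {-1.0, 1.0}, {-1.0, -1.0}, {1.0, 1.0}, {1.0, -1.0} };
    // A triangle strip flips every other triangle: {0,1,2}, then {2,1,3}.
    std::printf("%g %g\n",
        signedArea2(q[0], q[1], q[2]),   // +4 => counter-clockwise
        signedArea2(q[2], q[1], q[3]));  // +4 => counter-clockwise
    return 0;
}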

Examples

NOTE:

Fragments can be examined using either of the following methods:

  • the ogs command with the -xml flag, passing the fragment name as the <fragmentName> argument
  • the MFragmentManager::getFragmentXML() API (see the sketch below). The fragmentDumper sample plug-in implements the dumpFragment command, which can also be used to examine fragments via its -fn option.
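A minimal sketch of the API route (the fragment name here is illustrative; getFragmentXML() writes the fragment's XML description into the supplied string):

#include <maya/MViewport2Renderer.h>
#include <maya/MFragmentManager.h>
#include <maya/MString.h>
#include <maya/MGlobal.h>

MHWRender::MRenderer* renderer = MHWRender::MRenderer::theRenderer();
if (renderer)
{
    MHWRender::MFragmentManager* fragmentMgr = renderer->getFragmentManager();
    MString xml;
    if (fragmentMgr && fragmentMgr->getFragmentXML("mayaLambertSurface", xml))
    {
        MGlobal::displayInfo(xml);
    }
}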