
VR Camera

This is a generic VR camera implementation. It uses two cameras, offset by a given distance, combined with a specific 360-degree projection. Its main advantages are:

  • It works with any scene component that can be rendered in an offline renderer: meshes, hair, particles, volumetrics and complex shading networks.
  • An easy learning curve for creating VR content: just add the VR camera to your existing project.
  • Modest hardware requirements for playing back the content. Any platform that can play a video with the required projection can display the generated content.
  • Content that is easy to distribute, either as a video file or via video streaming. It can be played back using dedicated software or an app, or with web standards such as WebGL and WebVR. It also works with Google and Facebook 360 3D videos.

Limitations

  • Poles: By default, the poles show very noticeable artifacts. You need to adjust the stereoscopic effect for each scene and smooth it near the poles, which diminishes the stereoscopic effect there.
  • Tilt: Because of the way the stereoscopic effect is generated, tilting your head breaks the stereoscopic perception.
  • Parallax: When you move your head along any axis, there is a change in viewpoint that the offline VR scene cannot take into account. This can diminish the immersion of the experience, since the content can only react to head rotations.

Mode

There are four mode options available, so you can choose the output layout that best suits your pipeline (a sketch of these layouts follows the list). They are as follows:

  1. Side by Side
  2. Over Under
  3. Left Eye
  4. Right Eye
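
As a minimal illustration of how these modes lay out the per-eye renders, here is a sketch assuming the two eye images are available as equally sized numpy arrays; the assignment of each eye to a particular half is an assumption, not a statement of Arnold's exact layout:

    import numpy as np

    def assemble(left, right, mode):
        # left, right: per-eye images as (height, width, channels) arrays
        if mode == "side_by_side":
            return np.hstack([left, right])   # eyes tiled horizontally
        if mode == "over_under":
            return np.vstack([left, right])   # eyes tiled vertically
        if mode == "left_eye":
            return left                       # only the left-eye image
        if mode == "right_eye":
            return right                      # only the right-eye image
        raise ValueError("unknown mode: " + mode)

    left = np.zeros((4, 8, 3))                            # dummy 4x8 RGB eye images
    right = np.ones((4, 8, 3))
    print(assemble(left, right, "side_by_side").shape)    # (4, 16, 3)
    print(assemble(left, right, "over_under").shape)      # (8, 8, 3)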

Projection

Depending on the selected projection and options, each sample corresponds to a ray direction such that all of the space around the camera is completely covered. Choose between Latlong, Cubemap (6x1), and Cubemap (3x2); a sketch of the lat-long mapping follows the projection examples below.

Latlong Projection

Cubemap (6x1)

Cubemap (3x2)

The 3×2 cube map has the advantage of producing images with a more convenient aspect ratio.
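
As a rough sketch of how a lat-long projection maps an image sample to a ray direction covering the whole sphere; the exact axis and orientation conventions used by Arnold are an assumption here:

    import math

    def latlong_direction(u, v):
        # u, v in [0, 1): u sweeps the full 360 degrees of longitude,
        # v sweeps latitude from the top pole to the bottom pole
        lon = (u - 0.5) * 2.0 * math.pi
        lat = (0.5 - v) * math.pi
        x = math.cos(lat) * math.sin(lon)
        y = math.sin(lat)
        z = math.cos(lat) * math.cos(lon)
        return (x, y, z)                      # unit-length ray direction

    print(latlong_direction(0.5, 0.5))        # (0.0, 0.0, 1.0): image centre looks forward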

Eye Separation

Defines the separation between the right and left cameras, required to achieve the stereoscopic effect. The camera origin is updated for each sample and displaced from the center perpendicular to the ray direction. Doing this at the sample level, rather than per pixel, creates a better result than using two physical cameras.
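
A simplified sketch of this per-sample displacement, assuming a +Y world up axis and ignoring the pole merging described below; which sign corresponds to which eye depends on the handedness convention:

    import math

    def eye_origin(center, ray_dir, eye_separation, eye):
        # Displace the camera origin perpendicular to the ray direction,
        # half the eye separation to each side (eye = -1 or +1).
        up = (0.0, 1.0, 0.0)                                  # assumed world up
        side = (up[1]*ray_dir[2] - up[2]*ray_dir[1],          # up x ray_dir
                up[2]*ray_dir[0] - up[0]*ray_dir[2],
                up[0]*ray_dir[1] - up[1]*ray_dir[0])
        length = math.sqrt(sum(c*c for c in side)) or 1.0     # degenerates to zero at the poles
        offset = eye * 0.5 * eye_separation
        return tuple(c + offset * s / length for c, s in zip(center, side))

    # one eye's origin for a ray looking along +Z from the world origin
    print(eye_origin((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.064, +1))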

Eye to Neck

The horizontal distance from the neck to the eye.

Top Merge Mode

These parameters define the merging function of the sky. Usually, a Cosine function (Cos) will be smoother and less prone to artifacts. Choose between None, Cosine or Shader.

Top Merge Angle

Defines the angle in degrees at which the merge starts to take effect in the sky. The nearer the angle is to the pole (0 degrees at the top, 180 degrees at the bottom), the stronger the stereoscopic effect you will see below it, but the more likely artifacts are to appear at the poles.

Below you can see the difference between top merge angles from 0 to 80 degrees using a cosine merging function:
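
A plausible sketch of how a cosine merge can fade the eye separation out towards the top pole; the exact falloff Arnold uses is not specified on this page, so the ramp below is an assumption:

    import math

    def top_merge_weight(angle_from_top_deg, top_merge_angle_deg):
        # angle_from_top_deg: 0 at the top pole, 90 at the horizon.
        # Returns a factor in [0, 1] applied to the eye separation.
        if top_merge_angle_deg <= 0.0 or angle_from_top_deg >= top_merge_angle_deg:
            return 1.0                                    # outside the merge zone: full stereo
        t = angle_from_top_deg / top_merge_angle_deg      # 0 at the pole, 1 at the merge angle
        return 0.5 - 0.5 * math.cos(t * math.pi)          # smooth cosine ramp from 0 to 1

    for angle in (0, 20, 40, 60, 80, 90):
        print(angle, round(top_merge_weight(angle, 80.0), 3))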

Bottom Merge Mode

These parameters define the merging function of the floor. Usually, a Cosine function (Cos) will be smoother and less prone to artifacts. Choose between None, Cosine or Shader.

Bottom Merge Angle

Defines the angle in degrees at which the merge starts to take effect on the floor. The nearer the angle is to the pole (0 at the bottom, 180 at the top), the more of the lower hemisphere keeps the full stereoscopic effect, but the more likely artifacts are to appear at the pole.

If the Bottom Merge Angle is above the Top Merge Angle, it will be clamped to the Top Merge Angle.

Merge Shader

This is used when Merge Mode is set to Shader. It gives finer control over smoothing the poles, for example when you have to integrate 3D with real-life footage from cameras that have very specific pole merging. Without a Merge Shader, you only have generic pole merging. Black in the shader results in no merge at all, and white is completely merged.

Example of a ramp shader used to smooth the poles
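
A minimal sketch of this black-to-white behaviour, interpreting the shader output as a factor that scales the eye separation down to zero; this is an interpretation of the description above, not Arnold's actual implementation:

    def merged_separation(eye_separation, shader_value):
        # shader_value: 0.0 (black) = no merge, 1.0 (white) = completely merged
        return eye_separation * (1.0 - shader_value)

    print(merged_separation(0.064, 0.0))   # 0.064: full stereoscopic effect
    print(merged_separation(0.064, 1.0))   # 0.0: fully merged (mono) at this point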


Position

The position of the camera.

Look At

The point at which the camera is pointing.

Up

The up vector of the camera.

Matrix

Matrix to define the position and orientation of the camera.
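
Position, Look At, and Up are an alternative to supplying the Matrix directly. Here is a sketch of how a 4x4 camera matrix can be built from them, assuming a row-major layout with the translation in the last row; Arnold's own conventions (including handedness, see below) may differ:

    import math

    def look_at_matrix(position, look_at, up):
        def sub(a, b):   return [a[i] - b[i] for i in range(3)]
        def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                                 a[2]*b[0] - a[0]*b[2],
                                 a[0]*b[1] - a[1]*b[0]]
        def norm(v):
            length = math.sqrt(sum(c*c for c in v))
            return [c / length for c in v]
        z = norm(sub(position, look_at))   # camera z axis points back from the target
        x = norm(cross(up, z))             # camera x axis, perpendicular to up and z
        y = cross(z, x)                    # recomputed up, orthogonal to x and z
        return [x + [0.0], y + [0.0], z + [0.0], list(position) + [1.0]]

    # camera at (0, 1, 5) looking at the origin with a Y-up vector
    for row in look_at_matrix((0.0, 1.0, 5.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)):
        print(row)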

Near Clip

The near clipping plane of the camera's renderable area.

Far Clip

The far clipping plane of the camera's renderable area.

Shutter Start

Defines when the camera shutter opens. Rays will have a time randomly assigned between Shutter Start and Shutter End. It is recommended that this time be frame-relative; that is, if the current frame (set in the global options) is 1001, then a Shutter Start of -0.25 signifies a shutter opening one-quarter frame before the actual frame marker passes, while a Shutter End of 0.25 closes the shutter one-quarter frame after the frame passes. This gives centered motion blur. Other typical shutter values would be 0.0 to 0.5 (lead-out motion blur) or -0.5 to 0.0 (lead-in motion blur).

Note: The renderer does not impose any strict requirements on the units or absolute values of the times it is given, provided they are all consistent with each other. Both frame-relative and absolute shutter times and motion intervals are therefore legal, provided they are self-consistent. However, it is generally recommended that times be frame-relative for simplicity.

Info: The shutter range of the camera can be defined by changing the Shutter Start and Shutter End parameters. The value range should use the same time reference as the motion times. The default Shutter Start of 0 and Shutter End of 1 means a full camera shutter range equivalent to the default motion blur range. A smaller range (0.0-0.5) will decrease the effective shutter aperture time and only show the first half of the motion.
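
A small sketch of the arithmetic behind this, drawing a per-ray time from the shutter interval using the frame-relative, centered motion blur example above (Arnold's real sampling is more carefully stratified; this only shows the time mapping):

    import random

    def sample_time(frame, shutter_start, shutter_end):
        # each ray gets a time between the shutter bounds, relative to the frame
        u = random.random()
        return frame + shutter_start + u * (shutter_end - shutter_start)

    # centered motion blur around frame 1001: times fall in [1000.75, 1001.25]
    print(sample_time(1001, -0.25, 0.25))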

Shutter End

Defines when the camera shutter closes. Please see Shutter Start for a detailed description of shutter semantics. Shutter End must be greater than or equal to Shutter Start. If they are equal, no motion blur is generated.


Shutter Type

The filtering applied to time samples. By default, this is a box filter, with all time samples having the same weight. A Triangle filter (or "tent") is also available, which produces smoother motion trails.

Arnold supports custom shutter shapes with the shutter curve camera parameter. You can define as many points as required. Coordinates increase from 0 (corresponding to the Shutter Start) to 1 (corresponding to the Shutter End). Values in the vertical axis must be non-negative, and it is not recommended to enter values above 1. The values are linearly interpolated between each point. In the examples below, you can see the effect different curve shapes have on the motion blur trail of a sphere that has been key-framed moving from left to right.
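
As a sketch of how such a curve could weight time samples, assuming the curve is given as a sorted list of (time, weight) points as described above (a hypothetical helper, not an Arnold API call):

    def shutter_weight(t, curve):
        # curve: list of (time, weight) points with times sorted in [0, 1],
        # where 0 corresponds to Shutter Start and 1 to Shutter End
        for (t0, w0), (t1, w1) in zip(curve, curve[1:]):
            if t0 <= t <= t1:
                u = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                return w0 + u * (w1 - w0)    # linear interpolation between points
        return 0.0                           # outside the defined curve

    # a triangle ("tent") shutter: ramps up to full weight mid-shutter, then back down
    tent = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
    print(shutter_weight(0.25, tent))        # 0.5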

Rolling Shutter

Rolling Shutter is used to simulate the type of rolling shutter effect seen in footage shot with digital cameras that use CMOS-based sensors, such as Blackmagics, Alexas, REDs, and even iPhones. It is implemented by rolling (moving) the shutter across the image area instead of exposing the entire image area at the same time.

The Rolling Shutter direction specifies the direction in which the rolling shutter takes place. The default is Off; it can be set to Top (top to bottom, the most common scanning direction), Bottom, Left, or Right.

Interesting effects can be achieved when combining motion blur Length with Rolling Shutter:

Motion blur Length from 0 to 2
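
A conceptual sketch of the Top direction, where scanlines are exposed successively from the top of the frame to the bottom across the shutter interval; this only illustrates the idea, not Arnold's exact sampling:

    def rolling_shutter_time(row, height, shutter_start, shutter_end):
        # Top direction: the first row is sampled at Shutter Start and the
        # bottom row at Shutter End, so fast-moving objects appear skewed.
        phase = row / float(height - 1)
        return shutter_start + phase * (shutter_end - shutter_start)

    height = 5
    for row in range(height):
        print(row, rolling_shutter_time(row, height, -0.25, 0.25))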

Filtermap

Weights the camera sample by a scalar amount defined by the shader linked to the Filtermap. This shader receives as input the u,v coordinates in image space (0,1) and the x,y pixel coordinates. This allows you to darken certain regions of the image, which is ideal for simulating vignetting effects.

There is an optimization in place where if the filter returns pure black then the camera ray is not fired. This can help in cases such as when rendering with the Fisheye Camera where, depending on its autocrop setting, parts of the frame trace no rays at all.

Circular ramp mapped to the camera's Filtermap to create a vignette effect
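
As a sketch of the kind of radial falloff in image space that such a circular ramp produces (a hypothetical function for illustration; an actual setup links a ramp shader to the Filtermap instead):

    import math

    def vignette_weight(u, v, strength=1.0):
        # u, v: image-space coordinates in (0, 1), as supplied to the filtermap
        r = math.hypot(u - 0.5, v - 0.5) / math.hypot(0.5, 0.5)   # 0 centre, 1 corner
        return max(0.0, 1.0 - strength * r * r)

    print(vignette_weight(0.5, 0.5))   # 1.0: full weight at the image centre
    print(vignette_weight(0.0, 0.0))   # 0.0: pure black corner, so no ray is fired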

Handedness

Chooses a right-handed or left-handed coordinate system.

Screen Window Min

Together with Screen Window Max, this defines the 2D window in the projection plane that will be rendered. At the default values of (-1,-1) and (1,1), the frame exactly matches the defined region, after taking the aspect ratio into account so that there is no distortion. Set these values if you want to stretch, squash, or zoom to a particular area of the image.

Screen Window Max

Together with Screen Window Min, this defines the 2D window in the projection plane that will be rendered (see Screen Window Min above). Shrinking the window zooms into a particular area of the image, as the sketch below illustrates.
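
A sketch of how a pixel maps into the screen window, showing that shrinking the window zooms into part of the frame; which corner of the image a given quadrant corresponds to depends on the axis orientation, which is an assumption here:

    def screen_point(px, py, width, height,
                     win_min=(-1.0, -1.0), win_max=(1.0, 1.0)):
        # map a pixel centre into the 2D screen window on the projection plane
        sx = win_min[0] + (px + 0.5) / width  * (win_max[0] - win_min[0])
        sy = win_min[1] + (py + 0.5) / height * (win_max[1] - win_min[1])
        return sx, sy

    # default window: the full frame spans (-1, -1) to (1, 1)
    print(screen_point(0, 0, 640, 480))
    # a quarter-size window renders a zoomed-in view of one quadrant of the frame
    print(screen_point(0, 0, 640, 480, win_min=(0.0, 0.0), win_max=(1.0, 1.0)))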

Exposure

Simulates the effect of camera exposure (in a non-physical way). Increasing this parameter by a value of one gives you one stop up (doubles the brightness).
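
As a quick sketch of the brightness multiplier this corresponds to:

    def exposure_gain(exposure):
        # each +1 stop doubles the recorded brightness
        return 2.0 ** exposure

    print(exposure_gain(1.0))    # 2.0: one stop up
    print(exposure_gain(-2.0))   # 0.25: two stops down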
