cacheEvaluator([cacheFillMode=string], [cacheFillOrder=string], [cacheInvalidate=timerange], [cacheName=string], [cachedFrames=boolean], [cachingPoints=boolean], [creationParameters=boolean], [delegateEvaluation=boolean], [dynamicsAsyncRefresh=boolean], [dynamicsSupportActive=boolean], [dynamicsSupportEnabled=boolean], [flushCache=string], [flushCacheRange=[timerange, boolean]], [flushCacheSync=boolean], [flushCacheWait=boolean], [hybridCacheMode=string], [layeredEvaluationActive=boolean], [layeredEvaluationCachingPoints=boolean], [layeredEvaluationEnabled=boolean], [listCacheNames=boolean], [listCachedNodes=boolean], [listValueNames=boolean], [newAction=string], [newActionParam=string], [newFilter=string], [newFilterParam=string], [newRule=string], [newRuleParam=string], [pauseInvalidation=boolean], [preventFrameSkip=boolean], [resetRules=boolean], [resourceUsage=boolean], [resumeInvalidation=boolean], [safeMode=boolean], [safeModeMessages=boolean], [safeModeTriggered=boolean], [valueName=string], [waitForCache=float])
Note: Strings representing object names and arguments must be separated by commas. This is not depicted in the synopsis.
cacheEvaluator is not undoable, but it is queryable and editable.
This command controls caching configuration. It allows interaction with the
caching system.
Caching configuration is done through a set of rules. Most rules are composed
of a "filter", which is the test to be performed in order to determine if the rule
should be applied, and an "action", which is the effect that the rule application
should have on nodes that satisfy the criteria.
A caching mode is therefore a set of rules that determines which nodes are
being cached. This mode can be serialized to a JSON string using the
"creationParameters" flag in query mode.
Built-in Cache Configuration Modes
A few cache configuration rules, filters and actions are provided in order to
support the built-in default caching modes.
Built-in Filters
- evaluationCacheNodes: This filter matches all nodes for which the node type is in the default supported list for evaluation cache. See the code example below for the current list of supported types.
- nodeTypes: This filter matches all nodes for which the node type is in the list provided with the newFilterParam flag. The parameter string is a list of comma-separated node types, each prefixed with either '-' or '+'. The filter goes through the list in order and stops if the node is of the given node type (or any derived node type). If the prefix is '-', the filter reports that the node did not match and stops processing. If the prefix is '+', the filter reports a match and stops processing (see the sketch after this list).
- downstreamNodeTypes: This filter matches all nodes for which the node type is in the list provided with the newFilterParam flag, if they also have immediate downstream nodes of the right node type(s). The parameter string is in the form "type=+type1,+type2 downstreamTypes=+type3,+type4", where each list of node types uses the same semantics as the nodeTypes filter.
- vp2CacheNodes: This filter matches all nodes for which the node type is in the list of node types supported by VP2 caching. Enabling VP2 caching for other node types has no effect. See the comments in the code example below for the current list of supported types.
- vp2FallbackNodes: This filter matches all nodes for which VP2 caching should revert to evaluation caching because of unsupported features. Namely, it matches nodes for which VP2 caching is enabled but which have animated visibility or animated topology (a potentially changing number of vertices, edges, faces, etc.). It also matches nodes with static geometry, i.e. for which nothing needs to be cached in VP2 format for each frame.
- hybridCache: This filter matches all nodes already grabbed by the deformer evaluator, according to a parameter controlling which meshes are considered. The newFilterParam can take the following values for the "mode" parameter: "mode=disabled" turns this filter into a no-op that always returns false; "mode=smp" returns true for nodes that are part of a deformer evaluator cluster with at least one mesh using Smooth Mesh Preview; "mode=all" returns true for all nodes that are part of a deformer evaluator cluster; "mode=usePreference" uses the "Hybrid cache" preference from the Cached Playback preferences. The newFilterParam also accepts the "allowAnimatedInput" parameter with the following values: "allowAnimatedInput=0" (the default value) does not consider a mesh if it is part of a cluster for which one of the input geometries is animated. Such setups lose the benefits of hybrid cache, since the input geometry needs to be cached in order to be fed to the GPU deformer, but since the geometry is animated, it takes about the same amount of memory as caching the output. "allowAnimatedInput=1" removes this filtering and allows meshes to be grabbed regardless of whether the input is animated or not.
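As a sketch of how filter parameter strings are written (the particular node types and the pairing with enableEvaluationCache are illustrative, not a recommended mode):
import maya.cmds as cmds
cmds.cacheEvaluator(resetRules=True)
# nodeTypes: entries are processed in order; '+' includes a node type (and its
# derived types), '-' excludes it, and processing stops at the first match.
cmds.cacheEvaluator(newFilter='nodeTypes',
                    newFilterParam='types=-constraint,+transform,+mesh',
                    newAction='enableEvaluationCache')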
Built-in Actions
- enableEvaluationCache: This action enables evaluation cache on the matched node. If the evaluation cache is already enabled, it has no effect.
- disableEvaluationCache: This action disables evaluation cache on the matched node. If the evaluation cache is already disabled, it has no effect.
- disableAllCache: This action disables all types of caching on the matched node.
- enableVP2Cache: This action enables VP2 cache on the matched node. "useHardware=1" can be passed to the newActionParam in order to enable the VP2 hardware cache, while "useHardware=0" will use the VP2 software cache. This action disables the evaluation cache (to only keep VP2 cache) on the matched node to save memory; specify "useEvaluation=1" to keep the evaluation cache on as well (along with VP2 cache). For example, NURBS surfaces keep the evaluation cache to make selection highlighting more efficient. This action also automatically applies the fallback-to-evaluation-cache rule for safety if required. This fallback is applied if the node matches the criteria from the "vp2FallbackNodes" filter; the "fallbackFromVP2CacheToEvaluationCache" action is then applied. "fallback=0" can be used to disable this fallback, but doing so can cause correctness issues, instabilities and even crashes. "useEvaluation=0" does not affect the fallback behavior. To use multiple parameters, separate them with ' ', as in "useHardware=1 fallback=0 useEvaluation=1" (see the sketch after this list).
- disableVP2Cache: This action disables the VP2 cache on the matched node. If the VP2 cache is already disabled, it has no effect.
- fallbackFromVP2CacheToEvaluationCache: This action disables the VP2 cache on the matched node and enables the evaluation cache instead.
- delegateEvaluation: This action enables delegate evaluation on the matched nodes and all nodes in the same cluster. This means that these nodes will not be cached and will be evaluated at every frame. As a result, nodes set to delegate evaluation must not have any downstream node that is not also set to delegate evaluation. Also, all input (i.e. direct parent or upstream) nodes to the delegate evaluation nodes will be added as caching points to make sure the data is ready for the downstream nodes to be evaluated.
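A sketch of a VP2 caching rule with explicit action parameters (the particular parameter combination is illustrative; the overall rule set mirrors the default modes shown in the code examples below):
import maya.cmds as cmds
cmds.cacheEvaluator(resetRules=True)
# Evaluation cache for the node types not handled by VP2 caching.
cmds.cacheEvaluator(newFilter='evaluationCacheNodes', newAction='enableEvaluationCache')
# VP2 hardware cache, keeping evaluation cache as well; parameters are space-separated.
cmds.cacheEvaluator(newFilter='vp2CacheNodes',
                    newAction='enableVP2Cache',
                    newActionParam='useHardware=1 useEvaluation=1')
cmds.cacheEvaluator(newRule='customEvaluators')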
Built-in Rules
- evaluationCache: This rule has the same effect as using the evaluationCacheNodes filter with the enableEvaluationCache action.
- customEvaluators: This rule ensures proper behavior for nodes claimed by a custom evaluator of a higher priority than the cache evaluator. First, it makes sure that caching is disabled for nodes claimed by a custom evaluator of a higher priority. Second, it makes sure input nodes to clusters belonging to a custom evaluator of a higher priority are marked as evaluation caching points if the evaluator needs data. This prevents pull evaluation from happening when the evaluator is evaluated.
Note that any combination of cache configuration rules other than the default modes
is considered unsupported and to be used at one's own risk. The default modes
are "Evaluation cache", "VP2 software cache" and "VP2 hardware cache". The sets
of rules used to enable each mode are listed in the code examples.
Querying Cache Configuration Values
In order to get a cache configuration value for a given node or list of nodes,
the cacheName flag can be used in query mode. Without any additional
parameters, this query is the same as if the valueName flag was set to
"active", i.e. it queries whether the given cache is active or not.
If the queried node is not a caching point, there will be no caching
configuration information associated with it and the query will return an
empty string (which is basically the same as all cache modes being inactive).
If the queried node is a caching point, the returned string will be the
requested information from the given cache. For example, querying the
"active" value can return "0" or "1".
In query mode, the return type is based on the queried flag:
string   | The state of whether the memory limit has been reached or not ('out', 'okay', 'low', or 'unlimited') (with the 'resourceUsage' flag)
boolean  | The state of whether safe mode is enabled (with the 'safeMode' flag)
boolean  | The state of whether safe mode was triggered (with the 'safeModeTriggered' flag)
boolean  | The state of whether prevent frame skipping is enabled (with the 'preventFrameSkip' flag)
boolean  | The state of whether the cache in background was calculated (with the 'waitForCache' flag)
string[] | The available cache names (with the 'listCacheNames' flag)
string   | The list of nodes currently cached by the cache evaluator (with the 'listCachedNodes' flag)
string[] | The available value names (with the 'listValueNames' flag)
string[] | The parameter value for the requested node(s) (with the 'cacheName' flag)
string[] | The state of whether delegate evaluation is enabled for the requested node(s) (with the 'delegateEvaluation' flag)
string   | The creation parameters for the current mode as a JSON array (with the 'creationParameters' flag)
string[] | The list of nodes marked as caching points (with the 'cachingPoints' flag)
string[] | The list of nodes forced as caching points because of layered evaluation (with the 'layeredEvaluationCachingPoints' flag)
int[]    | The list of frames being cached (with the 'cachedFrames' flag)
string   | The current cache fill mode (with the 'cacheFillMode' flag)
string   | The current cache fill order (with the 'cacheFillOrder' flag)
string   | The list of all the safe mode messages (with the 'safeModeMessages' flag)
string   | The current hybrid cache mode (with the 'hybridCacheMode' flag)
Caching
cacheFillMode, cacheFillOrder, cacheInvalidate, cacheName, cachedFrames, cachingPoints, creationParameters, delegateEvaluation, dynamicsAsyncRefresh, dynamicsSupportActive, dynamicsSupportEnabled, flushCache, flushCacheRange, flushCacheSync, flushCacheWait, hybridCacheMode, layeredEvaluationActive, layeredEvaluationCachingPoints, layeredEvaluationEnabled, listCacheNames, listCachedNodes, listValueNames, newAction, newActionParam, newFilter, newFilterParam, newRule, newRuleParam, pauseInvalidation, preventFrameSkip, resetRules, resourceUsage, resumeInvalidation, safeMode, safeModeMessages, safeModeTriggered, valueName, waitForCache
Long name (short name) | Argument types

cacheFillMode(cfm) | string
  Specifies the cache fill mode. Valid values are: "syncOnly" to fill cache
  during playback, "syncAsync" to cache during playback and in background,
  and "asyncOnly" to fill cache only in background. Query returns the current mode.
cacheFillOrder(cfo) | string
  Specifies in which order the cache fills the timeline. Valid values are:
  "forward" to fill cache in the forward direction, "backward" to fill cache backwards,
  "bidirectional" to fill cache in the forward and backward directions simultaneously,
  and "forwardFromBegin" to fill cache in the forward direction from the animation start.
  Query returns the current cache fill order.
cacheInvalidate(ci) | timerange
  Specifies the frame range in which cache should be invalidated. The range
  should be specified as a pair of positive integers.
  Usage examples:
  - -ci "10:20" {Python equivalent: ('10','20')} means all frames
    in the range from 10 to 20, inclusive, in the current time unit.
  Omitting one end of a range means using either end of the animation range
  (or both), as in the following examples:
  - -ci "10:" means all frames from time 10 (in the current time unit)
    onwards to the maximum time in the animation range (on the timeline).
  - -ci ":10" means all frames from the minimum time on the animation range
    (on the timeline) up to (and including) time 10 (in the current time unit).
  - -ci ":" is a short form to specify all frames, from minimum to
    maximum time on the current animation range.
cacheName(cn) | string
  Specifies the name of the cache from which to query a value.
  In query mode, this flag needs a value.
cachedFrames(cfs) | boolean
  Get the list of frames with valid cache data. The result is an integer array
  containing multiple triplets of (cache-status, begin-frame, end-frame).
  For example, the result [(0b01, 1, 3), (0b10, 7, 10), (0b11, 13, 15)] is an
  array of 9 integers; in MEL the result is typed as "int[9]", and in Python it
  is typed as "Tuple[int,int,int][3]". It indicates that frames 1:3 (1,2,3),
  7:10 (7,8,9,10), and 13:15 (13,14,15) are cached; no other frames contain
  valid cache data.
  The cache-status numbers are always 1 if "layeredEvaluationActive" is false.
  When "layeredEvaluationActive" is true, the cache-status can be one of {1,2,3},
  representing whether the frame is valid in the animation cache or the dynamics
  cache. The encoding is:
  - 1 (0b01) : only the animation cache is valid
  - 2 (0b10) : only the dynamics cache is valid
  - 3 (0b11) : both the animation and dynamics caches are valid
  In the above example, frames 1:3 are valid only in the animation cache,
  frames 7:10 are valid only in the dynamics cache, and frames 13:15 are valid
  in both and considered 'fully cached'.
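A minimal sketch that decodes this query result into readable ranges; it accepts either a flat integer list or grouped triplets, since the exact return shape can differ:
import maya.cmds as cmds
result = cmds.cacheEvaluator(query=True, cachedFrames=True) or []
# Normalize to (status, begin, end) triplets whether the result is returned
# flat or already grouped, then print a readable summary.
if result and not isinstance(result[0], (list, tuple)):
    result = list(zip(result[0::3], result[1::3], result[2::3]))
kind = {1: 'animation cache only', 2: 'dynamics cache only', 3: 'fully cached'}
for status, begin, end in result:
    print('frames %d:%d -> %s' % (begin, end, kind.get(status, 'unknown')))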
cachingPoints(cps) | boolean
  Get the list of nodes marked as caching points, i.e. nodes with at least one
  type of cache active.

creationParameters(cp) | boolean
  Get the current mode creation parameters. The result is a JSON string which
  represents an array with an element for each rule. Each element is an
  association between the parameter name and its value when creating the rule.

delegateEvaluation(de) | boolean
  Returns whether the specified node(s) are delegating to evaluation.
dynamicsAsyncRefresh(dar) | boolean
  Enable / disable asynchronous refresh in Dynamics Support mode.
  Traditionally, edits related to the simulation system require the user to play back the scene again to see the result.
  When asynchronous refresh is active, Maya will process the simulation in the background and refresh the viewport once the result is ready.
  Note that the automatic refresh will not happen if the frame contains temporary edits, for example an object that is moved without setting a keyframe.
dynamicsSupportActive(dsa) | boolean
  Query if Dynamics Support mode is active.
  Dynamics Support mode is used to support physics simulation, such as hair or fluid.
  It is activated if such nodes are detected in the scene and "enableDynamicsSupport" is set to true.
  When Dynamics Support mode is active, you will notice the following behavior:
  - Dynamics nodes are frozen for uncached frames
  - A separate dynamics cache line appears on the Time Slider
  - The dynamics cache starts filling after the animation cache has been filled
  - The dynamics cache only fills in the background
  - The dynamics cache always fills forward from the beginning
  - Dynamics cache evaluation may refresh foreground dynamics nodes (see the flag "dynamicsAsyncRefresh")
dynamicsSupportEnabled(dse) | boolean
  Specifies if Dynamics nodes are allowed to participate in Cached Playback.
  When disabled, Dynamics nodes will trigger "Safe mode" and prevent caching.
  When enabled, Dynamics nodes will participate in caching and trigger "Dynamics Support mode".
  For more information, see the flag "dynamicsSupportActive".
flushCache(fc) | string
  Specifies to flush the current cache. Valid values are: "keep" to store the existing
  cache as backup, and "destroy" to delete the current cache.
flushCacheRange(fcr) | [timerange, boolean]
  Specifies the frame range in which cache should be flushed. By default the flushed
  range is destroyed; if 'flushCache' is also set, it defines what to do with the
  cache range being flushed.
  The range should be specified as a pair of positive integers and a boolean.
  Usage examples:
  - -flushCacheRange "10:20" on {Python equivalent: ('10','20',True)}
    means all frames in the range from 10 to 20, inclusive, in the current time unit.
  - -flushCacheRange "12:18" off {Python equivalent: ('12','18',False)}
    means all frames before 12 and after 18, not inclusive, in the current time unit.
  Omitting one end of a range means using either end of the animation range
  (or both), as in the following examples:
  - -flushCacheRange "10:" on means all frames from time 10 (in the current time unit)
    onwards to the maximum time in the animation range (on the timeline).
  - -flushCacheRange ":10" on means all frames from the minimum time on the animation range
    (on the timeline) up to (and including) time 10 (in the current time unit).
  - -flushCacheRange ":" on is a short form to specify all frames, from minimum to
    maximum time on the current animation range.
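A minimal sketch of flushing a specific range, following the syntax described above (the range values are illustrative):
import maya.cmds as cmds
# Flush (destroy) the cached data for frames 10 to 20, inclusive, in the current time unit.
cmds.cacheEvaluator(flushCacheRange=('10', '20', True))
# The boolean inverts the range: flush everything outside 12:18 instead.
cmds.cacheEvaluator(flushCacheRange=('12', '18', False))
# Combine with flushCache to keep (back up) the flushed range instead of destroying it.
cmds.cacheEvaluator(flushCache='keep', flushCacheRange=('10', '20', True))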
flushCacheSync(fcs) | boolean
  Specifies how to flush the cache: synchronously or asynchronously. True for synchronous, False for asynchronous.

flushCacheWait(fcw) | boolean
  Wait for the cache destruction to be completed.
hybridCacheMode(hcm) | string
  Specifies the hybrid cache mode. Valid values are: "disabled", not to use
  hybrid cache; "smp", to evaluate on the GPU meshes with GPU-supported deformation
  stacks if they use Smooth Mesh Preview (instead of caching them);
  "all", to evaluate on the GPU all meshes with GPU-supported deformation stacks
  (instead of caching them). Query returns the current mode.
layeredEvaluationActive(lea) | boolean
  Query if Layered Evaluation mode is active.
  When layered evaluation is active, the background cache fill process is split into multiple passes for different contents (evaluation nodes).
  These contents are referred to as different 'evaluation layers', representing different levels of detail (LoD) in animation evaluation.
  For example:
  - The first layer contains regular animation, such as a character's motion.
  - The second layer contains dynamics simulations, such as the character's hair and cloth.
  Maya creates a separate cache and cache fill pass for each of the layers.
  Additional cache bars are added to the Time Slider UI to represent these layers.
  The background cache fill passes for the layers start in order; in the above
  example, two passes of background cache fill will be observed.
  In the first pass of background cache fill or playback fill, only the character motion is evaluated and filled, while hair and cloth are frozen in place.
  After the cache for the first layer has been filled for all frames, the second
  pass of cache fill starts to simulate the hair and cloth physics and fill the
  cache for the second layer.
  Once the cache for the second layer is filled for a frame, users can scrub the timeline to view the fully updated effects.
  Note that when layered evaluation is active, any foreground playback or manipulation will only evaluate the first evaluation layer,
  and all the FX contents will be frozen in the viewport until the background simulation is complete.
  For example, when rotating a character's head, its hair will not follow in real time.
  If the flag "dynamicsAsyncRefresh" is enabled, the FX contents will be updated automatically after the simulation has caught up. Please refer to that flag for more detail.
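A short sketch of querying the layered evaluation state described above:
import maya.cmds as cmds
# Check whether layered evaluation is currently active.
if cmds.cacheEvaluator(query=True, layeredEvaluationActive=True):
    # List the nodes forced to be caching points because of layered evaluation.
    print(cmds.cacheEvaluator(query=True, layeredEvaluationCachingPoints=True))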
layeredEvaluationCachingPoints(lec) | boolean
  Get the list of nodes marked as caching points because of layered evaluation.
layeredEvaluationEnabled(lee) | boolean
  Enable / disable layered evaluation in Dynamics Support mode.
  Refer to the flags "dynamicsSupportActive" and "layeredEvaluationActive" for details about the behavior when layered evaluation is enabled.
  This flag is provided to support plug-in developers for testing purposes;
  disabling this option in production is not recommended.
  When disabled, dynamics nodes share the same cache with regular animation,
  which allows dynamics nodes to be evaluated and stored to the cache in the foreground.
  The background "cacheFillOrder" option is locked to "forwardFromBegin".
  When used with cacheFillMode="syncOnly", this can also be used to support legacy dynamics nodes that cannot evaluate in the background.
listCacheNames(lcn) | boolean
  Return the list of existing cache names.

listCachedNodes(lcd) | boolean
  Returns the list of cached nodes.

listValueNames(lvn) | boolean
  Return the list of value names that can be queried for the given cache.

newAction(na) | string
  Specifies the name of the new action to create in the new filter/action rule.

newActionParam(nap) | string
  Specifies the parameter string to pass to the new action to create in the new filter/action rule.

newFilter(nf) | string
  Specifies the name of the new filter to create in the new filter/action rule.

newFilterParam(nfp) | string
  Specifies the parameter string to pass to the new filter to create in the new filter/action rule.

newRule(nr) | string
  Specifies the name of the new rule to create.

newRuleParam(nrp) | string
  Specifies the parameter string to pass to the new rule to create.
pauseInvalidation(pi) | boolean
  Pause all incoming invalidation of the cache. Works in symmetry with the resumeInvalidation flag.
  pauseInvalidation can be called several times, which is useful in nesting situations; the same
  number of resume calls is needed to resume invalidation.
  When queried, it returns how many times cache invalidation is currently paused; 0 means it is resumed.
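A minimal sketch of the nested pause/resume counting described above:
import maya.cmds as cmds
cmds.cacheEvaluator(pauseInvalidation=True)   # pause depth 1
cmds.cacheEvaluator(pauseInvalidation=True)   # pause depth 2
print(cmds.cacheEvaluator(query=True, pauseInvalidation=True))  # pause depth, expected 2 here
cmds.cacheEvaluator(resumeInvalidation=True)  # back to depth 1
cmds.cacheEvaluator(resumeInvalidation=True)  # invalidation resumed (depth 0)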
preventFrameSkip(pfs) | boolean
  Specifies if preventing frame skipping is enabled. The following behavior is seen
  when prevent frame skipping is enabled and playback is set to play in real-time:
  - If the cache can't be filled at the real-time frame rate, frames will NOT be skipped.
  - Once all frames have been looped over (and therefore all frames are cached), if
    playing back from cache still can't be done at the real-time frame rate, frames WILL be skipped.
  - If the memory limit is reached before all frames are cached, frames WILL be skipped.
  - If the cache is invalidated while playing (e.g. by flushing it), frames will NOT
    be skipped (until the cache is full again).
resetRules(rr) | boolean
  Reset the cache configuration rules to an empty set of rules.

resourceUsage(ru) | boolean
  Returns the current state of the resource usage as a string. 'unlimited' = the resource limits
  are being ignored, 'out' = the memory limit has been reached, 'low' = the memory usage is at
  90% of the specified limit, 'okay' = memory usage is under 90% of the specified limit.
resumeInvalidation(ri) | boolean
  Resume all incoming invalidation of the cache. Works in symmetry with the pauseInvalidation flag.
  pauseInvalidation can be called several times, which is useful in nesting situations; the same
  number of resume calls is needed to resume invalidation.
  When queried, it returns true if cache invalidation is resumed, false otherwise.
safeMode(sf) | boolean
  Turns safe mode on or off. In query mode, it returns the status of safe mode for the cache evaluator.

safeModeMessages(sfm) | boolean
  Prints the safe mode messages to the console.

safeModeTriggered(sft) | boolean
  Returns whether safe mode was triggered for the cache evaluator.
valueName(vn) | string
  Specifies the name of the parameter for which to query the value.
  In query mode, this flag needs a value.
waitForCache(wfc) | float
  Waits for the cache to fill in the background, with the given time to wait (in seconds) as a timeout.
Flags can have multiple arguments, passed either as a tuple or a list.
import maya.cmds as cmds
# Enable evaluation cache.
cmds.cacheEvaluator(resetRules=True)
cmds.cacheEvaluator(newFilter="evaluationCacheNodes", newAction="enableEvaluationCache")
cmds.cacheEvaluator(newRule="customEvaluators")
# Enable VP2 software cache.
cmds.cacheEvaluator(resetRules=True)
# VP2 cache only works on a subset of types (mesh, nurbsCurve, nurbsSurface, bezierCurve),
# so we still enable evaluation cache for other types.
cmds.cacheEvaluator(newFilter="evaluationCacheNodes", newAction="enableEvaluationCache")
# Enabling VP2 cache will disable evaluation cache on the supported types.
cmds.cacheEvaluator(newFilter="vp2CacheNodes", newAction="enableVP2Cache", newActionParam="useHardware=0")
# Custom evaluators of a higher priority than the caching evaluator
# may need additional caching points around their boundaries to evaluate properly.
cmds.cacheEvaluator(newRule="customEvaluators")
# Enable VP2 hardware cache.
cmds.cacheEvaluator(resetRules=True)
# VP2 cache only works on a subset of types (mesh, nurbsCurve, nurbsSurface, bezierCurve),
# so we still enable evaluation cache for other types.
cmds.cacheEvaluator(newFilter="evaluationCacheNodes", newAction="enableEvaluationCache")
# Enabling VP2 cache will disable evaluation cache on the supported types.
# Note that using the vp2CacheNodes filter is equivalent to using the
# "nodeTypes" filter with the right types specified as the "newFilterParam"
# string, i.e. "types=+mesh,+nurbsCurve,+bezierCurve,+nurbsSurface".
cmds.cacheEvaluator(newFilter="vp2CacheNodes", newAction="enableVP2Cache", newActionParam="useHardware=1")
# Custom evaluators of a higher priority than the caching evaluator
# may need additional caching points around their boundaries to evaluate properly.
cmds.cacheEvaluator(newRule="customEvaluators")
# Enable evaluation cache using explicit node types.
cmds.cacheEvaluator(resetRules=True)
cmds.cacheEvaluator(newFilter='nodeTypes', newFilterParam='types=-constraint,+transform,+mesh,+nurbsCurve,+bezierCurve,+nurbsSurface,+subdiv,+lattice,+baseLattice,+cMuscleDebug,+cMuscleDirection,+cMuscleDisplace,+cMuscleDisplay,+cMuscleFalloff,+cMuscleKeepOut,+cMuscleObject,+cMuscleSmartCollide,+cMuscleSpline,+cMuscleSurfAttach,-THlocatorShape,+locator,+light,+camera,+imagePlane,+clusterHandle,+deformFunc,+hwShader,+pfxGeometry,+follicle', newAction='enableEvaluationCache')
cmds.cacheEvaluator(resetRules=True)
cmds.cacheEvaluator(newFilter="evaluationCacheNodes", newAction="enableEvaluationCache")
cmds.cacheEvaluator(query=True, creationParameters=True)
# Result: [ { "newAction": "enableEvaluationCache", "newFilter": "evaluationCacheNodes" } ] #
# Query current cache fill mode.
cmds.cacheEvaluator(query=True, cacheFillMode=True)
# Result: syncAsync #
# Set new cache fill mode. Options are: 'asyncOnly', 'syncOnly', 'syncAsync'.
cmds.cacheEvaluator(cacheFillMode = 'syncAsync')
# Query current cache fill order.
cmds.cacheEvaluator(query=True, cacheFillOrder=True)
# Result: bidirectional #
# Set new cache fill order. Options are: 'forward', 'backward', 'bidirectional', 'forwardFromBegin'.
cmds.cacheEvaluator(cacheFillOrder='forward')
# Enable the Dynamics (Simulation) Support
# Outcome: Scenes with Dynamics nodes will be cached in 'Dynamics Support Mode'
# The Caching HUD will indicate the state as 'On (Dynamics Mode)'
cmds.cacheEvaluator(dynamicsSupportEnabled=True)
# Query if Dynamics Support Mode is active (Dynamics nodes present in the scene)
cmds.cacheEvaluator(query=True, dynamicsSupportActive=True)
# Disable the Dynamics Support
# Outcome: Safe mode is triggered and Cached Playback is disabled when Dynamics nodes are present.
cmds.cacheEvaluator(dynamicsSupportEnabled=False)
# Query or set the Dynamics-Async-Refresh option
cmds.cacheEvaluator(q=True,dynamicsAsyncRefresh=False)
cmds.cacheEvaluator(dynamicsAsyncRefresh=True)
# Query current hybrid cache mode.
cmds.cacheEvaluator(query=True, hybridCacheMode=True)
# Result: disabled #
# Set new hybrid cache mode. Options are: 'disabled', 'smp', 'all'.
cmds.cacheEvaluator(hybridCacheMode = 'smp')
# Invalidate cache for all the frames in range from 10 to 20, inclusive, in current time unit.
cmds.cacheEvaluator(cacheInvalidate=('10','20'))
cmds.cacheEvaluator(cacheInvalidate=('10:20',))
# Invalidate cache for all the frames in range from 10 onwards to the maximum time in the animation range, in current time unit.
cmds.cacheEvaluator(cacheInvalidate=('10:',))
# Invalidate cache for all the frames in range from the minimum time on the animation range up to (and including) 10, in current time unit.
cmds.cacheEvaluator(cacheInvalidate=(':10',))
# Invalidate cache from minimum to maximum time on the current animation range.
cmds.cacheEvaluator(cacheInvalidate=(':',))
# Check whether or not evaluation cache is active on a given node.
cmds.cacheEvaluator("myNode", query=True, cacheName="evaluation")
# Result: [u'1'] #
cmds.cacheEvaluator("myNode", query=True, cacheName="evaluation", valueName="active")
# Result: [u'1'] #
# Check whether or not VP2 cache is active, and using hardware cache.
cmds.cacheEvaluator("myNode", query=True, cacheName="VP2")
# Result: [u'1'] #
cmds.cacheEvaluator("myNode", query=True, cacheName="VP2", valueName="active")
# Result: [u'1'] #
cmds.cacheEvaluator("myNode", query=True, cacheName="VP2", valueName="useHardware")
# Result: [u'1'] #
# Check whether or not delegate evaluation is active on a given node.
cmds.cacheEvaluator("myNode", query=True, delegateEvaluation=True)
# Result: [u'0'] #
cmds.cacheEvaluator(query=True, cachingPoints=True)
# Result: [u'nurbsCone1', u'nurbsConeShape1'] #
# Flush the current cache. The "keep" and "destroy" values can be used to store or destroy the existing cache.
cmds.cacheEvaluator(flushCache='destroy')
# Result: destroy #
# Query the cache evaluator's flush synchronization mode.
cmds.cacheEvaluator(query=True, flushCacheSync=True)
# Result: 0 #
# Set the cache evaluator's flush synchronization mode. Valid values are: True for synchronous, False for asynchronous.
cmds.cacheEvaluator(flushCacheSync=True)
# Wait for cache destruction.
cmds.cacheEvaluator(flushCacheWait=True)
# Check the available types of cache.
cmds.cacheEvaluator(query=True, listCacheNames=True)
# Result: [u'evaluation', u'VP2'] #
# Query the list of cached nodes.
cmds.cacheEvaluator(query=True, listCachedNodes=True)
# Result: nurbsSphere1,nurbsSphereShape1 #
# Check the available values that can be queried for available caches.
cmds.cacheEvaluator(query=True, cacheName="evaluation", listValueNames=True)
# Result: [u'active'] #
cmds.cacheEvaluator(query=True, cacheName="VP2", listValueNames=True)
# Result: [u'active', u'useHardware'] #
# Query if prevent-frame-skipping is on.
cmds.cacheEvaluator(query=True, preventFrameSkip=True)
# Result: 1 #
# Set prevent-frame-skipping to on.
cmds.cacheEvaluator(preventFrameSkip=True)
# Query if the cache invalidation is paused. Returns how many times invalidation is paused.
cmds.cacheEvaluator(query=True, pauseInvalidation=True)
# Result: 0 #
# Pause cache invalidation.
cmds.cacheEvaluator(pauseInvalidation=True)
# Resume cache invalidation.
cmds.cacheEvaluator(resumeInvalidation=True)
# Query whether or not the resource limit has been reached.
cmds.cacheEvaluator(query=True, resourceUsage=True)
# Result: okay #
# Turn safe mode state for evaluator on.
cmds.cacheEvaluator(safeMode=True)
# Query the safe mode state for evaluator.
cmds.cacheEvaluator(query=True, safeMode=True)
# Result: 1 #
# If safe mode was triggered, return the safe mode messages
cmds.cacheEvaluator(query=True, safeModeMessages=True)
# Result: #
# Check if safe mode was triggered
cmds.cacheEvaluator(query=True, safeModeTriggered=True)
# Result: 0 #
# Wait for 10 seconds for cache to fill in background
cmds.cacheEvaluator(waitForCache=10)
# Result: True #
# Save the current caching mode.
cacheModeString = cmds.cacheEvaluator(query=True, creationParameters=True)
useEval = True
if useEval:
    # The returned string can be evaluated as regular Python code to get a
    # list of dictionaries describing the rules.
    cacheMode = eval(cacheModeString)
else:
    # json.loads can also be used to parse that string.  (In older, Python 2
    # based versions of Maya the resulting unicode keys had to be converted
    # to str before being unpacked as keyword arguments.)
    import json
    cacheMode = json.loads(cacheModeString)
# Restore the previous cache mode.
cmds.cacheEvaluator(resetRules=True)
for rule in cacheMode:
    cmds.cacheEvaluator(**rule)
# Use the CacheEvaluatorManager to get/set modes.
from maya.plugin.evaluator.CacheEvaluatorManager import CacheEvaluatorManager
manager = CacheEvaluatorManager()
currentMode = manager.cache_mode
from maya.plugin.evaluator.CacheEvaluatorManager import CACHE_STANDARD_MODE_VP2_HW, CACHE_STANDARD_MODE_VP2_SW, CACHE_STANDARD_MODE_EVAL
# Enable evaluation cache.
manager.cache_mode = CACHE_STANDARD_MODE_EVAL
# Enable VP2 software cache.
manager.cache_mode = CACHE_STANDARD_MODE_VP2_SW
# Enable VP2 hardware cache.
manager.cache_mode = CACHE_STANDARD_MODE_VP2_HW
# Restore the previous mode.
manager.cache_mode = currentMode