The Channel Operator Sensor Input defines the following specific
attributes. For common attributes, see Channel Operator Common Attributes.
Channel Operator Attributes
Vision mode
Defines what will be extracted from the Sensor Locators. Available values
are:
Angular Occupation (Ratio): the output will be the
ratio of occupied space in the angular fuzz vision space
Distance Occupation (Ratio): the output will be the
ratio of occupied space in the distance fuzz vision space
Angular and Distance Occupation (Ratio): the output
will be the ratio of occupied space in the fuzz vision space (will
return 1 when the vision is fully occupied, 0 when it's completely
empty)
Maximum Angular Clearance (Angle): the output value
will be the angle (in degrees) for the maximum clearance direction in
the fuzz vision space (i.e. the direction in which the entity can see the
farthest empty space)
Minimum Angular Clearance (Angle): the output value
will be the angle (in degrees) for the minimum clearance direction in
the fuzz vision space (i.e. the direction in which the entity can see the
closest obstacle)
Spherical Occupation (Ratio): in this mode, the
vision is a projection of the perception voxels onto a region of a
sphere. The output is the ratio (mean value) of the region.
Maximum Clearance (Vector): in this mode, the vision is a projection of the perception voxels onto a region of a sphere, transformed into a distance map (the pixels fade the farther they are from the voxel projections). The output is a unit vector, which is the direction in world coordinates towards the clearance point on the sphere - the point with the lowest value on the distance map (farthest from any obstacle).
Minimum Clearance (Vector): same as in Maximum
Clearance (Vector) mode, but the direction is towards the point with the
highest value on the distance map (towards the closest obstacle)
!Ratio
In Angular Occupation / Distance Occupation / Angular and Distance
Occupation / Spherical Occupation modes, this option will change the
output value to 1-(occupied ratio), which means it will be 1 when the vision is empty and 0 when it is full, rather than the opposite.
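As a rough illustration of how these occupation and clearance outputs relate to the fuzz vision space, here is a minimal sketch. It assumes the fuzz vision space can be reduced to a 1D array of pixel alphas (one per angular sample); the array and helper names are made up for the example and are not part of the node:

```python
import numpy as np

def occupation_ratio(alphas, invert=False):
    """Ratio of occupied space (0 = empty, 1 = full).
    With invert=True (the !Ratio option) the result becomes 1 - ratio."""
    ratio = float(np.mean(alphas))
    return 1.0 - ratio if invert else ratio

def max_angular_clearance(alphas, angles_deg):
    """Angle (degrees) of the direction with the most empty space
    (the sample with the lowest alpha)."""
    return float(angles_deg[int(np.argmin(alphas))])

def min_angular_clearance(alphas, angles_deg):
    """Angle (degrees) of the direction with the closest obstacle
    (the sample with the highest alpha)."""
    return float(angles_deg[int(np.argmax(alphas))])

# Example: a vision space sampled every 10 degrees between -60 and 60
angles = np.arange(-60.0, 61.0, 10.0)
alphas = np.array([0.0, 0.1, 0.4, 0.8, 1.0, 0.9, 0.5, 0.2, 0.1, 0.0, 0.0, 0.3, 0.6])
print(occupation_ratio(alphas), occupation_ratio(alphas, invert=True))
print(max_angular_clearance(alphas, angles), min_angular_clearance(alphas, angles))
```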
Angle
These parameters configure the zone in which obstacles are checked for in the vision (represented as green zones in the Visual Feedbacks).
In Angular Occupation Vision Mode:
Angle[1] and Angle[2] define the range at which obstacles are taken into
account.
Angle[0] is used to define a slope between Angle[0] and Angle[1] in which obstacles will only be partly taken into account at the beginning of the range.
Angle[3] is used to define a slope between Angle[2] and Angle[3] in which obstacles will only be partly taken into account at the end of the range.
In Distance Occupation Vision Mode:
Width[0] and Depth[1] define the range at which obstacles are taken into
account.
Depth[0] is used to define a slope between Depth[0] and Width[0] in which obstacles will only be partly taken into account at the beginning of the range.
Width[1] is used to define a slope between Depth[1] and Width[1] in which obstacles will only be partly taken into account at the end of the range.
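The slopes described for the Angular Occupation and Distance Occupation modes amount to a trapezoidal weighting of each obstacle sample. The sketch below is an assumption about that shape drawn from the description above (linear slopes), not the node's actual code:

```python
def trapezoid_weight(x, outer_lo, inner_lo, inner_hi, outer_hi):
    """Contribution of an obstacle sample at position x (an angle or a distance):
    0 outside [outer_lo, outer_hi], 1 inside [inner_lo, inner_hi],
    and a linear slope (partial contribution) on both sides."""
    if x <= outer_lo or x >= outer_hi:
        return 0.0
    if inner_lo <= x <= inner_hi:
        return 1.0
    if x < inner_lo:
        return (x - outer_lo) / (inner_lo - outer_lo)
    return (outer_hi - x) / (outer_hi - inner_hi)

# Angular Occupation with Angle[0..3] = -60, -30, 30, 60 degrees
print(trapezoid_weight(-45.0, -60.0, -30.0, 30.0, 60.0))  # 0.5: partly taken into account
print(trapezoid_weight(0.0, -60.0, -30.0, 30.0, 60.0))    # 1.0: fully taken into account
```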
In Angular and Distance Occupation / Maximum Angular Clearance /
Minimum Angular Clearance modes:
The maximum value amongst the Width and Depth values, multiplied by Max Distance Factor, defines the radius of the radar vision (here: 6mu)
Depth[0] defines the minimum distance for the fuzz vision space (here:
1mu)
Depth[1] defines the maximum distance for the fuzz vision space(here:
5mu)
Angle[0] defines the minimum angle (right side) of the fuzz vision space at its minimum distance (here: -60° at 1mu)
Angle[1] defines the minimum angle (right side) of the fuzz vision space at its maximum distance (here: -30° at 5mu)
Angle[2] defines the maximum angle (left side) of the fuzz vision space at its maximum distance (here: 30° at 5mu)
Angle[3] defines the maximum angle (left side) of the fuzz vision space at its minimum distance (here: 60° at 1mu)
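The way the angular limits change between the minimum and maximum distance can be sketched as a simple membership test. The linear interpolation of the limits with distance is an assumption (only the values at Depth[0] and Depth[1] are specified above); the numbers reuse the example values:

```python
def in_fuzz_vision_space(angle_deg, distance,
                         depth=(1.0, 5.0),
                         angle=(-60.0, -30.0, 30.0, 60.0)):
    """True if a perceived point at (angle_deg, distance) falls inside the
    fuzz vision space: between Depth[0] and Depth[1], and between angular
    limits interpolated from (Angle[0], Angle[3]) at the minimum distance
    to (Angle[1], Angle[2]) at the maximum distance."""
    d_min, d_max = depth
    if not d_min <= distance <= d_max:
        return False
    t = (distance - d_min) / (d_max - d_min)
    right = angle[0] + t * (angle[1] - angle[0])  # -60° at 1mu -> -30° at 5mu
    left = angle[3] + t * (angle[2] - angle[3])   #  60° at 1mu ->  30° at 5mu
    return right <= angle_deg <= left

print(in_fuzz_vision_space(-50.0, 1.5))  # True: near the minimum distance the space is wide
print(in_fuzz_vision_space(-50.0, 4.5))  # False: near the maximum distance it has narrowed
```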
In Spherical Occupation / Maximum Clearance / Minimum Clearance
modes:
Depth[0] defines the minimum distance for perceived obstacles
Depth[1] defines the maximum distance for perceived obstacles
The four Angle parameters define the limits of the two angles of a Spherical Coordinate System:
Angle[0] and Angle[1] are the minimum and maximum Phi (Φ), between 0 and 360° (or -180 and 180°), which is the angle between the up axis and D', the projection of the direction on the up and side plane
Angle[2] and Angle[3] are the minimum and maximum Theta (θ), between 0 and 180°, which is the angle between the front axis and the direction D of the perceived object
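For the spherical modes, a perceived direction can be converted to this (Φ, θ) pair and tested against the Angle limits. The axis conventions in the sketch (front, up, side vectors) are assumptions chosen to keep the example self-contained; they may not match the entity's actual axes:

```python
import math

def direction_to_phi_theta(d, front=(0.0, 0.0, 1.0), up=(0.0, 1.0, 0.0), side=(1.0, 0.0, 0.0)):
    """Phi: angle between the up axis and the projection of the (unit) direction d
    onto the up/side plane, in [-180, 180]. Theta: angle between the front axis
    and d, in [0, 180]. Both returned in degrees."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    theta = math.degrees(math.acos(max(-1.0, min(1.0, dot(d, front)))))
    phi = math.degrees(math.atan2(dot(d, side), dot(d, up)))
    return phi, theta

def in_spherical_region(d, angle):
    """angle = (Angle[0], Angle[1], Angle[2], Angle[3]) = (Phi min, Phi max, Theta min, Theta max)."""
    phi, theta = direction_to_phi_theta(d)
    return angle[0] <= phi <= angle[1] and angle[2] <= theta <= angle[3]

# A direction straight ahead falls inside a frontal hemisphere region
print(in_spherical_region((0.0, 0.0, 1.0), (-180.0, 180.0, 0.0, 90.0)))  # True
```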
Depth
Width
How the Depth and Width values are used depends on the Vision Mode; see the Angle attribute above.
Max Distance Factor
Factor which will be multiplied by the greatest Depth or Width value to define the maximum distance at which the vision is computed.
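For example (attribute values made up for the sketch):

```python
depth = (1.0, 5.0)         # Depth[0], Depth[1]
width = (2.0, 6.0)         # Width[0], Width[1]
max_distance_factor = 1.2
# Maximum distance at which the vision is computed
max_vision_distance = max(max(depth), max(width)) * max_distance_factor
print(max_vision_distance)  # 7.2
```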
Distance Weight
Allows the vision pixels to have a value between 0 and 1 according
to the distance to the perceived obstacle - entities that are farther
away are less taken into account, as in the figure below - w is the
distance weight, alpha (α) is the pixel value, Dmin and Dmax are the values in the Depth parameter.
The Distance Weight is between 0 and 1. At 0 the distance is not taken
into account and the pixels have the same alpha.
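The figure mentioned above is not reproduced here; the sketch below is one plausible reading of the described behavior (a linear fade of the pixel alpha between Dmin and Dmax, scaled by the Distance Weight w), not necessarily the node's exact formula:

```python
def distance_weighted_alpha(distance, d_min, d_max, weight):
    """Pixel alpha for a perceived obstacle at the given distance.
    weight = 0: distance ignored, alpha stays at 1.
    weight = 1: alpha fades linearly from 1 (at d_min) to 0 (at d_max)."""
    t = (distance - d_min) / (d_max - d_min)   # normalized distance in [0, 1]
    t = max(0.0, min(1.0, t))
    return 1.0 - weight * t

print(distance_weighted_alpha(3.0, 1.0, 5.0, 0.0))  # 1.0: distance not taken into account
print(distance_weighted_alpha(3.0, 1.0, 5.0, 1.0))  # 0.5: halfway between Dmin and Dmax
```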
Relative Velocity Weight
This weight takes into account the velocity of the perceived object
relative to the entity. This is projected along the front axis. The
relative velocity is negative when the objects are seen as coming
towards the entity and positive if the objects are seen as going away
from the entity. In the figure below, rV is the relative velocity, S is
the sum of the speeds of the entity and the perceived object and w the
Relative Velocity Weight. The more the perceived object goes the same
way as the entity, the lower its pixel alpha value. This allows lowering the importance of objects that go in the same direction or away from the entity, because there is no need to avoid them. If the Relative Velocity
Weight is 0 the relative velocity is not taken into account and the
pixels have the same alpha. The weight can be greater than 1, for
example at a value of 2 the alpha will be 0 when rV is 0, which means
that when a perceived object goes the same way and at the same speed or
faster than the entity it will be invisible (thus not avoided).
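The figure mentioned above is not reproduced here either; the sketch below is an interpretation consistent with the two behaviors stated in the text (a weight of 0 leaves the alpha untouched, and a weight of 2 makes the alpha reach 0 at rV = 0), not necessarily the node's exact formula:

```python
def velocity_weighted_alpha(rel_velocity, speed_sum, weight):
    """Pixel alpha as a function of the relative velocity rV projected on the front axis.
    rV = -speed_sum: object coming straight towards the entity (alpha stays high).
    rV = +speed_sum: object going away as fast as possible (alpha lowest).
    weight = 0 leaves the alpha untouched; weight = 2 makes alpha reach 0 at rV = 0."""
    t = (rel_velocity + speed_sum) / (2.0 * speed_sum)  # 0 at rV = -S, 1 at rV = +S
    return max(0.0, 1.0 - weight * t)

print(velocity_weighted_alpha(0.0, 2.0, 0.0))  # 1.0: relative velocity not taken into account
print(velocity_weighted_alpha(0.0, 2.0, 2.0))  # 0.0: same-speed objects become invisible
```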
When changing any of those values, the display of the Sensor Input will
be updated automatically.
Sensor Attributes
Use All Input Sensors
Check to use all the available input sensors (i.e. those currently started by an Activate Sensor Behavior) on the Entity
Sensors
If "Use All Input Sensors" is not checked, this list allows to determine which Sensor Locators to use. Note that any Sensor listed here should also be enabled using the Activate Sensor Behavior to be effectively used.
Additional Inputs
These inputs are only taken into account in Maximum Clearance (Vector)
and Minimum Clearance (Vector) modes.
previous ChOps [0]
This input gives the world direction to choose when there are no obstacles. This allows the ChOps sensor input to use, for example, the local target of a Steer behavior (going towards a poptool or a mesh, or following a curve).
previous ChOps [1]
When there are obstacles, the clearance will actually be the point closest to the input direction among those within a tolerance of the absolute minimum (or maximum) alpha. This input gives this tolerance. The default value is 0.05. This allows picking directions that are closer to the goal (given by the first input).
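To illustrate how these two inputs interact in the Maximum Clearance (Vector) mode, here is a rough sketch (the alphas/directions arrays and the function name are hypothetical): among the pixels whose distance-map value lies within the tolerance of the absolute minimum, the direction closest to the goal is returned.

```python
import numpy as np

def pick_clearance_direction(alphas, directions, goal_dir, tolerance=0.05):
    """alphas: distance-map values per pixel; directions: unit vectors (world space);
    goal_dir: direction given by previous ChOps [0]; tolerance: previous ChOps [1].
    Among the pixels within `tolerance` of the minimum alpha, pick the direction
    closest to the goal (use the maximum alpha instead for Minimum Clearance)."""
    alphas = np.asarray(alphas, dtype=float)
    directions = np.asarray(directions, dtype=float)
    candidates = np.where(alphas <= alphas.min() + tolerance)[0]
    dots = directions[candidates] @ np.asarray(goal_dir, dtype=float)
    return directions[candidates[int(np.argmax(dots))]]

# Two equally clear directions; the one closest to the goal (+X) is returned
alphas = [0.0, 0.02, 0.9]
dirs = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
print(pick_clearance_direction(alphas, dirs, goal_dir=(1.0, 0.0, 0.0)))
```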