layer_linear

Performs the linear transformation for a layer in a neural network, the foundation of the feedforward mechanism. After this linear operation, an activation function (such as ReLU or sigmoid) is typically applied to introduce non-linearity.

Equation

For an input vector in_features, weight matrix W, and bias vector b, the layer operation computes:

out_features = W * in_features + b
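
The following is a minimal NumPy sketch of this operation. The function name and argument order mirror this page but are illustrative, not the library's actual signature; W is assumed to have one row per output feature.

import numpy as np

def layer_linear(in_features, W, b):
    # Linear transformation: out_features = W * in_features + b
    return W @ in_features + b

# Example: a layer mapping 3 input features to 2 output features.
in_features = np.array([1.0, 2.0, 3.0])
W = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6]])
b = np.array([0.5, -0.5])

out_features = layer_linear(in_features, W, b)
print(out_features)  # [1.9 2.7]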

Inputs

in_features

The input feature vector to the layer.

W

The weight matrix associated with the layer.

b

The bias vector associated with the layer.

Outputs

out_features

The result of the linear transformation for the given layer.
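
As noted above, the linear output is usually passed through an activation function. A small illustrative sketch, reusing the same hypothetical layer_linear stand-in as in the Equation section, chains two layers with a ReLU in between:

import numpy as np

def layer_linear(in_features, W, b):
    # Linear transformation: out_features = W * in_features + b
    return W @ in_features + b

def relu(x):
    # Element-wise ReLU activation applied after the linear step.
    return np.maximum(x, 0.0)

# Two-layer feedforward pass: linear -> ReLU -> linear.
x = np.array([1.0, 2.0])
W1, b1 = np.array([[1.0, 0.5], [-0.5, 1.0]]), np.array([0.0, 0.1])
W2, b2 = np.array([[2.0, -1.0]]), np.array([0.3])

hidden = layer_linear(x, W1, b1)      # linear output of the first layer
hidden = relu(hidden)                 # non-linearity between layers
output = layer_linear(hidden, W2, b2)
print(output)  # [2.7]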