motornet.nets.losses#

This module implements custom tensorflow.python.keras.losses.Loss objects that are useful for motor control.

Note

There are a couple of naming conventions that this module employs:

  • Regularizer indicates the penalization is applied to the value of the model’s output, not the error between the model’s output and a user-fed label.

  • xDx indicates that both the value and the derivative of the passed input are penalized, with the derivative loss scaled relative to the value loss by a user-defined deriv_weight scalar.

class motornet_tf.nets.losses.ClippedPositionLoss(target_size: float, name: str = 'position', reduction='auto')#

Bases: LossFunctionWrapper

Applies an L1 penalty to the positional error between the model’s output positional state x and a user-fed label position y:

xp, _ = tf.split(x, 2, axis=-1)  # remove velocity from the positional state
yp, _ = tf.split(y, 2, axis=-1)
loss = tf.reduce_mean(tf.abs(xp - yp))

If the radial distance to the desired position is less than a user-defined radius (the target size), the loss is clipped to 0.

Note

The positional error does not include velocity, hence the use of tf.split to extract position from the state array.

Parameters:
  • target_size – Float, the radius around the desired position within which the position loss is clipped to 0.

  • name – String, the name (label) to give to the loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.
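
For illustration, a minimal sketch of the clipping step described above is given here. This is a sketch of the documented behaviour, not the library's exact implementation; target_size stands for the constructor argument of the same name:

xp, _ = tf.split(x, 2, axis=-1)  # extract position, discard velocity
yp, _ = tf.split(y, 2, axis=-1)
l1 = tf.abs(xp - yp)
dist = tf.sqrt(tf.reduce_sum((xp - yp) ** 2, axis=-1, keepdims=True))  # radial distance to the desired position
mask = tf.cast(dist >= target_size, l1.dtype)  # 0 inside the target radius, 1 outside
loss = tf.reduce_mean(l1 * mask)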

class motornet_tf.nets.losses.CompoundedLoss(losses: list | tuple, loss_weights: list | tuple, name: str = 'compounded_loss', reduction='auto')#

Bases: LossFunctionWrapper

Wraps several losses into a single loss object, creating a composite loss that sums the loss value of each subloss. Each subloss’s contribution can be weighted by a constant scalar value.

Parameters:
  • losses – List or tuple of loss objects.

  • loss_weights – List or tuple of float scalars indicating the weight of the corresponding loss in the losses argument provided above.

  • name – String, the name (label) to give to the compounded loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.
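
As a usage illustration, the sketch below compounds a position loss with an L2 regularizer. The weights are arbitrary example values, not recommended settings:

from motornet_tf.nets.losses import CompoundedLoss, L2Regularizer, PositionLoss

loss = CompoundedLoss(
    losses=[PositionLoss(), L2Regularizer()],
    loss_weights=[1., 0.2],  # example weights only
    name='position_plus_l2',
)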

class motornet_tf.nets.losses.L2ActivationL1MuscleVelIndLoss(max_iso_force: float, activation_weight: float, deriv_weight: float, name: str = 'l2_activation_muscle_vel', reduction='auto')#

Bases: LossFunctionWrapper

Applies an L2 penalty to muscle activation and an L1 penalty to muscle velocity. Must be applied to the muscle state output. The L2 penalty on muscle activation is normalized by the maximum isometric force of each muscle. If a is the normalized muscle activation and dx the muscle velocity, then the penalty evaluates to:

loss = activation_weight * tf.reduce_mean(a ** 2) + deriv_weight * tf.reduce_mean(tf.abs(dx))
Parameters:
  • max_iso_force – Float or list, the maximum isometric force of each muscle in the order they are declared in the motornet.plants.plants.Plant object class or subclass.

  • activation_weight – Float, the weight of the muscle activation penalty.

  • deriv_weight – Float, the weight of the muscle velocity penalty.

  • name – String, the name (label) to give to the loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.

class motornet_tf.nets.losses.L2ActivationLoss(max_iso_force, name: str = 'l2_activation', reduction='auto')#

Bases: LossFunctionWrapper

Applies an L2 penalty to muscle activation. Must be applied to the muscle state output. The L2 penalty is normalized by the maximum isometric force of each muscle.

Parameters:
  • max_iso_force – Float or list, the maximum isometric force of each muscle in the order they are declared in the motornet.plants.plants.Plant object class or subclass.

  • name – String, the name (label) to give to the loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.
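
For comparison with the other activation losses in this module, the penalty can be sketched as below, where a is the muscle activation normalized by each muscle's maximum isometric force (an illustrative sketch, not the exact implementation):

loss = tf.reduce_mean(a ** 2)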

class motornet_tf.nets.losses.L2ActivationMuscleVelLoss(max_iso_force: float, deriv_weight: float, name: str = 'l2_activation_muscle_vel', reduction='auto')#

Bases: LossFunctionWrapper

Applies an L2 penalty to muscle activation and muscle velocity. Must be applied to the muscle state output. The L2 penalty on muscle activation is normalized by the maximum isometric force of each muscle. If a is the normalized muscle activation and dx the muscle velocity, then the penalty evaluates to:

loss = tf.reduce_mean(a ** 2) + deriv_weight * tf.reduce_mean(dx ** 2)
Parameters:
  • max_iso_force – Float or list, the maximum isometric force of each muscle in the order they are declared in the motornet.plants.plants.Plant object class or subclass.

  • deriv_weight – Float, the weight of the muscle velocity penalty relative to the activation penalty.

  • name – String, the name (label) to give to the loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.

class motornet_tf.nets.losses.L2Regularizer(name: str = 'l2_regularizer', reduction='auto')#

Bases: LossFunctionWrapper

Applies an L2 penalty to the model’s output values. For instance, if we label an output value x, then the penalty evaluates to:

loss = tf.reduce_mean(x ** 2)
Parameters:
  • name – String, the name (label) to give to the loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.

class motornet_tf.nets.losses.L2xDxActivationLoss(max_iso_force: float, deriv_weight: float, dt: float, name: str = 'l2_xdx_activation', reduction='auto')#

Bases: LossFunctionWrapper

Applies an L2 penalty to muscle activation and its derivative. Must be applied to the muscle state output. The L2 penalty is normalized by the maximum isometric force of each muscle. If we label the normalized muscle activation a, and its derivative da, then the penalty evaluates to:

loss = tf.reduce_mean(a ** 2) + deriv_weight * tf.reduce_mean(da ** 2)
Parameters:
  • max_iso_force – Float or list, the maximum isometric force of each muscle in the order they are declared in the motornet.plants.plants.Plant object class or subclass.

  • deriv_weight – Float, the weight of the derivative’s penalty compared to the value itself.

  • dt – Float, the size of a single timestep. This is used to calculate derivatives using Euler’s method.

  • name – String, the name (label) to give to the loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.

class motornet_tf.nets.losses.L2xDxRegularizer(deriv_weight: float, dt: float, name: str = 'gru_regularizer', reduction='auto')#

Bases: LossFunctionWrapper

Applies an L2 penalty to the model’s output values and their derivatives. For instance, if we label an output value x, and its derivative dx, then the penalty evaluates to:

loss = tf.reduce_mean(x ** 2) + deriv_weight * tf.reduce_mean(dx ** 2)
Parameters:
  • deriv_weight – Float, the weight of the derivative’s penalty compared to the value itself.

  • dt – Float, the size of a single timestep. This is used to calculate derivatives using Euler’s method.

  • name – String, the name (label) to give to the loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.
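
The derivative term can be illustrated with a finite-difference (Euler) approximation over the time dimension. The sketch below assumes x has shape (batch, time, units); it is an illustration only, not the library's exact implementation:

dx = (x[:, 1:, :] - x[:, :-1, :]) / dt  # Euler approximation of the derivative
loss = tf.reduce_mean(x ** 2) + deriv_weight * tf.reduce_mean(dx ** 2)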

class motornet_tf.nets.losses.PositionLoss(name: str = 'position', reduction='auto')#

Bases: LossFunctionWrapper

Applies an L1 penalty to the positional error between the model’s output positional state x and a user-fed label position y:

xp, _ = tf.split(x, 2, axis=-1)  # remove velocity from the positional state
yp, _ = tf.split(y, 2, axis=-1)
loss = tf.reduce_mean(tf.abs(xp - yp))

Note

The positional error does not include velocity, hence the use of tf.split to extract position from the state array.

Parameters:
  • name – String, the name (label) to give to the loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.
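
A hypothetical usage sketch is shown below; the output name 'cartesian position' and the optimizer are assumptions made for illustration and should be adapted to your own model's outputs:

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss={'cartesian position': PositionLoss(name='position')},  # 'cartesian position' is a hypothetical output name
)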

class motornet_tf.nets.losses.RecurrentActivityRegularizer(network, recurrent_weight: float, activity_weight: float, name: str = 'recurrent_activity', reduction='auto')#

Bases: LossFunctionWrapper

Applies an L2 penalty to the model’s recurrent activity and hidden activity values. For instance, if we label the recurrent activity f, and the hidden activity h, then the penalty evaluates to:

loss = recurrent_weight * f + activity_weight * tf.reduce_mean(h ** 2)

The variable f is calculated according to the recurrent regularization method proposed in [1], adapted for GRU units.

References

[1] Sussillo, D., Churchland, M., Kaufman, M. et al. A neural network that finds a naturalistic solution for the production of muscle activity. Nat Neurosci 18, 1025–1033 (2015). https://doi.org/10.1038/nn.4042

Parameters:
  • network – A motornet.nets.layers.Network object class or subclass. The Network object must be the one being trained. This is used to fetch the weight values of the layer at index 1.

  • recurrent_weight – Float, the weight of the penalization for the recurrent activity values.

  • activity_weight – Float, the weight of the penalization for the hidden activity values.

  • name – String, the name (label) to give to the loss object. This is used to print, plot, and save losses during training.

  • reduction – The reduction method used. The default value is tensorflow.python.keras.utils.losses_utils.ReductionV2.AUTO. See the TensorFlow documentation for more details.