Classes

DeepProMP

moppy.deep_promp.deep_promp.DeepProMP(name: str, encoder: EncoderDeepProMP, decoder: DecoderDeepProMP, save_path: str = './deep_promp/output/', log_to_tensorboard: bool = False, learning_rate: float = 0.005, epochs: int = 100, beta: float = 0.01)

Bases: moppy.interfaces.movement_primitive.MovementPrimitive

Parameters

  • name (str): Name of the model.
  • encoder (EncoderDeepProMP): Encoder component for DeepProMP.
  • decoder (DecoderDeepProMP): Decoder component for DeepProMP.
  • save_path (str, optional): Path to save the output files. Defaults to ./deep_promp/output/.
  • log_to_tensorboard (bool, optional): Whether to log training to TensorBoard. Defaults to False.
  • learning_rate (float, optional): Learning rate for the optimizer. Defaults to 0.005.
  • epochs (int, optional): Number of epochs for training. Defaults to 100.
  • beta (float, optional): Regularization parameter for the model. Defaults to 0.01.

Functions

kl_annealing_scheduler()

The kl_annealing_scheduler is a static method that adjusts the weight of the Kullback-Leibler (KL) divergence term during training, a technique commonly used in variational models. This annealing process gradually increases the weight of the KL divergence term, preventing the model from relying too heavily on the prior too early in training. The scheduler repeats over multiple cycles and, within each cycle, saturates based on the provided saturation_point.

Method Signature

kl_annealing_scheduler(current_epoch: int, n_cycles: int = 4, max_epoch: int = 1000, saturation_point: float = 0.5)

Parameters

  • current_epoch (int): The current epoch during training.
  • n_cycles (int, optional): Number of cycles to repeat the annealing process. Defaults to 4.
  • max_epoch (int, optional): Maximum number of training epochs. Defaults to 1000.
  • saturation_point (float, optional): The point at which the annealing process saturates (i.e., reaches its maximum). Defaults to 0.5.

Returns

  • tau (float): A value between 0 and 1, representing the current weight of the KL divergence term.

Example

# Using the KL annealing scheduler in training
kl_weight = DeepProMP.kl_annealing_scheduler(current_epoch=50)
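The cyclic behavior described above can be sketched as follows. This is a minimal re-implementation for illustration under the stated defaults (linear ramp within each cycle, clamped at 1 after the saturation point), not the library's own code:

```python
def kl_annealing_sketch(current_epoch: int, n_cycles: int = 4,
                        max_epoch: int = 1000,
                        saturation_point: float = 0.5) -> float:
    """Cyclic KL annealing: within each cycle, tau ramps linearly from 0
    toward 1 and is held at 1 once the cycle passes its saturation point."""
    epochs_per_cycle = max_epoch / n_cycles
    # Position within the current cycle, normalized to [0, 1)
    cycle_progress = (current_epoch % epochs_per_cycle) / epochs_per_cycle
    # Ramp up until the saturation point, then hold at 1.0
    return min(cycle_progress / saturation_point, 1.0)

print(kl_annealing_sketch(0))    # 0.0 -- start of a cycle, KL weight is zero
print(kl_annealing_sketch(200))  # 1.0 -- past the saturation point, clamped
```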

gauss_kl()

The gauss_kl method calculates the Kullback-Leibler (KL) divergence between a given Gaussian distribution (with parameters mu_q and std_q) and a standard Gaussian distribution. This divergence is a measure of how one probability distribution diverges from a second, reference distribution (in this case, the standard Gaussian). It's a key component of variational inference models.

Method Signature

gauss_kl(mu_q: torch.Tensor, std_q: torch.Tensor) -> torch.Tensor

Parameters

  • mu_q (torch.Tensor): The mean of the approximate posterior distribution.
  • std_q (torch.Tensor): The standard deviation of the approximate posterior distribution.

Returns

  • kl_divergence (torch.Tensor): The mean KL divergence between the approximate posterior and the standard Gaussian distribution.

Example

# Calculate KL divergence for a Gaussian distribution with given mean and std deviation
kl_div = DeepProMP.gauss_kl(mu_q=mu_tensor, std_q=std_tensor)
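The closed-form KL divergence between N(mu_q, std_q²) and the standard Gaussian N(0, 1) can be sketched as below. This is the standard analytic formula; the exact reduction (sum vs. mean over dimensions) used by gauss_kl is an assumption here:

```python
import torch

def gauss_kl_sketch(mu_q: torch.Tensor, std_q: torch.Tensor) -> torch.Tensor:
    """Closed-form KL(N(mu_q, std_q^2) || N(0, 1)), averaged over the batch."""
    var_q = std_q ** 2
    # Per-dimension KL against a standard normal prior
    kl_per_dim = 0.5 * (var_q + mu_q ** 2 - 1.0 - torch.log(var_q))
    return kl_per_dim.sum(dim=-1).mean()

# The KL of a standard Gaussian against itself is zero
mu = torch.zeros(2, 3)
std = torch.ones(2, 3)
print(gauss_kl_sketch(mu, std))  # tensor(0.)
```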

calculate_elbo()

The calculate_elbo method computes the Evidence Lower Bound (ELBO), the objective function used to train variational models like DeepProMP. The ELBO consists of two main components: the reconstruction loss (typically Mean Squared Error) and the KL divergence, weighted by a factor beta. It balances fitting the data against keeping the posterior distribution close to the prior.

Method Signature

calculate_elbo(y_pred: torch.Tensor, y_star: torch.Tensor, mu: torch.Tensor, sigma: torch.Tensor, beta: float = 1.0) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]

Parameters

  • y_pred (torch.Tensor): The predicted output from the model.
  • y_star (torch.Tensor): The ground truth target values.
  • mu (torch.Tensor): The mean of the approximate posterior distribution.
  • sigma (torch.Tensor): The standard deviation of the approximate posterior distribution.
  • beta (float, optional): The weight applied to the KL divergence term in the ELBO. Defaults to 1.0.

Returns

  • elbo (torch.Tensor): The total ELBO loss, combining the reconstruction loss and KL divergence.
  • mse (torch.Tensor): The Mean Squared Error reconstruction loss.
  • kl (torch.Tensor): The KL divergence between the approximate posterior and the prior.

Example

# Calculate ELBO during training
elbo_loss, mse_loss, kl_div = DeepProMP.calculate_elbo(y_pred=pred, y_star=target, mu=mu_tensor, sigma=std_tensor)
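The composition of the loss can be sketched as follows. Note the common implementation convention, assumed here, that the returned "ELBO" is the quantity to *minimize* (i.e. the negative ELBO up to constants):

```python
import torch

def calculate_elbo_sketch(y_pred, y_star, mu, sigma, beta: float = 1.0):
    """ELBO-style loss: reconstruction MSE plus a beta-weighted KL
    divergence against a standard Gaussian prior (illustrative sketch)."""
    mse = torch.nn.functional.mse_loss(y_pred, y_star)
    var = sigma ** 2
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)), averaged over the batch
    kl = (0.5 * (var + mu ** 2 - 1.0 - torch.log(var))).sum(dim=-1).mean()
    elbo = mse + beta * kl  # quantity minimized during training
    return elbo, mse, kl

y_pred = torch.tensor([1.0, 2.0])
y_star = torch.tensor([1.0, 2.0])
mu, sigma = torch.zeros(1, 2), torch.ones(1, 2)
elbo, mse, kl = calculate_elbo_sketch(y_pred, y_star, mu, sigma, beta=0.01)
print(elbo)  # tensor(0.) -- perfect reconstruction, standard posterior
```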

train()

The train method is responsible for training the DeepProMP model using the provided trajectories. The training process is guided by the Evidence Lower Bound (ELBO) as the loss function. It leverages the Adam optimizer for updating model parameters and supports KL annealing for better regularization during training. The method divides the data into training and validation sets, logs metrics, and saves the model and losses after training.

Method Signature

train(trajectories: List[Trajectory], kl_annealing: bool = True, beta: float = None, learning_rate: float = None, epochs: int = None) -> None

Parameters

  • trajectories (List[Trajectory]): A list of trajectory data used for training the model.
  • kl_annealing (bool, optional): If True, applies KL annealing during training. Defaults to True.
  • beta (float, optional): The regularization weight for the KL divergence term. If not specified, uses the default value in the class.
  • learning_rate (float, optional): The learning rate for the optimizer. If not specified, uses the default value in the class.
  • epochs (int, optional): The number of training epochs. If not specified, uses the default value in the class.

Returns

  • None

Description

The train method performs the following steps:

  1. Optionally adjusts the beta, learning_rate, and epochs values based on the provided arguments.
  2. Divides the input trajectories into training and validation sets.
  3. Initializes the Adam optimizer to update both the encoder and decoder parameters.
  4. For each epoch:
    • Loops over the training set, computes the ELBO loss (a combination of Mean Squared Error and KL divergence), and updates the model.
    • If KL annealing is enabled, it applies a scheduling function to gradually increase the weight of the KL divergence term over training.
    • Logs the metrics for each epoch.
    • Validates the model on the validation set.
  5. After training is completed:
    • Saves the training losses, validation losses, and models.
    • Generates loss plots.
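The steps above can be condensed into a minimal training-loop sketch. The encoder/decoder below are simple stand-in linear modules, the latent dimension and annealing schedule are illustrative, and batching, validation, logging, and saving are omitted:

```python
import torch

# Stand-ins for the encoder/decoder networks (illustrative only)
encoder = torch.nn.Linear(10, 4)   # outputs [mu | log_sigma] of a 2-D latent
decoder = torch.nn.Linear(3, 10)   # decodes [z | t] back to a state

# Step 3: one Adam optimizer over both encoder and decoder parameters
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=0.005)

# Step 2: split the data into training and validation sets
trajectories = [torch.randn(10) for _ in range(8)]
train_set, val_set = trajectories[:6], trajectories[6:]

for epoch in range(5):                                 # step 4
    for traj in train_set:
        out = encoder(traj)
        mu, sigma = out[:2], torch.exp(out[2:])        # posterior parameters
        z = mu + sigma * torch.randn_like(sigma)       # reparameterized sample
        t = torch.tensor([0.5])                        # normalized time point
        y_pred = decoder(torch.cat([z, t]))
        mse = torch.nn.functional.mse_loss(y_pred, traj)
        kl = (0.5 * (sigma**2 + mu**2 - 1 - torch.log(sigma**2))).sum()
        beta = epoch / 5                               # crude KL annealing
        loss = mse + beta * kl
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```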

Example

# Train the DeepProMP model with a list of trajectories
model = DeepProMP(name="example_model", encoder=encoder, decoder=decoder)
model.train(trajectories=trajectories_list, kl_annealing=True, beta=0.01, learning_rate=0.001, epochs=100)

validate()

The validate method evaluates the performance of the trained DeepProMP model on a set of trajectories. It computes the Mean Squared Error (MSE) between the predicted (decoded) output and the actual trajectory data for each trajectory in the validation set. The average loss over all validation trajectories is returned as the validation loss.

Method Signature

validate(trajectories: List[Trajectory]) -> float

Parameters

  • trajectories (List[Trajectory]): A list of trajectory data used for validation.

Returns

  • validation_loss (float): The average Mean Squared Error (MSE) loss over all validation trajectories.

Description

The validate method performs the following steps:

  1. For each trajectory in the validation set:
    • Passes the trajectory through the encoder to obtain the posterior distribution parameters (mu and sigma).
    • Samples the latent variable z from the posterior.
    • Decodes the latent variable at each time point to reconstruct the trajectory.
    • Computes the Mean Squared Error (MSE) between the reconstructed trajectory and the ground truth trajectory.
  2. Averages the MSE loss over all validation trajectories.
  3. Returns the average validation loss.
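These steps can be sketched generically as below; `encode` and `decode` are placeholder callables standing in for the encoder/decoder components, not the actual moppy API:

```python
import torch

def validate_sketch(encode, decode, trajectories):
    """Average reconstruction MSE over a validation set (illustrative)."""
    losses = []
    with torch.no_grad():
        for traj in trajectories:
            mu, sigma = encode(traj)                   # posterior parameters
            z = mu + sigma * torch.randn_like(sigma)   # sample the posterior
            y_pred = decode(z)                         # reconstruct trajectory
            losses.append(torch.nn.functional.mse_loss(y_pred, traj).item())
    return sum(losses) / len(losses)

# Toy stand-ins: a deterministic "encoder" (sigma = 0) and a constant
# "decoder" make the expected loss easy to sanity-check.
enc = lambda t: (torch.zeros(2), torch.zeros(2))
dec = lambda z: torch.zeros(3)
loss = validate_sketch(enc, dec, [torch.zeros(3), torch.zeros(3)])
print(loss)  # 0.0
```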

Example

# Validate the DeepProMP model on a list of trajectories
validation_loss = model.validate(trajectories=validation_trajectories_list)
print(f"Validation Loss: {validation_loss}")

save_models()

The save_models method saves the encoder and decoder models to the specified path. If no path is provided, it uses the default save path.

Method Signature

save_models(save_path: str = None) -> None

Parameters:

  • save_path (str, optional): The path where the models should be saved. If None, the default save_path is used.

Example:

# Save models to the default path
model.save_models()

# Save models to a custom path
model.save_models(save_path='./custom_path/')

save_losses()

The save_losses method saves the losses (training, validation, KL divergence, MSE) to the specified path. If no path is provided, it uses the default save path.

Method Signature

save_losses(save_path: str = None) -> None

Parameters

  • save_path (str, optional): The path where the losses should be saved. If None, the default save_path is used.

Description

This method saves the following loss values to disk:

  • Validation loss (validation_loss.pth)
  • KL divergence loss (kl_loss.pth)
  • Mean Squared Error loss (mse_loss.pth)
  • Training loss (train_loss.pth)

For each type of loss, the method saves the values using the torch.save() function.

Example

# Save losses to the default path
model.save_losses()

# Save losses to a custom path
model.save_losses(save_path='./custom_path/')

plot_values()

The plot_values method plots the provided values and saves the plot to the specified path. If no path is provided, it uses the default save path.

Method Signature

plot_values(values: List[List], file_name: str, plot_title: str = "Plot", path: str = None) -> None

Parameters

  • values (List[List]): The values to be plotted. Each inner list represents a line in the plot.
  • file_name (str): The name of the file where the plot will be saved.
  • plot_title (str, optional): The title of the plot. Defaults to "Plot".
  • path (str, optional): The path where the plot should be saved. If None, the default save_path is used.

Example

# Plot the values and save to the default path
model.plot_values(values=[[1, 2, 3], [4, 5, 6]], file_name='example_plot.png', plot_title="Example Plot")

# Plot the values and save to a custom path
model.plot_values(values=[[1, 2, 3], [4, 5, 6]], file_name='example_plot.png', plot_title="Example Plot", path='./custom_path/')

DecoderDeepProMP

moppy.deep_promp.decoder_deep_pro_mp.DecoderDeepProMP(self, latent_variable_dimension: int, hidden_neurons: List[int], trajectory_state_class: Type[TrajectoryState] = JointConfiguration, activation_function: Type[nn.Module] = nn.ReLU, activation_function_params: dict = {})

Bases: moppy.interfaces.movement_primitive.LatentDecoder

The DecoderDeepProMP class implements a latent decoder architecture, extending LatentDecoder and nn.Module. It is designed to decode a latent variable into a trajectory state.

Parameters

  • latent_variable_dimension (int): The dimension of the latent variable.
  • hidden_neurons (List[int]): A list of integers representing the number of neurons in each hidden layer.
  • trajectory_state_class (Type[TrajectoryState]): The class of the trajectory state (default: JointConfiguration).
  • activation_function (Type[nn.Module]): The activation function to be used in the network (default: nn.ReLU).
  • activation_function_params (dict): Parameters for the activation function.

Raises:

  • TypeError: If trajectory_state_class is not a subclass of TrajectoryState.
  • ValueError: If latent_variable_dimension is less than or equal to 0, if the number of neurons is less than 2, or if any neuron count is not greater than 0.

Functions

load_from_save_file()

Loads a model from a file and returns a DecoderDeepProMP instance. It uses the save file created by save_decoder.

Method Signature

load_from_save_file(cls, path: str = '', file: str = "decoder_deep_pro_mp.pth") -> 'DecoderDeepProMP'

Parameters:

  • path (str): The path to the directory containing the model file.
  • file (str): The name of the file to load (default: "decoder_deep_pro_mp.pth").

Returns:

  • DecoderDeepProMP: An instance of DecoderDeepProMP.

create_layers()

Creates the layers of the decoder network based on the specified number of neurons.

Returns:

  • List[nn.Module]: A list of layers for the neural network.

__init_weights()

Initializes the weights and biases of the network using Xavier initialization.

Method Signature

__init_weights(self, m) -> None

Parameters:

  • m: The layer to initialize.

decode_from_latent_variable()

Overrides: moppy.interfaces.latent_decoder.LatentDecoder.decode_from_latent_variable

Decodes a latent variable into a tensor representing the trajectory state.

Method Signature

decode_from_latent_variable(self, latent_variable: torch.Tensor, time: Union[torch.Tensor, float]) -> torch.Tensor

Parameters:

  • latent_variable (torch.Tensor): The latent variable to decode.
  • time (Union[torch.Tensor, float]): The normalized time value.

Returns:

  • torch.Tensor: The decoded trajectory state.

save_decoder()

Saves the decoder model to a file, including the state dictionary and configuration.

Method Signature

save_decoder(self, path: str = '', filename: str = "decoder_deep_pro_mp.pth")

Parameters:

  • path (str): The directory to save the model.
  • filename (str): The name of the file to save (default: "decoder_deep_pro_mp.pth").

save_model()

Saves only the model's state dictionary.

Method Signature

save_model(self, path: str = '', filename: str = "decoder_model_deep_pro_mp.pth")

Parameters:

  • path (str): The directory to save the model.
  • filename (str): The name of the file to save (default: "decoder_model_deep_pro_mp.pth").

load_model()

Loads the model's state dictionary (net.state_dict()) from a file. The save file can be created by save_model.

Method Signature

load_model(self, path: str = '', filename: str = "decoder_model_deep_pro_mp.pth")

Parameters:

  • path (str): The directory containing the model file.
  • filename (str): The name of the file to load (default: "decoder_model_deep_pro_mp.pth").

forward()

Defines the forward pass of the decoder.

Method Signature

forward(self, latent_variable: torch.Tensor, time: Union[torch.Tensor, float])

Parameters:

  • latent_variable (torch.Tensor): The latent variable to decode.
  • time (Union[torch.Tensor, float]): The normalized time value.

Returns:

  • torch.Tensor: The decoded trajectory state.

EncoderDeepProMP

moppy.deep_promp.encoder_deep_pro_mp.EncoderDeepProMP(self, latent_variable_dimension: int, hidden_neurons: List[int], trajectory_state_class: Type[TrajectoryState] = JointConfiguration, activation_function: Type[nn.Module] = nn.ReLU, activation_function_params: dict = {})

Bases: moppy.interfaces.movement_primitive.LatentEncoder

The EncoderDeepProMP class implements a latent encoder architecture, extending LatentEncoder and nn.Module. It is designed to encode a trajectory into a latent variable.

Parameters

  • latent_variable_dimension (int): The dimension of the latent variable.
  • hidden_neurons (List[int]): A list of integers representing the number of neurons in each hidden layer.
  • trajectory_state_class (Type[TrajectoryState]): The class of the trajectory state (default: JointConfiguration).
  • activation_function (Type[nn.Module]): The activation function to be used in the network (default: nn.ReLU).
  • activation_function_params (dict): Parameters for the activation function.

Raises:

  • TypeError: If trajectory_state_class is not a subclass of TrajectoryState.
  • ValueError: If latent_variable_dimension is less than or equal to 0, if the number of neurons is less than 2, or if any neuron count is not greater than 0.

Functions

load_from_save_file()

Loads a model from a file and returns an EncoderDeepProMP instance. It uses a save file created by save_encoder.

Method Signature

load_from_save_file(cls, path: str = '', file: str = "encoder_deep_pro_mp.pth") -> 'EncoderDeepProMP'

Parameters:

  • path (str): The path to the directory containing the model file.
  • file (str): The name of the file to load (default: "encoder_deep_pro_mp.pth").

Returns:

  • EncoderDeepProMP: An instance of EncoderDeepProMP.

create_layers()

Creates the layers of the encoder network based on the specified number of neurons.

Returns:

  • List[nn.Module]: A list of layers for the neural network.

__init_weights()

Initializes the weights and biases of the network using Xavier initialization.

Method Signature

__init_weights(self, m) -> None

Parameters:

  • m: The layer to initialize.

encode_to_latent_variable()

Encodes a trajectory into mu and sigma tensors (each of dimension latent_variable_dimension).

Method Signature

encode_to_latent_variable(self, trajectory: Trajectory) -> tuple[Tensor, Tensor]

Parameters:

  • trajectory (Trajectory): The trajectory to encode.

Returns:

  • tuple[Tensor, Tensor]: The resulting mu and sigma tensors, each with size latent_variable_dimension.

sample_latent_variable()

Samples a latent variable z from a normal distribution specified by mu and sigma.

Method Signature

sample_latent_variable(self, mu: torch.Tensor, sigma: torch.Tensor, percentage_of_standard_deviation=None) -> torch.Tensor

Parameters:

  • mu (torch.Tensor): The mean tensor.
  • sigma (torch.Tensor): The standard deviation tensor.
  • percentage_of_standard_deviation (Optional[float]): A percentage to scale the standard deviation (optional).

Returns:

  • torch.Tensor: The sampled latent variable.
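The sampling step is the standard reparameterization trick, z = mu + sigma * eps with eps ~ N(0, I). A minimal sketch (the scaling behavior of percentage_of_standard_deviation is an assumption here):

```python
import torch

def sample_latent_sketch(mu: torch.Tensor, sigma: torch.Tensor,
                         percentage_of_standard_deviation=None) -> torch.Tensor:
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    Optionally scales sigma down to sample closer to the mean."""
    if percentage_of_standard_deviation is not None:
        sigma = sigma * percentage_of_standard_deviation
    eps = torch.randn_like(sigma)
    return mu + sigma * eps

mu = torch.tensor([1.0, -1.0])
# With sigma scaled to 0, the sample collapses onto the mean
z = sample_latent_sketch(mu, torch.ones(2), percentage_of_standard_deviation=0.0)
print(z)  # tensor([ 1., -1.])
```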

sample_latent_variables()

Samples multiple latent variables from given mu and sigma tensors.

Method Signature

sample_latent_variables(self, mu: torch.Tensor, sigma: torch.Tensor, size: int = 1) -> torch.Tensor

Parameters:

  • mu (torch.Tensor): The mean tensor.
  • sigma (torch.Tensor): The standard deviation tensor.
  • size (int): The number of samples to generate (default: 1).

Returns:

  • torch.Tensor: A tensor containing the sampled latent variables.

bayesian_aggregation()

Performs Bayesian aggregation on mu_points and sigma_points to calculate aggregated mu and sigma.

Method Signature

bayesian_aggregation(self, mu_points: torch.Tensor, sigma_points: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]

Parameters:

  • mu_points (torch.Tensor): Tensor of mu points.
  • sigma_points (torch.Tensor): Tensor of sigma points.

Returns:

  • tuple[torch.Tensor, torch.Tensor]: Aggregated mu_z and sigma_z_sq tensors.
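One common form of Bayesian aggregation combines per-point Gaussian estimates by precision weighting against a standard-normal prior. The sketch below uses that assumed formula; the exact expression implemented in moppy may differ:

```python
import torch

def bayesian_aggregation_sketch(mu_points, sigma_points):
    """Precision-weighted aggregation of per-point Gaussian estimates
    against a standard-normal prior (assumed formula)."""
    var_points = sigma_points ** 2
    # Posterior variance: prior precision (1) plus summed observation precisions
    sigma_z_sq = 1.0 / (1.0 + (1.0 / var_points).sum(dim=0))
    # Posterior mean: precision-weighted sum of the per-point means
    mu_z = sigma_z_sq * (mu_points / var_points).sum(dim=0)
    return mu_z, sigma_z_sq

# Two identical unit-variance observations pull the mean toward their value
mu_points = torch.tensor([[2.0], [2.0]])
sigma_points = torch.ones(2, 1)
mu_z, sigma_z_sq = bayesian_aggregation_sketch(mu_points, sigma_points)
print(mu_z, sigma_z_sq)  # tensor([1.3333]) tensor([0.3333])
```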

save_encoder()

Saves the encoder model to a file, including the state dictionary and configuration.

Method Signature

save_encoder(self, path: str = '', filename: str = "encoder_deep_pro_mp.pth")

Parameters:

  • path (str): The directory to save the model.
  • filename (str): The name of the file to save (default: "encoder_deep_pro_mp.pth").

save_model()

Saves only the model's state dictionary.

Method Signature

save_model(self, path: str = '', filename: str = "encoder_model_deep_pro_mp.pth")

Parameters:

  • path (str): The directory to save the model.
  • filename (str): The name of the file to save (default: "encoder_model_deep_pro_mp.pth").

load_model()

Loads the model's state dictionary from a file.

Method Signature

load_model(self, path: str = '', filename: str = "encoder_model_deep_pro_mp.pth")

Parameters:

  • path (str): The directory containing the model file.
  • filename (str): The name of the file to load (default: "encoder_model_deep_pro_mp.pth").

forward()

Defines the forward pass of the encoder.

Method Signature

forward(self, trajectory: Trajectory) -> tuple[Tensor, Tensor]

Parameters:

  • trajectory (Trajectory): The trajectory to encode.

Returns:

  • tuple[Tensor, Tensor]: The encoded mu and sigma tensors.