Gaussian model

Gaussian models define continuous-domain stochastic policies: actions are sampled from a Gaussian (normal) distribution parameterized by the model's outputs (the mean actions and the log standard deviation).
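As an illustrative sketch (not skrl's implementation, which lives inside GaussianMixin), the mechanics are: the network produces mean actions and a log standard deviation, actions are sampled from the resulting normal distribution, and the per-dimension log probability densities are reduced across the action dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical network outputs for a batch of 4 states and 2 actions
mean_actions = np.zeros((4, 2))     # mean produced by the function approximator
log_std = np.full((4, 2), -0.5)     # log standard deviation
std = np.exp(log_std)

# sample actions from the Gaussian distribution
actions = mean_actions + std * rng.standard_normal((4, 2))

# per-dimension log probability density of the sampled actions
log_prob = (-((actions - mean_actions) ** 2) / (2 * std ** 2)
            - log_std - 0.5 * np.log(2 * np.pi))

# reduction="sum": sum across the action dimensions -> shape (num_samples, 1)
log_prob_sum = log_prob.sum(axis=-1, keepdims=True)
```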



skrl provides a Python mixin (GaussianMixin) to assist in the creation of these types of models, allowing users to have full control over the function approximator definitions and architectures. Note that the use of this mixin must comply with the following rules:

  • The definition of multiple inheritance must always include the Model base class at the end.

  • The Model base class constructor must be invoked before the mixin's constructor.

Warning

For models in JAX/Flax it is imperative to define all parameters (except observation_space, action_space and device) with default values to avoid errors (TypeError: __init__() missing N required positional arguments) during initialization.

In addition, it is necessary to initialize the model’s state_dict (via the init_state_dict method) after its instantiation to avoid errors (AttributeError: object has no attribute "state_dict". If "state_dict" is defined in '.setup()', remember these fields are only accessible from inside 'init' or 'apply') during its use.

class GaussianModel(GaussianMixin, Model):
    def __init__(self, observation_space, action_space, device=None,
                 clip_actions=False, clip_log_std=True, min_log_std=-20, max_log_std=2, reduction="sum"):
        Model.__init__(self, observation_space, action_space, device)
        GaussianMixin.__init__(self, clip_actions, clip_log_std, min_log_std, max_log_std, reduction)

Concept

(figure: Gaussian model concept diagram)

Usage

  • Multi-Layer Perceptron (MLP)

  • Convolutional Neural Network (CNN)

  • Recurrent Neural Network (RNN)

  • Gated Recurrent Unit RNN (GRU)

  • Long Short-Term Memory RNN (LSTM)

(figure: Gaussian model with a Multi-Layer Perceptron)
import torch
import torch.nn as nn

from skrl.models.torch import Model, GaussianMixin


# define the model
class MLP(GaussianMixin, Model):
    def __init__(self, observation_space, action_space, device,
                 clip_actions=False, clip_log_std=True, min_log_std=-20, max_log_std=2, reduction="sum"):
        Model.__init__(self, observation_space, action_space, device)
        GaussianMixin.__init__(self, clip_actions, clip_log_std, min_log_std, max_log_std, reduction)

        self.net = nn.Sequential(nn.Linear(self.num_observations, 64),
                                 nn.ReLU(),
                                 nn.Linear(64, 32),
                                 nn.ReLU(),
                                 nn.Linear(32, self.num_actions),
                                 nn.Tanh())

        self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))

    def compute(self, inputs, role):
        return self.net(inputs["states"]), self.log_std_parameter, {}


# instantiate the model (assumes there is a wrapped environment: env)
policy = MLP(observation_space=env.observation_space,
             action_space=env.action_space,
             device=env.device,
             clip_actions=True,
             clip_log_std=True,
             min_log_std=-20,
             max_log_std=2,
             reduction="sum")

API (PyTorch)

class skrl.models.torch.gaussian.GaussianMixin(clip_actions: bool = False, clip_log_std: bool = True, min_log_std: float = -20, max_log_std: float = 2, reduction: str = 'sum', role: str = '')

Bases: object

__init__(clip_actions: bool = False, clip_log_std: bool = True, min_log_std: float = -20, max_log_std: float = 2, reduction: str = 'sum', role: str = '') → None

Gaussian mixin model (stochastic model)

Parameters:
  • clip_actions (bool, optional) – Flag to indicate whether the actions should be clipped to the action space (default: False)

  • clip_log_std (bool, optional) – Flag to indicate whether the log standard deviations should be clipped (default: True)

  • min_log_std (float, optional) – Minimum value of the log standard deviation if clip_log_std is True (default: -20)

  • max_log_std (float, optional) – Maximum value of the log standard deviation if clip_log_std is True (default: 2)

  • reduction (str, optional) – Reduction method for returning the log probability density function (default: "sum"). Supported values are "mean", "sum", "prod" and "none". If "none", the log probability density function is returned as a tensor of shape (num_samples, num_actions) instead of (num_samples, 1)

  • role (str, optional) – Role played by the model (default: "")

Raises:

ValueError – If the reduction method is not valid
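As an illustrative NumPy sketch (not skrl code) of how each supported reduction value maps the per-dimension log densities to the returned shape described above:

```python
import numpy as np

num_samples, num_actions = 4096, 8
# hypothetical per-dimension log probability densities
log_prob = np.random.default_rng(0).normal(size=(num_samples, num_actions))

reductions = {
    # "none": per-dimension values, shape (num_samples, num_actions)
    "none": log_prob,
    # "sum" / "mean" / "prod": reduce across the action dimension,
    # shape (num_samples, 1)
    "sum": log_prob.sum(axis=-1, keepdims=True),
    "mean": log_prob.mean(axis=-1, keepdims=True),
    "prod": log_prob.prod(axis=-1, keepdims=True),
}
```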

Example:

>>> # define the model
>>> import torch
>>> import torch.nn as nn
>>> from skrl.models.torch import Model, GaussianMixin
>>>
>>> class Policy(GaussianMixin, Model):
...     def __init__(self, observation_space, action_space, device="cuda:0",
...                  clip_actions=False, clip_log_std=True, min_log_std=-20, max_log_std=2, reduction="sum"):
...         Model.__init__(self, observation_space, action_space, device)
...         GaussianMixin.__init__(self, clip_actions, clip_log_std, min_log_std, max_log_std, reduction)
...
...         self.net = nn.Sequential(nn.Linear(self.num_observations, 32),
...                                  nn.ELU(),
...                                  nn.Linear(32, 32),
...                                  nn.ELU(),
...                                  nn.Linear(32, self.num_actions))
...         self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))
...
...     def compute(self, inputs, role):
...         return self.net(inputs["states"]), self.log_std_parameter, {}
...
>>> # given an observation_space: gym.spaces.Box with shape (60,)
>>> # and an action_space: gym.spaces.Box with shape (8,)
>>> model = Policy(observation_space, action_space)
>>>
>>> print(model)
Policy(
  (net): Sequential(
    (0): Linear(in_features=60, out_features=32, bias=True)
    (1): ELU(alpha=1.0)
    (2): Linear(in_features=32, out_features=32, bias=True)
    (3): ELU(alpha=1.0)
    (4): Linear(in_features=32, out_features=8, bias=True)
  )
)
act(inputs: Mapping[str, torch.Tensor | Any], role: str = '') → Tuple[torch.Tensor, torch.Tensor | None, Mapping[str, torch.Tensor | Any]]

Act stochastically in response to the state of the environment

Parameters:
  • inputs (dict where the values are typically torch.Tensor) –

    Model inputs. The most common keys are:

    • "states": state of the environment used to make the decision

    • "taken_actions": actions taken by the policy for the given states

  • role (str, optional) – Role played by the model (default: "")

Returns:

Model output. The first component is the action to be taken by the agent. The second component is the log of the probability density function. The third component is a dictionary containing the mean actions "mean_actions" and extra output values

Return type:

tuple of torch.Tensor, torch.Tensor or None, and dict

Example:

>>> # given a batch of sample states with shape (4096, 60)
>>> actions, log_prob, outputs = model.act({"states": states})
>>> print(actions.shape, log_prob.shape, outputs["mean_actions"].shape)
torch.Size([4096, 8]) torch.Size([4096, 1]) torch.Size([4096, 8])
distribution(role: str = '') → torch.distributions.Normal

Get the current distribution of the model

Returns:

Distribution of the model

Return type:

torch.distributions.Normal

Parameters:

role (str, optional) – Role played by the model (default: "")

Example:

>>> distribution = model.distribution()
>>> print(distribution)
Normal(loc: torch.Size([4096, 8]), scale: torch.Size([4096, 8]))
get_entropy(role: str = '') → torch.Tensor

Compute and return the entropy of the model

Returns:

Entropy of the model

Return type:

torch.Tensor

Parameters:

role (str, optional) – Role played by the model (default: "")

Example:

>>> entropy = model.get_entropy()
>>> print(entropy.shape)
torch.Size([4096, 8])
get_log_std(role: str = '') → torch.Tensor

Return the log standard deviation of the model

Returns:

Log standard deviation of the model

Return type:

torch.Tensor

Parameters:

role (str, optional) – Role played by the model (default: "")

Example:

>>> log_std = model.get_log_std()
>>> print(log_std.shape)
torch.Size([4096, 8])
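The per-dimension entropy shape above follows from the closed form for a diagonal Gaussian, 0.5 + 0.5 ln(2π) + log σ, which is the formula torch.distributions.Normal uses. An illustrative NumPy sketch:

```python
import numpy as np

# hypothetical log standard deviations for a batch of 4096 samples, 8 actions
log_std = np.full((4096, 8), -0.5)

# per-dimension entropy of a Gaussian: 0.5 + 0.5 * ln(2 * pi) + log_std
entropy = 0.5 + 0.5 * np.log(2 * np.pi) + log_std
```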

API (JAX)

class skrl.models.jax.gaussian.GaussianMixin(clip_actions: bool = False, clip_log_std: bool = True, min_log_std: float = -20, max_log_std: float = 2, reduction: str = 'sum', role: str = '')

Bases: object

__init__(clip_actions: bool = False, clip_log_std: bool = True, min_log_std: float = -20, max_log_std: float = 2, reduction: str = 'sum', role: str = '') → None

Gaussian mixin model (stochastic model)

Parameters:
  • clip_actions (bool, optional) – Flag to indicate whether the actions should be clipped to the action space (default: False)

  • clip_log_std (bool, optional) – Flag to indicate whether the log standard deviations should be clipped (default: True)

  • min_log_std (float, optional) – Minimum value of the log standard deviation if clip_log_std is True (default: -20)

  • max_log_std (float, optional) – Maximum value of the log standard deviation if clip_log_std is True (default: 2)

  • reduction (str, optional) – Reduction method for returning the log probability density function (default: "sum"). Supported values are "mean", "sum", "prod" and "none". If "none", the log probability density function is returned as a tensor of shape (num_samples, num_actions) instead of (num_samples, 1)

  • role (str, optional) – Role played by the model (default: "")

Raises:

ValueError – If the reduction method is not valid

Example:

>>> # define the model
>>> import flax.linen as nn
>>> import jax.numpy as jnp
>>> from skrl.models.jax import Model, GaussianMixin
>>>
>>> class Policy(GaussianMixin, Model):
...     def __init__(self, observation_space, action_space, device=None,
...                  clip_actions=False, clip_log_std=True, min_log_std=-20, max_log_std=2, reduction="sum", **kwargs):
...         Model.__init__(self, observation_space, action_space, device, **kwargs)
...         GaussianMixin.__init__(self, clip_actions, clip_log_std, min_log_std, max_log_std, reduction)
...
...     def setup(self):
...         self.layer_1 = nn.Dense(32)
...         self.layer_2 = nn.Dense(32)
...         self.layer_3 = nn.Dense(self.num_actions)
...
...         self.log_std_parameter = self.param("log_std_parameter", lambda _: jnp.zeros(self.num_actions))
...
...     def __call__(self, inputs, role):
...         x = nn.elu(self.layer_1(inputs["states"]))
...         x = nn.elu(self.layer_2(x))
...         return self.layer_3(x), self.log_std_parameter, {}
...
>>> # given an observation_space: gym.spaces.Box with shape (60,)
>>> # and an action_space: gym.spaces.Box with shape (8,)
>>> model = Policy(observation_space, action_space)
>>>
>>> print(model)
Policy(
    # attributes
    observation_space = Box(-1.0, 1.0, (60,), float32)
    action_space = Box(-1.0, 1.0, (8,), float32)
    device = StreamExecutorGpuDevice(id=0, process_index=0, slice_index=0)
)
act(inputs: Mapping[str, ndarray | jax.Array | Any], role: str = '', params: jax.Array | None = None) → Tuple[jax.Array, jax.Array | None, Mapping[str, jax.Array | Any]]

Act stochastically in response to the state of the environment

Parameters:
  • inputs (dict where the values are typically np.ndarray or jax.Array) –

    Model inputs. The most common keys are:

    • "states": state of the environment used to make the decision

    • "taken_actions": actions taken by the policy for the given states

  • role (str, optional) – Role played by the model (default: "")

  • params (jax.Array, optional) – Parameters used to compute the output (default: None). If None, internal parameters will be used

Returns:

Model output. The first component is the action to be taken by the agent. The second component is the log of the probability density function. The third component is a dictionary containing the mean actions "mean_actions" and extra output values

Return type:

tuple of jax.Array, jax.Array or None, and dict

Example:

>>> # given a batch of sample states with shape (4096, 60)
>>> actions, log_prob, outputs = model.act({"states": states})
>>> print(actions.shape, log_prob.shape, outputs["mean_actions"].shape)
(4096, 8) (4096, 1) (4096, 8)
get_entropy(stddev: jax.Array, role: str = '') → jax.Array

Compute and return the entropy of the model

Parameters:
  • stddev (jax.Array) – Model standard deviation

  • role (str, optional) – Role played by the model (default: "")

Returns:

Entropy of the model

Return type:

jax.Array

Example:

>>> # given a standard deviation array: stddev
>>> entropy = model.get_entropy(stddev)
>>> print(entropy.shape)
(4096, 8)