Preprocessors

Implemented preprocessors

Basic usage

Preprocessor usage is defined in each agent's configuration dictionary.

The preprocessor class is set under the "<variable>_preprocessor" key and its arguments are set under the "<variable>_preprocessor_kwargs" key as a keyword argument dictionary. The following example shows how to set the preprocessors for an agent:

# import the preprocessor class
from skrl.resources.preprocessors.torch import RunningStandardScaler

cfg = DEFAULT_CONFIG.copy()
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
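Internally, the agent instantiates the configured class with its keyword arguments, falling back to an identity function when no preprocessor is set. A minimal sketch of that wiring (the helper names here are illustrative, not the skrl API):

```python
class _IdentityPreprocessor:
    """Fallback that returns the input unchanged when no preprocessor is configured"""
    def __call__(self, x, *args, **kwargs):
        return x

def build_preprocessor(cfg, key):
    """Instantiate cfg[key] with cfg[key + '_kwargs'], or fall back to the identity"""
    cls = cfg.get(key)
    if cls is None:
        return _IdentityPreprocessor()
    return cls(**cfg.get(key + "_kwargs", {}))
```

With the configuration above, `build_preprocessor(cfg, "state_preprocessor")` would yield a `RunningStandardScaler(size=env.observation_space, device=device)` instance, while an empty configuration yields the identity.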

Running standard scaler

Algorithm implementation

Main notation/symbols:
- mean (\(\bar{x}\)), standard deviation (\(\sigma\)), variance (\(\sigma^2\))
- running mean (\(\bar{x}_t\)), running variance (\(\sigma^2_t\))

Standardization by centering and scaling

\(\text{clip}\left(\dfrac{x - \bar{x}_t}{\sqrt{\sigma^2_t} + \epsilon}, -c, c\right) \qquad\) with \(c\) as clip_threshold and \(\epsilon\) as epsilon

Scale back the data to the original representation (inverse transform)

\(\sqrt{\sigma^2_t} \; \text{clip}(x, -c, c) + \bar{x}_t \qquad\) with \(c\) as clip_threshold
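The two transforms above can be sketched directly in PyTorch; this is a hedged re-statement of the equations, not the library's implementation (function names are illustrative):

```python
import torch

def standardize(x, mean, var, epsilon=1e-8, c=5.0):
    # center, scale by the running standard deviation, then clip to [-c, c]
    return torch.clamp((x - mean) / (torch.sqrt(var) + epsilon), -c, c)

def inverse_standardize(x, mean, var, c=5.0):
    # scale back to the original representation: sqrt(var) * clip(x) + mean
    return torch.sqrt(var) * torch.clamp(x, -c, c) + mean
```

For inputs that fall inside the clipping range, applying `inverse_standardize` to the output of `standardize` recovers the original data up to the small `epsilon` offset.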

Update the running mean and variance (See parallel algorithm)

\(\delta \leftarrow x - \bar{x}_t\)
\(n_T \leftarrow n_t + n\)
\(M2 \leftarrow (\sigma^2_t n_t) + (\sigma^2 n) + \delta^2 \dfrac{n_t n}{n_T}\)
# update internal variables
\(\bar{x}_t \leftarrow \bar{x}_t + \delta \dfrac{n}{n_T}\)
\(\sigma^2_t \leftarrow \dfrac{M2}{n_T}\)
\(n_t \leftarrow n_T\)
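The update equations above can be sketched as follows, where `x` is a batch of samples whose mean, variance, and count are merged into the running statistics (a sketch of the parallel algorithm, not the skrl implementation; names are illustrative):

```python
import torch

def update_running_stats(running_mean, running_var, count, x):
    """Merge a batch x of shape (N, size) into the running mean/variance/count"""
    batch_mean = torch.mean(x, dim=0)
    batch_var = torch.var(x, dim=0, unbiased=False)
    batch_count = x.shape[0]

    delta = batch_mean - running_mean       # delta <- x - x_t
    total_count = count + batch_count       # n_T <- n_t + n
    # M2 combines both (biased) variances plus a correction for the mean shift
    m2 = (running_var * count) + (batch_var * batch_count) \
        + delta ** 2 * count * batch_count / total_count

    # update internal variables
    new_mean = running_mean + delta * batch_count / total_count
    new_var = m2 / total_count
    return new_mean, new_var, total_count
```

Merging batches incrementally this way produces the same mean and (biased) variance as computing them over all samples at once, which is what makes the scaler usable online during training.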

API

class skrl.resources.preprocessors.torch.running_standard_scaler.RunningStandardScaler(size: Union[int, Tuple[int], gym.spaces.space.Space, gymnasium.spaces.space.Space], epsilon: float = 1e-08, clip_threshold: float = 5.0, device: Optional[Union[str, torch.device]] = None)
__init__(size: Union[int, Tuple[int], gym.spaces.space.Space, gymnasium.spaces.space.Space], epsilon: float = 1e-08, clip_threshold: float = 5.0, device: Optional[Union[str, torch.device]] = None) None

Standardize the input data by removing the mean and scaling by the standard deviation

The implementation is adapted from the rl_games library (https://github.com/Denys88/rl_games/blob/master/rl_games/algos_torch/running_mean_std.py)

Example:

>>> running_standard_scaler = RunningStandardScaler(size=2)
>>> data = torch.rand(3, 2)  # tensor of shape (N, 2)
>>> running_standard_scaler(data)
tensor([[0.1954, 0.3356],
        [0.9719, 0.4163],
        [0.8540, 0.1982]])

Parameters
  • size (int, tuple or list of integers, gym.Space, or gymnasium.Space) – Size of the input space

  • epsilon (float) – Small number to avoid division by zero (default: 1e-8)

  • clip_threshold (float) – Threshold to clip the data (default: 5.0)

  • device (str or torch.device, optional) – Device on which a torch tensor is or will be allocated (default: None). If None, the device will be either "cuda:0" if available or "cpu"

forward(x: torch.Tensor, train: bool = False, inverse: bool = False, no_grad: bool = True) torch.Tensor

Forward pass of the standardizer

Example:

>>> x = torch.rand(3, 2, device="cuda:0")
>>> running_standard_scaler(x)
tensor([[0.6933, 0.1905],
        [0.3806, 0.3162],
        [0.1140, 0.0272]], device='cuda:0')

>>> running_standard_scaler(x, train=True)
tensor([[ 0.8681, -0.6731],
        [ 0.0560, -0.3684],
        [-0.6360, -1.0690]], device='cuda:0')

>>> running_standard_scaler(x, inverse=True)
tensor([[0.6260, 0.5468],
        [0.5056, 0.5987],
        [0.4029, 0.4795]], device='cuda:0')

Parameters
  • x (torch.Tensor) – Input tensor

  • train (bool, optional) – Whether to train the standardizer (default: False)

  • inverse (bool, optional) – Whether to inverse the standardizer to scale back the data (default: False)

  • no_grad (bool, optional) – Whether to disable the gradient computation (default: True)