Base class

Note

This is the base class for all other memory classes in this module. It provides their common functionality and is not intended to be used directly.

Basic inheritance usage

from typing import Union, Tuple, List

import torch

from skrl.memories.torch import Memory    # from .base import Memory


class CustomMemory(Memory):
    def __init__(self, memory_size: int, num_envs: int = 1, device: Union[str, torch.device] = "cuda:0") -> None:
        """Custom memory

        :param memory_size: Maximum number of elements in the first dimension of each internal storage
        :type memory_size: int
        :param num_envs: Number of parallel environments (default: 1)
        :type num_envs: int, optional
        :param device: Device on which a torch tensor is or will be allocated (default: "cuda:0")
        :type device: str or torch.device, optional
        """
        super().__init__(memory_size, num_envs, device)

    def sample(self, names: Tuple[str], batch_size: int, mini_batches: int = 1) -> List[List[torch.Tensor]]:
        """Sample a batch from memory

        :param names: Tensor names from which to obtain the samples
        :type names: tuple or list of strings
        :param batch_size: Number of elements to sample
        :type batch_size: int
        :param mini_batches: Number of mini-batches to sample (default: 1)
        :type mini_batches: int, optional

        :return: Sampled data from tensors sorted according to their position in the list of names.
                 The sampled tensors will have the following shape: (batch size, data size)
        :rtype: list of torch.Tensor list
        """
        # ================================
        # - sample a batch from memory.
        #   It is possible to generate only the sampling indexes and call self.sample_by_index(...)
        # ================================

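As a concrete illustration of the placeholder above, the sampling step could generate the indexes and then delegate the gathering to self.sample_by_index(...), as the comment suggests. The following is a minimal sketch, not the library's reference implementation: the hypothetical UniformMemory class, the uniform random strategy and the use of len(self) as the valid sampling range are assumptions.

from typing import Tuple, List

import torch

from skrl.memories.torch import Memory


class UniformMemory(Memory):
    def sample(self, names: Tuple[str], batch_size: int, mini_batches: int = 1) -> List[List[torch.Tensor]]:
        # generate uniform random indexes over the current (valid) size of the memory
        indexes = torch.randint(0, len(self), (batch_size,))
        # gather the data through the base class helper
        return self.sample_by_index(names=names, indexes=indexes, mini_batches=mini_batches)
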
API

class skrl.memories.torch.base.Memory(memory_size: int, num_envs: int = 1, device: Optional[Union[str, torch.device]] = None, export: bool = False, export_format: str = 'pt', export_directory: str = '')

Bases: object

__init__(memory_size: int, num_envs: int = 1, device: Optional[Union[str, torch.device]] = None, export: bool = False, export_format: str = 'pt', export_directory: str = '') → None

Base class representing a memory with circular buffers

Buffers are torch tensors with shape (memory size, number of environments, data size). Circular buffers are implemented with two integers: a memory index and an environment index

Parameters
  • memory_size (int) – Maximum number of elements in the first dimension of each internal storage

  • num_envs (int, optional) – Number of parallel environments (default: 1)

  • device (str or torch.device, optional) – Device on which a torch tensor is or will be allocated (default: None). If None, the device will be either "cuda:0" if available or "cpu"

  • export (bool, optional) – Export the memory to a file (default: False). If True, the memory will be exported when the memory is filled

  • export_format (str, optional) – Export format (default: “pt”). Supported formats: torch (pt), numpy (np), comma-separated values (csv)

  • export_directory (str, optional) – Directory where the memory will be exported (default: “”). If empty, the agent’s experiment directory will be used

Raises

ValueError – The export format is not supported
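
The snippet below is an illustrative call to the constructor (in practice it would typically be invoked through a concrete subclass such as the one in the inheritance example); the sizes, device and export directory are example values, not library defaults.

from skrl.memories.torch import Memory

# illustrative instantiation: buffers for 1000 steps of 4 parallel environments,
# automatically exported in NumPy format once the circular buffers are filled
memory = Memory(memory_size=1000,
                num_envs=4,
                device="cpu",
                export=True,
                export_format="np",
                export_directory="./memories")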

__len__() → int

Compute and return the current (valid) size of the memory

The valid size is calculated as the memory_size * num_envs if the memory is full (filled). Otherwise, the memory_index * num_envs + env_index is returned

Returns

Valid size

Return type

int

add_samples(**tensors: torch.Tensor) → None

Record samples in memory

Each sample should be a tensor with a 2-component shape (number of environments, data size). All tensors must have the same shape

According to the number of environments, the following classification is made:

  • one environment: Store a single sample (tensors with one dimension) and increment the environment index (second index) by one

  • number of environments less than num_envs: Store the samples and increment the environment index (second index) by the number of environments

  • number of environments equals num_envs: Store the samples and increment the memory index (first index) by one

Parameters

tensors (dict) – Sampled data as key-value arguments where the keys are the names of the tensors to be modified. Non-existing tensors will be skipped

Raises

ValueError – No tensors were provided or the tensors have incompatible shapes
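
To make the shape requirement concrete, the hedged snippet below records random transitions into the memory instance created above; the tensor names "states", "actions" and "rewards" are assumptions and must have been created beforehand with create_tensor (documented below).

import torch

num_envs = 4
for step in range(100):
    # each keyword argument has shape (number of environments, data size)
    memory.add_samples(states=torch.randn(num_envs, 4),
                       actions=torch.randn(num_envs, 2),
                       rewards=torch.randn(num_envs, 1))

# current (valid) size of the memory, as reported by __len__
print(len(memory))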

create_tensor(name: str, size: Union[int, Tuple[int], gym.spaces.space.Space, gymnasium.spaces.space.Space], dtype: Optional[torch.dtype] = None, keep_dimensions: bool = False) → bool

Create a new internal tensor in memory

The tensor will have a 3-component shape (memory size, number of environments, size). The internal representation will use _tensor_<name> as the name of the class property

Parameters
  • name (str) – Tensor name (the name has to follow the Python PEP 8 style)

  • size (int, tuple or list of integers, gym.Space, or gymnasium.Space) – Number of elements in the last dimension (effective data size). The product of the elements will be computed for sequences or gym/gymnasium spaces

  • dtype (torch.dtype or None, optional) – Data type (torch.dtype). If None (default), the global default torch data type will be used

  • keep_dimensions (bool) – Whether or not to keep the dimensions defined through the size parameter (default: False)

Raises

ValueError – The tensor name already exists but the size or dtype is different

Returns

True if the tensor was created, otherwise False

Return type

bool
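
A short, hedged illustration of the size parameter follows; the names, dimensions and the gymnasium space are example values. The effective data size is the integer itself, the product of the sequence elements, or the flattened size of the space.

import torch
import gymnasium

memory.create_tensor(name="states", size=gymnasium.spaces.Box(low=-1, high=1, shape=(4,)), dtype=torch.float32)   # data size: 4
memory.create_tensor(name="actions", size=(2,), dtype=torch.float32)                                              # data size: 2
memory.create_tensor(name="terminated", size=1, dtype=torch.bool)                                                 # data size: 1

# names of the created tensors, in alphabetical order
print(memory.get_tensor_names())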

get_sampling_indexes() → Union[tuple, numpy.ndarray, torch.Tensor]

Get the last indexes used for sampling

Returns

Last sampling indexes

Return type

tuple or list, numpy.ndarray or torch.Tensor

get_tensor_by_name(name: str, keepdim: bool = True) → torch.Tensor

Get a tensor by its name

Parameters
  • name (str) – Name of the tensor to retrieve

  • keepdim (bool, optional) – Keep the tensor’s shape (memory size, number of environments, size) (default: True). If False, the returned tensor will have a shape of (memory size * number of environments, size)

Raises

KeyError – The tensor does not exist

Returns

Tensor

Return type

torch.Tensor
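
A hedged example of the keepdim flag, assuming a "states" tensor with data size 4 created as in the previous snippets:

import torch

states = memory.get_tensor_by_name("states")                       # shape: (memory size, number of environments, 4)
states_flat = memory.get_tensor_by_name("states", keepdim=False)   # shape: (memory size * number of environments, 4)

# set_tensor_by_name (documented below) overwrites an existing tensor in place
memory.set_tensor_by_name("states", torch.zeros_like(states))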

get_tensor_names() → Tuple[str]

Get the names of the internal tensors in alphabetical order

Returns

Tensor names without internal prefix (_tensor_)

Return type

tuple of strings

load(path: str) → None

Load the memory from a file

Supported formats:

  • PyTorch (pt)

  • NumPy (npz)

  • Comma-separated values (csv)

Parameters

path (str) – Path to the file from which the memory will be loaded

Raises

ValueError – If the format is not supported

reset() → None

Reset the memory by cleaning internal indexes and flags

Old data will be retained until overwritten, but access through the available methods will not be guaranteed

Default values of the internal indexes and flags

  • filled: False

  • env_index: 0

  • memory_index: 0

sample(names: Tuple[str], batch_size: int, mini_batches: int = 1, sequence_length: int = 1) → List[List[torch.Tensor]]

Data sampling method to be implemented by the inheriting classes

Parameters
  • names (tuple or list of strings) – Tensor names from which to obtain the samples

  • batch_size (int) – Number of elements to sample

  • mini_batches (int, optional) – Number of mini-batches to sample (default: 1)

  • sequence_length (int, optional) – Length of each sequence (default: 1)

Raises

NotImplementedError – The method has not been implemented

Returns

Sampled data from tensors sorted according to their position in the list of names. The sampled tensors will have the following shape: (batch size, data size)

Return type

list of torch.Tensor list

sample_all(names: Tuple[str], mini_batches: int = 1, sequence_length: int = 1) → List[List[torch.Tensor]]

Sample all data from memory

Parameters
  • names (tuple or list of strings) – Tensor names from which to obtain the samples

  • mini_batches (int, optional) – Number of mini-batches to sample (default: 1)

  • sequence_length (int, optional) – Length of each sequence (default: 1)

Returns

Sampled data from memory. The sampled tensors will have the following shape: (memory size * number of environments, data size)

Return type

list of torch.Tensor list
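
An illustrative use of sample_all, e.g. for on-policy updates that iterate over the whole memory in mini-batches, continuing the previous snippets; the tensor names and the number of mini-batches are assumptions.

for states, actions in memory.sample_all(names=["states", "actions"], mini_batches=8):
    pass  # e.g. evaluate the policy on the mini-batch and update the networks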

sample_by_index(names: Tuple[str], indexes: Union[tuple, numpy.ndarray, torch.Tensor], mini_batches: int = 1) → List[List[torch.Tensor]]

Sample data from memory according to their indexes

Parameters
  • names (tuple or list of strings) – Tensor names from which to obtain the samples

  • indexes (tuple or list, numpy.ndarray or torch.Tensor) – Indexes used for sampling

  • mini_batches (int, optional) – Number of mini-batches to sample (default: 1)

Returns

Sampled data from tensors sorted according to their position in the list of names. The sampled tensors will have the following shape: (number of indexes, data size)

Return type

list of torch.Tensor list
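
For illustration, specific transitions can be gathered by their indexes, for example indexes produced by an external prioritization scheme; the index values and tensor names below are arbitrary.

import torch

indexes = torch.tensor([0, 7, 42, 99])
batches = memory.sample_by_index(names=["states", "actions"], indexes=indexes, mini_batches=1)
states, actions = batches[0]   # tensors with shape (number of indexes, data size)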

save(directory: str = '', format: str = 'pt') → None

Save the memory to a file

Supported formats:

  • PyTorch (pt)

  • NumPy (npz)

  • Comma-separated values (csv)

Parameters
  • directory (str) – Path to the folder where the memory will be saved. If not provided, the directory defined in the constructor will be used

  • format (str, optional) – Format of the file where the memory will be saved (default: “pt”)

Raises

ValueError – If the format is not supported
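
A hedged save/load round trip follows; the directory is illustrative and the exact name of the generated file depends on the library, so the path passed to load is a placeholder.

# explicit export of the current buffers in NumPy format
memory.save(directory="./memories", format="np")

# restore the buffers later from the exported file (placeholder file name)
memory.load("./memories/memory.npz")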

set_tensor_by_name(name: str, tensor: torch.Tensor) → None

Set a tensor by its name

Parameters
  • name (str) – Name of the tensor to set

  • tensor (torch.Tensor) – Tensor to set

Raises

KeyError – The tensor does not exist

share_memory() → None

Share the tensors between processes