File post-processing#
Utilities for processing files generated during training/evaluation.
Exported memories#
This library provides an implementation for quickly loading exported memory files to inspect their contents in subsequent post-processing steps. See the section Library utilities (skrl.utils module) for a real use case.
Usage#
PyTorch (pt):

from skrl.utils import postprocessing


# assuming there is a directory called "memories" with Torch files in it
memory_iterator = postprocessing.MemoryFileIterator("memories/*.pt")
for filename, data in memory_iterator:
    filename    # str: basename of the current file
    data        # dict: keys are the names of the memory tensors in the file.
                # Tensor shapes are (memory size, number of envs, specific content size)

    # example of simple usage:
    # print the filenames of all memories and their tensor shapes
    print("\nfilename:", filename)
    print("  |-- states:", data['states'].shape)
    print("  |-- actions:", data['actions'].shape)
    print("  |-- rewards:", data['rewards'].shape)
    print("  |-- next_states:", data['next_states'].shape)
    print("  |-- dones:", data['dones'].shape)
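The loaded tensors can be post-processed directly with regular PyTorch operations. As a minimal sketch (assuming the default states/actions/rewards/next_states/dones variable names used above), the mean reward per environment can be computed as follows:

from skrl.utils import postprocessing


# hypothetical aggregation: mean reward per environment for each exported memory
for filename, data in postprocessing.MemoryFileIterator("memories/*.pt"):
    rewards = data["rewards"]                       # (memory size, number of envs, 1)
    mean_per_env = rewards.mean(dim=0).squeeze(-1)  # (number of envs,)
    print(filename, "->", mean_per_env.tolist())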
NumPy (npz):

from skrl.utils import postprocessing


# assuming there is a directory called "memories" with NumPy files in it
memory_iterator = postprocessing.MemoryFileIterator("memories/*.npz")
for filename, data in memory_iterator:
    filename    # str: basename of the current file
    data        # dict: keys are the names of the memory arrays in the file.
                # Array shapes are (memory size, number of envs, specific content size)

    # example of simple usage:
    # print the filenames of all memories and their array shapes
    print("\nfilename:", filename)
    print("  |-- states:", data['states'].shape)
    print("  |-- actions:", data['actions'].shape)
    print("  |-- rewards:", data['rewards'].shape)
    print("  |-- next_states:", data['next_states'].shape)
    print("  |-- dones:", data['dones'].shape)
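Since the data is returned as numpy.ndarray, transitions from several exported files can be merged into a single flat dataset (e.g. for offline processing). A minimal sketch, assuming the same variable names as above:

import numpy as np

from skrl.utils import postprocessing


# hypothetical merge: flatten the (memory size, number of envs, data size) arrays
# of every exported file into a single (total transitions, data size) dataset
states, actions = [], []
for filename, data in postprocessing.MemoryFileIterator("memories/*.npz"):
    states.append(data["states"].reshape(-1, data["states"].shape[-1]))
    actions.append(data["actions"].reshape(-1, data["actions"].shape[-1]))

dataset = {"states": np.concatenate(states), "actions": np.concatenate(actions)}
print("total transitions:", dataset["states"].shape[0])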
Comma-separated values (csv):

from skrl.utils import postprocessing


# assuming there is a directory called "memories" with CSV files in it
memory_iterator = postprocessing.MemoryFileIterator("memories/*.csv")
for filename, data in memory_iterator:
    filename    # str: basename of the current file
    data        # dict: keys are the names of the memory lists of lists extracted from the file.
                # List lengths are (memory size * number of envs) and
                # sublist lengths are (specific content size)

    # example of simple usage:
    # print the filenames of all memories and their list lengths
    print("\nfilename:", filename)
    print("  |-- states:", len(data['states']))
    print("  |-- actions:", len(data['actions']))
    print("  |-- rewards:", len(data['rewards']))
    print("  |-- next_states:", len(data['next_states']))
    print("  |-- dones:", len(data['dones']))
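Unlike the other formats, the CSV format flattens the memory and environment dimensions, and values may be read back as strings. The following sketch restores the 3-dimensional layout; the num_envs value, the float conversion and the row ordering (memory index first, then environment index) are assumptions about how the memory was exported:

import numpy as np

from skrl.utils import postprocessing


num_envs = 4  # assumption: the number of environments used during training
for filename, data in postprocessing.MemoryFileIterator("memories/*.csv"):
    # dtype=float guards against values being read back as strings
    flat = np.asarray(data["rewards"], dtype=float)  # (memory size * number of envs, 1)
    # assumes rows are ordered by memory index first, then by environment index
    rewards = flat.reshape(-1, num_envs, flat.shape[-1])
    print(filename, "->", rewards.shape)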
API#
- class skrl.utils.postprocessing.MemoryFileIterator(pathname: str)#
Bases: object
- __init__(pathname: str) → None#
Python iterator for loading data from exported memories
The iterator will load the next memory file in the list of path names. Its output is a tuple of the filename and the memory data, where the memory data is a dictionary whose keys are the names of the variables and whose values are torch.Tensor (PyTorch), numpy.ndarray (NumPy) or lists (CSV), depending on the file format. A sketch for handling the three value types uniformly follows this API listing.
Supported formats:
- PyTorch (pt)
- NumPy (npz)
- Comma-separated values (csv)
Expected output shapes:
- PyTorch: (memory_size, num_envs, data_size)
- NumPy: (memory_size, num_envs, data_size)
- Comma-separated values: (memory_size * num_envs, data_size)
- __iter__() → MemoryFileIterator#
Return self to make iterable
- _format_csv() → Tuple[str, dict]#
Load data from a CSV file
- Returns:
Tuple of file name and data
- Return type:
tuple
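Because the value type depends on the file extension, a post-processing script that accepts any of the supported formats can normalize the loaded data first. A minimal sketch, assuming CPU tensors and numeric CSV content:

import numpy as np

from skrl.utils import postprocessing


def to_numpy(value):
    """Normalize a loaded value (torch.Tensor, numpy.ndarray or list of lists) to a NumPy array"""
    if hasattr(value, "numpy"):  # torch.Tensor (assumed to reside on CPU)
        return value.numpy()
    return np.asarray(value, dtype=float)  # numpy.ndarray, or CSV lists of (possibly string) values

# hypothetical format-agnostic pass over all supported extensions
for pattern in ["memories/*.pt", "memories/*.npz", "memories/*.csv"]:
    for filename, data in postprocessing.MemoryFileIterator(pattern):
        rewards = to_numpy(data["rewards"])
        print(filename, "-> mean reward:", rewards.mean())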
Tensorboard files#
This library provides an implementation for quickly loading Tensorboard files to inspect their contents in subsequent post-processing steps. See the section Library utilities (skrl.utils module) for a real use case.
Requirements#
This utility requires the TensorFlow package to be installed to load and parse Tensorboard files:
pip install tensorflow
Usage#
from skrl.utils import postprocessing


# assuming there is a directory called "runs" with experiments and Tensorboard files in it
tensorboard_iterator = postprocessing.TensorboardFileIterator("runs/*/events.out.tfevents.*",
                                                              tags=["Reward / Total reward (mean)"])
for dirname, data in tensorboard_iterator:
    dirname    # str: path of the directory (experiment name) containing the Tensorboard file
    data       # dict: keys are the tags, values are lists of [step, value] pairs

    # example of simple usage:
    # print the directory name and the value length for the "Reward / Total reward (mean)" tag
    print("\ndirname:", dirname)
    for tag, values in data.items():
        print("  |-- tag:", tag)
        print("  |  |-- value length:", len(values))
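The collected [step, value] pairs can be passed on to other tools. For example, a minimal sketch (assuming Matplotlib is available) that plots the reward curve of each experiment:

import matplotlib.pyplot as plt

from skrl.utils import postprocessing


tag = "Reward / Total reward (mean)"
tensorboard_iterator = postprocessing.TensorboardFileIterator("runs/*/events.out.tfevents.*",
                                                              tags=[tag])
for dirname, data in tensorboard_iterator:
    steps, values = zip(*data[tag])  # split the [step, value] pairs
    plt.plot(steps, values, label=dirname)

plt.xlabel("step")
plt.ylabel(tag)
plt.legend()
plt.show()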
API#
- class skrl.utils.postprocessing.TensorboardFileIterator(pathname: str, tags: str | List[str])#
Bases: object
- __init__(pathname: str, tags: str | List[str]) → None#
Python iterator for loading data from Tensorboard files
The iterator will load the next Tensorboard file in the list of path names. Its output is a tuple of the directory name and the Tensorboard variables selected by the tags. The Tensorboard data is returned as a dictionary with the tag as the key and a list of [step, value] pairs as the value.
- __iter__() → TensorboardFileIterator#
Return self to make iterable
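Since each tag maps to a list of [step, value] pairs, downstream scripts typically split the pairs into separate step and value sequences first. A minimal sketch (reusing the tag and glob pattern from the usage example above) that reports the last logged value of each experiment:

import numpy as np

from skrl.utils import postprocessing


tags = ["Reward / Total reward (mean)"]
tensorboard_iterator = postprocessing.TensorboardFileIterator("runs/*/events.out.tfevents.*",
                                                              tags=tags)
for dirname, data in tensorboard_iterator:
    for tag, steps_values in data.items():
        steps, values = np.asarray(steps_values).T  # split the [step, value] pairs
        print(f"{dirname} | {tag}: last value {values[-1]} at step {int(steps[-1])}")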