File post-processing¶
Utilities for processing files generated during training/evaluation.
Exported memories¶
This library provides an implementation for quickly loading exported memory files to inspect their contents in subsequent post-processing steps. See the Library utilities (skrl.utils module) section for a real use case.
Usage¶
from skrl.utils import postprocessing

# assuming there is a directory called "memories" with Torch files in it
memory_iterator = postprocessing.MemoryFileIterator("memories/*.pt")
for filename, data in memory_iterator:
    filename    # str: basename of the current file
    data        # dict: keys are the names of the memory tensors in the file.
                # Tensor shapes are (memory size, number of envs, specific content size)

    # example of simple usage:
    # print the filenames of all memories and their tensor shapes
    print("\nfilename:", filename)
    print("  |-- states:", data['states'].shape)
    print("  |-- actions:", data['actions'].shape)
    print("  |-- rewards:", data['rewards'].shape)
    print("  |-- next_states:", data['next_states'].shape)
    print("  |-- dones:", data['dones'].shape)
from skrl.utils import postprocessing

# assuming there is a directory called "memories" with NumPy files in it
memory_iterator = postprocessing.MemoryFileIterator("memories/*.npz")
for filename, data in memory_iterator:
    filename    # str: basename of the current file
    data        # dict: keys are the names of the memory arrays in the file.
                # Array shapes are (memory size, number of envs, specific content size)

    # example of simple usage:
    # print the filenames of all memories and their array shapes
    print("\nfilename:", filename)
    print("  |-- states:", data['states'].shape)
    print("  |-- actions:", data['actions'].shape)
    print("  |-- rewards:", data['rewards'].shape)
    print("  |-- next_states:", data['next_states'].shape)
    print("  |-- dones:", data['dones'].shape)
from skrl.utils import postprocessing

# assuming there is a directory called "memories" with CSV files in it
memory_iterator = postprocessing.MemoryFileIterator("memories/*.csv")
for filename, data in memory_iterator:
    filename    # str: basename of the current file
    data        # dict: keys are the names of the memory variables; values are
                # lists of lists extracted from the file.
                # List lengths are (memory size * number of envs) and
                # sublist lengths are (specific content size)

    # example of simple usage:
    # print the filenames of all memories and their list lengths
    print("\nfilename:", filename)
    print("  |-- states:", len(data['states']))
    print("  |-- actions:", len(data['actions']))
    print("  |-- rewards:", len(data['rewards']))
    print("  |-- next_states:", len(data['next_states']))
    print("  |-- dones:", len(data['dones']))
API¶
- class skrl.utils.postprocessing.MemoryFileIterator(pathname: str)¶
Bases: object

Python iterator for loading data from exported memories.

The iterator loads the next memory file in the list of path names. Its output is a tuple of the filename and the memory data, where the memory data is a dictionary whose keys are the names of the variables and whose values are torch.Tensor (PyTorch), numpy.ndarray (NumPy) or lists (CSV), depending on the file format.
Supported formats:
PyTorch (pt)
NumPy (npz)
Comma-separated values (csv)
Expected output shapes:
PyTorch: (memory_size, num_envs, data_size)
NumPy: (memory_size, num_envs, data_size)
Comma-separated values: (memory_size * num_envs, data_size)
- Parameters:
pathname (str) – String containing a path specification for the exported memories. Python glob method is used to find all files matching the path specification
- __iter__() → MemoryFileIterator¶
Return self to make iterable
- _format_csv() → Tuple[str, dict]¶
Load data from a CSV file
- Returns:
Tuple of file name and data
- Return type:
tuple
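Because the class is a Python iterator (__iter__ returns self), a single memory file can also be pulled with next() instead of a for loop. A minimal sketch:

from skrl.utils import postprocessing

# a minimal sketch: pull a single memory file with next()
# (raises StopIteration if no file matches the path specification)
memory_iterator = postprocessing.MemoryFileIterator("memories/*.pt")
filename, data = next(memory_iterator)
print(filename, list(data.keys()))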
Tensorboard files¶
This library provides an implementation for quickly loading Tensorboard files to inspect their contents in subsequent post-processing steps. See the Library utilities (skrl.utils module) section for a real use case.
Requirements¶
This utility requires the TensorFlow package to be installed to load and parse Tensorboard files:
pip install tensorflow
Usage¶
from skrl.utils import postprocessing

# assuming there is a directory called "runs" with experiments and Tensorboard files in it
tensorboard_iterator = postprocessing.TensorboardFileIterator("runs/*/events.out.tfevents.*",
                                                              tags=["Reward / Total reward (mean)"])
for dirname, data in tensorboard_iterator:
    dirname    # str: path of the directory (experiment name) containing the Tensorboard file
    data       # dict: keys are the tags, values are lists of [step, value] pairs

    # example of simple usage:
    # print the directory name and the value length for the "Reward / Total reward (mean)" tag
    print("\ndirname:", dirname)
    for tag, values in data.items():
        print("  |-- tag:", tag)
        print("  |   |-- value length:", len(values))
API¶
- class skrl.utils.postprocessing.TensorboardFileIterator(pathname: str, tags: str | List[str])¶
Bases: object

Python iterator for loading data from Tensorboard files.

The iterator loads the next Tensorboard file in the list of path names. Its output is a tuple of the directory name and the Tensorboard variables selected by the tags. The Tensorboard data is returned as a dictionary with the tag as the key and a list of [step, value] pairs as the value.
- Parameters:
- pathname (str) – String containing a path specification for the Tensorboard files. Python glob method is used to find all files matching the path specification
- tags (str or list of str) – String or list of strings containing the tags of the variables to load
- __iter__() → TensorboardFileIterator¶
Return self to make iterable
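To compare experiments numerically rather than visually, the same iterator can feed a simple aggregation. A minimal sketch that collects the last logged value of a tag from every run:

import numpy as np

from skrl.utils import postprocessing

# a minimal sketch: collect the final logged value of a tag across experiments
tag = "Reward / Total reward (mean)"
final_values = []
for dirname, data in postprocessing.TensorboardFileIterator("runs/*/events.out.tfevents.*", tags=[tag]):
    values = data.get(tag, [])
    if values:                              # the tag may be missing from a file
        final_values.append(values[-1][1])  # value of the last [step, value] pair
print(f"experiments: {len(final_values)}, mean final value: {np.mean(final_values):.3f}")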