Parallel trainer
Concept
The parallel trainer orchestrates the simultaneous training (and evaluation) of one or more agents, running each agent in its own process while they interact with the same environment.
Basic usage
Note
Each process adds a GPU memory overhead (~1GB, although it can be much higher) due to PyTorch's CUDA kernels. See PyTorch Issue #12873 for more details.
Note
At the moment, only the simultaneous training and evaluation of agents with local memory (no memory sharing) is implemented.
from skrl.trainers.torch import ParallelTrainer

# assuming there is an environment called 'env'
# and an agent or a list of agents called 'agents'

# create a parallel trainer
cfg = {"timesteps": 50000, "headless": False}
trainer = ParallelTrainer(env=env, agents=agents, cfg=cfg)

# train the agent(s)
trainer.train()

# evaluate the agent(s)
trainer.eval()
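As the note above points out, each agent keeps its own local memory and runs in its own process. Below is a minimal sketch of training two agents simultaneously; it assumes 'agent_a' and 'agent_b' are already-instantiated skrl agents (each created with its own memory instance) and that the environments are split between them via the agents_scope argument (the split sizes are illustrative assumptions).

from skrl.trainers.torch import ParallelTrainer

# assuming 'env' is a vectorized environment with 512 sub-environments
# and 'agent_a', 'agent_b' are already-instantiated agents, each with its own memory
cfg = {"timesteps": 50000, "headless": False}
trainer = ParallelTrainer(env=env,
                          agents=[agent_a, agent_b],
                          agents_scope=[256, 256],  # assumed split of the sub-environments
                          cfg=cfg)

# both agents are trained at the same time, each in a separate process
trainer.train()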
Configuration
- skrl.trainers.torch.parallel.PARALLEL_TRAINER_DEFAULT_CONFIG
PARALLEL_TRAINER_DEFAULT_CONFIG = {
    "timesteps": 100000,            # number of timesteps to train for
    "headless": False,              # whether to use headless mode (no rendering)
    "disable_progressbar": False,   # whether to disable the progressbar. If None, disable on non-TTY
}
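A sketch of overriding these defaults, assuming the default dictionary is imported from the module listed above and copied before modification:

from skrl.trainers.torch import ParallelTrainer
from skrl.trainers.torch.parallel import PARALLEL_TRAINER_DEFAULT_CONFIG

# assuming there is an environment called 'env'
# and an agent or a list of agents called 'agents'

# start from the defaults and override only the desired keys
cfg = PARALLEL_TRAINER_DEFAULT_CONFIG.copy()
cfg["timesteps"] = 50000
cfg["disable_progressbar"] = True   # e.g. when stdout is not a TTY (logging to a file)

trainer = ParallelTrainer(env=env, agents=agents, cfg=cfg)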