Trust Region Policy Optimization (TRPO)

TRPO is a model-free, stochastic, on-policy policy gradient algorithm that uses an iterative procedure to optimize the policy, with guaranteed monotonic improvement.

Paper: Trust Region Policy Optimization

Algorithm

For each iteration do
\(\bullet \;\) Collect, in a rollout memory, a set of states \(s\), actions \(a\), rewards \(r\), dones \(d\), log probabilities \(logp\) and values \(V\) on policy using \(\pi_\theta\) and \(V_\phi\)
\(\bullet \;\) Estimate returns \(R\) and advantages \(A\) using Generalized Advantage Estimation (GAE(\(\lambda\))) from the collected data [\(r, d, V\)]
\(\bullet \;\) Compute the surrogate objective (policy loss) gradient \(g\) and the Hessian \(H\) of \(KL\) divergence with respect to the policy parameters \(\theta\)
\(\bullet \;\) Compute the search direction \(\; x \approx H^{-1}g \;\) using the conjugate gradient method
\(\bullet \;\) Compute the maximal (full) step \(\; \beta = \sqrt{\dfrac{2 \delta}{x^T H x}} \; x \;\) where \(\delta\) is the desired (maximum) \(KL\) divergence and \(\; \sqrt{\frac{2 \delta}{x^T H x}} \;\) is the step size (see the short derivation after this list)
\(\bullet \;\) Perform a backtracking line search with exponential decay to find the final policy update \(\; \theta_{new} = \theta + \alpha \; \beta \;\) ensuring improvement of the surrogate objective and satisfaction of the \(KL\) divergence constraint
\(\bullet \;\) Update the value function \(V_\phi\) using the computed returns \(R\)
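
The full-step expression above follows from the quadratic approximation of the KL divergence constraint; a short derivation (not part of the original listing), using the same notation as the list:

\(\frac{1}{2} \, (c\,x)^T H \, (c\,x) = \delta \quad\Longrightarrow\quad c = \sqrt{\dfrac{2\,\delta}{x^T H x}} \quad\Longrightarrow\quad \beta = c\,x = \sqrt{\dfrac{2\,\delta}{x^T H x}} \; x\)

where \(c\) is the scalar step size applied along the search direction \(x \approx H^{-1} g\).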

Algorithm implementation

Learning algorithm (_update(...))

compute_gae(...)
def \(\;f_{GAE} (r, d, V, V_{_{last}}') \;\rightarrow\; R, A:\)
\(adv \leftarrow 0\)
\(A \leftarrow \text{zeros}(r)\)
# advantages computation
FOR each reverse iteration \(i\) up to the number of rows in \(r\) DO
IF \(i\) is not the last row of \(r\) THEN
\(V_i' \leftarrow V_{i+1}\)
ELSE
\(V_i' \leftarrow V_{_{last}}'\)
\(adv \leftarrow r_i - V_i \, +\) discount_factor \(\neg d_i \; (V_i' \, +\) lambda \(adv)\)
\(A_i \leftarrow adv\)
# returns computation
\(R \leftarrow A + V\)
# normalize advantages
\(A \leftarrow \dfrac{A - \bar{A}}{A_\sigma + 10^{-8}}\)
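
A minimal PyTorch sketch of the GAE routine above (a standalone transcription of the pseudocode; tensor shapes and argument names are illustrative):

import torch

def compute_gae(rewards, dones, values, next_values,
                discount_factor=0.99, lambda_coefficient=0.95):
    # rewards, dones, values: tensors of shape [memory_size, 1]
    # next_values: value estimate for the state following the last row
    advantage = 0
    advantages = torch.zeros_like(rewards)
    not_dones = dones.logical_not()
    memory_size = rewards.shape[0]
    # advantages computation (backward recursion over the rollout)
    for i in reversed(range(memory_size)):
        next_v = values[i + 1] if i < memory_size - 1 else next_values
        advantage = rewards[i] - values[i] \
            + discount_factor * not_dones[i] * (next_v + lambda_coefficient * advantage)
        advantages[i] = advantage
    # returns computation
    returns = advantages + values
    # normalize advantages
    advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)
    return returns, advantages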
surrogate_loss(...)
def \(\;f_{Loss} (\pi_\theta, s, a, logp, A) \;\rightarrow\; L_{\pi_\theta}:\)
\(logp' \leftarrow \pi_\theta(s, a)\)
\(L_{\pi_\theta} \leftarrow \frac{1}{N} \sum_{i=1}^N A \; e^{(logp' - logp)}\)
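
A minimal PyTorch sketch of the surrogate objective above, assuming the policy is a callable that returns the log-probability of the given actions under the current parameters (an illustrative interface, not the skrl model API):

import torch

def surrogate_loss(policy, states, actions, log_prob, advantages):
    # log-probabilities of the taken actions under the current policy
    new_log_prob = policy(states, actions)
    # importance-sampling ratio weighted by the advantages, averaged over the batch
    return (advantages * torch.exp(new_log_prob - log_prob.detach())).mean()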
conjugate_gradient(...) (See conjugate gradient method)
def \(\;f_{CG} (\pi_\theta, s, b) \;\rightarrow\; x:\)
\(x \leftarrow \text{zeros}(b)\)
\(r \leftarrow b\)
\(p \leftarrow b\)
\(rr_{old} \leftarrow r \cdot r\)
FOR each iteration up to conjugate_gradient_steps DO
\(\alpha \leftarrow \dfrac{rr_{old}}{p \cdot f_{Ax}(\pi_\theta, s, p)}\)
\(x \leftarrow x + \alpha \; p\)
\(r \leftarrow r - \alpha \; f_{Ax}(\pi_\theta, s, p)\)
\(rr_{new} \leftarrow r \cdot r\)
IF \(rr_{new} <\) residual tolerance THEN
BREAK LOOP
\(p \leftarrow r + \dfrac{rr_{new}}{rr_{old}} \; p\)
\(rr_{old} \leftarrow rr_{new}\)
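
A minimal PyTorch sketch of the conjugate gradient solver above, where fisher_vector_product is the Hessian-vector product sketched further below (names and the damping pass-through are illustrative):

import torch

def conjugate_gradient(policy, states, b, num_iterations=10,
                       residual_tolerance=1e-10, damping=0.1):
    x = torch.zeros_like(b)
    r = b.clone()
    p = b.clone()
    rr_old = torch.dot(r, r)
    for _ in range(num_iterations):
        # Hessian-vector product with the current conjugate direction p
        hv = fisher_vector_product(policy, states, p, damping=damping)
        alpha = rr_old / torch.dot(p, hv)
        x += alpha * p
        r -= alpha * hv
        rr_new = torch.dot(r, r)
        if rr_new < residual_tolerance:
            break
        p = r + rr_new / rr_old * p
        rr_old = rr_new
    return x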
fisher_vector_product(...) (See fisher vector product in TRPO)
def \(\;f_{Ax} (\pi_\theta, s, v) \;\rightarrow\; hv:\)
\(kl \leftarrow f_{KL}(\pi_\theta, \pi_\theta, s)\)
\(g_{kl} \leftarrow \nabla_\theta kl\)
\(g_{kl_{flat}} \leftarrow \text{flatten}(g_{kl})\)
\(g_{hv} \leftarrow \nabla_\theta (g_{kl_{flat}} \; v)\)
\(g_{hv_{flat}} \leftarrow \text{flatten}(g_{hv})\)
\(hv \leftarrow g_{hv_{flat}} +\) damping \(v\)
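
A minimal PyTorch sketch of the Fisher (Hessian) vector product above, computed by double backpropagation through the KL divergence (kl_divergence corresponds to \(f_{KL}\) below; it is assumed to detach the reference distribution so that gradients flow only through the second policy):

import torch

def fisher_vector_product(policy, states, vector, damping=0.1):
    # KL divergence of the policy against (a detached copy of) itself:
    # its gradient at this point is zero, but its Hessian is the Fisher matrix
    kl = kl_divergence(policy, policy, states)
    # first backward pass: gradient of the KL w.r.t. the policy parameters
    kl_gradient = torch.autograd.grad(kl, policy.parameters(), create_graph=True)
    flat_kl_gradient = torch.cat([g.view(-1) for g in kl_gradient])
    # second backward pass: gradient of (gradient . vector) gives the Hessian-vector product
    hv_gradient = torch.autograd.grad((flat_kl_gradient * vector).sum(), policy.parameters())
    flat_hv_gradient = torch.cat([g.contiguous().view(-1) for g in hv_gradient])
    return flat_hv_gradient + damping * vector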
def \(\;f_{KL} (\pi_{\theta 1}, \pi_{\theta 2}, s) \;\rightarrow\; kl:\)
\(\mu_1, \log\sigma_1 \leftarrow \pi_{\theta 1}(s)\)
\(\mu_2, \log\sigma_2 \leftarrow \pi_{\theta 2}(s)\)
\(kl \leftarrow \log\sigma_1 - \log\sigma_2 + \frac{1}{2} \dfrac{(e^{\log\sigma_1})^2 + (\mu_1 - \mu_2)^2}{(e^{\log\sigma_2})^2} - \frac{1}{2}\)
\(kl \leftarrow \frac{1}{N} \sum_{i=1}^N \, (\sum_{dim} kl)\)
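
A minimal PyTorch sketch transcribing the KL computation above for diagonal Gaussian policies, assuming each policy can be queried for the mean and log standard deviation of its action distribution (an illustrative interface); the first distribution is detached so that only the second one carries gradients:

import torch

def kl_divergence(policy_1, policy_2, states):
    mu_1, logstd_1 = policy_1(states)
    mu_1, logstd_1 = mu_1.detach(), logstd_1.detach()
    mu_2, logstd_2 = policy_2(states)
    # per-dimension KL term as written in the listing above
    kl = logstd_1 - logstd_2 \
        + 0.5 * (torch.square(logstd_1.exp()) + torch.square(mu_1 - mu_2)) / torch.square(logstd_2.exp()) \
        - 0.5
    # sum over action dimensions, average over the batch
    return torch.sum(kl, dim=-1).mean()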
# compute returns and advantages
\(V_{_{last}}' \leftarrow V_\phi(s')\)
\(R, A \leftarrow f_{GAE}(r, d, V, V_{_{last}}')\)
# sample all from memory
[[\(s, a, logp, A\)]] \(\leftarrow\) states, actions, log_prob, advantages
# compute policy loss gradient
\(L_{\pi_\theta} \leftarrow f_{Loss}(\pi_\theta, s, a, logp, A)\)
\(g \leftarrow \nabla_{\theta} L_{\pi_\theta}\)
\(g_{_{flat}} \leftarrow \text{flatten}(g)\)
# compute the search direction using the conjugate gradient algorithm
\(search_{direction} \leftarrow f_{CG}(\pi_\theta, s, g_{_{flat}})\)
# compute step size and full step
\(xHx \leftarrow search_{direction} \cdot f_{Ax}(\pi_\theta, s, search_{direction})\)
\(step_{size} \leftarrow \sqrt{\dfrac{2 \, \delta}{xHx}} \qquad\) with \(\; \delta\) as max_kl_divergence
\(\beta \leftarrow step_{size} \; search_{direction}\)
# backtracking line search
\(flag_{restore} \leftarrow \text{True}\)
\(\pi_{\theta_{backup}} \leftarrow \pi_\theta\)
\(\theta \leftarrow \text{get_parameters}(\pi_\theta)\)
\(I_{expected} \leftarrow g_{_{flat}} \cdot \beta\)
FOR \(\alpha \leftarrow\) step_fraction \(\, (0.5)^i \;\) with \(i = 0, 1, 2, ...\) up to max_backtrack_steps DO
\(\theta_{new} \leftarrow \theta + \alpha \; \beta\)
\(\pi_\theta \leftarrow \text{set_parameters}(\theta_{new})\)
\(I_{expected} \leftarrow \alpha \; I_{expected}\)
\(kl \leftarrow f_{KL}(\pi_{\theta_{backup}}, \pi_\theta, s)\)
\(L \leftarrow f_{Loss}(\pi_\theta, s, a, logp, A)\)
IF \(kl < \delta\) AND \(\dfrac{L - L_{\pi_\theta}}{I_{expected}} >\) accept_ratio THEN
\(flag_{restore} \leftarrow \text{False}\)
BREAK LOOP
IF \(flag_{restore}\) THEN
\(\pi_\theta \leftarrow \pi_{\theta_{backup}}\)
# sample mini-batches from memory
[[\(s, R\)]] \(\leftarrow\) states, returns
# learning epochs
FOR each learning epoch up to learning_epochs DO
# mini-batches loop
FOR each mini-batch [\(s, R\)] up to mini_batches DO
# compute value loss
\(V' \leftarrow V_\phi(s)\)
\(L_{V_\phi} \leftarrow\) value_loss_scale \(\frac{1}{N} \sum_{i=1}^N (R - V')^2\)
# optimization step (value)
reset \(\text{optimizer}_\phi\)
\(\nabla_{\phi} L_{V_\phi}\)
\(\text{clip}(\lVert \nabla_{\phi} \rVert)\) with grad_norm_clip
step \(\text{optimizer}_\phi\)
# update learning rate
IF there is a learning_rate_scheduler THEN
step \(\text{scheduler}_\phi(\text{optimizer}_\phi)\)
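
A minimal PyTorch sketch of the backtracking line search step above, operating on a flat parameter vector (surrogate_loss and kl_divergence are the functions sketched earlier; policy, backup_policy and the remaining argument names are illustrative):

import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def backtracking_line_search(policy, backup_policy, states, actions, log_prob, advantages,
                             full_step, policy_loss, flat_gradient, max_kl_divergence=0.01,
                             step_fraction=1.0, max_backtrack_steps=10, accept_ratio=0.5):
    # backup_policy is assumed to already hold a copy of the current parameters
    restore_policy_flag = True
    params = parameters_to_vector(policy.parameters())
    expected_improvement = (flat_gradient * full_step).sum()

    for i in range(max_backtrack_steps):
        alpha = step_fraction * 0.5 ** i
        # try the candidate parameters
        vector_to_parameters(params + alpha * full_step, policy.parameters())
        expected_improvement *= alpha
        kl = kl_divergence(backup_policy, policy, states)
        loss = surrogate_loss(policy, states, actions, log_prob, advantages)
        # accept the step if the KL constraint holds and the improvement ratio is large enough
        if kl < max_kl_divergence and (loss - policy_loss) / expected_improvement > accept_ratio:
            restore_policy_flag = False
            break

    if restore_policy_flag:
        # no acceptable step was found: restore the previous parameters
        vector_to_parameters(parameters_to_vector(backup_policy.parameters()), policy.parameters())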

Configuration and hyperparameters

skrl.agents.torch.trpo.trpo.TRPO_DEFAULT_CONFIG
TRPO_DEFAULT_CONFIG = {
    "rollouts": 16,                 # number of rollouts before updating
    "learning_epochs": 8,           # number of learning epochs during each update
    "mini_batches": 2,              # number of mini batches during each learning epoch

    "discount_factor": 0.99,        # discount factor (gamma)
    "lambda": 0.95,                 # TD(lambda) coefficient (lam) for computing returns and advantages

    "value_learning_rate": 1e-3,    # value learning rate
    "learning_rate_scheduler": None,        # learning rate scheduler class (see torch.optim.lr_scheduler)
    "learning_rate_scheduler_kwargs": {},   # learning rate scheduler's kwargs (e.g. {"step_size": 1e-3})

    "state_preprocessor": None,             # state preprocessor class (see skrl.resources.preprocessors)
    "state_preprocessor_kwargs": {},        # state preprocessor's kwargs (e.g. {"size": env.observation_space})
    "value_preprocessor": None,             # value preprocessor class (see skrl.resources.preprocessors)
    "value_preprocessor_kwargs": {},        # value preprocessor's kwargs (e.g. {"size": 1})

    "random_timesteps": 0,          # random exploration steps
    "learning_starts": 0,           # learning starts after this many steps

    "grad_norm_clip": 0.5,          # clipping coefficient for the norm of the gradients
    "value_loss_scale": 1.0,        # value loss scaling factor

    "damping": 0.1,                     # damping coefficient for computing the Hessian-vector product
    "max_kl_divergence": 0.01,          # maximum KL divergence between old and new policy
    "conjugate_gradient_steps": 10,     # maximum number of iterations for the conjugate gradient algorithm
    "max_backtrack_steps": 10,          # maximum number of backtracking steps during line search
    "accept_ratio": 0.5,                # accept ratio for the line search loss improvement
    "step_fraction": 1.0,               # fraction of the step size for the line search

    "rewards_shaper": None,         # rewards shaping function: Callable(reward, timestep, timesteps) -> reward

    "experiment": {
        "directory": "",            # experiment's parent directory
        "experiment_name": "",      # experiment name
        "write_interval": 250,      # TensorBoard writing interval (timesteps)

        "checkpoint_interval": 1000,        # interval for checkpoints (timesteps)
        "store_separately": False,          # whether to store checkpoints separately

        "wandb": False,             # whether to use Weights & Biases
        "wandb_kwargs": {}          # wandb kwargs (see https://docs.wandb.ai/ref/python/init)
    }
}
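
As a usage sketch (assuming models, memory, env and device are already defined; the constructor keyword arguments follow the skrl agent API):

from skrl.agents.torch.trpo import TRPO, TRPO_DEFAULT_CONFIG

# start from the default configuration and override selected hyperparameters
cfg = TRPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16
cfg["max_kl_divergence"] = 0.01

agent = TRPO(models=models,
             memory=memory,
             cfg=cfg,
             observation_space=env.observation_space,
             action_space=env.action_space,
             device=device)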

Spaces and models

The implementation supports the following Gym / Gymnasium spaces:

Gym/Gymnasium spaces | Observation | Action
Discrete | \(\square\) | \(\square\)
Box | \(\blacksquare\) | \(\blacksquare\)
Dict | \(\blacksquare\) | \(\square\)

(\(\blacksquare\): supported, \(\square\): not supported)

The implementation uses one stochastic and one deterministic function approximator. These function approximators (models) must be collected in a dictionary and passed to the constructor of the class under the argument models, as sketched after the following table.

Notation | Concept | Key | Input shape | Output shape | Type
\(\pi_\theta(s)\) | Policy | "policy" | observation | action | Gaussian / MultivariateGaussian
\(V_\phi(s)\) | Value | "value" | observation | 1 | Deterministic
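
A hedged sketch of how the two models might be defined and collected in the models dictionary, using skrl's model mixins (network sizes, variable names, and the env/device objects are illustrative):

import torch
import torch.nn as nn

from skrl.models.torch import Model, GaussianMixin, DeterministicMixin


class Policy(GaussianMixin, Model):
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        GaussianMixin.__init__(self)
        self.net = nn.Sequential(nn.Linear(self.num_observations, 64), nn.Tanh(),
                                 nn.Linear(64, self.num_actions))
        self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))

    def compute(self, inputs, role):
        # mean actions, log standard deviation and an (empty) outputs dictionary
        return self.net(inputs["states"]), self.log_std_parameter, {}


class Value(DeterministicMixin, Model):
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        DeterministicMixin.__init__(self)
        self.net = nn.Sequential(nn.Linear(self.num_observations, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def compute(self, inputs, role):
        # state-value estimate and an (empty) outputs dictionary
        return self.net(inputs["states"]), {}


models = {"policy": Policy(env.observation_space, env.action_space, device),
          "value": Value(env.observation_space, env.action_space, device)}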

Support for advanced features is described in the next table.

Feature | Support and remarks
Shared model | -
RNN support | RNN, LSTM, GRU and any other variant

API