Roughly speaking, Reinforcement Learning methods can be classified into value-based methods and policy-based methods. Value-based methods learn a value function from experience, and the agent chooses the action with the highest value (e.g., $\hat{a} = \operatorname{argmax}_a Q(s, a)$). Policy-based methods learn the policy π(a∣s) directly from experience. They can naturally handle stochastic policies, since π(a∣s) is defined as a distribution over actions, and they can also work with continuous action spaces, which are tricky for value-based methods because of the maximization over a continuous action space.
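To make the contrast concrete, here is a minimal sketch (in Python/NumPy, with made-up numbers) of how an agent picks an action under each approach:

```python
import numpy as np

rng = np.random.default_rng(0)

# Value-based: keep Q(s, a) and act greedily on it.
Q = np.array([0.1, 0.5, 0.3])        # hypothetical action values for the current state
greedy_action = int(np.argmax(Q))    # a_hat = argmax_a Q(s, a)

# Policy-based: keep pi(a|s) directly (here a softmax over preferences) and
# sample from it, so a stochastic policy comes for free.
prefs = np.array([0.2, -0.1, 0.4])   # hypothetical policy parameters for the current state
pi = np.exp(prefs) / np.exp(prefs).sum()
sampled_action = int(rng.choice(len(pi), p=pi))
```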
The policy gradient method is one of the policy-based methods. It assumes the policy is differentiable, and it iteratively updates the policy parameters using the gradient of the policy's performance.
Why/When do we want Policy Gradient?
There are several cases where policy gradient methods have an advantage over other methods.
When the policy is easier to model than the value function.
For example, in the Atari game Breakout, it might not be easy to predict the score from the screen; however, a simple policy that just follows the ball might work well.
When the optimal policy is not deterministic.
For example, in the game of rock-paper-scissors, the optimal policy is to choose each hand uniformly at random.
Value-based RL methods cannot learn such a stochastic policy, since they always select the action with the highest state-action value.
When the action space is continuous.
There is no natural way to handle continuous actions with value-based methods, because taking the maximum of a value function over all possible actions is intractable. Policy gradient methods, on the other hand, model the policy directly and can therefore handle continuous action spaces naturally (see the sketch below).
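For example, a common way to act in a continuous action space is to have the policy output the parameters of a distribution and sample the action from it. Below is a minimal sketch of a one-dimensional Gaussian policy; the feature map and parameter values are hypothetical, just to show the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_policy_action(theta, phi_s):
    """Sample a continuous action a ~ N(mean, std^2), where the mean is a linear
    function of the state features and the log standard deviation is a learned parameter."""
    w, log_std = theta                   # policy parameters
    mean = float(w @ phi_s)              # state-dependent mean
    std = float(np.exp(log_std))         # positive standard deviation
    return rng.normal(mean, std)

# Hypothetical parameters and state features, just to show the call.
theta = (np.array([0.5, -0.2]), np.log(0.3))
phi_s = np.array([1.0, 2.0])
action = gaussian_policy_action(theta, phi_s)
```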
Basics of Policy Gradient
In this post, we focus on the undiscounted (i.e., γ=1), episodic case (the episode terminates in a finite number of steps; cf. the continuing case, life-long learning, etc.), but the following explanation and proof apply to the continuing or discounted case with minor modifications.
As the name suggests, the policy gradient method updates the policy parameters by gradient ascent.
We assume the policy is differentiable with respect to its parameters θ.
Now, we define a function J(θ) that represents the performance of the policy.
In the episodic case, it can be defined as the expected return of following the policy πθ starting from the initial state s0, which is exactly the definition of the value function of s0:
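$$J(\theta) \doteq v_{\pi_\theta}(s_0) = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t=1}^{T} R_t \,\middle|\, S_0 = s_0\right] \tag{1}$$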
In the following parts, we will calculate the gradient of J(θ). Once we have the gradient, the parameter update is straightforward:
$$\theta \leftarrow \theta + \alpha \nabla_\theta J(\theta),$$
where α is the learning rate.
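In code, this is just a gradient-ascent loop. A minimal sketch, assuming a hypothetical estimate_policy_gradient(theta) function that returns an estimate of ∇θJ(θ) (for example, the sampling-based estimator sketched later in this post):

```python
def train(theta, estimate_policy_gradient, alpha=0.01, num_iterations=1000):
    """Iteratively apply theta <- theta + alpha * grad_theta J(theta)."""
    for _ in range(num_iterations):
        grad = estimate_policy_gradient(theta)   # estimate of grad_theta J(theta)
        theta = theta + alpha * grad             # gradient ascent (we maximize J)
    return theta
```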
How to calculate ∇J(θ)
All we need is just to take the derivative of J(θ) w.r.t. θ. Why don't we just calculate it?
Well, it's not that easy. It is easy to calculate how the distribution over actions πθ(a∣s) would change when we slightly modify θ; however, as we saw in equation (1), J(θ) also depends on the environment (i.e., on how much more reward you would actually collect by following the perturbed policy).
Due to the lack of knowledge of the environment, there is no obvious way to calculate the gradient.
But surprisingly, the Policy Gradient Theorem shows that the gradient can be written in the following beautiful way:
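$$\nabla_\theta J(\theta) \propto \sum_s \mu(s) \sum_a Q^{\pi}(s, a)\, \nabla_\theta \pi_\theta(a \mid s), \tag{2}$$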
where μ(s) is the on-policy state distribution, which represents the fraction of time spent in state s when following πθ.
Note that Qπ(s,a) is the expected return from taking action a in state s and following the policy afterwards, which in practice can be estimated from sampled episode returns.
In (Sutton et al., 2000), which presented the proof, the authors commented on this equation as follows:
their [sic] are no terms of the form $\frac{\partial d^{\pi_\theta}(s)}{\partial \theta}$: the effect of policy changes on the distribution of states does not appear. This is convenient for approximating the gradient by sampling.
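To make "approximating the gradient by sampling" concrete, here is a minimal sketch of a Monte Carlo estimator for a tabular softmax policy. It uses the identity ∇πθ(a∣s) = πθ(a∣s)∇log πθ(a∣s), anticipating the alternative form of the theorem discussed at the end of this post; the environment interface (reset() returning a state index, step(a) returning (next_state, reward, done)) is a hypothetical stand-in rather than any specific library's API:

```python
import numpy as np

def softmax_policy(theta, s):
    """pi_theta(a|s) for a tabular softmax parameterization theta[s, a]."""
    prefs = theta[s] - theta[s].max()
    probs = np.exp(prefs)
    return probs / probs.sum()

def estimate_policy_gradient(theta, env, num_episodes=10):
    """Monte Carlo estimate of grad J(theta): average over episodes of
    sum_t G_t * grad log pi(a_t | s_t), where G_t is the return from step t."""
    grad = np.zeros_like(theta)
    for _ in range(num_episodes):
        # Roll out one episode with the current policy.
        states, actions, rewards = [], [], []
        s, done = env.reset(), False
        while not done:
            probs = softmax_policy(theta, s)
            a = np.random.choice(len(probs), p=probs)
            s_next, r, done = env.step(a)
            states.append(s); actions.append(a); rewards.append(r)
            s = s_next
        # Accumulate grad log pi weighted by the (undiscounted) return G_t.
        G = 0.0
        for s, a, r in zip(reversed(states), reversed(actions), reversed(rewards)):
            G += r
            probs = softmax_policy(theta, s)
            grad_log_pi = -probs          # d log pi(a|s) / d theta[s, :]
            grad_log_pi[a] += 1.0
            grad[s] += G * grad_log_pi
    return grad / num_episodes
```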
We will put the full proof at the end of this post. Here, we show a simpler proof for the one-step MDP, where the episode terminates immediately after choosing an action.
I hope this provides an intuition for the equation above.
Let d(s) be the probability distribution over the initial states. Since we only consider a single step, the objective is simply the expected immediate reward (writing r(s,a) for the expected reward of taking action a in state s):
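$$J(\theta) = \mathbb{E}_{\pi_\theta}[R] = \sum_s d(s) \sum_a \pi_\theta(a \mid s)\, r(s, a).$$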
One can see the correspondence between this and equation (1).
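Taking the gradient with respect to θ, the environment terms d(s) and r(s,a) do not depend on θ, so

$$\nabla_\theta J(\theta) = \sum_s d(s) \sum_a r(s, a)\, \nabla_\theta \pi_\theta(a \mid s),$$

which has exactly the form of the policy gradient theorem (2), with d(s) playing the role of μ(s) and the one-step reward r(s,a) playing the role of Qπ(s,a).

As a sanity check, the following small sketch (hypothetical numbers, a softmax policy over two states and two actions) compares this expression with a finite-difference approximation of ∇θJ(θ):

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.array([0.7, 0.3])                    # initial-state distribution d(s)
r = np.array([[1.0, 0.0], [0.2, 0.8]])      # expected reward r(s, a)
theta = rng.normal(size=(2, 2))             # softmax preferences theta[s, a]

def pi(theta):
    e = np.exp(theta - theta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def J(theta):
    return np.sum(d[:, None] * pi(theta) * r)   # sum_s d(s) sum_a pi(a|s) r(s,a)

# Policy-gradient form: sum_s d(s) sum_a r(s,a) * grad pi(a|s).
# For softmax, d pi(a|s) / d theta[s,b] = pi(a|s) * (1{a=b} - pi(b|s)).
p = pi(theta)
grad_pg = np.zeros_like(theta)
for s in range(2):
    for a in range(2):
        grad_pi = p[s, a] * ((np.arange(2) == a) - p[s])
        grad_pg[s] += d[s] * r[s, a] * grad_pi

# Finite-difference approximation of the same gradient.
eps = 1e-6
grad_fd = np.zeros_like(theta)
for s in range(2):
    for b in range(2):
        t = theta.copy(); t[s, b] += eps
        grad_fd[s, b] = (J(t) - J(theta)) / eps

print(np.allclose(grad_pg, grad_fd, atol=1e-4))  # expected: True
```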
Interpretation of the equation
∇θπθ(a∣s) is the direction in parameter space in which the probability of choosing action a in state s increases. In equation (2), this direction is weighted by the value Qπ(s,a) of each action.
Thus, the policy is updated so that it becomes more likely to choose the actions for which the agent has experienced large rewards.
On top of that, the outer sum weights each state's contribution by the probability of visiting that state s.
This means that the update for frequently visited states will have more importance.
Another form of Policy Gradient Theorem
Equation (2) can also be written in the following form.