Thanks to Hado van Hasselt for the great work.
Introduction
In sequential decision-making problems in continuous domains with delayed reward signals, the goal of the algorithms is to learn how to choose actions from an infinitely large action space so as to optimize a noisy, delayed cumulative reward signal in an infinitely large state space, where even the outcome of a single action can be stochastic.
Here we assume that a model of the environment is not known. Since analytically computing a good policy from a continuous model can be infeasible, we focus on methods that explicitly update a representation of a value function, a policy, or both. In addition, we focus mainly on the problem of control, where we want to find action-selection policies that yield high returns, as opposed to the problem of prediction, which aims to estimate the value of a given policy.
Some useful books are listed below (we assume you are already familiar with Sutton and Barto's textbook):
- Szepesvári, C.: Algorithms for reinforcement learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 4(1), 1-103 (2010).
- Busoniu, L., Babuska, R., De Schutter, B., Ernst, D.: Reinforcement learning and dynamic programming using function approximators. CRC Press, Boca Raton (2010).
Function Approximation
In this section, we mostly limit ourselves to the general functional form of the approximators and general methods to update their parameters. Non-parametric approaches, such as kernel-based methods, are not included. See the following reference for more details on kernel-based methods:
- Ormoneit, D., Sen, S.: Kernel-based reinforcement learning. Machine Learning 49(2), 161-178 (2002).
Linear Function Approximation
A linear function is a simple parametric function that depends linearly on a feature vector. For instance, consider a value-approximation algorithm where the value function is approximated by a linear combination of features, $V_t(s) = \theta_t^\top \phi(s)$, where $\phi(s)$ is the feature vector of state $s$ and $\theta_t$ is the parameter vector at time $t$.
A common method to find features for a linear function approximator divides the continuous state space into separate segments and attaches one feature to each segment, e.g. tile coding. However, one potential problem with discretizing methods such as tile coding is that the resulting function that maps states into features is not injective: different states within the same segment are mapped to the same feature vector, so the approximate value function is piecewise constant and cannot distinguish between those states.
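As an illustration, here is a minimal sketch of such one-hot discretization features and the resulting linear value estimate; the 1-D state space, its bounds, and the 10-segment grid are arbitrary illustrative assumptions, not something fixed by the text.

```python
# A minimal sketch of one-hot "discretization" features for a 1-D state space.
import numpy as np

N_SEGMENTS = 10          # number of equal-width segments over the state range
LOW, HIGH = 0.0, 1.0     # assumed bounds of the 1-D state space

def one_hot_features(s):
    """Map a scalar state to a one-hot feature vector: 1 for its segment, 0 elsewhere."""
    idx = int((s - LOW) / (HIGH - LOW) * N_SEGMENTS)
    idx = min(max(idx, 0), N_SEGMENTS - 1)      # clip states on the boundary
    phi = np.zeros(N_SEGMENTS)
    phi[idx] = 1.0
    return phi

theta = np.zeros(N_SEGMENTS)                    # parameters of the linear value function

def value(s):
    """Linear value estimate V(s) = theta^T phi(s)."""
    return theta @ one_hot_features(s)

# Two nearby states inside the same segment get identical features (the mapping is not injective):
print(one_hot_features(0.41), one_hot_features(0.49))
```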
Some of the issues mentioned above can be tackled by using so-called fuzzy sets. A fuzzy set is a generalization of a normal set that allows partial membership. A state may belong partially to the set defined by a feature, with a membership degree between 0 and 1, so that the features, and therefore the approximate value function, change gradually with the state.
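For comparison with the one-hot features above, here is a minimal sketch of fuzzy (triangular) membership features over the same 1-D state space; the feature centres and widths are again illustrative choices.

```python
# A minimal sketch of fuzzy (triangular) membership features over a 1-D state space.
# A state belongs partially to several features, so the linear value function varies smoothly.
import numpy as np

CENTERS = np.linspace(0.0, 1.0, 10)   # feature centres, an arbitrary choice
WIDTH = CENTERS[1] - CENTERS[0]       # triangles overlap with their neighbours

def fuzzy_features(s):
    """Triangular membership degrees in [0, 1] for each feature centre."""
    phi = np.maximum(0.0, 1.0 - np.abs(s - CENTERS) / WIDTH)
    return phi / phi.sum()            # normalize so the memberships sum to one

theta = np.zeros(len(CENTERS))

def value(s):
    return theta @ fuzzy_features(s)

# Nearby states now get similar but distinct feature vectors:
print(fuzzy_features(0.41).round(2), fuzzy_features(0.49).round(2))
```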
Non-linear Function Approximation
In a parametric non-linear function approximator, the function that should be optimized is represented by some predetermined parameterized function. For instance, for value-based algorithms we may have $V_t(s) = V_{\theta_t}(s)$, where $V_\theta$ is a fixed non-linear function of the state, such as a neural network, whose shape is determined by the parameter vector $\theta$.
In general, a non-linear function approximator may approximate an unknown function more accurately than a linear function approximator that uses the same input features. A drawback of non-linear function approximation is that fewer convergence guarantees can be given.
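As an illustration of a parametric non-linear approximator, here is a minimal sketch of a small neural-network value function in plain NumPy; the architecture (one hidden layer of tanh units) and its size are arbitrary choices, not something prescribed by the text.

```python
# A minimal sketch of a non-linear (neural-network) value approximator.
import numpy as np

rng = np.random.default_rng(0)

class MLPValue:
    def __init__(self, n_features, n_hidden=32):
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_features))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.1, size=n_hidden)
        self.b2 = 0.0

    def value(self, phi):
        """V(s) = w2^T tanh(W1 phi + b1) + b2, a non-linear function of the features."""
        h = np.tanh(self.W1 @ phi + self.b1)
        return self.w2 @ h + self.b2

v = MLPValue(n_features=4)
print(v.value(np.array([0.1, -0.3, 0.5, 0.0])))
```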
Updating Parameters
Gradient Descent
A gradient-descent update follows the direction of the negative gradient of some parameterized function that we want to minimize. The gradient of a parameterized function is a vector in parameter space that points in the direction in which the function increases, so the negative gradient points in the direction of locally steepest descent. Because the gradient only describes the local shape of the function, this procedure can end up in a local minimum. For instance, we can use an error measure such as a temporal-difference error or a squared prediction error, e.g. $E_t(\theta) = \tfrac{1}{2}\big(T_t - V_\theta(s_t)\big)^2$, where $T_t$ is some target for the value of $s_t$.
We then update the parameters in the direction of the negative gradient: $\theta_{t+1} = \theta_t - \alpha_t \nabla_\theta E_t(\theta_t)$, where $\alpha_t$ is a step-size parameter.
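A minimal sketch of this update for a linear value function and a squared prediction error; the step size, features, and target value are placeholders.

```python
# A gradient-descent step on E(theta) = 0.5 * (target - V_theta(s))^2 for a linear value function.
import numpy as np

alpha = 0.1                              # step size
theta = np.zeros(3)                      # parameters of V_theta(s) = theta^T phi(s)

phi = np.array([1.0, 0.5, -0.2])         # features of some state s
target = 1.0                             # a fixed target value for that state

# The gradient of E w.r.t. theta is -(target - V) * phi, so the negative-gradient step is:
error = target - theta @ phi
theta = theta + alpha * error * phi
print(theta)
```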
If the parameter space is a curved space, it is more appropriate to use $\tilde{\nabla}_\theta E_t(\theta) = G_\theta^{-1} \nabla_\theta E_t(\theta)$, which is known as the natural gradient. In general, the best choice for the matrix $G_\theta$ depends on the parameterization; when the parameters define a probability distribution, a common choice is the Fisher information matrix.
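As a small concrete example (not from the original text), here is a natural-gradient step for a 1-D Gaussian parameterized by its mean and standard deviation, for which the Fisher information matrix is known in closed form; the gradient and step size are placeholders.

```python
# A natural-gradient step for a 1-D Gaussian N(mu, sigma^2),
# whose Fisher information matrix is F = diag(1/sigma^2, 2/sigma^2).
import numpy as np

mu, sigma = 0.0, 1.0
grad = np.array([0.3, -0.1])                    # some ordinary gradient w.r.t. (mu, sigma)

F = np.diag([1.0 / sigma**2, 2.0 / sigma**2])   # Fisher information of N(mu, sigma^2)
natural_grad = np.linalg.solve(F, grad)         # F^{-1} grad

beta = 0.1                                      # step size
mu, sigma = np.array([mu, sigma]) + beta * natural_grad
print(mu, sigma)
```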
Gradient-Free Optimization
Gradient-free methods are useful when the function that is being optimized is not differentiable or when many local optima are expected to exist. Many general global optimization methods exist, including evolutionary algorithms. Details about evolutionary algorithms are beyond the scope of this note.
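As a simple illustration of a gradient-free method, here is a sketch of random-search hill climbing on a placeholder objective; in a reinforcement-learning setting the objective would be an (often noisy) estimate of the return obtained with the given parameters.

```python
# A minimal gradient-free (random-search / hill-climbing) update of a parameter vector.
import numpy as np

rng = np.random.default_rng(0)

def f(params):
    # Placeholder objective; in RL this would be an estimate of the return of the parameterized policy.
    return -np.sum((params - 1.0) ** 2)

params = np.zeros(5)
best = f(params)
for _ in range(200):
    candidate = params + rng.normal(scale=0.1, size=params.shape)  # random perturbation
    score = f(candidate)
    if score > best:                     # keep the perturbation only if it improves the objective
        params, best = candidate, score
print(params.round(2), best)
```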
Approximate Reinforcement Learning
Value Approximation
In value-approximation algorithms, experience samples are used to update a value function that approximates the value of the current or the optimal policy. Many reinforcement learning algorithms fall into this category. Important differences between algorithms within this category are whether they are on-policy or off-policy and whether they update online or offline. Online algorithms are sometimes more sample-efficient in control problems.
In order to update a value function with gradient descent, we must choose some measure of error that we can minimize. This measure is often referred to as the objective function. We generalize standard temporal-difference (TD) learning to a gradient update on the parameters of a function approximator. The tabular TD-learning update is

$V_{t+1}(s_t) = V_t(s_t) + \alpha_t \delta_t$,

with the temporal-difference error

$\delta_t = r_{t+1} + \gamma V_t(s_{t+1}) - V_t(s_t)$.
However, if the value function is represented by a parameterized function $V_\theta$, we cannot update the value of each state separately, since changing the parameters affects the values of many states at once. Still, we can interpret the tabular update as a stochastic gradient-descent step on the squared error between the value estimate and the target $r_{t+1} + \gamma V_t(s_{t+1})$, where the target is treated as a constant. This implies the parameters can be updated as

$\theta_{t+1} = \theta_t + \alpha_t \delta_t \nabla_\theta V_{\theta_t}(s_t)$.
Similarly, the updates for action-value algorithms such as Sarsa and Q-learning are

$\theta_{t+1} = \theta_t + \alpha_t \delta_t \nabla_\theta Q_{\theta_t}(s_t, a_t)$,

with $\delta_t = r_{t+1} + \gamma Q_t(s_{t+1}, a_{t+1}) - Q_t(s_t, a_t)$ for Sarsa and $\delta_t = r_{t+1} + \gamma \max_a Q_t(s_{t+1}, a) - Q_t(s_t, a_t)$ for Q-learning.
Parameters updated with the equations above may diverge when off-policy updates are used. This holds for any temporal-difference method that combines bootstrapping and function approximation with off-policy sampling. So-called gradient temporal-difference methods have been proposed to address this issue; see:
- Maei, H.R., Sutton, R.S.: GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In: Proceedings of the Third Conference on Artificial General Intelligence (AGI-2010), pp. 91–96. Atlantis Press, Lugano (2010).
- Sutton, R.S., Maei, H.R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, C., Wiewiora, E.: Fast gradient-descent methods for temporal-difference learning with linear function approximation. In: Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009), pp. 993–1000. ACM (2009).
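To make the updates above concrete, here is a sketch of on-policy TD(0) with a linear value function; the environment interface (reset / step / sample_action) and the feature map are assumptions for illustration.

```python
# On-policy TD(0) with a linear value function:
# theta <- theta + alpha * delta * grad V_theta(s) = theta + alpha * delta * phi(s).
import numpy as np

def td0(env, features, n_features, gamma=0.95, alpha=0.1, n_episodes=100):
    theta = np.zeros(n_features)
    for _ in range(n_episodes):
        s = env.reset()
        done = False
        while not done:
            a = env.sample_action()                          # assumed: act with some fixed policy
            s_next, r, done = env.step(a)
            phi, phi_next = features(s), features(s_next)
            delta = r + (0.0 if done else gamma * (theta @ phi_next)) - theta @ phi   # TD error
            theta += alpha * delta * phi                     # semi-gradient parameter update
            s = s_next
    return theta
```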
It is nontrivial to extend the standard online temporal difference algorithms to continuous action spaces. The algorithms in the next section are usually much better suited for use in problems with continuous actions.
Policy Approximation
If the action space is continuous, finding the greedy action in each state can be nontrivial and time-consuming.
Policy Gradient Algorithms
The idea of policy-gradient algorithms is to update the policy parameters $\psi$ with gradient ascent on the expected cumulative reward $J(\psi) = \mathbb{E}\big[\sum_{t} \gamma^{t} r_{t+1} \mid \pi_\psi\big]$:

$\psi_{k+1} = \psi_k + \beta_k \nabla_\psi J(\psi_k)$.

As a practical alternative, we can use a stochastic gradient update, replacing the exact gradient with a sampled estimate:

$\psi_{k+1} = \psi_k + \beta_k \widehat{\nabla_\psi J}(\psi_k)$.
Such procedures can at best hope to find a local optimum, because they use the gradient of a value function that is usually not convex in the policy parameters. Define a trajectory as a sequence of states and actions, $\tau = (s_0, a_0, s_1, a_1, \ldots)$; under policy $\pi_\psi$ its probability is $p(\tau \mid \psi) = p(s_0) \prod_{t} \pi_\psi(s_t, a_t)\, P(s_{t+1} \mid s_t, a_t)$. Moreover, since only the policy terms depend on $\psi$, the gradient of the log-probability of a trajectory does not require knowledge of the transition model: $\nabla_\psi \log p(\tau \mid \psi) = \sum_{t} \nabla_\psi \log \pi_\psi(s_t, a_t)$. This only holds if the policy is stochastic. In most cases this is not a big problem, since stochastic policies are needed anyway to ensure sufficient exploration.
Two common examples of stochastic policies for policy-gradient algorithms are listed below (a small sketch of Gaussian exploration follows the list):
- Boltzmann exploration: a softmax distribution over action preferences, mostly used for discrete actions.
- Gaussian exploration: actions are drawn from a Gaussian distribution centered on the current policy output, which is natural for continuous actions.
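Here is a minimal sketch of Gaussian exploration for a 1-D continuous action, with the policy mean linear in the state features, in which case the gradient of the log-policy has a simple closed form. The fixed exploration width is an arbitrary choice.

```python
# Gaussian exploration: a ~ N(mean, sigma^2) with mean = psi^T phi(s),
# so grad_psi log pi(s, a) = ((a - mean) / sigma^2) * phi(s).
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                                 # fixed exploration width (an arbitrary choice)

def sample_action(psi, phi):
    return rng.normal(psi @ phi, sigma)

def grad_log_pi(psi, phi, a):
    mean = psi @ phi
    return (a - mean) / sigma**2 * phi      # d/d psi of log N(a; mean, sigma^2)

psi = np.zeros(3)
phi = np.array([1.0, 0.5, -0.2])
a = sample_action(psi, phi)
print(a, grad_log_pi(psi, phi, a))
```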
Since we cannot compute the expected cumulative discounted reward exactly, we estimate the gradient from samples. For instance, if the task is episodic we can take a Monte Carlo sample that gives the cumulative (possibly discounted) reward $R(\tau)$ of each episode and update with $\psi_{k+1} = \psi_k + \beta_k\, R(\tau_k) \sum_t \nabla_\psi \log \pi_{\psi_k}(s_t, a_t)$.
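A sketch of this episodic Monte Carlo policy-gradient update using the Gaussian policy above; the environment interface and the step sizes are assumptions.

```python
# One episodic Monte Carlo policy-gradient (REINFORCE-style) update with a Gaussian policy.
import numpy as np

def reinforce_episode(env, features, psi, beta=0.01, gamma=0.99, sigma=0.5):
    rng = np.random.default_rng()
    grads, rewards = [], []
    s, done = env.reset(), False
    while not done:
        phi = features(s)
        mean = psi @ phi
        a = rng.normal(mean, sigma)                   # Gaussian exploration
        s, r, done = env.step(a)
        grads.append((a - mean) / sigma**2 * phi)     # grad of log pi for this step
        rewards.append(r)
    R = sum((gamma**t) * r for t, r in enumerate(rewards))   # (discounted) episode return
    return psi + beta * R * np.sum(grads, axis=0)     # stochastic gradient-ascent step
```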
Actor Critic Algorithms
The variance of this Monte Carlo estimate of the gradient can be quite high, which can make learning slow. Actor-critic algorithms reduce this variance by also learning a value function (the critic) and using it in place of, or as a baseline for, the Monte Carlo return. Actor-critic algorithms typically use a temporal-difference algorithm to update the parameters $\theta$ of the critic, and update the parameters $\psi$ of the actor with, for instance,

$\psi_{t+1} = \psi_t + \beta_t \delta_t \nabla_\psi \log \pi_{\psi_t}(s_t, a_t)$,

where $\delta_t$ is the temporal-difference error produced by the critic.
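A sketch of a single actor-critic update with a linear critic and the Gaussian actor above; all names and step sizes are illustrative.

```python
# One actor-critic update: TD(0) for the critic, delta * grad log pi for the actor.
import numpy as np

def actor_critic_step(theta, psi, phi, phi_next, a, r, done,
                      alpha=0.1, beta=0.01, gamma=0.99, sigma=0.5):
    delta = r + (0.0 if done else gamma * (theta @ phi_next)) - theta @ phi   # critic's TD error
    theta = theta + alpha * delta * phi                                       # critic update (TD(0))
    mean = psi @ phi
    psi = psi + beta * delta * (a - mean) / sigma**2 * phi                    # actor: delta * grad log pi
    return theta, psi
```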
Cacla (continuous actor-critic learning automaton) is a special actor-critic algorithm that uses an error in action space rather than in parameter or policy space, and it uses the sign of the temporal-difference error rather than its size. During learning, it is assumed that there is exploration, for instance Gaussian exploration around the output of the actor. As in many other actor-critic algorithms, if the temporal-difference error is positive, the action that was taken is considered better than expected and the actor is updated toward it.
An update to the actor only occurs when the temporal-difference error is positive. To see why, consider as an extreme case an actor that already outputs the optimal action in each state of some deterministic MDP. For most exploring actions, the temporal-difference error is then negative. If the actor were updated away from such an action, its output would almost certainly no longer be optimal.
This is an important difference between Cacla and policy-gradient methods: Cacla only updates its actor when actual improvements have been observed. This avoids slow learning when there are plateaus in the value space and the temporal-difference errors are small. Intuitively, it also makes sense that the distance to a promising action, rather than the size of the temporal-difference error, determines how much the actor's output changes.
A simple version of the Cacla algorithm is sketched below.
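The following is a minimal sketch of Cacla with a linear critic and a linear actor, following the description above: the critic learns with TD(0), and the actor is updated toward the exploratory action that was actually taken, but only when the temporal-difference error is positive. The environment interface, the Gaussian exploration width, and the step sizes are assumptions rather than part of the original algorithm listing.

```python
# A minimal Cacla sketch with a linear critic V(s) = theta^T phi(s) and linear actor Ac(s) = psi^T phi(s).
import numpy as np

def cacla(env, features, n_features, alpha=0.1, beta=0.1, gamma=0.99,
          sigma=0.5, n_episodes=200):
    rng = np.random.default_rng(0)
    theta = np.zeros(n_features)            # critic parameters
    psi = np.zeros(n_features)              # actor parameters
    for _ in range(n_episodes):
        s, done = env.reset(), False
        while not done:
            phi = features(s)
            a = rng.normal(psi @ phi, sigma)                 # Gaussian exploration around the actor
            s_next, r, done = env.step(a)
            phi_next = features(s_next)
            delta = r + (0.0 if done else gamma * (theta @ phi_next)) - theta @ phi
            theta += alpha * delta * phi                     # critic: TD(0) update
            if delta > 0:                                    # actor: move toward the action taken,
                psi += beta * (a - psi @ phi) * phi          # but only if it looked better than expected
            s = s_next
    return theta, psi
```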
For more details about actor-critic algorithms, refer to:
- van Hasselt, H.P.,Wiering, M.A.: Using continuous action spaces to solve discrete problems. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2009), pp. 1149–1156 (2009).
- http://blog.csdn.net/philthinker/article/details/71104095