Sarsa(λ) and Q(λ) in the Tabular Case

Thanks to R. S. Sutton and A. G. Barto for their great work in Reinforcement Learning: An Introduction.

Eligibility Traces in Prediction Problems

In the backward view of TD(λ), there is a memory variable associated with each state: its eligibility trace. The eligibility trace for state $s$ at time $t$ is a random variable denoted $Z_t(s) \in \mathbb{R}^+$. On each step, the eligibility traces for all states decay by γλ, and the eligibility trace for the one state visited on the step is incremented by 1:

$$Z_t(s) = \begin{cases} \gamma\lambda Z_{t-1}(s) & \text{if } s \neq S_t \\ \gamma\lambda Z_{t-1}(s) + 1 & \text{if } s = S_t \end{cases}$$

for all nonterminal states $s$, where γ is the discount rate and λ is the λ-return¹ parameter, also called the trace-decay parameter. This kind of eligibility trace is called an accumulating trace. The global TD error signal triggers proportional updates to all recently visited states, as signaled by their nonzero traces:
$$\Delta V_t(s) = \alpha\delta_t Z_t(s), \quad \text{for all } s \in \mathcal{S}$$

where
$$\delta_t = R_{t+1} + \gamma V_t(S_{t+1}) - V_t(S_t)$$

A complete prediction algorithm for on-line TD(λ) is given as follows:
  1. Initialize V(s) arbitrarily.
  2. Repeat for each episode:
    1. Initialize Z(s) = 0 for all s ∈ S; initialize S.
    2. Repeat for each step of episode:
      1. A ← action given by π for S.
      2. Take action A; observe reward R and the next state S′.
      3. δ ← R + γV(S′) - V(S)
      4. Z(S) ← Z(S) + 1
      5. For all s ∈ S:
        1. V(s) ← V(s) + αδZ(s)
        2. Z(s) ← γλZ(s)
      6. S ← S′
    3. Until S is terminal.
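
To make the procedure concrete, here is a minimal Python sketch of on-line TD(λ) with accumulating traces. The `env` object (with `reset()` and `step()` returning the next state, reward, and a done flag) and the `policy` callable are assumed interfaces for illustration, not part of the book's pseudocode:

```python
import numpy as np

def td_lambda(env, policy, n_states, alpha=0.1, gamma=0.99, lam=0.9, n_episodes=500):
    """On-line TD(lambda) prediction with accumulating eligibility traces."""
    V = np.zeros(n_states)              # initialize V(s) arbitrarily (here: zeros)
    for _ in range(n_episodes):
        Z = np.zeros(n_states)          # reset traces at the start of each episode
        s = env.reset()
        done = False
        while not done:
            a = policy(s)                        # A <- action given by pi for S
            s_next, r, done = env.step(a)        # observe R and S'
            v_next = 0.0 if done else V[s_next]  # terminal states have value 0
            delta = r + gamma * v_next - V[s]    # TD error
            Z[s] += 1.0                          # accumulating trace for S
            V += alpha * delta * Z               # update every state with a nonzero trace
            Z *= gamma * lam                     # decay all traces
            s = s_next
    return V
```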

Sarsa(λ)

The main idea of control is simply to learn action values rather than state values. The idea in Sarsa(λ) is to apply the TD(λ) prediction method to state-action pairs. Let $Z_t(s,a)$ denote the trace for state-action pair $(s,a)$. Substituting state-action variables for state variables yields

$$Q_{t+1}(s,a) = Q_t(s,a) + \alpha\delta_t Z_t(s,a), \quad \text{for all } s, a$$

where
$$\delta_t = R_{t+1} + \gamma Q_t(S_{t+1}, A_{t+1}) - Q_t(S_t, A_t)$$

and
$$Z_t(s,a) = \begin{cases} \gamma\lambda Z_{t-1}(s,a) + 1 & \text{if } s = S_t \text{ and } a = A_t \\ \gamma\lambda Z_{t-1}(s,a) & \text{otherwise} \end{cases} \quad \text{for all } s, a$$

The complete Sarsa(λ) algorithm is given as follows:
  1. Initialize Q(s,a) arbitrarily for all s ∈ S, a ∈ A(s).
  2. Repeat for each episode:
    1. Z(s,a) = 0 for all s ∈ S, a ∈ A(s); initialize S, A.
    2. Repeat for each step of episode:
      1. Take action A; observe R, S′.
      2. Choose A′ from S′ using policy derived from Q (e.g., ε-greedy).
      3. δ ← R + γQ(S′,A′) - Q(S,A)
      4. Z(S,A) ← Z(S,A) + 1
      5. For all s ∈ S, a ∈ A(s):
        1. Q(s,a) ← Q(s,a) + αδZ(s,a)
        2. Z(s,a) ← γλZ(s,a)
      6. S ← S′; A ← A′
    3. Until S is terminal.
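
A minimal Python sketch of this algorithm follows, using an ε-greedy policy as the policy derived from Q; the `env` interface is the same hypothetical one as in the TD(λ) sketch above:

```python
import numpy as np

def sarsa_lambda(env, n_states, n_actions, alpha=0.1, gamma=0.99,
                 lam=0.9, epsilon=0.1, n_episodes=500):
    """Tabular Sarsa(lambda) with accumulating traces and an epsilon-greedy policy."""
    Q = np.zeros((n_states, n_actions))      # initialize Q(s,a) arbitrarily

    def eps_greedy(s):
        # Policy derived from Q: random action with prob. epsilon, else greedy.
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(n_episodes):
        Z = np.zeros((n_states, n_actions))  # reset traces each episode
        s = env.reset()                      # initialize S
        a = eps_greedy(s)                    # initialize A
        done = False
        while not done:
            s_next, r, done = env.step(a)        # take A, observe R, S'
            a_next = eps_greedy(s_next)          # choose A' from S'
            q_next = 0.0 if done else Q[s_next, a_next]
            delta = r + gamma * q_next - Q[s, a]
            Z[s, a] += 1.0                       # accumulating trace
            Q += alpha * delta * Z               # update all state-action pairs
            Z *= gamma * lam                     # decay all traces
            s, a = s_next, a_next
    return Q
```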

Q(λ)

Two different methods have been proposed that combine eligibility traces and Q-learning: Watkins's Q(λ) and Peng's Q(λ). Here we focus on Watkins's Q(λ) and give only a brief description of Peng's Q(λ).

Unlike TD(λ) and Sarsa(λ), Watkins's Q(λ) does not look ahead all the way to the end of the episode in its backup. It only looks ahead as far as the next exploratory action. For example, suppose the first action, $A_{t+1}$, is exploratory. Watkins's Q(λ) would still do the one-step update of $Q_t(S_t, A_t)$ toward $R_{t+1} + \gamma \max_a Q(S_{t+1}, a)$. In general, if $A_{t+n}$ is the first exploratory action, then the longest backup is toward

$$R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{n-1} R_{t+n} + \gamma^n \max_a Q(S_{t+n}, a)$$

where we assume off-line updating.

The mechanistic, or backward, view of Watkins's Q(λ) is also very simple. Eligibility traces are used just as in Sarsa(λ), except that they are set to zero whenever an exploratory action is taken. That is, first the traces for all state-action pairs are either decayed by γλ or, if an exploratory action was taken, set to 0; second, the trace corresponding to the current state and action is incremented by 1:

$$Z_t(s,a) = \mathbb{I}_{sS_t} \cdot \mathbb{I}_{aA_t} + \begin{cases} \gamma\lambda Z_{t-1}(s,a) & \text{if } Q_{t-1}(S_t, A_t) = \max_a Q_{t-1}(S_t, a) \\ 0 & \text{otherwise} \end{cases}$$

where $\mathbb{I}_{xy}$ is an identity indicator function, equal to 1 if $x = y$ and 0 otherwise. The rest of the algorithm is defined by
$$Q_{t+1}(s,a) = Q_t(s,a) + \alpha\delta_t Z_t(s,a), \quad \text{for all } s \in \mathcal{S}, a \in \mathcal{A}(s)$$

where
$$\delta_t = R_{t+1} + \gamma \max_a Q_t(S_{t+1}, a) - Q_t(S_t, A_t)$$

The complete algorithm is given below:
  1. Initialize Q(s,a) arbitrarily for all s ∈ S, a ∈ A(s).
  2. Repeat for each episode:
    1. Z(s,a) = 0 for all s ∈ S, a ∈ A(s); initialize S, A.
    2. Repeat for each step of episode:
      1. Take action A; observe R, S′.
      2. Choose A′ from S′ using policy derived from Q (e.g., ε-greedy).
      3. A* ← argmax_a Q(S′,a) (if A′ ties for the max, then A* ← A′).
      4. δ ← R + γQ(S′,A*) - Q(S,A)
      5. Z(S,A) ← Z(S,A) + 1
      6. For all s ∈ S, a ∈ A(s):
        1. Q(s,a) ← Q(s,a) + αδZ(s,a)
        2. If A′ = A*, then Z(s,a) ← γλZ(s,a); else Z(s,a) ← 0.
      7. S ← S′; A ← A′
    3. Until S is terminal.
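
Below is a minimal Python sketch along the same lines as the Sarsa(λ) one; again the `env` interface and ε-greedy exploration are illustrative assumptions. The only changes from Sarsa(λ) are the greedy action A* in the TD error and the trace cut after exploratory actions:

```python
import numpy as np

def watkins_q_lambda(env, n_states, n_actions, alpha=0.1, gamma=0.99,
                     lam=0.9, epsilon=0.1, n_episodes=500):
    """Tabular Watkins's Q(lambda): traces are cut after exploratory actions."""
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(n_episodes):
        Z = np.zeros((n_states, n_actions))
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)       # take A, observe R, S'
            a_next = eps_greedy(s_next)         # choose A' from S'
            a_star = int(np.argmax(Q[s_next]))  # greedy action A*
            if Q[s_next, a_next] == Q[s_next, a_star]:
                a_star = a_next                 # if A' ties for the max, A* <- A'
            q_next = 0.0 if done else Q[s_next, a_star]
            delta = r + gamma * q_next - Q[s, a]  # TD error toward the greedy target
            Z[s, a] += 1.0
            Q += alpha * delta * Z
            if a_next == a_star:
                Z *= gamma * lam                # greedy step: decay traces as usual
            else:
                Z[:] = 0.0                      # exploratory step: cut all traces
            s, a = s_next, a_next
    return Q
```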

Unfortunately, cutting off traces every time an exploratory action is taken loses much of the advantage of using eligibility traces. Peng's Q(λ) is an alternative version of Q(λ) meant to remedy this; it can be thought of as a hybrid of Sarsa(λ) and Watkins's Q(λ). In Peng's Q(λ), there is no distinction between exploratory and greedy actions. Each component backup is over many steps of actual experience, and all but the last are capped by a final maximization over actions. For a fixed nongreedy policy, $Q_t$ converges to neither $q_\pi$ nor $q_*$, but to some hybrid of the two. However, if the policy is gradually made more greedy, then the method may still converge to $q_*$.

Replacing Traces

In some cases, significantly better performance can be obtained by using a slightly modified kind of trace known as a replacing trace:

$$Z_t(s) = \begin{cases} \gamma\lambda Z_{t-1}(s) & \text{if } s \neq S_t \\ 1 & \text{if } s = S_t \end{cases}$$

There are several possible ways to generalize replacing eligibility traces for use in control methods. In some cases, the state-action traces are updated as follows:
$$Z_t(s,a) = \begin{cases} 1 & \text{if } s = S_t \text{ and } a = A_t \\ 0 & \text{if } s = S_t \text{ and } a \neq A_t \\ \gamma\lambda Z_{t-1}(s,a) & \text{if } s \neq S_t \end{cases}$$
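
As a sketch, this control-case replacing-trace update could be written as below, assuming `Z` is an (n_states, n_actions) NumPy array and `(s_t, a_t)` is the pair just visited (the helper name is hypothetical):

```python
import numpy as np

def replacing_trace_update(Z, s_t, a_t, gamma, lam):
    """One replacing-trace update for the control case (hypothetical helper).

    Z is an (n_states, n_actions) trace array and (s_t, a_t) is the
    state-action pair just visited.
    """
    Z *= gamma * lam   # s != S_t: decay as usual
    Z[s_t, :] = 0.0    # s = S_t, a != A_t: zero the traces of the other actions
    Z[s_t, a_t] = 1.0  # s = S_t, a = A_t: replace the trace with 1
    return Z
```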

  1. λ-return: $G_t^\lambda = (1-\lambda) \sum_{n=1}^{\infty} \lambda^{n-1} G_t^{(n)}$