MC is model-free: no knowledge of MDP transitions or rewards is required.
MC learns from complete episodes: no bootstrapping.
Here bootstrapping means using one or more estimated values in the update step for the same kind of estimated value, as in \(SARSA(\lambda)\) or \(Q(\lambda)\).
Single-step TD, by contrast, bootstraps from the estimated value of the next state; \(TD(\lambda)\) methods mix bootstrapped results from trajectories of different lengths.
MC uses the simplest possible idea: value = mean return
2 ways:
model-free: no model necessary and still attains optimality
Simulated: needs only a simulation, not a full model
Caveat: can only apply MC to episodic MDPs
All episodes must terminate
policy evaluation
The goal is to learn \(v_\pi\) from episodes of experience under policy \(\pi\)
\[S_1, A_1, R_2, \ldots, S_k \sim \pi\]
and the return is the total discounted reward: \(G_t = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{T-1} R_T\).
The value function is the expected return: \(v_\pi(s) = \mathbb{E}_{\pi}[G_t \mid S_t = s]\)
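For instance, a minimal worked example (rewards and discount chosen here purely for illustration): with \(R_{t+1}=1\), \(R_{t+2}=2\), \(R_{t+3}=3\), termination right after, and \(\gamma=0.5\),
\[G_t = 1 + 0.5\cdot 2 + 0.25\cdot 3 = 2.75.\]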
P.S. Monte-Carlo policy evaluation uses the empirical mean return rather than the expected return.
Depending on how repeated visits to a state within an episode are handled, the method splits into first-visit and every-visit variants; both converge asymptotically.
First-Visit Monte-Carlo Policy Evaluation
proof of convergence
In effect the running mean \(V(s) = \frac{S(s)}{N(s)}\) is the estimate, where \(N(s)\) counts first visits to \(s\) and \(S(s)\) accumulates the corresponding returns.
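A minimal sketch of first-visit MC policy evaluation, assuming each episode is given as a list of `(state, reward)` pairs where the reward is the one received after leaving that state (the episode format and function name are illustrative, not from the notes):

```python
from collections import defaultdict

def first_visit_mc(episodes, gamma=1.0):
    """Estimate V(s) as the mean of first-visit returns."""
    N = defaultdict(int)      # N(s): number of first visits
    S = defaultdict(float)    # S(s): sum of first-visit returns
    for episode in episodes:
        # Compute the return G_t for every time step by scanning backwards.
        G = 0.0
        returns = []
        for state, reward in reversed(episode):
            G = reward + gamma * G
            returns.append((state, G))
        returns.reverse()
        # Only the first visit to each state in this episode contributes.
        seen = set()
        for state, G in returns:
            if state not in seen:
                seen.add(state)
                N[state] += 1
                S[state] += G
    return {s: S[s] / N[s] for s in N}   # V(s) = S(s) / N(s)
```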
blackjack (21 points)
In Sutton's book, the resulting value-function plots for this example are shown.
Every-Visit Monte-Carlo Policy Evaluation
The only difference is that every visit to a state within an episode contributes a return, not just the first visit.
Incremental Mean
The means \(\mu_1, \mu_2, \ldots\) of a sequence \(x_1, x_2, \ldots\) can be computed incrementally by \(\mu_k = \mu_{k-1} + \frac{1}{k}(x_k - \mu_{k-1})\).
Incremental MC updates: after each episode, for every visited state \(S_t\) with return \(G_t\), set \(N(S_t) \leftarrow N(S_t) + 1\) and \(V(S_t) \leftarrow V(S_t) + \frac{1}{N(S_t)}\bigl(G_t - V(S_t)\bigr)\); for non-stationary problems a constant step size gives \(V(S_t) \leftarrow V(S_t) + \alpha\bigl(G_t - V(S_t)\bigr)\).
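A minimal sketch of this update rule, assuming dict-based tables `V` and `N` (the function name is illustrative):

```python
def mc_update(V, N, state, G, alpha=None):
    """Shift V(state) toward the observed return G.

    With alpha=None this reproduces the running mean (step size 1/N);
    a constant alpha instead tracks non-stationary problems.
    """
    N[state] = N.get(state, 0) + 1
    step = alpha if alpha is not None else 1.0 / N[state]
    v = V.get(state, 0.0)
    V[state] = v + step * (G - v)
```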
Monte-Carlo Estimation of Action Values
Backup Diagram for Monte-Carlo
Similar to a bandit problem, the agent faces the explore/exploit dilemma, except that the payoff is the return over the entire rest of the episode. The MC backup follows one sampled trajectory to the end of the episode, considering only the action actually taken at each state, and it does not bootstrap (unlike DP).
Time required to estimate one state does not depend on the total number of states.
Temporal-Difference Learning
Bootstrapping
The saying: to "pull oneself up by one's bootstraps", i.e. to lift yourself by your own boot straps, which is impossible to complete.
Modern definition: in statistics, a resampling technique that repeatedly draws from the same sample.
intro
TD methods learn directly from episodes of experience
TD is model-free: no knowledge of MDP transitions/rewards
TD learns from incomplete episodes, by bootstrapping
TD updates a guess towards a guess
The number in \(n\)-step TD indicates how many steps the method looks ahead before bootstrapping; \(TD(0)\) bootstraps after a single step.
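A sketch of the tabular TD(0) backup toward the bootstrapped target \(R_{t+1} + \gamma V(S_{t+1})\) (function and argument names are illustrative):

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=1.0, terminal=False):
    """One TD(0) backup: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    v_next = 0.0 if terminal else V.get(s_next, 0.0)   # terminal states have value 0
    td_error = r + gamma * v_next - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * td_error
```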
driving home example
TD is more flexible: MC has to wait for the final outcome before it can update. (On-policy vs. off-policy learning is a separate axis of comparison.)
In the driving-home example, MC updates each estimate toward the final outcome, whereas TD updates it toward the next (nearest) prediction.
From the batch data of the AB example we can actually build the estimated MDP (the certainty-equivalence model) and compare the answers MC and TD converge to.
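A small sketch for the AB example as given in Sutton & Barto (eight episodes: A,0,B,0; six times B,1; and B,0), comparing the batch MC answer with the certainty-equivalence answer that batch TD(0) converges to:

```python
# Eight observed episodes, each a list of (state, reward) pairs.
episodes = [[("A", 0), ("B", 0)]] + [[("B", 1)]] * 6 + [[("B", 0)]]

# Batch Monte-Carlo: average the observed (undiscounted) returns per state.
returns = {"A": [], "B": []}
for ep in episodes:
    G = 0
    for state, reward in reversed(ep):
        G += reward
        returns[state].append(G)
V_mc = {s: sum(g) / len(g) for s, g in returns.items()}
print(V_mc)   # {'A': 0.0, 'B': 0.75}

# Certainty-equivalence (what batch TD(0) converges to): the estimated
# model says A always transitions to B with reward 0, so V(A) = 0 + V(B).
V_b = V_mc["B"]
V_td = {"A": 0 + V_b, "B": V_b}
print(V_td)   # {'A': 0.75, 'B': 0.75}
```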
Comparison
TD can learn before knowing the final outcome
TD can learn online after every step (less memory & peak computation)
MC must wait until end of episode before return is known
TD can learn without the final outcome
TD can learn from incomplete sequences
MC can only learn from complete sequences
TD works in continuing (non-terminating) environments
MC only works for episodic (terminating) environment
### Result: TD performs better than MC on the random-walk example
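A minimal simulation of the five-state random walk (Sutton & Barto, Example 6.2) comparing the RMS error of constant-\(\alpha\) MC and TD(0); the step sizes and episode count are arbitrary illustrative choices:

```python
import random

TRUE_V = [i / 6 for i in range(1, 6)]      # true values of non-terminal states 0..4

def run_episode():
    """Walk from the centre state until termination; return visited states and final reward."""
    s, states = 2, [2]
    while True:
        s += random.choice((-1, 1))
        if s < 0:
            return states, 0.0             # terminated on the left (reward 0)
        if s > 4:
            return states, 1.0             # terminated on the right (reward +1)
        states.append(s)

def rms(V):
    return (sum((V[s] - TRUE_V[s]) ** 2 for s in range(5)) / 5) ** 0.5

def experiment(method, alpha, episodes=100):
    V = [0.5] * 5                          # initial value estimates
    for _ in range(episodes):
        states, final_r = run_episode()
        for i, s in enumerate(states):
            if method == "MC":
                target = final_r           # undiscounted return = terminal reward
            elif i + 1 < len(states):
                target = V[states[i + 1]]  # TD(0): reward 0 + value of next state
            else:
                target = final_r           # TD(0) at the last step: terminal value is 0
            V[s] += alpha * (target - V[s])
    return rms(V)

print("constant-alpha MC:", experiment("MC", alpha=0.02))
print("TD(0)            :", experiment("TD", alpha=0.10))
```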
Reinforcement Learning is an area of machine learning inspired by behavioral psychology, concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.
Behavioral Psychology
Behavior is primarily shaped by reinforcement rather than free-will.
behaviors that result in praise/pleasure tend to repeat
behaviors that result in punishment/pain tend to become extinct
agent
An entity (learner & decision maker) equipped with sensors, end-effectors, and goals
Action
Used by the agent to interact with the environment.
May have many different temporal granularities and abstractions
reward
A reward \(R_t\) is a scalar feedback signal
Indicates how well agent is doing at step t
The agent’s job is to maximize cumulative reward
hypothesis: All goals can be described by the maximization of expected cumulative reward
Main Topics of Reinforcement Learning
Learning: by trial and error
Planning: search, reason, thought, cognition
Prediction: evaluation functions, knowledge
Control: action selection, decision making
Dynamics: how the state changes given the actions of the agent
Theorem: Suppose UCB1 is run as above. Then its expected cumulative regret after \(T\) rounds is at most $\displaystyle 8 \sum_{i : \mu_i < \mu^*} \frac{\log T}{\Delta_i} + \left ( 1 + \frac{\pi^2}{3} \right ) \left ( \sum_{j=1}^K \Delta_j \right )$
Okay, this looks like one nasty puppy, but it's actually not that bad. The first term of the sum signifies that we expect to play any suboptimal machine about a logarithmic number of times, roughly scaled by how hard it is to distinguish from the optimal machine. That is, if \(\Delta_i\) is small we will require more tries to know that action \(i\) is suboptimal, and hence we will incur more regret. The second term represents a small constant number (the $1 + \pi^2 / 3$ part) that caps the number of times we'll play suboptimal machines in excess of the first term due to unlikely events occurring. So the first term is like our expected losses, and the second is our risk.
But note that this is a worst-case bound on the regret. We’re not saying we will achieve this much regret, or anywhere near it, but that UCB1 simply cannot do worse than this. Our hope is that in practice UCB1 performs much better.
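A sketch of UCB1 itself, using the index \(\bar{x}_i + \sqrt{2\ln t / n_i}\); the arm interface (callables returning rewards in \([0, 1]\)) is an illustrative assumption:

```python
import math
import random

def ucb1(arms, rounds):
    """Play each arm once, then always pick the arm with the largest upper confidence bound."""
    K = len(arms)
    counts = [1] * K
    means = [arm() for arm in arms]                  # initialization: one pull per arm
    for t in range(K + 1, rounds + 1):
        ucb = [means[i] + math.sqrt(2 * math.log(t) / counts[i]) for i in range(K)]
        i = max(range(K), key=lambda j: ucb[j])      # arm with the largest index
        r = arms[i]()
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]       # incremental mean update
    return means, counts

# Example: three Bernoulli arms with unknown success probabilities.
arms = [lambda p=p: float(random.random() < p) for p in (0.2, 0.5, 0.7)]
means, counts = ucb1(arms, rounds=10_000)
print(counts)    # the best (0.7) arm should dominate the play counts
```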
Before we prove the theorem, let's see how to derive the bound mentioned above. This will require familiarity with multivariable calculus, but such things must be endured like ripping off a band-aid. First consider the regret as a function of the gaps \(\Delta_i\) (excluding of course the optimal action), and let's look at the worst case bound by maximizing it. In particular, we're just finding the problem with the parameters which screw our bound as badly as possible. The gradient of the regret function is given by
and it’s zero if and only if for each , . However this is a minimum of the regret bound (the Hessian is diagonal and all its eigenvalues are positive). Plugging in the (which are all the same) gives a total bound of . If we look at the only possible endpoint (the ), then we get a local maximum of . But this isn’t the we promised, what gives? Well, this upper bound grows arbitrarily large as the go to zero. But at the same time, if all the are small, then we shouldn’t be incurring much regret because we’ll be picking actions that are close to optimal!
Indeed, if we assume for simplicity that all the are the same, then another trivial regret bound is (why?). The true regret is hence the minimum of this regret bound and the UCB1 regret bound: as the UCB1 bound degrades we will eventually switch to the simpler bound. That will be a non-differentiable switch (and hence a critical point) and it occurs at . Hence the regret bound at the switch is , as desired.
Proving the Worst-Case Regret Bound
Proof. The proof works by finding a bound on , the expected number of times UCB chooses an action up to round . Using the notation, the regret is then just , and bounding the ‘s will bound the regret.
Recall the notation for our upper bound and let's loosen it a bit to so that we're allowed to "pretend" an action has been played times. Recall further that the random variable has as its value the index of the machine chosen. We denote by the indicator random variable for the event . And remember that we use an asterisk to denote a quantity associated with the optimal action (e.g., is the empirical mean of the optimal action).
Indeed for any action , the only way we know how to write down is as
The 1 is from the initialization where we play each action once, and the sum is the trivial thing where we just count the number of rounds in which we pick action . Now we're just going to pull some number of plays out of that summation, keep it variable, and try to optimize over it. Since we might play the action fewer than times overall, this requires an inequality.
These indicator functions should be read as sentences: we’re just saying that we’re picking action in round and we’ve already played at least times. Now we’re going to focus on the inside of the summation, and come up with an event that happens at least as frequently as this one to get an upper bound. Specifically, saying that we’ve picked action in round means that the upper bound for action exceeds the upper bound for every other action. In particular, this means its upper bound exceeds the upper bound of the best action (and might coincide with the best action, but that’s fine). In notation this event is
Denote the upper bound for action in round by . Since this event must occur every time we pick action (though not necessarily vice versa), we have
We’ll do this process again but with a slightly more complicated event. If the upper bound of action exceeds that of the optimal machine, it is also the case that the maximum upper bound for action we’ve seen after the first trials exceeds the minimum upper bound we’ve seen on the optimal machine (ever). But on round we don’t know how many times we’ve played the optimal machine, nor do we even know how many times we’ve played machine (except that it’s more than ). So we try all possibilities and look at minima and maxima. This is a pretty crude approximation, but it will allow us to write things in a nicer form.
Denote by the random variable for the empirical mean after playing action a total of times, and the corresponding quantity for the optimal machine. Realizing everything in notation, the above argument proves that
Indeed, at each for which the max is greater than the min, there will be at least one pair for which the values of the quantities inside the max/min will satisfy the inequality. And so, even worse, we can just count the number of pairs for which it happens. That is, we can expand the event above into the double sum which is at least as large:
We can make one other odd inequality by increasing the sum to go from to . This will become clear later, but it means we can replace with and thus have
Now that we’ve slogged through this mess of inequalities, we can actually get to the heart of the argument. Suppose that this event actually happens, that . Then what can we say? Well, consider the following three events:
(1)
(2)
(3)
In words, (1) is the event that the empirical mean of the optimal action is less than the lower confidence bound. By our Chernoff bound argument earlier, this happens with probability . Likewise, (2) is the event that the empirical mean payoff of action is larger than the upper confidence bound, which also occurs with probability . We will see momentarily that (3) is impossible for a well-chosen (which is why we left it variable), but in any case the claim is that one of these three events must occur. For if they are all false, we have
and
But putting these two inequalities together gives us precisely that (3) is true:
This proves the claim.
By the union bound, the probability that at least one of these events happens is plus whatever the probability of (3) being true is. But as we said, we’ll pick to make (3) always false. Indeed depends on which action is being played, and if then , and by the definition of we have
.
Now we can finally piece everything together. The expected value of an indicator random variable is just the probability of the corresponding event occurring, and so
The second line is the Chernoff bound we argued above, the third and fourth lines are relatively obvious algebraic manipulations, and the last equality uses the classic solution to the Basel problem. Plugging this upper bound into the regret formula we gave in the first paragraph of the proof establishes the bound and proves the theorem.
frequentist view: a long-run frequency over a large number of repetitions of an experiment.
Bayesian view: a degree of belief about the event in question.
We can assign probabilities to hypotheses like "the candidate will win the election" or "the defendant is guilty", even though such events can't be repeated.
Markov chain Monte Carlo (MCMC), together with increased computing power and better algorithms, is what made the Bayesian view thrive.
role
Conditional probability
Everything happens under some conditions, and conditions give rise to probabilities.
e.g. Conditioning -> DIVIDE & CONQUER -> recursively apply to multi-stage problems.
\(P(A \mid B) = \frac{P(A\ \text{and}\ B)}{P(B)}\)
Chain rule
Useful for distributed computation.
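For example, applying the chain rule to three events, each factor conditions only on earlier events, which is what makes the recursive, multi-stage decomposition possible:
\[P(A_1, A_2, A_3) = P(A_1)\,P(A_2 \mid A_1)\,P(A_3 \mid A_1, A_2).\]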
Inference & Bayes' Rules
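For reference, Bayes' rule in its basic form,
\[P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)},\]
with \(P(H)\) the prior, \(P(D \mid H)\) the likelihood, and \(P(H \mid D)\) the posterior.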
Probability distributions and limit theorems
PDF: probability density function
Mixed type (both discrete and continuous parts)
PDF
A valid PDF satisfies:
non-negative: \(f(x)\geq0\)
integrates to 1:
\(\int^{\infty}_{-\infty}f(x)dx=1\)
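A quick numerical sanity check of both conditions for a standard normal density (a sketch with numpy; the finite grid stands in for \((-\infty,\infty)\)):

```python
import numpy as np

x = np.linspace(-10, 10, 100001)              # wide grid approximating the real line
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)    # standard normal PDF

print(bool((f >= 0).all()))                   # non-negativity: True
print(np.trapz(f, x))                         # numerical integral: very close to 1
```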
probability distribution
summary of probability distribution
Three distance measures in ML, DL, and AI
Total variation distance
commonly used in GANs
Law of small numbers (rare events) for the Poisson distribution
The number of people going to a cafeteria can be modeled with a Poisson distribution.
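A sketch of the law of small numbers: a Binomial(n, p) count with large n and small p is approximately Poisson with rate \(\lambda = np\) (the numbers below are illustrative):

```python
import math

n, p = 1000, 0.003          # many potential diners, each with a small chance of coming
lam = n * p                 # Poisson rate: lambda = n * p = 3

def binom_pmf(k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k):
    return math.exp(-lam) * lam**k / math.factorial(k)

for k in range(6):          # the two PMFs agree closely
    print(k, round(binom_pmf(k), 4), round(poisson_pmf(k), 4))
```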
Sample mean
Strong law of large numbers (SLLN)
converges to the true probability value with probability one (almost-sure convergence)
Weak law of large numbers (WLLN)
convergence in probability
Central limit theorem (CLT)
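A small simulation illustrating both: the sample mean of coin flips drifts toward the true probability (LLN), and the standardized sample mean looks approximately normal (CLT); sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                                            # true success probability

# LLN: the sample mean of Bernoulli(p) draws approaches p as n grows.
for n in (10, 1_000, 100_000):
    print(n, rng.binomial(1, p, size=n).mean())

# CLT: standardized sample means are approximately N(0, 1).
n, reps = 500, 20_000
means = rng.binomial(1, p, size=(reps, n)).mean(axis=1)
z = (means - p) / np.sqrt(p * (1 - p) / n)
print("fraction with |Z| < 1.96:", (np.abs(z) < 1.96).mean())   # close to 0.95
```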
Generating function
PGF - Z
MGF - Laplace
CF - Fourier
APPLICATION
branching process
bridges complex analysis and probability
plays a role in large deviation theory
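For reference, for a random variable \(X\) the three transforms are
\[G_X(z) = \mathbb{E}[z^X], \qquad M_X(s) = \mathbb{E}[e^{sX}], \qquad \varphi_X(t) = \mathbb{E}[e^{itX}],\]
i.e. the PGF (Z-transform analogue, defined for non-negative integer-valued \(X\)), the MGF (Laplace-transform analogue), and the characteristic function (Fourier-transform analogue).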
## Multiple random variables
The joint distribution provides complete information about how multiple random variables interact in high-dimensional space.
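For example, the marginals are recovered from the joint by summing or integrating out the other variables, while the converse does not hold in general:
\[p_X(x) = \sum_y p_{X,Y}(x, y), \qquad f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy.\]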