\documentclass{article}
\usepackage{import} % to break down the code into modules
\import{}{layout.tex}
\title{{\Large Reinforcement Learning Cheat Sheet}}
\author{
\url{https://github.com/alxthm/rl-cheatsheet}
}
\begin{document}
\maketitle
% todo: check content of other cheatsheets
% - https://github.com/angerhang/reinforcement-learning-cheat-sheet/blob/master/src/en.pdf
% - https://github.com/linker81/Reinforcement-Learning-CheatSheet/blob/master/rl_cheatsheet.pdf
% - (useful for algorithms) https://github.com/udacity/rl-cheatsheet/blob/master/cheatsheet.pdf
\begin{abstract}
Some important concepts and algorithms in RL, all summarized in one place.
In addition to reading the original papers, these more comprehensive resources can also be helpful:
\begin{itemize}
\item \emph{Spinning Up in Deep RL}, by OpenAI (\href{https://spinningup.openai.com/en/latest/index.html}{link}). A very nice introduction to RL and policy gradients, with code, (some) proofs, exercises, and advice.
\item \emph{Reinforcement Learning and advanced Deep Learning (RLD)}, Sorbonne University course by Sylvain Lamprier (\href{https://dac.lip6.fr/master/rladl/}{link}). In French, with proofs for many results.
\item \emph{UCL Course on RL}, David Silver Lecture Notes \cite{silver2015}.
\end{itemize}
\end{abstract}
\setcounter{tocdepth}{3}
\tableofcontents{}
\newpage
\import{sections/}{1_bandits.tex}
\import{sections/}{2_framework.tex}
\import{sections/}{3_dp.tex}
\import{sections/}{4_value_based.tex}
\import{sections/}{5_policy_gradients.tex}
\medskip
\bibliographystyle{unsrt}
\footnotesize
\bibliography{biblio}
\end{document}