
REINFORCEMENT LEARNING AND OPTIMAL CONTROL

by Dimitri P. Bertsekas

Athena Scientific, 2019


This is a draft of a book that is scheduled to be finalized sometime within 2019, and to be published by Athena Scientific. It represents "work in progress," and it will be periodically updated. It more than likely contains errors (hopefully not serious ones). Furthermore, its references to the literature are incomplete. Your comments and suggestions to the author at [email protected] are welcome.

The purpose of the book is to consider large and challenging multistage decision problems, which can in principle be solved by dynamic programming and optimal control, but whose exact solution is computationally intractable. We discuss solution methods that rely on approximations to produce suboptimal policies with adequate performance. These methods are collectively referred to as reinforcement learning, and also by alternative names such as approximate dynamic programming and neuro-dynamic programming.
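
To make the dynamic programming starting point concrete, the exact finite horizon DP algorithm computes the optimal cost-to-go functions backwards in time. In the standard notation of the author's books (state x_k, control u_k, random disturbance w_k, system function f_k, stage cost g_k), it reads, in LaTeX form:

    \[
    J_N(x_N) = g_N(x_N), \qquad
    J_k(x_k) = \min_{u_k \in U_k(x_k)}
    \mathop{E}_{w_k} \Big[ g_k(x_k, u_k, w_k) + J_{k+1}\big( f_k(x_k, u_k, w_k) \big) \Big],
    \quad k = 0, \ldots, N-1.
    \]

The intractability noted above comes from having to compute and store J_k over the entire state space; approximation in value space, a central theme of the book, replaces J_{k+1} with a more easily computed approximation.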

Our subject has benefited enormously from the interplay of ideas from optimal control and from artificial intelligence. One of the aims of this monograph is to explore the common boundary between these two fields and to form a bridge that is accessible to workers with a background in either field.

The mathematical style of the book is somewhat different from that of the author's dynamic programming books and the neuro-dynamic programming monograph written jointly with John Tsitsiklis. We rely more on intuitive explanations and less on proof-based insights. Still, we provide a rigorous short account of the theory of finite and infinite horizon dynamic programming, together with some basic approximation methods, in an appendix. For this we require only a modest mathematical background: calculus, elementary probability, and a minimal use of matrix-vector algebra.

The methods of this book have been successful in practice, and often spectacularly so, as evidenced by recent amazing accomplishments in the games of chess and Go. However, across a wide range of problems, their performance properties may be less than solid. This is a reflection of the state of the art in the field: there are no methods that are guaranteed to work for all or even most problems, but there are enough methods to try on a given challenging problem, with a reasonable chance that one or more of them will be successful in the end. Accordingly, we have aimed to present a broad range of methods that are based on sound principles, and to provide intuition into their properties, even when these properties do not include a solid performance guarantee. Hopefully, with enough experimentation with some of these methods and their variations, the reader will be able to adequately address his or her own problem.

BOOK PREFACE, CONTENTS, AND CHAPTERS

Click here for preface and table of contents.

Drafts of book chapters (periodically updated, check for latest versions):

Chapter 1: Exact Dynamic Programming,

Chapter 2: Approximation in Value Space,

Chapter 3: Parametric Approximation,

Chapter 4: Infinite Horizon Reinforcement Learning,

Chapter 5: Aggregation,

References.

Click here for a slide overview presentation from the Conference on Decision and Control, Dec. 2018.

REINFORCEMENT LEARNING COURSE AT ASU: SLIDES AND VIDEO LECTURES

Lecture slides for a course in Reinforcement Learning and Optimal Control (January 8-February 21, 2019) at Arizona State University:

Slides-Lecture 1, Slides-Lecture 2, Slides-Lecture 3, Slides-Lecture 4, Slides-Lecture 5, Slides-Lecture 6, Slides-Lecture 7, Slides-Lecture 8, Slides-Lecture 9, Slides-Lecture 10, Slides-Lecture 11, Slides-Lecture 12, Slides-Lecture 13.

Videos of lectures from the Reinforcement Learning and Optimal Control course at Arizona State University (click around the screen to see just the video, just the slides, or both simultaneously):

Video-Lecture 1, Video-Lecture 2, Video-Lecture 3, Video-Lecture 4, Video-Lecture 5, Video-Lecture 6, Video-Lecture 7, Video-Lecture 8, Video-Lecture 9, Video-Lecture 10, Video-Lecture 11, Video-Lecture 12, Video-Lecture 13.

Lecture 13 is an overview of the entire course.

Dynamic Programming and Optimal Control, Vol. I, 4th Edition

Dimitri P. Bertsekas


The fourth edition (February 2017) contains a substantial amount of new material, particularly on approximate DP in Chapter 6. This chapter was thoroughly reorganized and rewritten, to bring it in line both with the contents of Vol. II, whose latest edition appeared in 2012, and with recent developments, which have propelled approximate DP to the forefront of attention.

Some of the highlights of the revision of Chapter 6 are an increased emphasis on one-step and multistep lookahead methods, parametric approximation architectures, neural networks, rollout, and Monte Carlo tree search. Among other applications, these methods have been instrumental in the recent spectacular success of computer Go programs. The material on approximate DP also provides an introduction and some perspective for the more analytically oriented treatment of Vol. II.
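
For orientation, here is a minimal Python sketch (not from the book) of one of the methods just mentioned: one-step lookahead with a rollout cost-to-go approximation, for a deterministic system. All problem-specific functions (controls, next_state, stage_cost, base_policy) are hypothetical placeholders supplied by the user:

    # Minimal sketch: one-step lookahead with rollout (deterministic system assumed).
    # Hypothetical placeholders supplied by the user:
    #   controls(x)      -> iterable of feasible controls at state x
    #   next_state(x, u) -> successor state f(x, u)
    #   stage_cost(x, u) -> one-stage cost g(x, u)
    #   base_policy(x)   -> control chosen by a heuristic base policy at x

    def rollout_cost(x, base_policy, next_state, stage_cost, horizon):
        """Approximate the cost-to-go from state x by simulating the base policy."""
        total = 0.0
        for _ in range(horizon):
            u = base_policy(x)
            total += stage_cost(x, u)
            x = next_state(x, u)
        return total

    def one_step_lookahead(x, controls, next_state, stage_cost, base_policy, horizon=20):
        """Pick the control minimizing g(x, u) plus a rollout estimate of the cost-to-go."""
        return min(
            controls(x),
            key=lambda u: stage_cost(x, u)
                          + rollout_cost(next_state(x, u), base_policy,
                                         next_state, stage_cost, horizon),
        )

Replacing the rollout estimate with a trained parametric approximator, such as a neural network fit to sampled costs, gives the parametric approximation architectures mentioned above; Monte Carlo tree search extends the same idea to multistep lookahead with sampling.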

Click here for direct ordering from the publisher, along with the preface, table of contents, supplementary educational material, lecture slides, videos, etc.

Dynamic Programming and Optimal Control, Vol. I, ISBN-13: 978-1-886529-43-4, 576 pp., hardcover, 2017

Dynamic Programming and Optimal Control, Vol. II, 4th Edition: Approximate Dynamic Programming

Dimitri P. Bertsekas


The fourth edition of Vol. II of the two-volume DP textbook was published in June 2012. This is a major revision of Vol. II and contains a substantial amount of new material, as well as a reorganization of old material. The length has increased by more than 60% from the third edition, and most of the old material has been restructured and/or revised. Volume II now numbers more than 700 pages and is larger in size than Vol. I. It can arguably be viewed as a new book!

Approximate DP has become the central focal point of this volume, and occupies more than half of the book (the last two chapters, and large parts of Chapters 1-3). Thus one may also view this new edition as a follow-up to the author's 1996 book "Neuro-Dynamic Programming" (coauthored with John Tsitsiklis). A lot of new material, the outgrowth of research conducted in the six years since the previous edition, has been included.

A new printing of the fourth edition (January 2018) contains some updated material, particularly on undiscounted problems in Chapter 4 and approximate DP in Chapter 6. References were also made to the contents of the 2017 edition of Vol. I, and to high-profile developments in deep reinforcement learning, which have brought approximate DP to the forefront of attention.

Dynamic Programming and Optimal Control, Vol. II: Approximate Dynamic Programming, ISBN-13: 978-1-886529-44-1, 712 pp., hardcover, 2012

CHAPTER UPDATE - NEW MATERIAL

Click here for an updated version of Chapter 4, which incorporates recent research on a variety of undiscounted problem topics, including:

Deterministic optimal control and adaptive DP (Sections 4.2 and 4.3),

Stochastic shortest path problems under weak conditions and their relation to positive cost problems (Sections 4.1.4 and 4.4),

Affine monotonic and multiplicative cost models (Section 4.5).

PREFACE, SLIDES, AND OTHER INFORMATION

Click here for preface and detailed information.

Click here to order at Amazon.com

Lectures on Exact and Approximate Finite Horizon DP: videos from a 4-lecture, 4-hour short course on finite horizon DP at the University of Cyprus, Nicosia, 2017. Videos from YouTube. (Lecture slides: Lecture 1, Lecture 2, Lecture 3, Lecture 4.)

Videos from a 6-lecture, 12-hour short course at Tsinghua University, Beijing, China, 2014, available from the Tsinghua course site and from YouTube. Click here to download the Approximate Dynamic Programming lecture slides for this 12-hour video course.

Click here to download lecture slides for a 7-lecture short course on Approximate Dynamic Programming, Cadarache, France, 2012.

Click here to download lecture slides for the MIT course "Dynamic Programming and Stochastic Control" (6.231), Dec. 2015. The last six lectures cover a lot of the approximate dynamic programming material.

Click here to download research papers and other material on Dynamic Programming and Approximate Dynamic Programming.
