Reinforcement Learning by Richard S. Sutton and Andrew G. Barto: Book Summary
This significantly expanded and updated second edition of a widely used text covers reinforcement learning, one of the most active research areas in artificial intelligence: a computational approach to learning in which an agent tries to maximize the total reward it receives while interacting with a complex, uncertain environment. Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. Like the first edition, this edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case, for which exact solutions can be found; many of the algorithms in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on topics such as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III adds new chapters on reinforcement learning's relationships to psychology and neuroscience, along with an updated case-studies chapter covering AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
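To make the summary's core idea concrete, here is a minimal sketch of tabular Q-learning, the kind of exact-solution algorithm Part I describes. The corridor environment, hyperparameters, and all names below are this example's own assumptions for illustration, not code from the book.

```python
import random

# Illustrative sketch only: a tiny deterministic "corridor" of 5 states.
# Stepping right off the end pays +1 and ends the episode; every other
# step pays 0. The agent learns q[state][action] from experience alone.
N_STATES = 5
ACTIONS = (-1, +1)                  # move left, move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # assumed hyperparameters

def step(state, action):
    """Return (next_state, reward, done); the left wall is reflecting."""
    nxt = state + action
    if nxt >= N_STATES:
        return state, 1.0, True
    return max(nxt, 0), 0.0, False

def greedy(values, rng):
    """Argmax with random tie-breaking, so untrained states still explore."""
    best = max(values)
    return rng.choice([i for i, v in enumerate(values) if v == best])

def q_learning(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # q[state][action_index]
    for _ in range(episodes):
        state, done = 0, False                  # every episode starts at the left end
        while not done:
            # epsilon-greedy behavior policy
            a = rng.randrange(2) if rng.random() < EPSILON else greedy(q[state], rng)
            nxt, reward, done = step(state, ACTIONS[a])
            # one-step Q-learning update toward the bootstrapped target
            target = reward + (0.0 if done else GAMMA * max(q[nxt]))
            q[state][a] += ALPHA * (target - q[state][a])
            state = nxt
    return q

q = q_learning()
```

Under these assumptions the learned greedy policy moves right in every state, and the right-action values approach the discounted optima (roughly GAMMA to the power of the distance to the goal).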