Learner reviews and feedback for Sample-based Learning Methods from the University of Alberta

4.8
12 ratings
5 reviews

About the Course

In this course, you will learn about several algorithms that can learn near-optimal policies based on trial-and-error interaction with the environment, learning from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet it can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal difference learning methods including Q-learning. We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) with temporal difference updates to radically accelerate learning.

By the end of this course you will be able to:

- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna
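For readers who want a concrete picture of the kind of algorithm covered, here is a minimal, self-contained sketch of the tabular Q-learning update (one of the TD control methods named above). It is not taken from the course materials: the toy chain environment, its size, and the hyperparameters are illustrative assumptions.

```python
# A minimal Q-learning sketch on a hypothetical chain world (not course code).
import random

num_states, num_actions = 5, 2   # states 0..4; action 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1

# Q-table: estimated return for each (state, action) pair.
Q = [[0.0] * num_actions for _ in range(num_states)]

def step(state, action):
    """Hypothetical chain environment: reward 1 for reaching the rightmost state."""
    next_state = min(state + 1, num_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == num_states - 1 else 0.0
    done = next_state == num_states - 1
    return next_state, reward, done

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy behaviour policy: exploration matters when learning
        # from sampled experience rather than full dynamic programming sweeps.
        if random.random() < epsilon:
            action = random.randrange(num_actions)
        else:
            best = max(Q[state])
            action = random.choice([a for a in range(num_actions) if Q[state][a] == best])
        next_state, reward, done = step(state, action)
        # Q-learning (off-policy TD) update: bootstrap from the greedy next-state value.
        target = reward + (0.0 if done else gamma * max(Q[next_state]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

print(Q)  # action 1 (right) should dominate in every non-terminal state
```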

Top reviews

1–6 of {totalReviews} Reviews for Sample-based Learning Methods

by Manuel V d S

Sep 11, 2019

Course was amazing until I reached the final assignment. What a terrible way to grade the notebook part. Also, nobody around in the forums to help... I would still recommend this to anyone interested, unless you have no intention of doing the weekly readings.

by LuSheng Y

Sep 10, 2019

Very good.

by Luiz C

Sep 13, 2019

Great course. Every aspect is top notch.

by Stewart A

Sep 03, 2019

Great course! Lots of hands-on RL algorithms. I'm looking forward to the next course in the specialization.

by Ashish S

Sep 16, 2019

A good course with proper mathematical insights.

by Neil S

Sep 12, 2019

This is THE course to go with Sutton & Barto's Reinforcement Learning: An Introduction.

It's great to be able to repeat the examples from the book and end up writing code that outputs the same diagrams, e.g. the Dyna-Q comparisons for planning. The notebooks strike a good balance between hand-holding for new topics and letting you make your own mistakes and learn from them.

I would rate five stars, but decided to drop one for now as there are still some glitches in the coding of Notebook assignments, requiring work-arounds communicated in the course forums. I hope these will be worked on and the course materials polished to perfection in future.