Back to Recommender Systems: Evaluation and Metrics

Recommender Systems: Evaluation and Metrics, Universidade de Minnesota

127 ratings
19 reviews

About this course

In this course you will learn how to evaluate recommender systems. You will gain familiarity with several families of metrics, including ones to measure prediction accuracy, rank accuracy, decision-support, and other factors such as diversity, product coverage, and serendipity. You will learn how different metrics relate to different user goals and business goals. You will also learn how to rigorously conduct offline evaluations (i.e., how to prepare and sample data, and how to aggregate results). And you will learn about online (experimental) evaluation. At the completion of this course you will have the tools you need to compare different recommender system alternatives for a wide variety of uses.

Top reviews

by LL

Jul 19, 2017

Wonderful!!! They teach a lot that I did not expect!

Filter by:

18 reviews

by LU WEI

Aug 23, 2018

Confused about some metrics.

by Chris Colinsky

Jul 03, 2018

Not an easy course, especially the honors track. The information is good, but not presented as well as in the previous two courses. There are also errors in the honors assignment that make it unnecessarily difficult, so you spend a lot of time on irrelevant things.

by llraphael

Jun 16, 2018

The computer assignment lacks explanation.

by Dhruv Mittal

Jun 15, 2018

I was working on a cross-domain recommendation system that recommends books to a user whose movie ratings are given. I built the algorithm but had no idea how to evaluate it; this course helped me through. Thanks.

by Caio Henrique Konyosi Miyashiro

May 18, 2018

The offline evaluation part is really good and practical as well. However, even granting that online evaluation is a more complex subject, I felt the course lacked a bit of guidance on how to put all this knowledge into practice.

by Yury Zelensky

Mar 29, 2018

It is not perfect, but it is the best course of the specialization so far. It is a little more philosophical than technical and formal, but it met my current personal needs exactly. It cannot be recommended as a first and only introduction to the topic of evaluation and metrics for recommender systems.

P.S. The exercises and quizzes, both main and honors, are somewhat eccentric.

by Keshaw Singh

Feb 22, 2018

My issues with the previous courses in this specialization seem to have been addressed in this one. The assignment at the end is a really good one; the creators of this course have done well to develop a thought-provoking and relevant assignment. The course itself helps one develop the appropriate thought process, which comes in handy when deciding on a metric for the problem at hand.

by zheng dai

Feb 09, 2018

Nice to learn Excel statistics.

by Andrew Waterman

Feb 04, 2018

This course was very helpful in giving me broad exposure to various ways of evaluating recommender systems. Having faced a very similar problem evaluating a recommender system for a legal document search/suggestion engine (like Google News for lawyers), this gave me the proper "bird's-eye" perspective on that problem that I wish I had had before. We faced exactly the same problem you describe of finding the proper tradeoff between precision and recall, or search vs. discovery.

BUT what is lacking here is teaching us how to implement these different evaluation metrics in practice. Sadly, I don't feel any more equipped to go back to that legal search engine client and guide them toward a concrete decision about the right metrics to use. I would just come with a mix of new opinions on metrics they should consider. But how should they choose? What offline evaluation should we do? What online experiment could we run to decide? If you had run us through problem sets/assignments involving real-world situations like this, where we had to calculate these different metrics (given sample data) and make compelling cases for different metrics to use for evaluation, I would feel otherwise.

That said, thank you for your hard work putting the course/specialization together. I hope my feedback helps constructively; please don't see it as criticism. It's because I am very enthusiastic about what you've been teaching me, and because I plan to implement it for new clients in my Data Science consulting practice, that I want the course to be the best it can be for others too.

by Maxwell's Daemon

Jan 15, 2018

In addition to the normal number of small errors here and there, the course has too many big errors in the honors track assignments, and no help in the forums. The course appears abandoned.

The videos don't appear to be fully edited, with places where the lecturer says "rewind, I'll start over" or "edit this part out." Also, one lecturer in particular will stop mid-sentence as if he has lost the thread of what he was saying, and then finish the sentence with a non sequitur.

I'm sure they understand the material, but the execution of the presentation is very rough, too rough to continue. I'm bailing out of the specialization after passing three courses at 100% with honors.