KM
May 4, 2020
Excellent professor. Fundamentals and math are provided as well. Very good notebooks for the assignments... it's just the Turi Create library that caused some issues; still, the course deserves a 5/5.
PD
March 16, 2016
I really enjoyed all the concepts and implementations I did throughout this course... except during the Lasso module. I found this module harder than the others, but very interesting as well. Great course!
By YASA S K R
•August 31, 2020
good
By ANKAN M
•August 16, 2020
nice
By Saurabh A
•July 19, 2020
good
By Keyur M
•June 9, 2020
good
By Vaibhav S
•May 16, 2020
Good
By Vansh S
•May 10, 2019
nice
By 王曾
•September 25, 2017
good
By Birbal
•October 13, 2016
good
By Vasthavayi a V
•January 28, 2022
nice
By FW Y
•August 16, 2017
Learning by doing
By Ablaikhan N
•March 14, 2021
A+
By Ganji R
•November 8, 2018
E
By Anunathan G S
•August 28, 2018
L
By IDOWU H A
•May 20, 2018
I
By Ruchi S
•November 8, 2017
e
By Alessandro B
•September 27, 2017
e
By Navinkumar
•February 17, 2017
g
By ngoduyvu
•February 16, 2016
v
By Miguel P
•December 2, 2015
I
By manuel S
•August 13, 2017
Interesting course. However, I have some mixed feelings:
I have a BS in mathematics from Mexico (a "licenciatura", which sits somewhere between a BS and an MS), so I'd say I have pretty good knowledge of statistics. Now it is "training" instead of "fitting", and "overfitting" instead of "multicollinearity". There are some algorithms to remove/add features (Ridge/Lasso), which, as noted, induce bias in the parameters. However, more "formal" methods such as stepwise regression and Bayesian sequences are completely ignored.
That would be fine, except there is not even the slightest attempt to address statistical significance, either for the model or for the individual parameters.
Some other methods (moving averages, Henderson MA, Mahalanobis distances) should also be covered.
So, in summary: an interesting course in the sense that it gives an idea of where the state of the art lies, but a little disappointing in the sense that, except for some new labels for the same tricks and humongous computing power, there is still nothing new under the sun. Still, worth the time invested.
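The bias this reviewer says Ridge/Lasso induce in the parameters is easy to see numerically. Below is a minimal sketch (illustrative data and penalty values, not from the course) using ridge's closed-form solution: as the penalty grows, the coefficients shrink toward zero, trading variance for bias.

```python
import numpy as np

# Synthetic data (made up for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for lam in (0.0, 10.0, 1000.0):
    w = ridge_fit(X, y, lam)
    print(lam, np.round(np.linalg.norm(w), 3))
# The coefficient norm decreases as the penalty grows: that is the bias.
```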
By Grant V
•February 29, 2016
An excellent and quite extensive foray into regression analyses, from single-variable linear regression to nearest-neighbor and kernel regression techniques, including how to use gradient vs. coordinate descent for optimization and proper L1 and L2 regularization methods. The lecture slides have some questionable pedagogical and aesthetic qualities, and they could use more polish from someone who specializes in teaching presentation methods, but the meat of the course comes from its quizzes and programming assignments, which are well split between practical use (via Graphlab Create and SFrame) and nuts-and-bolts assignments that have you implement these methods from scratch. An extremely valuable course for someone who wants to use these techniques for a data science application but also wants to understand the mathematics and statistics behind them to an appreciable degree.
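The coordinate descent for L1 regularization this review mentions can be sketched in a few lines. This is a hedged illustration, not the course's assignment code: cycle over features, applying the soft-thresholding update to one coefficient at a time; the data and penalty value are made up.

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator: the 1-D solution of the lasso subproblem."""
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for lasso on the objective RSS + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)  # precomputed column norms
    for _ in range(n_iter):
        for j in range(d):
            # residual with feature j's current contribution removed
            r_j = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r_j
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
true_w = np.array([3.0, 0.0, -2.0, 0.0, 0.0])
y = X @ true_w + 0.05 * rng.normal(size=200)
w = lasso_coordinate_descent(X, y, lam=50.0)
print(np.round(w, 2))  # sparse: irrelevant features driven exactly to zero
```

For a large enough penalty the coefficients of the irrelevant features land exactly at zero, which is what distinguishes L1 from L2 regularization.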
By William K
•August 23, 2017
The only complaint I have is that the programming exercises were not challenging enough. The lecture videos were great to build up an understanding from fundamentals, but the assignments did not fully test the concepts. There were too many exercises that were fill-in-the-blank with most of the code already written. I would appreciate more rigorous programming exercises to facilitate an in-depth understanding of the topics. Moreover, the programming exercises were not applicable to real-world applications because all the data was already neatly presented and the desired outcome was known ahead of time. In order to mimic real-world machine learning problems, we should be required to clean the data and answer open-ended questions that require exploring and understanding the data before developing machine learning models to extract usable information.
By Denys G
•January 14, 2016
Courses like this are always difficult to judge because of the great variety of students Coursera reaches. That is, some class members finished this course in the first week it was open, while others struggled until the last minute. For some the math was too simple; for others the Python programming was too confusing. All in all, it strikes a reasonable balance between novice learners and more advanced students.
What the course could really benefit from is some kind of repository of code for students who successfully completed the assignments to compare against their own. It seems pretty clear that there are some advanced Python users whose insights could help improve one's coding skills.
By Marvin J A
•November 27, 2015
(Beta-Test review)
Status: Still on the first week.
The content is easy to follow, though it may prove slightly difficult for those without a strong background in calculus. So far, all the links (to the downloadable CSV and ipynb files) work well. None of the videos have any apparent bugs or problems. I would also suggest making the slides available for download, as in the previous module.
I don't think writing over the animation is a bad thing as long as it's still understandable.
As an aside, I suggest editing out the swallowing sound you might occasionally hear whenever either instructor is speaking. To some, it seems a bit off-putting.
Great course, overall.
Thanks,
Marvin
By Martin B
•April 11, 2019
Excellent explanation of the use of regression-based Machine Learning techniques. I recommend taking the specialization on Machine Learning Mathematics before this one - it will give you a deeper understanding of some of the mathematical concepts involved and make for a greater experience with this course. The programming assignments are good and help the learner apply and revisit the material. A big drawback is the insistence in most of the assignments on using Python 2 and Graphlab Create. Workarounds for users of Pandas, Scikit-Learn, NLTK, etc. are provided, but they could be better.
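The kind of workaround this review alludes to, replacing a Graphlab Create/SFrame workflow with pandas plus plain least squares, can be sketched as follows. The column names (`sqft`, `price`) and the data are invented for illustration; they are not from the course's datasets.

```python
import numpy as np
import pandas as pd

# A pandas DataFrame standing in for an SFrame (toy data, made up here).
df = pd.DataFrame({
    "sqft":  [1000, 1500, 2000, 2500, 3000],
    "price": [200.0, 290.0, 405.0, 505.0, 600.0],
})

# In place of a graphlab linear regression model: ordinary least squares
# with an intercept column, solved via numpy's lstsq.
X = np.column_stack([np.ones(len(df)), df["sqft"].to_numpy()])
coef, *_ = np.linalg.lstsq(X, df["price"].to_numpy(), rcond=None)
intercept, slope = coef
print(round(intercept, 2), round(slope, 4))
```

Scikit-Learn's `LinearRegression` fits the same model; the numpy version is shown only to keep the sketch dependency-light.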