
Learner reviews and feedback for Cleaning and Exploring Big Data using PySpark by Coursera Project Network

51 ratings
13 reviews

About the course

By the end of this project, you will learn how to clean, explore, and visualize big data using PySpark. You will be using an open-source dataset containing information on all the water wells in Tanzania. I will teach you various ways to clean and explore your big data in PySpark, such as changing a column's data type, renaming low-frequency categories in character columns, and imputing missing values in numerical columns. I will also teach you ways to visualize your data by intelligently converting Spark dataframes to Pandas dataframes. Cleaning and exploring big data in PySpark is quite different from doing so in plain Python because of the distributed nature of Spark dataframes. This guided project dives deep into various ways to clean and explore your data once it is loaded in PySpark. Data preprocessing is a crucial step in big data analysis, and one should learn it before building any big data machine learning model.

Note: You should have a Gmail account, which you will use to sign in to Google Colab.

Note: This course works best for learners who are based in the North America region. We're currently working on providing the same experience in other regions.

Top reviews


1–13 of 13 reviews for Cleaning and Exploring Big Data using PySpark

by Farzad K

Feb 10, 2021

I was expecting a project on big data and Spark's application to it, but it was only about PySpark syntax. Not a single word on the Spark technology, only coding.

by Venkat C S G

Oct 13, 2020

The project should include more explanation.

by Alexandra A

Aug 22, 2021

Practical walk-through of basic PySpark operations. A great quick start to using PySpark for data analysis.

by Georgete B d P

Feb 9, 2021

A quick and comprehensive course on the fundamentals of using PySpark

by Aruparna M

Jan 31, 2021

Very nice content

by Pris A

Apr 5, 2021


by Jorge G

Feb 25, 2021

I do not recommend taking this type of course. I took one and passed it; however, after a few days I tried to review the material, and to my surprise it asked me to pay again to be able to review it. Of course, Coursera gives me a small discount for having already paid previously. It is very easy to download the videos but difficult to get hold of the material, though with ingenuity it is possible. So I recommend uploading them to YouTube and keeping them private for whenever you want to consult them (you avoid legal problems and can share with friends); then you can request a refund.

by Saket R

Dec 15, 2020

More theory behind the functions used, and the concepts behind Spark and how it works in a distributed way, would have been more beneficial. Overall it was a worthwhile course.

by nawaz

Apr 23, 2022

The use case could be explained a little better before actually going into the code.

by Juan C A

Mar 24, 2022

A fast and simple explanation of how to start working with Spark on Colab

by shweta s

Oct 18, 2021


by Jeremy S

Jan 23, 2022

This course uses the Coursera in-browser notebook processor, Rhyme, rather than Google Colab, Python, or Anaconda. If you want to use PySpark on your home or work computer, this tutorial will not show you how to get there; you will need to seek out those instructions separately and install Python/Java/Spark yourself. The instructor demonstrates quite a few functions and methods that will help you get started with PySpark, though he does not go into much depth about any of them. You will understand the statements and operations in this course much better if you have a solid understanding of Python and at least a basic understanding of SQL commands. In my opinion, this course was worth the $10 I paid.

by Dharmendra T

Oct 6, 2020

Overall, it was a good course, but I think that if some explanation of how things work had been provided, it would have been a plus for our learning of data exploration in Spark.