This course teaches learners (industry professionals and students) the fundamental concepts of parallel programming in the context of Java 8. Parallel programming enables developers to use multicore computers to make their applications run faster by using multiple processors at the same time. By the end of this course, you will learn how to use popular parallel Java frameworks (such as ForkJoin, Stream, and Phaser) to write parallel programs for a wide range of multicore platforms including servers, desktops, and mobile devices, while also learning about their theoretical foundations including computation graphs, ideal parallelism, parallel speedup, Amdahl's Law, data races, and determinism.
This course is part of the Parallel, Concurrent, and Distributed Programming in Java Specialization
Offered by
Rice University
Rice University is consistently ranked among the top 20 universities in the U.S. and the top 100 in the world. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy.
Syllabus - What you will learn from this course
Welcome to the Course!
Welcome to Parallel Programming in Java! This course is one part of a three-course series and delivers its material through video lectures, demonstrations, and coding projects.
Task Parallelism
In this module, we will learn the fundamentals of task parallelism. Tasks are the most basic unit of parallel programming. An increasing number of programming languages (including Java and C++) are moving from older thread-based approaches to more modern task-based approaches for parallel programming. We will learn about task creation, task termination, and the “computation graph” theoretical model for understanding various properties of task-parallel programs. These properties include work, span, ideal parallelism, parallel speedup, and Amdahl’s Law. We will also learn popular Java APIs for task parallelism, most notably the Fork/Join framework.
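To make the Fork/Join API concrete, the minimal sketch below (an illustrative example, not code from the course projects) sums an array by forking a child task for one half of the range, computing the other half in the current worker, and joining the forked task; the class name, threshold, and main method are assumptions made for this example.

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ArraySumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;   // switch to a sequential sum below this size
    private final int[] data;
    private final int lo, hi;

    ArraySumTask(int[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {               // base case: small range, sum sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ArraySumTask left  = new ArraySumTask(data, lo, mid);
        ArraySumTask right = new ArraySumTask(data, mid, hi);
        left.fork();                              // task creation: left half runs asynchronously
        long rightSum = right.compute();          // right half runs in the current worker thread
        long leftSum  = left.join();              // task termination: wait for the forked child
        return leftSum + rightSum;
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        Arrays.fill(data, 1);
        long sum = ForkJoinPool.commonPool().invoke(new ArraySumTask(data, 0, data.length));
        System.out.println("sum = " + sum);       // prints sum = 1000000
    }
}

The fork/compute/join pattern above is the structure that a computation graph models: the two halves are independent nodes of work, and the join is the edge that brings their results back together.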
Functional Parallelism
Welcome to Module 2! In this module, we will learn about approaches to parallelism that have been inspired by functional programming. Advocates of parallel functional programming have argued for decades that functional parallelism can eliminate many hard-to-detect bugs that can occur with imperative parallelism. We will learn about futures, memoization, and streams, as well as data races, a notorious class of bugs that can be avoided with functional parallelism. We will also learn Java APIs for functional parallelism, including the Fork/Join framework and the Stream APIs.
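As a small illustration of these ideas, the sketch below uses the standard java.util.concurrent and java.util.stream APIs (it is not taken from the course materials): a future whose value is computed asynchronously, and a side-effect-free parallel stream reduction. Because the stream reduction shares no mutable state across threads, it cannot exhibit a data race.

import java.util.concurrent.CompletableFuture;
import java.util.stream.IntStream;

public class FunctionalParallelismSketch {
    public static void main(String[] args) {
        // Future: the sum of squares is computed on another thread; join() waits for its value.
        CompletableFuture<Long> sumOfSquares =
            CompletableFuture.supplyAsync(() ->
                IntStream.rangeClosed(1, 1_000).mapToLong(i -> (long) i * i).sum());

        // Parallel stream: a pure reduction with no shared mutable accumulator, hence race-free.
        long sumOfEvens = IntStream.rangeClosed(1, 1_000_000)
                                   .parallel()
                                   .filter(i -> i % 2 == 0)
                                   .asLongStream()
                                   .sum();

        System.out.println("sum of squares = " + sumOfSquares.join());
        System.out.println("sum of evens   = " + sumOfEvens);
    }
}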
Talking to Two Sigma: Using it in the Field
Join Professor Vivek Sarkar as he talks with Two Sigma Managing Director, Jim Ward, and Software Engineers, Margaret Kelley and Jake Kornblau, at their downtown Houston, Texas office about the importance of parallel programming.
Loop Parallelism
Welcome to Module 3, and congratulations on reaching the midpoint of this course! It is well known that many applications spend a majority of their execution time in loops, so there is a strong motivation to learn how loops can be sped up through the use of parallelism, which is the focus of this module. We will start by learning how parallel counted-for loops can be conveniently expressed using forall and stream APIs in Java, and how these APIs can be used to parallelize a simple matrix multiplication program. We will also learn about the barrier construct for parallel loops, and illustrate its use with a simple iterative averaging program example. Finally, we will learn the importance of grouping/chunking parallel iterations to reduce overhead.
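The sketch below (an illustrative example that uses Java's standard parallel streams in place of the forall construct; the method and variable names are this example's own) parallelizes matrix multiplication over the rows of the output matrix. Each row is written by exactly one iteration, so the iterations are independent, and the stream runtime chunks them across worker threads internally, which is one form of the grouping/chunking idea mentioned above.

import java.util.Arrays;
import java.util.stream.IntStream;

public class ParallelMatrixMultiply {
    // Computes c = a * b for n x n matrices, parallelizing the outer (row) loop.
    static double[][] multiply(double[][] a, double[][] b, int n) {
        double[][] c = new double[n][n];
        IntStream.range(0, n).parallel().forEach(i -> {   // parallel counted loop over rows
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++) {
                    sum += a[i][k] * b[k][j];
                }
                c[i][j] = sum;                            // each c[i][j] is written by one iteration only
            }
        });
        return c;
    }

    public static void main(String[] args) {
        int n = 4;
        double[][] identity = new double[n][n];
        double[][] m = new double[n][n];
        for (int i = 0; i < n; i++) {
            identity[i][i] = 1.0;
            for (int j = 0; j < n; j++) m[i][j] = i * n + j;
        }
        System.out.println(Arrays.deepToString(multiply(m, identity, n))); // equals m
    }
}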
Data Flow Synchronization and Pipelining
Welcome to the last module of the course! In this module, we will wrap up our introduction to parallel programming by learning how data flow principles can be used to increase the amount of parallelism in a program. We will learn how Java’s Phaser API can be used to implement “fuzzy” barriers, and also “point-to-point” synchronizations as an optimization of regular barriers by revisiting the iterative averaging example. Finally, we will also learn how pipeline parallelism and data flow models can be expressed using Java APIs.
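As a rough sketch of the Phaser idiom (an illustrative example built around a one-dimensional iterative-averaging loop; the sizes, names, and thread layout are this example's assumptions, not the course's provided code), each worker below updates its own chunk of the array and then uses arrive() followed by awaitAdvance() as a split-phase, or "fuzzy", barrier between iterations; purely local work could be placed between those two calls.

import java.util.concurrent.Phaser;

public class PhaserBarrierSketch {
    public static void main(String[] args) throws InterruptedException {
        final int N = 1024, ITERATIONS = 100, WORKERS = 4;
        final double[][] grid = new double[2][N + 2];      // two buffers; boundary values stay fixed
        grid[0][N + 1] = grid[1][N + 1] = 1.0;

        final Phaser phaser = new Phaser(WORKERS);          // all workers registered as parties
        Thread[] threads = new Thread[WORKERS];
        int chunk = N / WORKERS;

        for (int w = 0; w < WORKERS; w++) {
            final int lo = w * chunk + 1, hi = (w + 1) * chunk;
            threads[w] = new Thread(() -> {
                for (int iter = 0; iter < ITERATIONS; iter++) {
                    double[] cur = grid[iter % 2], next = grid[(iter + 1) % 2];
                    for (int i = lo; i <= hi; i++) {
                        next[i] = (cur[i - 1] + cur[i + 1]) / 2.0;   // average the two neighbors
                    }
                    // Split-phase ("fuzzy") barrier: arrive() signals this worker's writes are done;
                    // work that needs no other worker's updates could run here before we block.
                    int phase = phaser.arrive();
                    phaser.awaitAdvance(phase);              // wait until every worker has arrived
                }
            });
            threads[w].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("grid midpoint = " + grid[ITERATIONS % 2][N / 2]);
    }
}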
Continue Your Journey with the Specialization "Parallel, Concurrent, and Distributed Programming in Java"
The next two videos will showcase the importance of learning about Concurrent Programming and Distributed Programming in Java. Professor Vivek Sarkar will speak with industry professionals at Two Sigma about how the topics of our other two courses are utilized in the field.
Reviews
Top reviews for PARALLEL PROGRAMMING IN JAVA
Great introduction to parallel programming. Lectures were clear, summaries were helpful, quizzes were not trivial, discussion forum is good, but the assignments' grading system could be improved.
This is a great course in parallel programming. The videos were very clear, summaries reinforced the video material and the programming projects and quizzes were challenging but not overwhelming.
I found the explanation of how to precisely code phasers lacking, as the Thread code was already provided in the sample code and never explained. Quite satisfied with the rest of the course, though.
Excellent course. I always wanted a good course on Java concurrency and parallel programming, and the finish, async, isolated, and forAsync constructs are awesome. I have learnt much from this course.
About the Parallel, Concurrent, and Distributed Programming in Java Specialization
Parallel, concurrent, and distributed programming underlies software in multiple domains, ranging from biomedical research to financial services. This specialization is intended for anyone with a basic knowledge of sequential programming in Java, who is motivated to learn how to write parallel, concurrent and distributed programs. Through a collection of three courses (which may be taken in any order or separately), you will learn foundational topics in Parallelism, Concurrency, and Distribution. These courses will prepare you for multithreaded and distributed programming for a wide range of computer platforms, from mobile devices to cloud computing servers.

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
More questions? Visit the Learner Help Center.