So, as we discussed, threads and processes are independent streams of instructions, and they can run in parallel. The difference between them is that threads share a memory address space with each other, and processes don't. To explore how this works, we are going to write two programs, one with threads and one with processes running in parallel.

As a sketch of this program, I have a little piece of code in C. I declare a global variable called message, and initially it is set to an "uninitialized" value. I have a function that prints this message, waits a little bit, and then prints the message again, and I also have a function that changes the message. First I call the Watch Message function and then the Change Message function. Let's compile this program and see how it works. As you can see, it prints the address of the message, the message is uninitialized, and a few seconds later the message is still uninitialized. No surprise here: Watch Message is a regular function, it executes synchronously, meaning that control does not return to the main function until Watch Message has completed. So changing the message afterwards has no effect on the first function.

What if we could run these two functions in parallel? Then we could potentially change the message while we're waiting. There are two ways to do this, with threads and with processes, so let's see how each works.

First, I'm going to use the POSIX threads library to execute Watch Message asynchronously. To use this library, I have to include the header file for POSIX threads. I also have to slightly change the signature of the function: it has to accept one argument of type pointer to void, which I use as a thread ID. I'm not currently using the thread ID, but it has to be there so that the thread can identify what it needs to do. When it's time to run the function asynchronously, I create a variable of type pthread_t to represent the thread. And instead of just calling the function, I wrap it in a call to pthread_create. I pass it the thread variable; optionally, I can set any thread attributes I want, but I'm not setting any special attributes here. Then comes the name of the function to run in parallel and, optionally, an argument for it. So now Watch Message is executed in a separate thread, and control immediately returns to Change Message.

If I want to make this code meaningful, I probably want to change the message during the sleep in Watch Message, so that first we see the uninitialized value, then we sleep, and while we sleep the other function comes in, changes the variable, and then we print the result. There are better ways to synchronize threads, but I will use a cheap and simple method: I will just sleep for one second before changing the variable. So first we should see the uninitialized value, then one second later the variable changes, and then we print the changed result. At least this is what I expect. For a clean exit I need pthread_exit.

Now we can compile and run this application. The code will not compile unless you specify the linker argument to link the POSIX threads library, pthread. Now we can run it. As you can see, what happened is exactly what we expected: the global variable message was initially uninitialized, then the watching thread slept for two seconds, during the sleep another thread came in and changed the message, and then we printed out the modified message. So here is how threads work.
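To make this walkthrough concrete, here is a minimal sketch of what the threaded version might look like. The original source file is not shown here, so the function names (watch_message, change_message), the sleep durations, and the message strings are my assumptions; only the pthread_create / pthread_exit pattern comes from the walkthrough itself.

    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>

    /* Global variable shared by every thread in this process. */
    const char *message = "uninitialized";

    /* Print the message, wait a bit, then print it again.
       The void* argument is required by pthread_create; it is unused here. */
    void *watch_message(void *tid)
    {
        printf("watching %p: %s\n", (void *)&message, message);
        sleep(2);                    /* give another thread time to act */
        printf("watching %p: %s\n", (void *)&message, message);
        return NULL;
    }

    /* Change the shared message after a short delay (cheap synchronization). */
    void change_message(void)
    {
        sleep(1);
        message = "I'm a little teapot";
    }

    int main(void)
    {
        pthread_t thread;
        /* Run watch_message asynchronously; NULL = default attributes, no argument. */
        pthread_create(&thread, NULL, watch_message, NULL);
        change_message();            /* meanwhile, the main thread changes the message */
        pthread_exit(NULL);          /* clean exit: let the other thread finish */
        return 0;
    }

Compiled with something like cc threads.c -o threads -pthread, the second printout should show the modified message, since both threads see the same variable.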
So threads run in parallel, and they share the memory address space: if one thread modifies something in memory, the other thread sees it.

Okay. Now let's see what would happen if, instead of forking off a thread, we forked off a process. This will illustrate the difference in the memory model between threads and processes. I will repeat this workflow, but instead of creating a thread I will fork a process. I'll just make a copy of this code. I no longer need the pthreads library, so I remove the thread ID argument and the call that creates the thread. So I'm back to the serial code, but what I will do is fork, which splits execution into a child process and a parent process. To do the fork correctly, I need two variables: a process ID and a status. After the fork, I end up with two copies of the original process, one where fork returned zero and one where it returned a non-zero value. The copy where the return value is zero is the child process, and the copy where it is non-zero, the child's PID, is the parent. So, to run the Watch Message and Change Message routines in parallel, I check the return value of fork: if I'm in the child process, I start watching the message; otherwise, I change the message. Both routines are started at essentially the same time by two different processes. For a clean exit, the parent has to wait for the child, and wait returns the status so that I can see what happened. Let me slightly clarify this output. Now I'm ready to compile and run this application.

Watch carefully. As you can see, the child process prints the address of the message, and the message is uninitialized. One second later, the parent process modifies this memory address. You can see that it is numerically the same memory address, and of course this is the virtual memory address, not the physical address. It sets the message at this address to "I'm a little teapot." Then, another second later, the child process checks: what do I have now as the message? And unlike the case with threads, the message is still uninitialized. So what we can conclude from here is that the virtual memory address space is shared between threads, but it is not shared between processes. We will take this knowledge and see how we can apply it to parallel computing. A sketch of the fork-based version follows below.
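Again as a sketch only, and under the same naming assumptions as the threaded example above, the fork-based version might look roughly like this; the key differences are the fork / wait pair and the check on fork's return value.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Global variable; after fork, each process has its own copy of it. */
    const char *message = "uninitialized";

    /* Print the message, wait a bit, then print it again. */
    void watch_message(void)
    {
        printf("child:  %p: %s\n", (void *)&message, message);
        sleep(2);                    /* give the parent time to act */
        printf("child:  %p: %s\n", (void *)&message, message);
    }

    /* Change the message after a short delay. */
    void change_message(void)
    {
        sleep(1);
        message = "I'm a little teapot";
        printf("parent: %p: %s\n", (void *)&message, message);
    }

    int main(void)
    {
        int status;
        pid_t pid = fork();          /* returns 0 in the child, the child's PID in the parent */

        if (pid == 0) {
            watch_message();         /* child process watches the message */
        } else {
            change_message();        /* parent process changes its own copy */
            wait(&status);           /* clean exit: wait for the child to finish */
        }
        return 0;
    }

Running it, the child's second printout should still say "uninitialized" even though both processes print the same virtual address, because the parent changed only its own copy of the variable.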