So, this is our video for Capstone 1, which is centered around the sonification of movement: taking a person's movement in front of the camera and using it to change some audio representation. Let's see how this works in more detail in the following slides. The first thing we want to go over is what you will need for this project. First, you need the DragonBoard to process everything, a power supply to power the DragonBoard, and a keyboard, mouse, and monitor to interact with it. There is an important note about the monitor: it should either have its own speakers, or it should have an audio output connected to speakers, so that you can actually hear the audio, since this is an audio-changing project. You also need a USB webcam to capture the video that drives the audio changes, since we are changing the audio depending on the video.

A video is just a sequence of frames, and each frame is a grid of pixels. Each section of the frame controls a certain aspect of the audio. Motion on the left side of the frame, highlighted in green, controls the tempo, and motion on the right side, highlighted in blue, controls the melody. Any motion detected in the little red box at the bottom controls the beat, and motion across the overall frame controls the volume.

Now I will go over the video portion of this project, which is covered in more detail in course five and also uses a little bit of material from the previous courses. The first step takes the RGB values of each frame and converts the frame to grayscale, which basically converts everything to shades of gray instead of color. You can think of this roughly like brightness, though there is a little nuance to it, but you get the point. The next step takes the previous frame and the current frame and subtracts them. If a pixel had no movement, the subtraction is zero, so in the difference image that pixel's grayscale value is zero and it appears completely black. But if there was some difference, you see gray in the frame that is the difference between the previous and current frame. This is how we detect whether there was motion, and to what extent the person moved. A short sketch of this frame-differencing step is shown below.

The next part is MIDI, which will be covered in more detail in course four, which goes over some of the communication aspects of the DragonBoard. MIDI is one way we can represent audio. It is packaged as binary messages and stored in a file ending in .mid. You feed this into a synthesizer, and the synthesizer interprets those messages as audio, but you can also edit individual parts of it: you can turn off the piano part of the music, or adjust the tempo, or just the volume. You do not have this much freedom with file formats such as MP3, because an MP3 is just one rendered music file and that is all it is. MIDI, on the other hand, is a stream of messages, so you can change individual parts of the message itself; a small editing sketch follows below as well. The next part is how we took the video parts and the audio parts and put them together to complete this program.
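The actual capstone code is not shown in this video, but here is a minimal sketch of the frame-differencing step just described, assuming OpenCV (cv2) is used for the webcam and the image math; the camera index and window handling are placeholders.

```python
import cv2

# Sketch of the motion-detection step: grayscale conversion plus
# subtraction of the previous frame from the current one.
cap = cv2.VideoCapture(0)                 # USB webcam (device index is a placeholder)

ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read from webcam")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # color frame -> grayscale

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Unchanged pixels subtract to 0 (black); moving pixels show up as gray.
    diff = cv2.absdiff(prev_gray, gray)
    prev_gray = gray

    cv2.imshow("difference", diff)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```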
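As an illustration of why MIDI is editable in a way an MP3 is not, here is a rough sketch using the mido library (an assumption, not necessarily what the capstone uses) that silences one instrument part and rewrites the tempo of a .mid file; the file name and channel number are placeholders.

```python
import mido

# A .mid file is just a stream of messages, so individual parts can be
# changed before the synthesizer renders them as audio.
mid = mido.MidiFile("song.mid")

for track in mid.tracks:
    for i, msg in enumerate(track):
        # Silence the piano part (assuming it plays on channel 0).
        if msg.type == "note_on" and msg.channel == 0:
            track[i] = msg.copy(velocity=0)
        # Adjust the tempo: rewrite set_tempo meta messages to 90 BPM.
        elif msg.is_meta and msg.type == "set_tempo":
            track[i] = mido.MetaMessage("set_tempo",
                                        tempo=mido.bpm2tempo(90),
                                        time=msg.time)

mid.save("song_edited.mid")   # play this through a synthesizer
```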
The first thing we did was require the per-pixel difference between the two frames, that is, the grayscale value in the difference image, to exceed a certain threshold. If you barely move in the frame, or some dust drifts through the frame, that should not cause the music to change, because the amount of movement is insufficient. So the person has to reach a certain threshold of movement before anything is affected. Next, we looked at how much changed across the overall frame by counting how many pixels exceeded that threshold. If a lot of pixels changed from the previous image, we changed the volume; if the amount of movement was what we considered normal, we simply held the volume at 50 percent.

Next, we adjusted the tempo discretely. In the diagram shown earlier, the left side was divided into a grid, and each shade of green corresponds to the tempo level at that position. If you move your hand into the upper left corner of the frame, the tempo increases to its maximum level; if you place your hand in the bottom left corner, it slows down. We will demonstrate this later. Each tempo box steps the tempo by a discrete amount instead of changing it continuously. Finally, movement in the right portion and the bottom portion simply turns the melody and the beat on or off, respectively; there is no degree to which they show up in the audio, it is either on or off. A sketch of this region-based mapping follows below, and afterwards we will show you a quick demo of how this works in person, with Simon demonstrating.

So, let's get on to our demo of Capstone 1. Simon will demonstrate how it works and I will narrate over it. Let's start the program; it will take a moment to load. As it starts, the sound begins playing, but you cannot hear the melody. Just ignore the blue circle in the middle, that was specific to me, just to see where I should be positioned. If Simon starts moving his right hand, you will see that the melody starts playing. If Simon stops moving his right hand, the melody stops playing. Now let's have Simon place his left hand in the very top left corner, then stop, and then start moving his right hand again. You can see that the music is significantly faster now. Now let's have Simon move his left hand to the bottom left corner and then start moving his right hand. You can see that the music is significantly slower now. Let's have Simon start moving his legs: you can see that the beat starts playing. Now let's have Simon move a lot in front of the camera so that the volume increases. You can see that the volume gradually increases, and if Simon moves even more, the volume increases significantly; Simon has basically reached the maximum volume level. But if Simon stops moving all of a sudden, the volume decreases significantly as well. You can see the volume start to decrease. So, this is the basic idea of Capstone 1. In conclusion, what this demonstrates is that there is potential to help people who are having trouble with movement. It is a fun way to have them practice some muscle memory and to practice, psychologically, how they can move again.
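The mapping code itself is not shown in this video, but here is a minimal sketch of the thresholding and region-based mapping just described, building on the difference image from the earlier sketch. The threshold values, region boundaries, number of tempo boxes, and BPM range are illustrative assumptions, not the project's real parameters.

```python
import numpy as np

PIXEL_THRESHOLD = 40    # ignore tiny per-pixel changes (dust, slight shifts)
NORMAL_MOTION = 0.02    # fraction of changed pixels treated as "normal" movement

def map_controls(diff):
    """Map a grayscale difference image to (volume, tempo_bpm, melody_on, beat_on)."""
    h, w = diff.shape
    moved = diff > PIXEL_THRESHOLD            # per-pixel motion mask

    # Overall motion -> volume: hold 50% for normal movement, louder for more.
    motion_ratio = moved.mean()
    if motion_ratio <= NORMAL_MOTION:
        volume = 0.5
    else:
        volume = min(1.0, 0.5 + motion_ratio)

    # Left half -> discrete tempo levels: the higher the hand, the faster the tempo.
    left = moved[:, : w // 2]
    rows_with_motion = np.where(left.any(axis=1))[0]
    if rows_with_motion.size:
        level = 4 - int(rows_with_motion.min() / h * 5)   # 5 boxes, top = fastest
        tempo_bpm = 60 + 20 * level
    else:
        tempo_bpm = None                                   # no tempo change this frame

    # Right half -> melody on/off; bottom strip -> beat on/off.
    melody_on = moved[:, w // 2 :].any()
    beat_on = moved[int(0.8 * h):, :].any()

    return volume, tempo_bpm, melody_on, beat_on
```

In a loop, each new difference image would be passed through something like `map_controls`, and the resulting values would be used to update the MIDI playback (volume, tempo, and which parts are enabled).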
So, this is just an entertaining way for them to help themselves, and it is also an entertaining way to play around with music. We are very excited to see what you guys can create using this as a platform, so show us the cool stuff you can do. We will see you next time.