Have you ever watched an entire movie on a phone? It's not quite the same experience as watching it in a movie theater, is it? Taking something designed for a theater or a TV and shrinking it down to a few inches degrades the experience, but it still mostly works. But what if you were to go the other way? What if you were to take your favorite phone app and put it in an IMAX theater? Small icons would be blown up to 10 feet in size, and they'd be uncomfortable to look at. And of course, you'd lose the app's interactivity, making it practically useless.

Different display environments usually call for different design decisions. Rectangular screens include TVs, computer monitors, movie theaters, IMAX theaters, and phones. Non-rectangular screens include domes, virtual reality, augmented reality, curved screens, and other custom displays like those used for concerts or projections on buildings. These differ not only technically, with different aspect ratios and fields of view, but also in the experiences they create for audiences. Some software, like game engines, lets you export the same scene to multiple platforms. This makes switching display environments easy from a technical standpoint, but on the design side, you need to do more work if you want to create a good user experience.

Many non-rectangular screens also fall under the category of immersive screens, since they cover the viewer's peripheral vision. Even if you look straight ahead, your vision is still fully surrounded and immersed by the visuals. Extremely large rectangular screens like IMAX screens can be considered immersive for this reason as well. There's a danger when designing for these screens: a significant portion of the population gets simulator sickness in immersive environments. It produces the same physical responses as motion sickness, even though you're not actually in motion.
Viewers can become disoriented, get uncomfortable, develop headaches, and even throw up. There are a few different reasons why this can happen: some we can fix with better design, and some we can't. When your eyes report that you're moving but your body's sense of gravity doesn't agree, this sensory conflict can cause discomfort. We can minimize it by making sure that camera moves are slow and smooth, and that visual motion cues match what you would expect, with things like motion blur. In VR, the fact that the screen is so close to your eyes can cause headaches and eye strain, and a slow frame rate adds further visual stress.

Slow, continuous camera moves are ideal for immersive environments, but fast, rapid movements and cuts can better retain audience attention on phones and small screens, and they're popular on platforms like YouTube. When designing a camera move for different screens, you'll notice that immersive screens like VR or domes don't have edges from which an object can appear off screen. This makes it hard to take people by surprise, because you can't hide things off the edge, and it also means you can't use all the clever camera tricks we use to hide data artifacts and box edges.

Some non-rectangular screens present another design challenge: audiences are able to look in many directions, so they may not be facing the action of the scene. There are techniques to help with this, including on-screen motion cues and sound design. Certain dome theaters have audiences predominantly facing one direction, which points them toward one area of the screen called the sweet spot. This is where most of your action should happen. Environments like this affect your cinematography choices, and traditional framing rules don't carry over exactly. The rule of thirds doesn't exactly apply, for instance, but you can still move your action to the left and right of the sweet spot.
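The "slow and smooth" advice above can be sketched in code. This is a minimal, engine-agnostic illustration (all names here are made up for the example): a smoothstep easing curve ramps the camera's velocity up and down gradually, so the move never starts or stops abruptly, since sudden acceleration is a common simulator-sickness trigger.

```python
# Sketch of a slow, smooth camera move for immersive screens.
# Smoothstep easing has zero slope at both ends, so the camera
# accelerates and decelerates gently instead of jerking.

def smoothstep(t: float) -> float:
    """Ease-in/ease-out curve: 0 at t=0, 1 at t=1, zero slope at both ends."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def camera_position(start, end, t):
    """Interpolate between two camera positions with eased timing."""
    s = smoothstep(t)
    return tuple(a + (b - a) * s for a, b in zip(start, end))

# A slow 10-second dolly move, sampled once per second:
for second in range(11):
    pos = camera_position((0.0, 0.0, 0.0), (0.0, 0.0, 100.0), second / 10.0)
```

The same easing idea applies to camera rotations, which are even more sensitive in immersive environments than translations.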
Having the action around the sweet spot also makes it easier to convert between dome and rectangular formats. If you're designing for a 3D stereoscopic experience, you're mimicking the way our eyes work. The reason we see 3D depth is that we have two eyes looking at objects from slightly different perspectives. Our eyes are a few centimeters apart, but the stereo cameras for visualization don't need to be. Think of it like this: when you look up at the night sky, you know that each star you see is a different distance away from you, but you can't exactly tell that by looking at them. The stars are all different sizes and brightnesses, but for the most part they look flat, as if they were just projected onto a sphere around you. Now, say that you grow a trillion times in size. You've turned into a galactic giant, and your eye separation is now almost a light-year across. When you're that large, the stars no longer appear flat, because their distances are at scales similar to your size. You can actually see that some stars are closer to you than others.

When you're dealing with extremely large astronomical or extremely small molecular data, you can get a lot more depth information out of your data by changing your eye separation, or interocular distance. Stereo depth is one more image dimension into which you can pack information about your data. When you're dealing with data from the physical sciences, objects often interact across vastly different scales. To make a camera move that takes you from looking at the entire Milky Way to looking around the area of a single star, you can adjust the interocular distance between the left- and right-eye cameras as you move through the dataset. Here's a graph showing the eye separation over time for this particular visualization. The best way to create a stereo image is to have two cameras that are parallel to each other and face the same direction.
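One way to implement that kind of eye-separation adjustment is to interpolate the interocular distance in log space as the camera travels, so the apparent stereo depth stays comfortable across many orders of magnitude. This is a hypothetical sketch, not the curve from any particular production; the separation values (in light-years) are made-up illustrative numbers.

```python
# Sketch: blending interocular distance across vastly different scales.
# Log-space (geometric) interpolation changes the separation by a constant
# factor per unit of t, which suits data spanning orders of magnitude.

def eye_separation(t: float, sep_far: float, sep_near: float) -> float:
    """Eye separation at progress t in [0, 1], blended geometrically."""
    return sep_far * (sep_near / sep_far) ** t

# Flying from the whole Milky Way (separation of thousands of light-years)
# down toward a single star (a small fraction of one):
for frame in range(5):
    t = frame / 4
    sep = eye_separation(t, sep_far=5000.0, sep_near=0.05)
```

A linear blend would spend almost the entire move at near-galactic separations and then collapse abruptly at the end; the geometric blend distributes the change evenly in perceptual terms.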
You might be tempted to have your two cameras point directly at your object of interest. This does create a 3D image, but it introduces an unnatural vertical parallax effect that can cause audience discomfort. Stereo design is something you'll need to think about if you're designing for VR displays or 3D theaters, but it's irrelevant if you're making a 2D YouTube video. Each target display environment has its own needs that you'll have to consider to create the most effective visualization possible.

Fun fact: virtual reality technology has been around since the 1960s. It became very popular in the early nineties with the release of the CAVE, a room-scale VR environment, as well as commercially available headsets from companies like Sega and Nintendo. But by the late 90s, it proved too expensive to be commercially viable. With the advent of mobile devices, the idea that anyone could own VR in their own home became popular again, and we're now experiencing a VR renaissance.
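The vertical parallax problem with "toed-in" cameras mentioned above can be demonstrated with a little projection math. In this sketch (plain pinhole-camera geometry, not any engine's API), we project a single off-center point into each eye and compare the vertical screen coordinates: parallel cameras agree exactly, while cameras aimed at a shared convergence point disagree, which is the vertical disparity that strains viewers' eyes.

```python
import math

# Why toed-in stereo cameras create vertical parallax and parallel ones don't.

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def screen_y(cam_pos, forward, point, world_up=(0.0, 1.0, 0.0)):
    """Vertical screen coordinate of a point in a simple pinhole camera."""
    f = normalize(forward)
    r = normalize(cross(world_up, f))    # camera-space right
    u = cross(f, r)                      # camera-space up
    d = tuple(p - c for p, c in zip(point, cam_pos))
    return dot(d, u) / dot(d, f)         # perspective divide

eye = 0.2                                # eye separation (arbitrary units)
left, right = (-eye / 2, 0.0, 0.0), (eye / 2, 0.0, 0.0)
target = (0.0, 0.0, 5.0)                 # convergence point for the toe-in rig
point = (2.0, 1.0, 5.0)                  # an off-center point in the scene

# Parallel rig: both eyes face straight down +z; vertical coords match.
par_l = screen_y(left,  (0, 0, 1), point)
par_r = screen_y(right, (0, 0, 1), point)

# Toed-in rig: each eye aims at the target; vertical coords disagree.
toe_l = screen_y(left,  tuple(t - c for t, c in zip(target, left)),  point)
toe_r = screen_y(right, tuple(t - c for t, c in zip(target, right)), point)
```

For points on the centerline the two rigs agree; the disparity only appears off-axis, which is why the artifact is easy to miss in tests but uncomfortable across a full frame.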