When we see things, we're experiencing light. Photons leave a light source, bounce off an object, and arrive at our eyes. Without a light source, there is nothing to see. Photography is the art of capturing light, and filmmakers have incorporated that art into their visual vernacular. They convey emotion with the color of lights, and they outline shapes by casting shadows. But how adaptable is that language to scenes featuring scientific data? Understanding how to light your visualization scene requires you to understand the techniques of filmmakers as well as the scientific reality of how light interacts with your science. Film lighting often takes liberties with reality, and in visualization, we want to avoid lying or being misleading. In fact, many science stories can be made clearer with more accurate lighting, for instance, showing how much sunlight a leaf is receiving for photosynthesis.

The first thing to understand is that objects in a scene may or may not be emissive. That is, they might generate their own light. Most of the material we interact with on a daily basis is not emissive: we see it because light from an external source is bouncing off of it and arriving at our eyes. But some objects are emissive, such as fire, a computer screen, or bioluminescent algae. Astronomers frequently work with emissive objects like stars and nebulae. Visualizations of these phenomena don't benefit from external light sources. Even if there are other light sources in the scene, the light the objects generate themselves tends to overpower any incoming light. There are no light sources programmed into this scene because the stars and clouds all light themselves, and there's nothing left on which to cast a shadow. But for any scene with non-emissive objects, you will need to set up at least one light source, and usually several. A good starting place is a common setup in filmmaking: three-point lighting.
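The emissive-versus-reflective distinction reduces to a simple sum. Here is a minimal sketch (scalar brightness only, invented function names, not any renderer's actual API):

```python
# A minimal sketch of the emissive/non-emissive distinction: a surface's
# final brightness is whatever it emits itself plus whatever external light
# it reflects. Names here are illustrative, not a real rendering API.

def shade(emission, incoming_lights, albedo):
    """Brightness = self-emitted light + reflected external light."""
    reflected = sum(incoming_lights) * albedo
    return emission + reflected

# A star-like emissive surface: its own emission swamps incoming light.
print(shade(emission=10.0, incoming_lights=[0.2, 0.1], albedo=0.5))

# Ordinary diffuse material: visible only through reflected light.
print(shade(emission=0.0, incoming_lights=[0.2, 0.1], albedo=0.5))
```

This is why external lights barely matter for the star scene: the emission term dominates the sum, and without it, the object depends entirely on light you place in the scene.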
Three-point lighting can be overkill, and it isn't especially true to nature, but it's a good place to start. The three points of light are a key light, which illuminates the front of your subject; a rim light, which sculpts the subject by highlighting sharp edges; and a fill light, which fills in any areas of the subject still left in shadow.

Knowing which types of lights to use is also important. In the live-action world, filmmakers get light from spotlights, big umbrella diffusers, and sometimes natural sunlight. In computer graphics, light sources are separated into types based on the algorithms that trace their light. The most basic light is usually called an ambient light. An ambient light is most useful during pre-production, just to illuminate the scene so your subject is visible and you're not looking at a blank screen. In many tools it effectively treats the camera as the light source, so wherever you are looking from, your subject is lit. The result is a very dull, flat illumination, and there's nothing particularly realistic about it.

The next most common light you will encounter is called a point light. You can think of point lights as light bulbs placed around your scene: light emits outward from a single distinct point in all directions. Point lights are computationally efficient but not exactly realistic. Even a light bulb in the real world has some surface area; it's not infinitesimally small. The point-light algorithm gives shadows a very sharp edge that can feel fake. Directional lights are similar, except instead of projecting light outward from a point, all of their rays travel in a single direction, as if from an infinitely distant source. Directional lights are considered an efficient way to simulate sunlight. They also cast very harsh shadows, but that can be realistic for a clear, sunny day. To get softer, more naturalistic shadows, you may choose to use an area light.
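Before moving on to area lights, the point and directional lights just described can be sketched with simple Lambertian diffuse shading (plain Python tuples, illustrative pseudo-renderer code, not any particular package's API):

```python
import math

# Lambertian diffuse shading under the two light types just described.
# Vectors are plain (x, y, z) tuples; names are illustrative only.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def point_light_diffuse(surface_pos, normal, light_pos, intensity):
    # Point light: rays radiate outward from one position in all directions,
    # so the light direction depends on where the surface point is.
    to_light = normalize(tuple(l - s for l, s in zip(light_pos, surface_pos)))
    return intensity * max(0.0, dot(normal, to_light))

def directional_light_diffuse(normal, light_dir, intensity):
    # Directional light: every ray travels the same direction (sun-like),
    # so the surface position doesn't matter at all.
    to_light = normalize(tuple(-c for c in light_dir))
    return intensity * max(0.0, dot(normal, to_light))

# A surface facing straight up, lit from directly overhead, is fully lit
# either way:
n = (0.0, 1.0, 0.0)
print(point_light_diffuse((0, 0, 0), n, (0, 5, 0), 1.0))  # 1.0
print(directional_light_diffuse(n, (0, -1, 0), 1.0))      # 1.0
```

Note that the directional light never asks where the surface is, which is exactly why it is a cheap, reasonable stand-in for a source as distant as the sun.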
Area lights emit light from a simple surface like a rectangle or a disk, which is a more realistic proxy for a light source like a panel of fluorescent bulbs or a sunlit window, so they are well suited to indoor environments. Area lights can be far more expensive to compute than point or directional lights, though: the bigger the surface, the more light rays have to be calculated.

Often, real-world environments have a lot of atmospheric light that's bouncing around and not coming from any particular light source. This is the type of light that can really take a visualization from computer graphics to photographic realism. One way to achieve this in the computer is through an environment light, sometimes called image-based lighting or a dome light. An environment light uses an image mapped over a sphere or a hemisphere to cast colored light down onto your scene. This is particularly well suited to outdoor environments with atmospheric lighting that varies by altitude, cloud cover, and time of day. Another technique that helps with this atmospheric scattering is called ambient occlusion. Ambient occlusion is not technically a light with a geometric position; it's an efficient rendering algorithm that approximates how diffuse light gets blocked in tight corners and crevices, darkening them. It's an extremely effective way to bring more realism to a scene that features hard-surface data.

Most computer graphics light sources will also allow you to define the falloff of the light: how much its brightness dims as you get farther from the source. While this gives you extraordinary creative flexibility, you will typically want to choose a quadratic, or inverse-square, falloff, because in the real world light dims with the square of your distance from its source. On occasion, a dataset will include both diffuse and emissive material.
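The quadratic falloff just mentioned is easy to make concrete (illustrative Python, not any renderer's API; the exponent parameter stands in for the artistic override some tools expose):

```python
# Inverse-square falloff in one line: received intensity is the source
# intensity divided by distance squared. Illustrative only.

def falloff(intensity, distance, exponent=2.0):
    # exponent=2.0 matches the physical inverse-square law; other values
    # are creative license.
    return intensity / (distance ** exponent)

# Doubling the distance quarters the light; quadrupling cuts it to 1/16:
print(falloff(100.0, 1.0))  # 100.0
print(falloff(100.0, 2.0))  # 25.0
print(falloff(100.0, 4.0))  # 6.25
```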
You might see this mix in the way a city's lights, viewed from space, illuminate the clouds above it, or in the way diffuse smoke rising from a fire is brightened by the flames beneath it. In this example, an explosive collision turns two diffuse objects into an emissive spinning disk of molten rock and gas. As the collision begins, you can see how important it is to light the diffuse material from the emissive material. In this situation, where our churning, morphing data was itself a source of light, we chose to use a more advanced light type called a geometry light. A geometry light is a special case of an area light: instead of casting light from a simple rectangle or disk, it casts light from the polygons that define an arbitrary surface shape. This makes the geometry light more computationally expensive as the geometry becomes more complicated. If you really wanted to render the light from a light bulb as precisely as possible, you could use a geometry light to represent the bulb's teardrop shape, but the computational cost is almost never going to be worth it.

You will need to analyze the subject matter of your visualization to understand where light should come from and what types of lights to use. Naturally occurring light sources can often be used to dramatic effect. A dark scene or a scene with harsh lighting can appear mysterious and threatening, like the looming storm clouds in this tornado scene. You can use the transition of a shadow to build anticipation and reveal an important feature. Lighting can accentuate the edges that help define complex shapes, it can give a sense of the time of day, and blocking out the light can surprise you with dark features on a dark background. One other consideration is what you will do to represent light sources when they appear on camera.
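An aside on the geometry light described above: under the hood, renderers typically turn such a light into many sample points scattered uniformly across the emitting polygons and trace light from those points. Here is a minimal sketch of the standard building block, sampling one triangle with the barycentric square-root trick (plain Python, not tied to any renderer):

```python
import random

# Geometry lights cast light from arbitrary polygonal surfaces. A common
# building block is drawing uniform random sample points on each triangle.
# The sqrt trick below keeps the samples uniformly distributed over the
# triangle's area rather than bunching near one vertex.

def sample_triangle(a, b, c, rng=random):
    """Return a uniformly distributed random point on triangle (a, b, c)."""
    r1, r2 = rng.random(), rng.random()
    sqrt_r1 = r1 ** 0.5
    u = 1.0 - sqrt_r1            # barycentric weights; u + v + w == 1
    v = sqrt_r1 * (1.0 - r2)
    w = sqrt_r1 * r2
    return tuple(u * pa + v * pb + w * pc for pa, pb, pc in zip(a, b, c))

# One sample point on a unit right triangle in the z = 0 plane:
print(sample_triangle((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```

Since every extra polygon adds more surface to sample, this also shows why the cost climbs with the complexity of the emitting geometry.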
Many computer graphics tools will not actually render light sources that appear on camera, largely because the way a physical camera perceives a light source is dominated by the glare its intense, direct light creates on the lens. Typically, that glare is recreated as a 2D effect later in the process.

Lighting is one of the most computationally expensive elements in your scene in terms of render time, so it's important to be efficient about light placement and the types of lights used wherever possible. But don't artificially limit yourself to simple lighting setups when a more complex one gives a better result; modern software has gotten quite good at optimizing setups with multiple lights. You can also prototype lighting arrangements more efficiently by reducing the sampling quality of the lights. Lower sampling creates unsightly noise across the image, but that is easily fixed by increasing the sampling value later, when you're ready to render your final images.

Lighting can dramatically change how audiences perceive your visualization, and it is often the difference between something that looks like it was made on a computer and something that looks like it was captured from the real world. That realism lets viewers interpret your data the way their eyes have been trained to work in the real world, thereby helping them understand complex science.

Fun fact: many large telescopes have problems with stray light washing out their imagery. In response, the Hubble telescope's interior was painted with a special coating called Aeroglaze, one of the darkest materials ever produced, until 2014, when a substance called Vantablack was introduced, which absorbs 99.96 percent of all visible light. Vantablack is so black, you can't even see a red laser shined on it.
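Circling back to the earlier point about sampling quality: low-sample renders look noisy because each pixel is a Monte Carlo average of random light samples, and the visible noise shrinks roughly with the square root of the sample count. A toy sketch, with a random number standing in for a light sample (assumed illustration, not any renderer's API):

```python
import random

# Each "pixel" averages num_samples random light contributions whose true
# mean is 0.5. Fewer samples -> pixel-to-pixel spread (visible noise);
# more samples -> estimates converge toward the true value.

def render_pixel(num_samples, rng):
    """Average num_samples random light contributions; true value is 0.5."""
    return sum(rng.random() for _ in range(num_samples)) / num_samples

rng = random.Random(42)
for n in (4, 64, 1024):
    estimates = [render_pixel(n, rng) for _ in range(5)]
    spread = max(estimates) - min(estimates)
    print(f"{n:5d} samples: spread across pixels = {spread:.3f}")
```

This is why dropping the sampling value is a safe way to prototype: the noise is an artifact of the estimate, not of your lighting setup, and it disappears once you raise the sample count for the final render.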