2D Light and Shadow (2/3)

Posted by aeron on June 23, 2014, 8:46 p.m.

Welcome back! In the last part I went over the use of render textures and blend modes for simple lighting. Now it's time to take a brief tangent into the math behind typical 3d lighting. I'll go over the basic shading methods we take for granted when working in 3d, and we'll get to play with some example code that puts the math to good use. Let's get started!

Part 2: Mathematical model

What happens when you turn on a real light? Well, usually the source emits light fairly equally in all directions. These electromagnetic waves travel at the speed of light until they interact with matter. When they hit, say, a flat surface, there's a chance (depending on the material) that some of the light reflects at exactly the mirror angle between the incoming ray and the surface. This is called specular reflection, and it is responsible for the "shininess" of materials.

For most materials, however, most of the light is diffusely reflected. What does that mean? Basically, the light is reflected at many angles rather than just one. In the ideal diffuse case, the viewing angle does not affect the apparent brightness of the surface: as long as light reaches a patch of the surface, that patch appears lit from any viewpoint, whether or not a ray reflects perfectly toward the viewer. (This ideal case is known as Lambertian reflectance, and we'll implement it later with a single dot product.) This is consistent with the way many real-world materials are lit.

[Image: Reflection types — the ways light can reflect from a surface. More can be read on the subject on Wikipedia.]

This is all fine and dandy, but how can this be applied to a game? Do we really need to simulate each ray of light just to get the final picture? Well, the answer to both questions lies in history. Raytracing is a method of rendering that actually does simulate individual light rays, and it has been around since the early days of CGI. However, it is extremely expensive to compute: a movie like Toy Story could take hours or even days to render a single frame. That is less than ideal for games, which need full frames rendered in milliseconds.

Enter the graphics pipeline. It turns out smart folks have been coming up with better and better approximations of these light interactions for decades. The de facto solution is to work backwards, in a sense, with a shader. Instead of tracing light as it bounces around and eventually hits the lens, we look at every pixel to be rendered and where it sits in space (in particular, which way it is facing). We compare its position and orientation to the position of each light source. From there we can calculate the diffuse and specular components rather painlessly with some simple vector math and shade the object. Don't worry, the hard parts have already been worked out for us, my friend. If you've worked in any 3d environment, you've probably been exposed to the solutions in the form of material shaders. Do the terms Lambert, Phong, or Blinn-Phong ring a bell?

Now, it's worth looking into the math behind them so we can understand and emulate their effects.

Crash course on vectors

So now is a good time to digress and talk about vectors for a moment, because we will rely on them heavily. A vector is a point in space (in our case, 3d space) with a component for each dimension, measured relative to the origin (0,0,0). You might remember vectors from math class as a sort of one-dimensional matrix, along with terms for operations like dot product and magnitude. If you don't, that's fine; I'll explain briefly. Sometimes it is useful to picture a vector as a line from (0,0,0) to (x,y,z) in space; other times it is more useful to just picture the point in space.

Say you have two of these vectors, A and B. You can apply some basic operations between them to get different types of information. A - B gives you the difference between the two points in space. It can be imagined as the line between A and B, but remember, its origin is now (0,0,0) instead of the original A or B. If you take the magnitude (or length) of this vector, you are getting the distance from (0,0,0) to (A.x-B.x, A.y-B.y, A.z-B.z), which is equivalent to the distance between A and B.
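For instance, here's a throwaway GLSL example with made-up values:

glsl
// Arbitrary example points
vec3 A = vec3(3.0, 4.0, 0.0);
vec3 B = vec3(0.0, 0.0, 0.0);
vec3 diff = A - B;          // the line from B to A, re-rooted at the origin
float dist = length(diff); // 5.0 -- the distance between A and B
// GLSL's built-in distance(A, B) computes exactly length(A - B)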

Vectors are also useful for representing directions. Picture the vector as a line from the origin to a point, with an arrowhead at the end. When considering vectors "shooting out" along this direction, the actual magnitude of the original vector matters less than the direction. Vectors used for this purpose are almost always normalized, which simply means they are scaled to have a magnitude of 1. This makes them handy in many ways; for instance, you can multiply a normalized vector by any scalar (a single number, not a vector) to get a point in that direction at exactly that distance.
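In GLSL that looks like this (example numbers again):

glsl
vec3 dir = normalize(vec3(3.0, 4.0, 0.0)); // (0.6, 0.8, 0.0) -- magnitude 1
vec3 p = dir * 10.0;                       // (6.0, 8.0, 0.0) -- 10 units along dir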

The next mathematical construct we need to consider is called the dot product. The dot product is a scalar value derived from two vectors. Geometrically, it equals the product of the magnitudes of both vectors and the cosine of the angle between them. Since we will only use it with normalized vectors, the magnitudes drop out (they are both one) and we are left with just the cosine of the angle between the two directions. The important takeaway in any case is as follows: when two normalized vectors point in the same direction (parallel), the dot product is one. When they are perpendicular (orthogonal, 90 degrees apart), the dot product is zero. So the more similar in direction the vectors are, the closer the dot product gets to one. This is hugely useful for our purposes. And look at that, we now have all the mathematical background we need to actually apply this to lighting. Yay!
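For intuition, a few concrete dot products between unit vectors (arbitrary examples):

glsl
vec3 up = vec3(0.0, 0.0, 1.0);
float same = dot(up, vec3(0.0, 0.0, 1.0));            // 1.0 -- identical directions
float perp = dot(up, vec3(1.0, 0.0, 0.0));            // 0.0 -- 90 degrees apart
float diag = dot(up, normalize(vec3(0.0, 1.0, 1.0))); // ~0.707 -- 45 degrees apart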

Lighting a plane

Now let's get our hands on the actual code!

For this example I'll be using a shader language (GLSL to be specific). If you're familiar with shaders, you probably already know how useful they are for working with vectors and, of course, graphics in general. If you are new to shaders, I would suggest finding another tutorial to teach you the basics.

We'll be using the shader to process the game screen, so we actually have to change up the way we render our game. Instead of drawing straight to the screen, we must defer our draw calls to a render texture, and then send a reference to that texture to the shader. This process varies by game environment, so consult your documentation on how to send variables to a shader after loading it. I'll leave it as an exercise for you, the reader, to wire your shader up properly. It will take some practice and consideration, especially if you have other stages of rendering (such as HUD or other post processing) to deal with.

In this example we only need to send one texture to the shader: the scene itself. Whenever the shader needs access to an outside value, you declare it with the uniform keyword. Here we will also send one vector with the screen resolution and another with the mouse coordinates.

glsl
uniform sampler2D screenTexture;
uniform vec2 screenResolution;
uniform vec2 mouseCoords;

void main(void)
{
	// shader goes here
}

The following code will all go into the main() function. The first step is to read the texture for the current pixel color. Fairly straightforward:

glsl
// Read the main texture color
vec2 uv = gl_FragCoord.xy / screenResolution;
vec3 color = texture2D(screenTexture, uv).rgb;

Next we calculate the direction the light is coming from relative to the current pixel. Remember from the vector section that this value is only useful to us when it is normalized.

glsl
// Light position (hovering 50 units above the screen plane)
vec3 light_pos = vec3(mouseCoords, 50.0);

// Direction from the current pixel toward the light, normalized
vec3 light_normal = normalize(light_pos - vec3(gl_FragCoord.xy, 0.0));

Now we'll compare this direction vector to the direction the surface is facing. In our case, the surface faces straight out of the screen, towards the camera.

glsl
// Surface normal at current pixel
vec3 normal = vec3(0,0,1); // Straight out of the screen, toward the camera

// Lambertian reflectance
float diffuse = max(0.0, dot(normal,light_normal));	

It's almost too easy now that we know the math behind it. I forgot to mention it earlier, but the reason for that max() call is to discard negative dot products, since they represent an angular difference greater than 90 degrees (light hitting the surface from behind). The next step is to take this brightness value and tint it. Then we multiply it by the original scene pixel color to get the composited scene:

glsl
// Apply light color
vec3 light_color = vec3(1,0.65,0.35);
vec3 light_out = light_color * diffuse;

// Apply lighting
//color = vec3(1,1,1); // Uncomment to see raw lightmap
gl_FragColor = vec4(color*light_out,1.0);

A live demo of the above can be found here.

That is all that is required to make a light! And it is calculated from a reasonably accurate model. Note that this algorithm treats lights as infinitely powerful: the apparent dimming as you move away comes from the light direction approaching the plane of the surface at grazing angles, not from distance itself. Of course, the good thing about this being just math is that we can alter the lighting curves as we please. We could multiply the diffuse value above by a clamped value based on the light distance to make the light actually lose power with distance. We can also raise our final lighting values to a power to change the nature of the falloff: a power greater than one makes the light smaller and more acute near the center, while a power between zero and one makes the light bigger and rounder with a sharper falloff at the edge. A sketch of both tweaks follows.
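Something like this could go right after the diffuse calculation (the light_radius value and the exponent are made-up numbers here, not anything from the demo):

glsl
// Hypothetical distance falloff; light_radius is an assumed reach in pixels
float light_radius = 300.0;
float dist = distance(light_pos.xy, gl_FragCoord.xy);
diffuse *= clamp(1.0 - dist / light_radius, 0.0, 1.0);

// Reshape the curve: an exponent > 1 tightens the hotspot,
// one between 0 and 1 widens the light with a sharper edge
diffuse = pow(diffuse, 2.0);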

Generalizing the light calculation

It's possible to abstract the above steps into a function and reuse it for multiple lights. Here's a generalized version of the lighting calculation:

glsl
vec3 calculate_light(vec3 position, vec3 color, vec2 texture_coords, vec3 surface_normal)
{
	// Diffuse contribution of a single light at the given pixel
	vec3 light_normal = normalize(position - vec3(texture_coords,0));
	float diffuse = max(0.0, dot(surface_normal,light_normal));
	return color * diffuse;
}

A live demo showing off this function can be found here.

With this, we can process a large number of lights very succinctly; it would otherwise be very repetitive to do many light calculations. Now we should consider the game-engine side of things. We somehow have to pass information about all our lights to the shader, and it would be unwise to write a separate set of uniforms into our shader for every light. Instead, it is possible to send entire arrays of values to the shader all at once. In GLSL this looks like:

glsl
uniform vec3 light_col[128], light_pos[128];
uniform int lights;

The 128 is the maximum number of lights you want to support. light_pos contains the light positions and light_col contains the corresponding RGB color values for each light. The lights value tells the shader how many entries in the arrays are actually in use, since they won't always be full. Again, you'll have to consult your documentation to find out exactly how to send these arrays of vectors from your game to the shader; it should be fairly similar to sending a texture or any other value. A loop like the sketch below can then accumulate the contribution of each light.
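Here's a minimal sketch of that loop in main(), reusing calculate_light and the surface normal from before (the constant loop bound with an early break is for older GLSL, which requires loop limits to be known at compile time):

glsl
// Accumulate the contribution of every active light
vec3 total_light = vec3(0.0);
for (int i = 0; i < 128; i++)
{
	if (i >= lights) break; // only the first `lights` entries are valid
	total_light += calculate_light(light_pos[i], light_col[i], gl_FragCoord.xy, normal);
}
gl_FragColor = vec4(color * total_light, 1.0);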

Now, I can't demo this on Shadertoy since it limits which uniforms you can send, but believe me, it is a very short leap from the last demo. Once you are managing your lights in game and sending them to the shader, you are basically home free. There is hardly a cheaper or more accurate way to handle lighting, and we haven't even talked about the best part yet! So yes, with a little extra work we have set ourselves up perfectly for the final stretch.

I'll stop here for now, but in the next installment we will dig deeper into the world of normals and consider the other type of reflection mentioned earlier. Thanks for staying tuned, and if anything isn't clear in this part just leave your question in the comments and I'll elaborate as best I can.

Until next time…
