Tuesday 5 January 2016

Basic lighting (part 14)

We're slowly getting through the boring stuff; next up is basic lighting. I'm keeping this simple with a single light source for now, we'll look at more complex lighting at some later time.

First let's go through a tiny bit of theory. Our basic lighting model will implement three types of lighting: ambient, diffuse and specular lighting.
This type of shading is known as Phong shading.

The ambient part of our lighting model is the easy part. Imagine, if you will, that you are in a dark room and you turn on a single light that illuminates the entire room. Now hold up an object like a book and look at the side that is not facing the light. Since no light from our light source hits the object directly on that side, you might expect it to be completely in shadow and black. Yet you'll still see its colors. This is because the light in the room bounces off all the walls and other surfaces, and the object is lit indirectly.
Accurately calculating this is a complex task, but in computer graphics we take a rather simple shortcut by stating that the object is always illuminated by a fixed amount.

The diffuse part of our lighting model requires a bit more calculation. When light hits a surface it scatters. Depending on the type of surface it can be reflected in all directions relatively evenly (giving a nice solid appearance) or be reflected in roughly a single direction (making the surface a mirror). More complex surfaces might even react differently to different wavelengths.
For our lighting we assume the first situation: the light gets reflected in all directions evenly. The intensity of the reflected light is a function of the angle at which the light hits the surface, ranging from no reflected light when the light travels parallel to the surface to full intensity when the light hits the surface at a perfect 90 degree angle.
To calculate this we use the normal vector of the surface. The normal vector is a vector that is perpendicular to the surface, and we simply calculate the angle between the normal vector and the vector pointing from the surface to the light source. By taking the cosine of this angle we get a handy value that is 1.0 at full illumination, 0.0 when the light travels parallel to our surface, and negative when the light lies behind the surface and we're thus in shadow.
Luckily for us, calculating this cosine is incredibly simple, as it happens to be the dot product of the two unit vectors. So we normalize our normal vector and our light direction vector and call the handy function dot().

The specular part of our lighting is the most complex one presented here. Here we are looking at light reflecting off the surface in a specific, narrow direction; if the light reflects towards our viewpoint we see its reflection. Our normal vector is used again, but now to determine the direction of the light reflected off of our surface; we have a nice function called reflect() for this.
We again calculate a cosine, but now between our reflected light vector and the vector pointing from the surface to our eye.

Because our specular calculation needs a vector from the surface to our eye/camera/viewpoint, it makes sense to do all lighting calculations after we apply our model and view matrices but before our projection matrix. As long as we also adjust the position of our light using our view matrix we're all set.

Normal vectors


As we discussed above we need the normal vector of our surface to perform our lighting calculations. For our cube this is incredibly simple: as each face of our cube is a flat surface, the normal vector applies to the entire face and can easily be calculated as the cross product of two edge vectors of the face.

But for more complex shapes this becomes a lot harder. Also, when we look at curved surfaces, even though we're rendering them with flat triangles, we can interpolate the normals between the vertices to create the illusion of a curved surface (we'll see an example of this later). For this reason OpenGL assumes we store a normal for each vertex, and just like with texture mapping we have to duplicate vertices that are used by different faces with different normals.

As all this can be relatively complex it makes a lot of sense to calculate all the normals once and store them for each vertex. In fact, most 3D modeling software does this for us and handily stores all the normals along with the model.

I've adjusted our vertex structure and vertex array to include our normals for our cube:
// we define a structure for our vertices
typedef struct vertex {
  vec3    V;          // position of our vertice (XYZ)
  vec3    N;          // normal of our vertice (XYZ)
  vec2    T;          // texture coordinates (XY)
} vertex;

vertex vertices[] = {
  // front
  -0.5,  0.5,  0.5,  0.0,  0.0,  1.0, 1.0 / 3.0,       0.0,          // vertex 0
   0.5,  0.5,  0.5,  0.0,  0.0,  1.0, 2.0 / 3.0,       0.0,          // vertex 1
   0.5, -0.5,  0.5,  0.0,  0.0,  1.0, 2.0 / 3.0, 1.0 / 4.0,          // vertex 2
  -0.5, -0.5,  0.5,  0.0,  0.0,  1.0, 1.0 / 3.0, 1.0 / 4.0,          // vertex 3

  // back
   0.5,  0.5, -0.5,  0.0,  0.0, -1.0, 1.0 / 3.0, 1.0 / 2.0,          // vertex 4
  -0.5,  0.5, -0.5,  0.0,  0.0, -1.0, 2.0 / 3.0, 1.0 / 2.0,          // vertex 5
  -0.5, -0.5, -0.5,  0.0,  0.0, -1.0, 2.0 / 3.0, 3.0 / 4.0,          // vertex 6
   0.5, -0.5, -0.5,  0.0,  0.0, -1.0, 1.0 / 3.0, 3.0 / 4.0,          // vertex 7
   
   // left
  -0.5,  0.5, -0.5, -1.0,  0.0,  0.0, 1.0 / 3.0, 1.0 / 4.0,          // vertex 8  (5)
  -0.5,  0.5,  0.5, -1.0,  0.0,  0.0, 2.0 / 3.0, 1.0 / 4.0,          // vertex 9  (0)
  -0.5, -0.5,  0.5, -1.0,  0.0,  0.0, 2.0 / 3.0, 2.0 / 4.0,          // vertex 10 (3)
  -0.5, -0.5, -0.5, -1.0,  0.0,  0.0, 1.0 / 3.0, 2.0 / 4.0,          // vertex 11 (6)

  // right
   0.5,  0.5,  0.5,  1.0,  0.0,  0.0, 1.0 / 3.0, 1.0 / 4.0,          // vertex 12 (1)
   0.5,  0.5, -0.5,  1.0,  0.0,  0.0, 2.0 / 3.0, 1.0 / 4.0,          // vertex 13 (4)
   0.5, -0.5, -0.5,  1.0,  0.0,  0.0, 2.0 / 3.0, 2.0 / 4.0,          // vertex 14 (7)
   0.5, -0.5,  0.5,  1.0,  0.0,  0.0, 1.0 / 3.0, 2.0 / 4.0,          // vertex 15 (2)

  // top
  -0.5,  0.5, -0.5,  0.0,  1.0,  0.0,       0.0,       0.0,          // vertex 16 (5)
   0.5,  0.5, -0.5,  0.0,  1.0,  0.0, 1.0 / 3.0,       0.0,          // vertex 17 (4)
   0.5,  0.5,  0.5,  0.0,  1.0,  0.0, 1.0 / 3.0, 1.0 / 4.0,          // vertex 18 (1)
  -0.5,  0.5,  0.5,  0.0,  1.0,  0.0,       0.0, 1.0 / 4.0,          // vertex 19 (0)

  // bottom
  -0.5, -0.5,  0.5,  0.0, -1.0,  0.0, 2.0 / 3.0,       0.0,          // vertex 20 (3)
   0.5, -0.5,  0.5,  0.0, -1.0,  0.0, 3.0 / 3.0,       0.0,          // vertex 21 (2)
   0.5, -0.5, -0.5,  0.0, -1.0,  0.0, 3.0 / 3.0, 1.0 / 4.0,          // vertex 22 (7)
  -0.5, -0.5, -0.5,  0.0, -1.0,  0.0, 2.0 / 3.0, 1.0 / 4.0,          // vertex 23 (6)
};
Note that we change our attributes so that attribute 0 remains our position, 1 becomes our normal and 2 is now our texture coordinate.

Matrices


First let's revisit our matrices for a minute. So far we've only calculated our model-view-projection matrix, as that was all we needed, but for our lighting calculations we need a few more: our model-view matrix and what is called our normal matrix.

The normal matrix is a matrix that only applies our rotation and can be used to update the direction of our normal vectors.

Because we adjust everything to our view matrix we also apply our view matrix to our normal matrix. The easiest way to get your normal matrix is to take the inner 3x3 matrix of your model-view matrix. This is what I normally do, but it does create issues when your model applies a non-uniform scale. I read somewhere that the solution is to take the inverse of this matrix and then transpose it. Since we may need the inverse of our matrix later on I've gone down this route for now, but as calculating the inverse of a matrix is costly and I rarely use non-uniform scaling anyway, I may revert to just using the model-view matrix directly.

Anyway, because we'll be doing these calculations a lot I've added a structure to shader.h that can store IDs for the most applicable matrices:
// typedef to obtain standard information, note that not all ids need to be present
typedef struct shaderStdInfo {
  GLuint  program;
  GLint   projectionMatrixId;       // our projection matrix
  GLint   viewMatrixId;             // our view matrix
  GLint   modelMatrixId;            // our model matrix
  GLint   modelViewMatrixId;        // our model view matrix
  GLint   modelViewInverseId;       // inverse of our model view matrix
  GLint   normalMatrixId;           // matrix to apply to our normals
  GLint   mvpId;                    // our model view projection matrix
} shaderStdInfo;
There is also a function called shaderGetStdInfo() that populates this structure.
Finally I've added a function called shaderSelectProgram() that binds the shader program referenced by our structure and then calculates and applies all our matrices from a model, view and projection matrix that is passed to it.

What is very important to know is that the GLSL compiler will remove any uniform that isn't actually used in the shader source, so there is no use defining, say, modelViewInverse if you're not using it.
While the code logs that such a uniform doesn't exist, shaderSelectProgram() simply skips it.

There is a way to use a VBO to load all matrices in one go which I may look into at a later date.

Our load_shaders function in engine.c now calls our shaderGetStdInfo function. We still have two uniforms that fall outside of our structure: our light position and our texture sampler (and there are several other uniforms in the shader).

Our engineRender function similarly now calls shaderSelectProgram. 

Our new shaders


The real magic however happens inside of our shaders. The changes to our vertex shader are very straightforward. First, we now have an attribute for our normals.
We also have two new outputs, one for our position (V) and one for our normal (N); for both the correct matrix is applied.
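As a sketch (the uniform names here are assumptions based on our shaderStdInfo structure, and the actual tutorial source may differ slightly), the vertex shader could look like this:

```glsl
#version 330

uniform mat4 modelView;      // our model-view matrix
uniform mat3 normalMatrix;   // matrix to apply to our normals
uniform mat4 mvp;            // our model-view-projection matrix

layout (location=0) in vec3 positions;  // vertex position
layout (location=1) in vec3 normals;    // vertex normal
layout (location=2) in vec2 texcoords;  // texture coordinate

out vec4 V;  // position after our model-view matrix is applied
out vec3 N;  // normal after our normal matrix is applied
out vec2 T;  // texture coordinate, passed through unchanged

void main() {
  V = modelView * vec4(positions, 1.0);
  N = normalMatrix * normals;
  T = texcoords;
  gl_Position = mvp * vec4(positions, 1.0);
}
```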

Our fragment shader has grown substantially, we'll look at each part individually:
#version 330

// info about our light
uniform vec3      lightPos;                         // position of our light after view matrix was applied
uniform float     ambient = 0.3;                    // ambient factor
uniform vec3      lightcol = vec3(1.0, 1.0, 1.0);   // color of the light of our sun

// info about our material
uniform sampler2D boxtexture;                       // our texture map
uniform float     shininess = 50.0;                 // shininess

in vec4           V;                                // position of fragment after modelView matrix was applied
in vec3           N;                                // normal vector for our fragment
in vec2           T;                                // coordinates for this fragment within our texture map
out vec4          fragcolor;                        // our output color
We have a couple of new uniforms. Note that we've added default values for a couple of them so you can set them from code but don't have to:
  • lightPos - the position of our light source with the view matrix applied to it
  • ambient - our ambient factor, our default is 30%
  • lightcol - the color of our light source, white for now
  • boxtexture - we had this one already, our texture sampler
  • shininess - the shininess for our specular lighting
Our input variables also match the output variables of our vertex shader.
void main() {
  // start by getting our color from our texture
  fragcolor = texture(boxtexture, T);  
  if (fragcolor.a < 0.5) {
    discard;
  };
  
  // Get the normalized directional vector between our surface position and our light position
  vec3 L = normalize(lightPos - V.xyz);
We start by getting our color from our texture map as before. Then we calculate L, a normalized directional vector that points from our surface to our light source. We'll need this vector in both our diffuse and specular calculations.
  // We calculate our ambient color
  vec3  ambientColor = fragcolor.rgb * lightcol * ambient;
This is our ambient color calculation, we simply multiply our surface color with our light color and our ambient factor.
  // We calculate our diffuse color, we calculate our dot product between our normal and light
  // direction, note that both were adjusted by our view matrix so they should nicely line up
  float NdotL = max(0.0, dot(N, L));
  
  // and calculate our color after lighting is applied
  vec3 diffuseColor = fragcolor.rgb * lightcol * (1.0 - ambient) * NdotL; 
For our diffuse color we first calculate the dot product of our normal and light vectors.
We then multiply our surface color with our light color and our dot product, scaled by (1.0 - ambient) since we've already applied that share of the light as ambient.
  // now for our specular lighting
  vec3 specColor = vec3(0.0);
  if ((NdotL != 0.0) && (shininess != 0.0)) {
    vec3 R = reflect(-L, N);
    float VdotR = max(0.0, dot(normalize(-V.xyz), R));
    float specPower = pow(VdotR, shininess);
  
    specColor = lightcol * specPower;
  }; 
For our specular highlight we first calculate our reflection vector and then calculate our dot product with the vector pointing from our surface to our eye.
Finally we apply our shininess using the power function and multiply the outcome with our light color. The higher the shininess value, the smaller our reflection.
We only do this if we have a shininess value and if we're not in shadow.
Note that we do not apply our texture color here because we are reflecting our light. An additional specular color related to our surface material may be applied here or even a separate specular texture map but I've left that out in our example.
  // and add them all together
  fragcolor = vec4(clamp(ambientColor+diffuseColor+specColor, 0.0, 1.0), 1.0);
The last step is to add all these colors together. The clamp function we call here makes sure our color does not overflow.

And the end result:
A box really is a terrible shape to show off the lighting; as the normals within each face are all parallel we're basically 'flat shading' this cube. Once we start loading more complex shapes it should look a lot nicer. Also, with the specular highlighting implemented the way it is, the box has a mirror finish, not really something suitable for cardboard.

Download the source here

So where from here?


You could easily extend the shader to allow for more than one light source by simply repeating the ambient, diffuse and specular lighting calculations for those other light sources and adding the results together. There are a number of additional parameters that you could add to improve on this. One I already mentioned is the surface material's specular color. Another that is relatively simple to add is a restraint on the angle to the light source and a limit on the distance to the light source, to create a spotlight type light source.
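A multi-light version of the fragment shader's main loop could be sketched along these lines (the array size and uniform names are assumptions, and the diffuse calculation per light is the same one our fragment shader above already performs):

```glsl
#define MAX_LIGHTS 4

uniform int  numLights;                   // number of active lights
uniform vec3 lightPos[MAX_LIGHTS];        // positions after view matrix
uniform vec3 lightcol[MAX_LIGHTS];        // color per light

void main() {
  fragcolor = texture(boxtexture, T);

  // a single ambient term, then sum the contribution of each light
  vec3 color = fragcolor.rgb * ambient;
  for (int i = 0; i < numLights; i++) {
    vec3 L = normalize(lightPos[i] - V.xyz);
    float NdotL = max(0.0, dot(N, L));
    color += fragcolor.rgb * lightcol[i] * (1.0 - ambient) * NdotL;
    // the specular calculation would be repeated per light here as well
  }
  fragcolor = vec4(clamp(color, 0.0, 1.0), 1.0);
}
```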

There are many other shader techniques that are worth learning:
Bump mapping or normal mapping, a technique to simulate grooves and imperfections on a surface by adjusting the surface normal. This can be achieved by using a texture encoding normals that you look up instead of using our vertex normals.
Environment mapping, a cool way to do reflections or to solve our ambient issue discussed above. Ever watched a documentary about a CGI movie and wondered why they hold up a mirror ball and take a picture? Use that picture as a texture and use your normal vector's X and Y (ignoring its Z) as a lookup (adjusted so 0.0 is at the center of that image) and voila.
Shadow maps, so objects cast shadows onto other objects, but that's a topic for later.

What's next


Now that we've got our basic rendering of 3D objects all sorted the next quick detour will be looking at moving the camera around.
We also need to write something that will let us load 3D objects from disk instead of hardcoding them in source code.
So that will be our next two subjects, hopefully after that we'll jump back into our platform game.
