originally posted at https://canmom.tumblr.com/post/159618...
Usually I write these posts while I’m working on the code, but this one had to be written after the fact as I lost internet yesterday.
Just do it in camera space, you silly girl!
So, I was still confused by how such things as the vector from surface to light, vector from surface to camera, and normal are transformed by the perspective projection matrix, renormalised by the z-divide, and transformed again from normalised device coordinates to raster space. I didn’t feel confident I’d be able to preserve the dot product between the normal and light direction through all that.
I had a look at some other graphics/OpenGL tutorials, but nobody seemed to be acknowledging the issue.
But in looking closely at an example of diffuse shader code in this tutorial, I realised they were actually bypassing this issue entirely, by keeping the vertex data from when it’s transformed only by the model-view matrix (i.e. in camera space), and using that to calculate the shading values. So, we can calculate the normals and light direction and dot product in camera space and needn’t worry about the perspective projection at all!
That will also help when we move on to Phong shading or another algorithm where the view direction is relevant.
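To make that concrete, here's roughly what the camera-space diffuse term boils down to (a sketch, not my actual code; the types and names are just illustrative): the normal and the surface-to-light vector both live in camera space, so the dot product is taken before any perspective divide gets involved.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using vec3 = std::array<float, 3>;

float dot(const vec3& a, const vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

vec3 normalise(const vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v[0]/len, v[1]/len, v[2]/len };
}

// Diffuse (Lambert) term computed entirely in camera space: both the
// surface point/normal and the light position have only been transformed
// by the model-view matrix, so the dot product is meaningful and the
// perspective projection never enters into it.
float diffuse_intensity(const vec3& surface_point,
                        const vec3& surface_normal,
                        const vec3& light_position) {
    vec3 to_light = { light_position[0] - surface_point[0],
                      light_position[1] - surface_point[1],
                      light_position[2] - surface_point[2] };
    float lambert = dot(normalise(surface_normal), normalise(to_light));
    return std::max(lambert, 0.0f);   // clamp back-facing contributions to zero
}
```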
That required modifications in a whole lot of different places, including the main program section. The result is a huge commit.
The result
Currently there is only one light, and the results are written exclusively to the red channel. Also the light isn’t exactly where I want it to be.
Still, I did my best to recreate this scene in Blender…
I’m delighted how close I got to the Blender result. There are differences, which I think are mostly to do with how the mesh is triangulated. Some surfaces seem to come out darker in my render than in Blender, which may be due to the light energy in Blender still being too high (I just fiddled with the slider until I got something similar). Still… nice???
The code
I split the function for the camera matrix in two:
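It ended up roughly this shape (a sketch with made-up names, `modelview_matrix` and `projection_matrix`, and a camera simplified to a pure translation): one function builds the world-to-camera transform, the other builds the perspective projection, and the old combined camera matrix is just their product.

```cpp
#include <array>
#include <cmath>

using vec3 = std::array<float, 3>;
using mat4 = std::array<std::array<float, 4>, 4>;

// World -> camera transform (the "model-view" half of the old camera matrix).
// Here the camera sits at `eye` with no rotation, which is a simplification;
// a full look-at construction would also build a rotation.
mat4 modelview_matrix(const vec3& eye) {
    mat4 m = {{ {1, 0, 0, -eye[0]},
                {0, 1, 0, -eye[1]},
                {0, 0, 1, -eye[2]},
                {0, 0, 0, 1} }};
    return m;
}

// Camera -> clip transform: a basic perspective projection, with the field
// of view in radians and the near/far planes at distances n and f.
mat4 projection_matrix(float fov, float aspect, float n, float f) {
    float t = 1.0f / std::tan(fov / 2.0f);
    mat4 m = {};                        // zero-initialised
    m[0][0] = t / aspect;
    m[1][1] = t;
    m[2][2] = (f + n) / (n - f);
    m[2][3] = 2.0f * f * n / (n - f);
    m[3][2] = -1.0f;
    return m;
}
```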
This allowed me to make two vectors of vertex data, for camera and clip space (excerpted from main function):
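In outline it looks something like this (wrapped up as a standalone sketch here rather than the literal lines from main; `camera_vertices_homo` and `z_divide_all` are names from the code, the rest are stand-ins):

```cpp
#include <array>
#include <vector>

using vec3 = std::array<float, 3>;
using vec4 = std::array<float, 4>;
using mat4 = std::array<std::array<float, 4>, 4>;

// 4x4 matrix times homogeneous vector.
vec4 transform(const mat4& m, const vec4& v) {
    vec4 r = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Divide x, y, z by w for every vertex.
std::vector<vec3> z_divide_all(const std::vector<vec4>& verts) {
    std::vector<vec3> out;
    out.reserve(verts.size());
    for (const vec4& v : verts)
        out.push_back({ v[0] / v[3], v[1] / v[3], v[2] / v[3] });
    return out;
}

// Every model vertex is kept in two forms: transformed by the model-view
// matrix only (camera space, used for shading) and by the full
// model-view-projection matrix (clip space, used for rasterisation).
void transform_vertices(const std::vector<vec4>& model_vertices,
                        const mat4& modelview, const mat4& projection,
                        std::vector<vec3>& camera_vertices,
                        std::vector<vec3>& ndc_vertices) {
    std::vector<vec4> camera_vertices_homo;
    std::vector<vec4> clip_vertices_homo;
    for (const vec4& v : model_vertices) {
        vec4 cam = transform(modelview, v);     // camera space, w is still 1
        camera_vertices_homo.push_back(cam);
        clip_vertices_homo.push_back(transform(projection, cam));
    }
    camera_vertices = z_divide_all(camera_vertices_homo);  // redundant divides by 1
    ndc_vertices = z_divide_all(clip_vertices_homo);        // the real perspective divide
}
```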
The call to z_divide_all on the camera_vertices_homo involves a lot of unnecessary divisions, since the w coordinate is 1 in each of the vectors transformed. An optimisation would be to write a function that simply drops the w coordinate, or a variant of the transform vector by matrix function that returns a 3-vector.
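That optimisation might look something like this (a hypothetical helper, reusing the aliases from the sketch above):

```cpp
// Discard w instead of dividing by it. Only valid when the vertices came
// from an affine transform (like the model-view matrix), where w stays 1;
// it would give wrong answers after a projection matrix.
std::vector<vec3> drop_w_all(const std::vector<vec4>& verts) {
    std::vector<vec3> out;
    out.reserve(verts.size());
    for (const vec4& v : verts)
        out.push_back({ v[0], v[1], v[2] });
    return out;
}
```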
The lights also get transformed by the model-view matrix. This is done through new functions:
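Roughly like this (again reusing the types and `transform` helper from the earlier sketch; the `Light` struct and the function names are stand-ins for whatever the real code calls them):

```cpp
// Transform a 3-vector point by a 4x4 matrix, treating it as a point
// (w = 1) and simply dropping the resulting w. Fine for the model-view
// matrix, which is affine and leaves w at 1, but wrong for a projection
// matrix, where w carries real information.
vec3 transform_point(const mat4& m, const vec3& p) {
    vec4 homo = transform(m, { p[0], p[1], p[2], 1.0f });
    return { homo[0], homo[1], homo[2] };
}

struct Light { vec3 position; float intensity; };

// Bring every light's position into camera space alongside the vertices.
std::vector<Light> transform_lights(const mat4& modelview,
                                    const std::vector<Light>& lights) {
    std::vector<Light> out = lights;
    for (Light& l : out)
        l.position = transform_point(modelview, l.position);
    return out;
}
```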
This transformation function does in fact drop the w-coordinate. This seemed the easiest way to deal with it, but could lead to problems if it’s called with a projection matrix!
The triangle drawing function also had to be modified:
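I won't reproduce the whole thing, but the shape of the change is something like this sketch (names and parameters are illustrative, and `diffuse_intensity` is the helper from the first sketch): rasterisation still runs on the raster-space coordinates, while the shading value written to the red channel is computed from camera-space positions, normals and lights.

```cpp
// Per-face shading from camera-space data only: average the camera-space
// vertex positions, then accumulate the diffuse term over every light.
float shade_face(const std::array<vec3, 3>& camera_verts,
                 const vec3& face_normal,
                 const std::vector<Light>& lights) {
    vec3 centre = {
        (camera_verts[0][0] + camera_verts[1][0] + camera_verts[2][0]) / 3.0f,
        (camera_verts[0][1] + camera_verts[1][1] + camera_verts[2][1]) / 3.0f,
        (camera_verts[0][2] + camera_verts[1][2] + camera_verts[2][2]) / 3.0f };
    float intensity = 0.0f;
    for (const Light& l : lights)
        intensity += l.intensity * diffuse_intensity(centre, face_normal, l.position);
    return intensity;
}

// The drawing function now needs both sets of coordinates: raster-space
// vertices to fill pixels and test depth, camera-space data to light them.
void draw_triangle(const std::array<vec3, 3>& raster_verts,
                   const std::array<vec3, 3>& camera_verts,
                   const vec3& face_normal,
                   const std::vector<Light>& lights,
                   std::vector<float>& depth_buffer,
                   std::vector<vec3>& frame_buffer,
                   int image_width, int image_height);
```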
What next?
I want to understand how to link C++ code with header files, and this file is getting extremely unwieldy and taking a noticeable amount of time to compile, so I’m going to split it up. I will also (since it shouldn’t be too difficult) add colour to the surface albedo.
I don’t want to spend a long time trying to implement different BRDFs when that would be better done in OpenGL proper. It might, however, be worth implementing texture coordinates, and smooth shading, to improve my understanding of barycentric coordinates.