Building my own rasteriser: part 1 - setting up
I’m going to learn how rasterisation works! First: what tools and libraries should I use, and how do I set it all up?
Building a renderer from scratch that works the same way as OpenGL, and everything I learned along the way.
I get used to compiling code with MinGW, and plan out the steps of the algorithm.
Some basic geometry, with a view to how we represent vectors and triangles.
Building up the model-view-perspective matrix.
Let’s calculate some Normalised Device Coordinates!
Working out whether a pixel is inside a triangle, and if so, what its barycentric coordinates are.
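The inside test and the barycentric coordinates come out of the same calculation. A minimal sketch of the idea (my own illustration, not necessarily the code from the post — the names `Vec2`, `edgeFunction` and `barycentric` are made up here): each "edge function" is the signed area of the triangle formed by an edge and the point, and normalising the three signed areas by the whole triangle's area gives the barycentric weights. The point is inside a counter-clockwise triangle exactly when all three weights are non-negative.

```cpp
#include <cassert>

struct Vec2 { double x, y; };

// Signed (doubled) area of triangle (a, b, p); positive when p is to the
// left of the directed edge a->b, i.e. for counter-clockwise winding.
double edgeFunction(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Fills the barycentric weights (w0, w1, w2) of p with respect to the
// counter-clockwise triangle (v0, v1, v2); returns true if p is inside.
bool barycentric(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                 const Vec2& p, double& w0, double& w1, double& w2) {
    double area = edgeFunction(v0, v1, v2);   // whole-triangle signed area
    w0 = edgeFunction(v1, v2, p) / area;      // weight of v0
    w1 = edgeFunction(v2, v0, p) / area;      // weight of v1
    w2 = edgeFunction(v0, v1, p) / area;      // weight of v2
    return w0 >= 0 && w1 >= 0 && w2 >= 0;
}
```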
Rederiving perspective-correct depth interpolation for the sake of curiosity.
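The punchline of that derivation, for the impatient: depth itself is not linear in screen space, but its reciprocal is, so you interpolate 1/z with the barycentric weights and invert at the end. A tiny sketch (my own illustration of the standard result; the function name is made up):

```cpp
#include <cassert>
#include <cmath>

// Perspective-correct depth: 1/z varies linearly across the screen-space
// triangle, so interpolate the reciprocals with the barycentric weights
// (w0 + w1 + w2 == 1) and take the reciprocal of the result.
double interpolateDepth(double w0, double w1, double w2,
                        double z0, double z1, double z2) {
    return 1.0 / (w0 / z0 + w1 / z1 + w2 / z2);
}
```

The same trick extends to any vertex attribute: interpolate attribute/z and 1/z linearly, then divide the two.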
I get the first image out of my program and get very excited.
The program gains command-line arguments, and the ability to load geometry from .obj files.
Unsatisfied with the derivation on Scratchapixel, I work through the maths of a Bidirectional Reflectance Distribution Function.
Introducing Lambertian diffuse shading and directional (‘sun’) lights to our setup.
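Lambertian diffuse is the simplest shading model going: the brightness of a surface is the cosine of the angle between its normal and the direction to the light, clamped at zero for surfaces facing away. A minimal sketch, assuming unit vectors and a `lightDir` that points from the light towards the scene (names here are my own, not necessarily the post's):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Lambertian diffuse term for a directional ('sun') light.
// Both vectors are assumed normalised; lightDir points from the light
// toward the surface, so we flip it before taking the dot product.
double lambert(const Vec3& normal, const Vec3& lightDir) {
    Vec3 toLight{-lightDir.x, -lightDir.y, -lightDir.z};
    return std::max(0.0, dot(normal, toLight));
}
```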
I realise there’s a much more straightforward way to handle lighting and refactor hard… and finally get that shading.
Coloured lights and surfaces are enabled. Things no longer have to be red.
Animated output! Hell yeah.
I attempt to interpolate normals to produce smooth shading. Unfortunately, it will later turn out I made some errors in the algebra.
I realise a mistake in my calculation of normal interpolation.
I want to deal with PNG files. Simple, right? Ha. Ha. Ha.
I fix the normals problem, and make some adjustments to backface culling to better imitate OpenGL. We finish by rendering a model from a real videogame.