# Sparrow WebGL Devlog 10: Major Refactoring

## Consistency, Transforms, Matrices, Shaders, and glTF files

I have added a lot of features to my WebGL engine over the last few months, including models, animated models, shadows, color picking, and more. However, I mostly focused on implementing the functionality itself and didn’t consider the overall usage patterns and consistency of the engine, which is also the biggest weakness of my C++/OpenGL code base. For the past few weeks, I have overhauled many of the core systems of the engine to make it easier to use and more consistent overall.

## Transforms and Matrices

When working in 3D, you can control the position, rotation, and scale of the models. This is true for modeling programs like Blender as well as game engines like Unreal or Uni…, I mean, Godot. Position, rotation, and scale are each represented as 3-element vectors, but they can also be combined into a single 4x4 matrix. This matrix is called the model matrix, and it controls the position, rotation, and scale of an object in the 3D scene.
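As a minimal sketch of how those three vectors combine into one matrix (this is generic column-major 4x4 math in the order WebGL expects, not my engine’s actual API), the model matrix is the product translation × rotation × scale:

```javascript
// Multiply two 4x4 column-major matrices: out = a * b.
function multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      for (let k = 0; k < 4; k++) {
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
      }
    }
  }
  return out;
}

// The three transform components as 4x4 matrices (column-major).
function translation(x, y, z) {
  return [1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1];
}
function rotationZ(rad) {
  const c = Math.cos(rad), s = Math.sin(rad);
  return [c,s,0,0, -s,c,0,0, 0,0,1,0, 0,0,0,1];
}
function scaling(x, y, z) {
  return [x,0,0,0, 0,y,0,0, 0,0,z,0, 0,0,0,1];
}

// Scale is applied first, then rotation, then translation.
const model = multiply(
  translation(2, 0, 0),
  multiply(rotationZ(0), scaling(3, 3, 3))
);
```

The multiplication order matters: reversing it would scale the translation as well, which is almost never what you want.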

There is also the view matrix, which defines the position and view direction of the camera, and the projection matrix, which projects the 3D scene onto a 2D monitor. All three of these matrices are multiplied together into the model-view-projection (MVP) matrix, which is sent to the shaders to render the scene.

There are a few ways you can handle the matrices and send them to the shaders. By far the easiest is to transfer the matrices individually and multiply them together in the shader. However, this means that the matrices are multiplied on the GPU for every vertex of every mesh.
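A hypothetical vertex shader illustrating this approach (the attribute and uniform names are mine, not necessarily the engine’s): each matrix is uploaded as its own uniform, and the GPU performs the full multiplication chain per vertex.

```javascript
// GLSL vertex shader source embedded in JS, as is typical for WebGL.
const vertexShaderSource = `
  attribute vec4 aPosition;
  uniform mat4 uModel;
  uniform mat4 uView;
  uniform mat4 uProjection;
  void main() {
    // These two mat4 * mat4 multiplications run for every vertex
    // of every mesh, even though the result is the same each time.
    gl_Position = uProjection * uView * uModel * aPosition;
  }
`;
```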

Generally, it’s preferred to premultiply the matrices on the CPU. This also has the benefit that you don’t have to recalculate the matrices every frame. You only have to update the model matrix if the position, rotation, or scale has changed (which for most static scenery objects, it never does). The MVP matrix, however, has to be recalculated every time the camera is moved and its view matrix changes.
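A common way to implement this on the CPU side is a dirty flag: rebuild the model matrix only when position, rotation, or scale actually change. This is a rough sketch with made-up names (and translation only, to keep it short), not my engine’s real classes:

```javascript
// Minimal dirty-flag caching sketch for the model matrix.
class SceneObject {
  constructor() {
    this.position = [0, 0, 0];
    this.modelMatrix = null;
    this.dirty = true; // set whenever position/rotation/scale change
  }

  setPosition(x, y, z) {
    this.position = [x, y, z];
    this.dirty = true;
  }

  getModelMatrix() {
    if (this.dirty) {
      // Rebuilt only when needed; a static object hits this branch once.
      // (Column-major translation matrix; a real version would also
      // fold in rotation and scale.)
      const [x, y, z] = this.position;
      this.modelMatrix = [1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1];
      this.dirty = false;
    }
    return this.modelMatrix;
  }
}
```

The MVP product itself would still be rebuilt whenever the camera moves, since the view matrix feeds into every object’s MVP.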

I don’t know how bad the performance hit from multiplying the matrices for every vertex actually is. I assume shader compilers can perform code optimizations like C/C++ compilers do, so maybe they are smart enough to know…