LightWhat – Pathtracer

Cornell Box rendered in LightWhat, showing diffuse, glossy, glass and mix shaders

LightWhat is an unbiased CPU pathtracer I wrote in my first year of university.

The project started out as an experiment and, over the next five months, grew into a piece of software with a rather wide range of components, each of which I had to research; I learnt a lot along the way.

Among the main influences for the structure of the code are Blender's Cycles renderer and LuxRender.

LightWhat is written from scratch, apart from SDL2 for window management and a few libraries for loading model and image data. No graphics middleware was used.

Key Features

  • Pathtracing using Monte Carlo sampling over a cosine-weighted hemisphere
  • Bounding Volume Hierarchy (BVH) accelerated rendering
  • Multithreaded rendering distributed over image tiles
  • Tonemapping in a postprocessor
  • Shaders can be combined using special Mix shaders
  • Math library for vectors, lines, intersection testing, etc.

Extensible

All of these use inheritance and polymorphism to make extending the feature set easier:

  • Support for various types of geometry, including meshes, spheres, planes and triangles
  • Support for various (interpolated) texture types, including normal maps
  • Area and point lights

Application

  • Simple XML parser
  • Loading Scenes and Settings from a scene description file
  • Bitmap exporter
  • Underlying application engine that manages things such as text, buttons and windows and runs the application loop
  • Code split into three layers for easy reusability and interchangeability: UI engine, application and renderer
A mesh-based sphere rendered alongside non-mesh shapes
Render of a high-poly Westland Whirlwind in my pathtracer
Render of the Blender monkey Suzanne using my pathtracer

Implementation

Find the full code on GitHub.

[Disclaimer: I created this project before knowing a lot about C++ naming conventions or the standard library]

In the following section I will detail the process of creating the pathtracer, roughly in chronological order.

Getting the first Image

The project started as a challenge I set myself after attending a talk by Jacco Bikker (who also wrote some really helpful articles about raytracing) about real-time raytracing for games. At the time we were using a simplistic 2D C++ game engine, and I wanted to see if I could make it render a 3D image.

The first step was creating a basic 3D vector math library. I didn't know back then that it is much easier to test raytracing with spheres, and I was generally new to 3D programming, so I initially called my vectors vertices and used them to build meshes.

I structured the meshes similarly to how COLLADA does, since I was familiar with the format. I had no way of importing meshes yet, so I had to model some cubes in code, as can be seen in the image below.
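To illustrate, here is a minimal sketch of such a layout (the struct names are mine, not LightWhat's): one shared vertex array, with triangles indexing into it, and a cube "modelled in code".

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Triangle {
    int v0, v1, v2;   // indices into Mesh::vertices
};

struct Mesh {
    std::vector<Vec3> vertices;     // shared vertex positions
    std::vector<Triangle> triangles; // faces index into the vertex array
};

// A unit cube built by hand: 8 vertices, 12 triangles.
Mesh makeCube() {
    Mesh m;
    m.vertices = {
        {0,0,0},{1,0,0},{1,1,0},{0,1,0},   // bottom face corners
        {0,0,1},{1,0,1},{1,1,1},{0,1,1}    // top face corners
    };
    m.triangles = {
        {0,1,2},{0,2,3}, {4,6,5},{4,7,6},  // bottom, top
        {0,4,5},{0,5,1}, {1,5,6},{1,6,2},  // sides
        {2,6,7},{2,7,3}, {3,7,4},{3,4,0}
    };
    return m;
}
```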

With the prerequisites done, I started working on ray-triangle intersection. The problem is tackled in two steps: first testing if and where the ray intersects the plane formed by the triangle, then testing whether the intersection point lies inside the triangle. I derived a working equation for this, but I didn't know about the normal representation of planes back then, so it was a rather complex calculation based on the parametric form of planes.

I later fixed that and also found out about the barycentric-coordinate point-in-triangle test, so after a lot of debugging the raycasting started working. The current ray-triangle intersection code can be found here, and the ray-plane intersection along with some of the math library code here.
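In outline, the two-step approach looks roughly like this: intersect the plane in normal form, then do a barycentric inside test. This is a minimal sketch, not the exact LightWhat code.

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Step 1: intersect the ray with the triangle's plane (normal form).
// Step 2: check the hit point against the triangle with barycentric
// coordinates. Returns true and fills t/u/v on a hit.
bool intersectTriangle(const Vec3& orig, const Vec3& dir,
                       const Vec3& a, const Vec3& b, const Vec3& c,
                       float& t, float& u, float& v)
{
    Vec3 ab = b - a, ac = c - a;
    Vec3 n = cross(ab, ac);                      // plane normal (unnormalised)

    float denom = dot(n, dir);
    if (std::fabs(denom) < 1e-8f) return false;  // ray parallel to plane

    t = dot(n, a - orig) / denom;
    if (t < 0.0f) return false;                  // plane is behind the ray

    Vec3 p = orig + dir * t;                     // hit point on the plane

    // Barycentric coordinates of p with respect to (a, b, c).
    Vec3 ap = p - a;
    float d00 = dot(ab, ab), d01 = dot(ab, ac), d11 = dot(ac, ac);
    float d20 = dot(ap, ab), d21 = dot(ap, ac);
    float inv = 1.0f / (d00 * d11 - d01 * d01);
    u = (d11 * d20 - d01 * d21) * inv;
    v = (d00 * d21 - d01 * d20) * inv;

    return u >= 0.0f && v >= 0.0f && (u + v) <= 1.0f;
}
```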

As a last step, I projected the pixels of the screen through a camera approximation and returned colors with some (wrongly implemented) diffuse shading. I even cast shadow rays to lights placed in the scene (a preset color and a position treated as a point light) to determine shading, and finally I got a first image.
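The camera approximation boils down to a pinhole model. As a sketch (the names here are illustrative, not the LightWhat API), generating a primary ray per pixel looks something like this:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

// Build a ray from the eye through a point on a virtual image plane
// one unit in front of the camera (camera at origin, looking down -z).
Ray primaryRay(int px, int py, int width, int height, float fovRadians)
{
    // Map the pixel centre to [-1, 1], correcting for aspect ratio.
    float aspect = float(width) / float(height);
    float scale  = std::tan(fovRadians * 0.5f);
    float x = (2.0f * (px + 0.5f) / width  - 1.0f) * aspect * scale;
    float y = (1.0f - 2.0f * (py + 0.5f) / height) * scale;

    // dir is left unnormalised here for brevity; normalise before shading.
    return Ray{ {0, 0, 0}, {x, y, -1.0f} };
}
```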

The first image my renderer created
Modelling with code

Making a basic application

After waiting 20 minutes for this first image to pop up, I decided to focus on adding a few features around the raytracer to make it more functional.

The first step was to move it out of my school's 2D game engine, which had a lot of features that were not required for this project and that I figured were slowing down the rendering (they probably weren't). I ported it into a basic SDL2 application (the first time I used an external library), so now I had full control of the code.

I also split the code into separate classes and wrote a COLLADA mesh importer (later replaced with a library called Assimp, though I still use the XML parser I wrote). An essential step for the current design of LightWhat was abstracting sections of the code with polymorphism to make it easily extendable; I started with shapes, making meshes inherit from an abstract shape class.

This allowed me to implement mathematical spheres (not meshes; just a position vector and a radius) with their own intersection code, and I finally resolved the clipping issues by clamping colors between 0 and 1.
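In spirit, the abstraction looks like this (a condensed sketch with illustrative names, not the actual LightWhat classes): the renderer only ever talks to the base class, and each shape supplies its own intersection routine.

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray { Vec3 origin, dir; };  // dir assumed normalised

// Abstract base: every shape provides its own intersection test.
struct Shape {
    virtual ~Shape() = default;
    virtual bool intersect(const Ray& r, float& t) const = 0;
};

// A mathematical sphere: just a centre and a radius, no mesh needed.
struct Sphere : Shape {
    Vec3 center;
    float radius;

    bool intersect(const Ray& r, float& t) const override {
        // Solve |origin + t*dir - center|^2 = radius^2 (a quadratic in t).
        Vec3 oc = r.origin - center;
        float b = dot(oc, r.dir);
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - c;
        if (disc < 0.0f) return false;   // ray misses the sphere
        float s = std::sqrt(disc);
        t = -b - s;                      // nearest intersection
        if (t < 0.0f) t = -b + s;        // ray may start inside the sphere
        return t >= 0.0f;
    }
};
```

Adding planes or triangles then only means writing another subclass with its own intersect routine.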

Raytraced COLLADA meshes
Raytraced mathematical spheres

Basic Shading

The next step was implementing some basic shading. It was not pathtracing yet; I implemented a simple Lambert + Phong system.
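A sketch of that shading model for a single light (illustrative names; LightWhat's actual shader interface differs):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Lambert diffuse + Phong specular. All vectors are normalised:
// n is the surface normal, l points to the light, v to the camera.
Vec3 shade(const Vec3& n, const Vec3& l, const Vec3& v,
           const Vec3& diffuseColor, const Vec3& specularColor,
           float shininess)
{
    float lambert = std::max(0.0f, dot(n, l));

    // Reflect l around n, then compare with the view direction.
    Vec3 r = n * (2.0f * dot(n, l)) - l;
    float phong = std::pow(std::max(0.0f, dot(r, v)), shininess);

    return diffuseColor * lambert + specularColor * phong;
}
```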

To spice things up, I implemented smooth normals and UV coordinates at the ray hit. The UV coordinates are simple to derive from the barycentric coordinates with some 2D vector math, by following the vectors between the triangle's points in UV space.
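Concretely, with barycentric coordinates (u, v) from the hit, per-vertex attributes interpolate with weights (1 - u - v), u and v; the same formula gives smooth normals when applied to per-vertex normals. A minimal sketch:

```cpp
struct Vec2 {
    float u, v;
    Vec2 operator*(float s) const { return {u * s, v * s}; }
    Vec2 operator+(const Vec2& o) const { return {u + o.u, v + o.v}; }
};

// Interpolate the triangle's three vertex UVs at the hit point using
// the barycentric coordinates (u, v) returned by the intersection test.
Vec2 interpolateUV(const Vec2& uv0, const Vec2& uv1, const Vec2& uv2,
                   float u, float v)
{
    return uv0 * (1.0f - u - v) + uv1 * u + uv2 * v;
}
```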

Using this and an image import library, I was able to add support for diffuse, specular and tangent-space normal maps. Linear texture interpolation was also added.

Diffuse texture mapping with multiple textures
Normal mapping. The Phong shading is also visible

Improving the performance

After adding all of this, the big remaining problem was the very low performance, so for a while I focused on improving that.

The first step was converting all doubles into floats and inlining certain parts of the code (although the latter barely helped at all).

Performance chart for the LightWhat pathtracer

The really big performance improvement came from using Bounding Volume Hierarchies, a form of spatial partitioning that is also often used for collision detection in video games. The idea is that all geometry, down to the last triangle, is put into separate bounding volumes, and those volumes are placed in a tree structure of parent bounding volumes.

This lets a ray traverse the (binary) tree hierarchy, only checking objects for intersection if their parent volume has been hit, instead of testing every piece of geometry. For x objects, the number of intersection tests per ray drops from x to roughly log(x). On top of that, the volume I chose is the axis-aligned bounding box (AABB), which is very cheap to intersect.
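For reference, the standard "slab test" for a ray against an AABB can be sketched like this (illustrative, not the exact LightWhat code): intersect the ray with the three pairs of parallel planes bounding the box and keep the overlapping t interval. The handful of arithmetic ops involved is what makes AABBs such a cheap bounding volume.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// invDir holds 1/dir per component, precomputed once per ray.
bool hitAABB(const Vec3& orig, const Vec3& invDir,
             const Vec3& boxMin, const Vec3& boxMax)
{
    float t1 = (boxMin.x - orig.x) * invDir.x;
    float t2 = (boxMax.x - orig.x) * invDir.x;
    float tmin = std::min(t1, t2), tmax = std::max(t1, t2);

    t1 = (boxMin.y - orig.y) * invDir.y;
    t2 = (boxMax.y - orig.y) * invDir.y;
    tmin = std::max(tmin, std::min(t1, t2));
    tmax = std::min(tmax, std::max(t1, t2));

    t1 = (boxMin.z - orig.z) * invDir.z;
    t2 = (boxMax.z - orig.z) * invDir.z;
    tmin = std::max(tmin, std::min(t1, t2));
    tmax = std::min(tmax, std::max(t1, t2));

    // Hit if the slab intervals overlap somewhere in front of the ray.
    return tmax >= std::max(tmin, 0.0f);
}
```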

The last step was splitting the rendered image into multiple tiles, treated as pending tasks for a multithreaded system: every thread renders one tile, and when it is done it picks up the next free tile. On my machine this meant that 8 tiles were rendered at the same time; the number of threads depends on how many cores the processor has and whether it uses hyper-threading.
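A minimal sketch of such a tile scheduler (illustrative names, not the LightWhat implementation): an atomic counter hands out tile indices, and each worker keeps grabbing the next free tile until none are left.

```cpp
#include <atomic>
#include <thread>
#include <vector>

void renderTiles(int tileCount,
                 void (*renderTile)(int tileIndex))
{
    std::atomic<int> nextTile{0};
    unsigned threadCount = std::thread::hardware_concurrency();
    if (threadCount == 0) threadCount = 1;  // value may be unknown

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < threadCount; ++i) {
        workers.emplace_back([&] {
            // fetch-and-increment hands each worker a unique tile index
            for (int tile = nextTile++; tile < tileCount; tile = nextTile++)
                renderTile(tile);
        });
    }
    for (auto& w : workers) w.join();
}
```

A call like renderTiles(64, &renderOneTile) (with a hypothetical per-tile render function) then keeps all cores busy without an explicit work queue, since the atomic counter is the queue.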

Using all of these techniques, I brought render times down from a couple of minutes in some scenes to less than a second, a speedup of up to 500x. This finally made it possible to look into a new topic: pathtracing.

Pathtracing

I switched from raytracing to pathtracing in multiple steps. The first was implementing shaders, like reflections and glass, that recursively bounce rays and initiate new raycasts. I also implemented ray branching for area lights, casting multiple shadow rays to random positions on the light, so that instead of a pixel being either in shadow or not, the amount of shade falls on a scale between 0 and 1.
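Sketched, the soft-shadow sampling for a rectangular area light might look like this (illustrative names; the occlusion test is assumed to exist elsewhere in the renderer):

```cpp
#include <random>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Cast several shadow rays to random points on the light's rectangle
// (corner + two edge vectors) and average the visibility, giving a
// shade factor between 0 (fully shadowed) and 1 (fully lit).
float areaLightVisibility(const Vec3& hitPoint,
                          const Vec3& lightCorner,
                          const Vec3& edgeU, const Vec3& edgeV,
                          int samples,
                          bool (*occluded)(const Vec3& from, const Vec3& to))
{
    static std::mt19937 rng{1234};
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    int visible = 0;
    for (int i = 0; i < samples; ++i) {
        Vec3 p = lightCorner + edgeU * uni(rng) + edgeV * uni(rng);
        if (!occluded(hitPoint, p)) ++visible;
    }
    return float(visible) / float(samples);
}
```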

The next step was multisampling, as a means of antialiasing and of resolving some of the noise in partially shadowed areas. Multiple versions of the image are rendered, with each raycast jittered to a random position inside its pixel, and the results are averaged across all versions.
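A minimal sketch of that per-pixel jittering (illustrative names): instead of shooting one ray through the pixel centre, each sample is traced through a random point in the pixel footprint and the colors are averaged.

```cpp
#include <random>

struct Color { float r, g, b; };

Color renderPixel(int px, int py, int samples,
                  Color (*trace)(float x, float y))
{
    static std::mt19937 rng{42};
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    Color sum{0, 0, 0};
    for (int i = 0; i < samples; ++i) {
        // Random sample position inside the pixel's [px, px+1) x [py, py+1) area.
        Color c = trace(px + uni(rng), py + uni(rng));
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return {sum.r / samples, sum.g / samples, sum.b / samples};
}
```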

After that, I made all shaders bounce light information instead of using Phong shading, using BRDFs with an angle parameter that makes the surface smoother or glossier; rays are reflected randomly within a range of that angle. Weighting those random casts with a cosine distribution yielded a slightly more realistic result. Creating a shader that mixes multiple other shaders also helped in getting nice results and more versatility.
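Cosine-weighted sampling is commonly done with a standard trick: pick a uniform point on a disk, then project it up onto the hemisphere, so directions near the normal come up more often, matching the cosine term in the rendering equation. A sketch (in a local frame where the normal is +z; u1 and u2 are uniform random numbers in [0, 1)):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sampleCosineHemisphere(float u1, float u2)
{
    float r   = std::sqrt(u1);              // radius on the unit disk
    float phi = 2.0f * 3.14159265f * u2;    // angle on the disk
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    // Lift the disk point onto the hemisphere (x^2 + y^2 + z^2 = 1).
    float z = std::sqrt(std::max(0.0f, 1.0f - u1));
    return {x, y, z};                       // rotate into the normal's frame before use
}
```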

Comparison between direct illumination with raytracing and global illumination with pathtracing in LightWhat
Noise reduction in the LightWhat pathtracer
Comparison between the Blender Cycles renderer and the LightWhat pathtracer

Finally, with some tonemapping for exposure and gamma correction, I was able to get results comparable to the renderers I was measuring myself against, namely Blender Cycles and LuxRender. I put a little effort into revamping the underlying UI engine to support buttons, making the application slightly easier to use, and then decided to wrap up the project.
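The tonemapping step can be sketched per channel like this (illustrative names; exposure and gamma are user-set parameters): scale the HDR value by an exposure factor, clamp, then apply gamma correction so the stored values match the display's response.

```cpp
#include <algorithm>
#include <cmath>

// Map a linear HDR channel value to a displayable [0, 1] value.
float tonemap(float hdrValue, float exposure, float gamma)
{
    float v = std::clamp(hdrValue * exposure, 0.0f, 1.0f);
    return std::pow(v, 1.0f / gamma);  // e.g. gamma = 2.2 for typical displays
}
```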

There are tons of interesting things that could still be implemented, like spectral pathtracing, importance sampling or bidirectional pathtracing, but since creating this project I have learnt so much about proper coding standards, the C++ standard library and graphics programming that it would be wiser to start a new project than to spend time adding features to this one.

That said, the code was written with reusability and extensibility in mind, so parts of it can easily be taken, slightly refurbished and applied to new projects. For example, I have reused the raycasting function in my navigation mesh pathfinding code.

The entire project is open source (GNU GPL), so feel free to check out the repository and incorporate parts of it into your own software.