Planet Renderer – Week 5–6: Height, Triangular CDLOD

I missed last week's post because I was busy writing a research paper about my graduation work and didn't get around to updating the blog. You can read the paper here; it covers many of the topics from previous posts, along with some new ones. Aside from that, I made the terrain sample a height map in the vertex shader to offset the triangles vertically, and I made sure that backface culling doesn't remove terrain whose peaks might loom over the horizon. This video shows the results quite nicely:

The more interesting change was added last week:
I decided to work with a modified version of CDLOD for my terrain algorithm. Among the reasons for this decision are:

  • It is a modern algorithm designed to leverage the GPU's potential
  • The morphing means that no neighbours need to be determined for crack fixing
  • Morphing allows for smooth transitions in terrain detail instead of popping

The idea behind CDLOD is that a quadtree is subdivided based on camera distance, and at every leaf a patch is drawn. Every patch is a grid of vertices of a predefined resolution. In the vertex shader, the vertices are transformed to fit into the leaf of the quadtree and offset by the height of the terrain. In addition, every other vertex is morphed to merge linearly with its neighbour vertex when, based on camera distance, it is close to the border of a lower subdivision level. This way cracks are avoided at the edges between subdivision levels.
The advantage of using these patches is that they only need to be sent to the GPU once, which relieves the CPU of a large amount of per-frame geometry work.
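
For reference, the morph step of the original, grid-based CDLOD boils down to something like this (a minimal sketch after Strugar's paper; function and parameter names are assumptions):

// Odd vertices of the fine grid slide towards the even vertex they merge
// with in the coarser grid as morphFactor goes from 0 to 1.
// gridPos is the vertex position inside the patch in [0,1], gridDim the
// number of cells per patch side.
vec2 morphVertex(vec2 gridPos, float morphFactor, float gridDim)
{
    // fracPart is zero for vertices that also exist in the coarser grid
    vec2 fracPart = fract(gridPos * gridDim * 0.5) * (2.0 / gridDim);
    return gridPos - fracPart * morphFactor;
}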

Using CDLOD for planet rendering of course runs into the typical problem that the algorithm was designed for square terrain patches. Under normal circumstances this would mean the base geometry of the planet needs to consist of squares, like a cube, and would thus lose all the advantageous lookup tables generated for the equilateral triangles of an icosphere. Therefore, I looked into the possibility of achieving the same morphing effect with triangular patches, and found one:
[Figure: triangle morphing for crack fixing in triangular CDLOD for planet rendering]

Here, smaller triangles get twisted into the larger triangles until the vertices are at the same position. A small complication is that neighbouring triangles need to twist in opposite directions; in other words, every second triangle twists counter-clockwise.

In my implementation every patch holds the vertex positions and morph vectors in a normalised two-dimensional form (PatchVertex: vec2 pos; vec2 morph;). These two-dimensional coordinates can be treated like barycentric coordinates. Every patch is drawn at a leaf node of the triangle tree, where each triangle holds a position A, the vectors R and S, and an integer with its subdivision level for morphing. R and S can be constructed from a triangle ABC as R = B − A and S = C − A.
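
In GLSL, the two resulting attribute streams could be declared roughly like this (a sketch; the attribute locations are assumptions, see PlanetPatch.glsl in the repository for the real shader):

// per-vertex patch data
layout (location = 0) in vec2 pos;    // position inside the patch, 0..1
layout (location = 1) in vec2 morph;  // twist direction towards the parent-level position
// per-instance triangle tree leaf data
layout (location = 2) in int level;   // subdivision level, used for the morph factor
layout (location = 3) in vec3 A;      // base corner of the triangle
layout (location = 4) in vec3 R;      // B - A
layout (location = 5) in vec3 S;      // C - A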

When transforming a vertex from the patch data using such a triangle, its position can be determined as follows:

vec3 triPos = A + R*pos.x + S*pos.y;

Using the distance lookup table described in an earlier post, the camera position and the subdivision level of the patch, a morph factor can be determined. Once that is known, the position can be modified again:

triPos += (R*morph.x + S*morph.y) * morphFac;
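
A rough sketch of how that factor could be computed in the vertex shader, assuming distanceLUT[i] holds the camera distance at which a level-i triangle splits and morphRange is the fraction of a level's distance band over which the twist blends out (both names are assumptions):

float getMorphFactor(float dist, int level)
{
    // a leaf lives between the split distance of its parent (far edge)
    // and its own split distance (near edge)
    float low = distanceLUT[level - 1];
    float high = distanceLUT[level];
    float a = (dist - low) / (high - low); // 0 at the far edge, 1 at the near edge
    // fully twisted into the parent shape at the far edge,
    // fully untwisted once past the morph range
    return 1.0 - clamp(a / morphRange, 0.0, 1.0);
}

This way a patch bordering a lower subdivision level is fully twisted into its parent's shape exactly where the coarser geometry takes over, so both sides of the border produce identical vertices.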

To invert the winding of every other triangle, the R and S vectors can simply be swapped during the triangle tree calculation on the CPU.

Patches are drawn instanced in order to avoid hundreds of draw calls per terrain, so a vertex buffer object is created to hold the triangle tree information and is updated every frame (PatchInstance: int level; vec3 A; vec3 R; vec3 S;).
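
The per-frame flow could look roughly like this (a sketch in plain OpenGL; buffer and count names are assumptions, and the vertex array object is assumed to mark level/A/R/S as per-instance attributes via glVertexAttribDivisor):

// upload this frame's triangle tree leaves
glBindBuffer(GL_ARRAY_BUFFER, vboInstances);
glBufferData(GL_ARRAY_BUFFER, instances.size() * sizeof(PatchInstance),
    instances.data(), GL_DYNAMIC_DRAW);
// draw every patch of the terrain in a single call
glBindVertexArray(vao);
glDrawElementsInstanced(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT,
    nullptr, (GLsizei)instances.size());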

The following video shows quite nicely what the effect looks like:

Using this implementation allowed me to achieve solid framerates at ridiculously high vertex counts, in some cases 120 fps at nearly 3 million vertices, which is a far higher resolution than is needed for convincing terrain:
[Figure: insanely high resolution CDLOD with triangles for planet rendering]

I have not profiled the performance in depth yet, but I seem to be getting very decent results at less than 250k triangles:
[Figure: adequate resolution triangular CDLOD planetary terrain rendering in OpenGL]

Note that the performance is usually better than the screenshots indicate, because the terrain is drawn twice here: solid, with a wireframe on top.

One remaining issue is that the subdivision always assumes triangles are at sea level, which can skew the detail distribution on taller mountains; I am currently looking into this.

Apart from this, I have now started researching atmospheric scattering, to make sure I get all the required topics for my graduation work together in a timely manner; I can always come back to the terrain LOD and refine it later.

As usual, you can find the full implementation on the GitHub repository.

Author: Leah Lindner

Hi, you are looking at the portfolio of a German-English game developer with a focus on graphics programming. I love finding out how things work and visualizing them in a creative way using computer technology. I first became interested in computer graphics when I started creating 3D art using Blender in 2008. After majoring in programming at secondary school and also teaching myself digital painting, I moved to Belgium to take a Bachelor in Digital Arts and Entertainment. Following work on projects in both film and video games, I have increasingly focused on graphics programming, and moved to Brighton to work as a GameDev at Electric Square.

17 thoughts on “Planet Renderer – Week 5–6: Height, Triangular CDLOD”

    1. Probably? I haven't tried it, but you can certainly write custom shaders in Unity, and you can also create your own procedural meshes.

      The one thing I don't know is whether Unity allows you to create custom meshes with instanced draw calls. Worst case you'll have to do it without instancing, which could be around 50 draw calls I think, depending on how you size your patches (also assuming you do culling properly).

      I should probably make an update post for this as a bunch of stuff has changed; the planet rendering is now integrated into a deferred rendering pipeline, which should make the shader a bit easier to read and adapt to Unity. I suggest looking at
      https://github.com/Illation/ETEngine/blob/master/source/Engine/PlanetTech/Patch.cpp ,
      https://github.com/Illation/ETEngine/blob/master/source/Engine/PlanetTech/Triangulator.cpp ,
      https://github.com/Illation/ETEngine/blob/master/source/Engine/Graphics/Frustum.cpp
      and
      https://github.com/Illation/ETEngine/blob/master/source/Engine/Shaders/PlanetPatch.glsl

      Let me know if you actually implement it in Unity 🙂

      1. And I have another question: how can I open the Planet Renderer-master folder? Visual Studio gives me an error and I don't know how to run the source.

        Thanks.

        1. What platform are you on? There are build instructions on the PlanetRenderer repository that show how to generate the project files using “genie”. https://github.com/Illation/PlanetRenderer#building

          If you are using Windows, I suggest though that you don’t bother with the PlanetRenderer repo and instead clone the ETEngine repo: https://github.com/Illation/ETEngine
          As I said, it has the planet renderer integrated (with better code), and it also redistributes all dependencies, so it should be a lot easier to build.

  1. Hi Robert,

    During the last couple of days I've been working on trying to understand your code; I haven't started the actual implementation of my own functions yet. Right now I'm trying to work out a few questions regarding each patch. I have the most control over the frustum culling, since I actually implemented that already, and the backface culling shouldn't be an issue either. I'm also trying to skip everything that has to do with height maps for now. I'm just focusing on getting the CDLOD system working on a sphere.

    From what I understand you prebuild triangles with a set number of divisions (depending on what GPU vs CPU load you want). These triangles have vertex positions set between 0 and 1, and the two for-loops in Patch.cpp create a triangle-based setup of that, is that correct?

    I'm a bit confused regarding what information is sent to the shader, most likely since it's OpenGL and I'm struggling to learn DirectX ^^ If I understand the code correctly you set up areas in the buffer to send pos, morph, level, a, r and s. Could you point me to the part of the code which then actually uses these areas? That might help me understand what a, r and s actually are. The GPU gets one pos and one morph, and then the a, r, s and level for each vertex, correct?

    Best Regards and once again thanks in advance
    Johan Karlsson

    1. Hey, I made a small sketch to illustrate this stuff better: https://imgur.com/a/WH9QcA6

      You are looking at the correct variables but they are actually two separate types of data.
      “Level”, ‘a’, ‘r’ and ‘s’ are per instance data.
      “pos” and “morph” are per vertex data.
      Every PatchInstance can have many PatchVertices, depending on the subdivision level (3, 6, 15 etc)

      For each instance ‘a’, ‘r’ and ‘s’ are derived from ‘A’, ‘B’ and ‘C’ that the triangulator generates.
      ‘a’ is the base corner of the patch, and ‘r’ and ‘s’ are vectors that point from ‘A’ to ‘B’ and ‘C’.
      The PatchVertex contains 2D coordinates that note its position inside of the patch, so always between 0 and 1.
      This helps for actually calculating the final vertex position, as we can simply transform “pos” along ‘r’ and ‘s’ with ‘a’ as a starting point, as seen in the vertex shader here: https://github.com/Illation/ETEngine/blob/master/source/Engine/Shaders/PlanetPatch.glsl#L71

      For a simple implementation you can forget about the morph vectors for now and add them later; they are there to twist the sub-triangles into their parent triangles.

      So during initialisation we calculate “pos” and “morph” of the patch vertices once and upload them to the GPU. That happens in Patch::GenerateGeometry():
      https://github.com/Illation/ETEngine/blob/8ede7facda5e1cbfc233042a7267dabae94573ec/source/Engine/PlanetTech/Patch.cpp#L88

      Then every update, the Triangulator calculates the ‘a’, ‘r’, ‘s’ and “level” of the PatchInstances which get uploaded to the GPU in Patch::BindInstances():
      https://github.com/Illation/ETEngine/blob/master/source/Engine/PlanetTech/Patch.cpp#L146
      All of this gets controlled in Planet.cpp at a high level.

      I have an updated version of the uni paper I mentioned in this blog post, reading that might also help you: https://drive.google.com/file/d/1-86Hi6t0zmnihQI3lRcLeNgJRs20gML4/view?usp=sharing

      I should probably update this blog; I added some nice stuff to my planet tech like detailed height maps, starfields and atmospheric scattering which I never got around to describing, but I am pretty busy at the moment so it will have to wait. I'll gladly help you though if you need more info to understand this; I would love to see more implementations of this technique!

      1. Can't thank you enough for the detailed help you've given me, will dive right into all this new information and process it! 🙂

        Best Regards
        Johan Karlsson

  2. Things are looking better and better in my brain at least ^^ I've got the idea now of how to get the math behind it all to work. As you suggested I skipped the morph value for now; I've decided to just go for an icosphere with the sub-triangles simply rendered in the same plane as the parent triangle. I've also decided to skip culling for now, until I've got this patch system nailed down and can actually render the icosphere.

    Right now I'm looking into the way DirectX handles shaders, since all I've done so far concerning shaders is filling the vertex buffer with the same number of vertices every frame and doing some manipulation inside the shader to handle height and lighting, but that was two years ago on my last attempt to create a planet rendering platform. So right now I'm googling to learn more about how the render pipelines work. I'm going to go through my old code as well to refresh my memory, since in there I was able to manipulate single vertices with the height map; it might work differently though, since that was a texture I sent in.

    Anyway, just thought I was going to post some progress 🙂

    Best Regards
    Johan Karlsson

    1. That's cool. You don't really need anything apart from a vertex and a pixel shader, so you should be fine; no geometry or tessellation shader involved.

      Do you have a public repository?

      1. I do, but I've got the code in a private repo; I'll look into opening it up tomorrow if you want to take a look.

        Got a question for you regarding shaders that I think is kind of general regardless of DirectX or OpenGL. This part binds the vertices for each patch, and does it once when Planet.cpp is initialised:

        STATE->BindBuffer(GL_ARRAY_BUFFER, m_VBO);
        glBufferData(GL_ARRAY_BUFFER, m_Vertices.size() * sizeof(PatchVertex), m_Vertices.data(), GL_DYNAMIC_DRAW);
        STATE->BindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_EBO);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint)*m_Indices.size(), m_Indices.data(), GL_STATIC_DRAW);
        STATE->BindBuffer(GL_ARRAY_BUFFER, 0);

        and this is done every time the geometry needs to be updated due to camera movement, rotation or such things:

        m_NumInstances = (int32)instances.size();
        STATE->BindBuffer(GL_ARRAY_BUFFER, m_VBOInstance);
        glBufferData(GL_ARRAY_BUFFER, instances.size() * sizeof(PatchInstance), instances.data(), GL_STATIC_DRAW);
        STATE->BindBuffer(GL_ARRAY_BUFFER, 0);

        I get that BindBuffer is similar to Map and Unmap of vertex data in DirectX. What I'm unable to wrap my head around is how the shader program then knows which vertices should get which instance data (level, a, r and s), since the vertex shader is executed once for every vertex. With an icosphere in its simplest form I would have around 20 patches, and with one split within each patch I'm looking at 20 * 6 vertices. How does the vertex shader know which instance belongs to a certain vertex, if that makes sense?

        Best Regards
        Johan Karlsson

        1. Don't know if it's possible to edit :S Can't seem to find the button. After some more reading I found that it's actually only 6 vertices that are sent to the GPU, and not 20 * 6 as I thought. I just need to understand how those 6 vertices are connected to a patch on the GPU.

          1. I think what you want to be looking at is this: https://github.com/Illation/ETEngine/blob/master/source/Engine/PlanetTech/Patch.cpp#L60

            That's where I do what you would call an input layout in DirectX.

            It tells the vertex array object that the first 4 floats are the pos and morph vectors, which get set per vertex, and that the next section is an integer and 9 floats for the level, a, r and s. It also sets up the divisors so OpenGL knows to only increment the counter for those elements once per instance instead of per vertex.
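
            In plain OpenGL terms, that setup amounts to something like this (a sketch; the offsets assume tightly packed PatchVertex { vec2 pos; vec2 morph; } and PatchInstance { int32 level; vec3 a, r, s; }):

            // per-vertex attributes from the patch VBO
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableVertexAttribArray(0); // pos
            glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(PatchVertex), (void*)0);
            glEnableVertexAttribArray(1); // morph
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(PatchVertex), (void*)(2 * sizeof(float)));

            // per-instance attributes from the instance VBO
            glBindBuffer(GL_ARRAY_BUFFER, vboInstance);
            glEnableVertexAttribArray(2); // level is an integer attribute
            glVertexAttribIPointer(2, 1, GL_INT, sizeof(PatchInstance), (void*)0);
            glVertexAttribDivisor(2, 1); // advance once per instance instead of per vertex
            // ... a, r and s follow the same pattern at locations 3 to 5,
            // 3 floats each, all with glVertexAttribDivisor(loc, 1)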

          2. Yea, that was exactly it. I found a Rastertek guide on instancing that will help me get it set up in my framework before I start doing more advanced instancing stuff.

            Learning by doing, thanks for all the help so far

  3. Hello Robert,

    Thought I would update you on my progress so far. I've succeeded in making the instanced rendering work, and as you guessed the FPS went up by a lot. I'm at the last step now before moving on to culling, and that is the morph factor you calculate. I don't quite get the math behind it and was wondering if you could give me any pointers on the thought process behind it.

    Best Regards
    Johan Karlsson

    1. Hi, that’s cool that you got patch instancing working, congrats! That is one of the hardest steps along with culling IMO.

      So the morph factor determines how much each vertex in the patches should move along the morph vector, which I illustrated here with the arrows: https://imgur.com/a/WH9QcA6

      It is based on the distance from the camera, so first of all you need to calculate the initial world position of the vertex, and then its camera distance.

      If you calculate the way your patches subdivide in the triangulator the same way I do, you should have a lookup table of distances at which each level subdivides. You should upload that table as an array to the shader, and each of your patch instances should be storing its subdivision level.
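
      Uploading that table could look something like this (a sketch; the uniform name is an assumption):

      GLint loc = glGetUniformLocation(shaderProgram, "distanceLUT");
      glUniform1fv(loc, (GLsizei)distanceLUT.size(), distanceLUT.data());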

      Now you can check in the shader what the subdivision distance of the current level and of the next level is. The distance from the camera of your vertex should be somewhere in between those, so you can map that to a range between 0 and 1: 0 means you’re closer to the current subdivision distance and 1 means that your triangle will immediately subdivide if it just gets a tiny bit closer.

      Now you can also define a range where the vertices actually start morphing, for instance only between 0.8 and 1, and normalise the value you calculated before to that. That will be your morph factor.

      This is calculated here:
      https://github.com/Illation/ETEngine/blob/master/source/Engine/Shaders/PlanetPatch.glsl#L58

      and the distance look up table is calculated here:
      https://github.com/Illation/ETEngine/blob/master/source/Engine/PlanetTech/Triangulator.cpp#L102
      and sent to the shader here:
      https://github.com/Illation/ETEngine/blob/master/source/Engine/PlanetTech/Patch.cpp#L155

      Hope this helps.

      1. Hi Robert,

        The morphing looked absolutely fantastic when I got it working, really smart way of handling it. It took away the part I thought was a bit “triangly”.

        I've added the height map from the shader now. I guess I have to add some data to know the exact height above the surface, since right now my GUI shows the altitude above the perfect sphere. How did you work your way around that one? It feels like checking an array for height is very costly for the CPU.

        I'm also working on some kind of controller to rotate around the planet now; I've got a plan which seems to be working OK. The next step is adding mouse control to actually be able to rotate the player camera. I'm also working on implementing your way of constructing the frustum, since it was more understandable than what I used to do (Rastertek guide).

        Also on the todo list is adding a colour texture and some kind of normal map for lighting. I see that you used several textures to add to the height; you have two textures that only include details. How did you work that out to look nice? Are those numbers (700 and 100 for detail1 and detail2) just values you played around with until it looked nice?

        Best Regards
        Johan Karlsson
