At the library’s heart is a clip-mip-mapping-based mesh level-of-detail (LOD) technique.
    During the build step, several terrain mesh layers of different detail levels are generated using adaptive tessellation.
    Smooth transitions at layer connections are achieved by morphing the higher-detail layer into the lower-detail one, using a simple and cheap vertex shader technique based on pre-generated morph data. If vertex-based lighting is required, the terrain mesh contains normals, which are morphed in the same smooth way. The same LOD system can be used for any user-embedded data, such as textures or objects.
    All mesh data is generated at terrain build time, so this technique uses almost no CPU at run time. Two levels of internal compression are also applied: the first is decompressed while streaming, the second on the GPU, saving storage space, RAM and VRAM. The downside is that, once generated, the terrain cannot be modified at run time.
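The layer-to-layer morph described above can be sketched in C++; this is an illustrative reconstruction, not the library's actual vertex shader code, and the names and the distance-based morph factor are assumptions:

```cpp
#include <algorithm>
#include <cassert>

struct Vec3 { float x, y, z; };

// Linear interpolation between a vertex's full-detail position and its
// position in the lower-detail parent layer (the pre-generated morph data).
Vec3 MorphVertex(const Vec3& fullDetail, const Vec3& parentLOD, float morphK)
{
    return { fullDetail.x + (parentLOD.x - fullDetail.x) * morphK,
             fullDetail.y + (parentLOD.y - fullDetail.y) * morphK,
             fullDetail.z + (parentLOD.z - fullDetail.z) * morphK };
}

// Morph factor: 0 well inside the layer, ramping to 1 near the boundary
// with the next (lower-detail) layer, so the LOD switch is invisible.
float MorphFactor(float distToCamera, float morphStart, float morphEnd)
{
    float k = (distToCamera - morphStart) / (morphEnd - morphStart);
    return std::min(1.0f, std::max(0.0f, k));
}
```

Normals, or any other per-vertex user data, can be blended with the same factor so lighting changes just as smoothly as the geometry.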



Fitting it into the application
  - The Builder application generates map data from input data and, with most of its source code provided, can be modified to fit into any production pipeline.
  - The AdVantage library is linked into the application either statically or dynamically, and provides a clean interface for terrain rendering and collision detection. Full source code is provided.
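Since the application can supply its own file system for opening terrain data, the "clean interface" typically boils down to a small abstraction the application implements. The sketch below is hypothetical; the interface and class names are not the actual AdVantage API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical sketch (not the actual AdVantage headers): the kind of
// narrow data-source interface a terrain library exposes so the
// application can stream from a file, a memory buffer, the web, etc.
class IDataStream
{
public:
    virtual ~IDataStream() = default;
    // Reads up to 'bytes' bytes into 'buffer'; returns bytes actually read.
    virtual size_t Read(void* buffer, size_t bytes) = 0;
};

// Example implementation backed by an in-memory buffer.
class MemoryStream : public IDataStream
{
    std::vector<unsigned char> m_data;
    size_t                     m_pos = 0;
public:
    explicit MemoryStream(std::vector<unsigned char> data)
        : m_data(std::move(data)) {}

    size_t Read(void* buffer, size_t bytes) override
    {
        size_t avail = m_data.size() - m_pos;
        size_t n = bytes < avail ? bytes : avail;
        std::memcpy(buffer, m_data.data() + m_pos, n);
        m_pos += n;
        return n;
    }
};
```

An application would hand such a stream to the library instead of a path, keeping full control over where the data actually lives.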

Pipeline steps


Build time:
  1. When the heightmap data is loaded into the builder, it is analyzed to determine the tessellation level required for each terrain segment.
  2. An optimized mesh layer is generated, along with its LOD MIP levels.
  3. Mesh data is compressed into the internal mesh format, and user data (textures, geo data, objects, etc.) is embedded into the Rendering and Collision LOD layers, if and where required.
  4. An optional external compression algorithm (currently 7-zip LZMA is embedded) is applied to the data stream.
  5. The output .avst (AdVantage Streamable Terrain) file is generated.
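The per-segment analysis in step 1 might look roughly like the following. This is only an illustration of the idea, not the builder's actual algorithm; the bilinear base surface and the "each level halves the error" assumption are mine:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Estimate the tessellation level a segment needs from how far its height
// samples deviate from a flat bilinear patch spanned by the four corners:
// a rough segment gets a higher level, a flat one gets fewer triangles.
int EstimateTessLevel(const std::vector<float>& heights, int w, int h,
                      float maxError, int maxLevel)
{
    // Bilinear "base" surface defined by the four corner samples.
    auto base = [&](float u, float v) {
        float c00 = heights[0],           c10 = heights[w - 1];
        float c01 = heights[(h - 1) * w], c11 = heights[h * w - 1];
        return (c00 * (1 - u) + c10 * u) * (1 - v) +
               (c01 * (1 - u) + c11 * u) * v;
    };

    float worst = 0.0f;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            float u = x / float(w - 1), v = y / float(h - 1);
            worst = std::max(worst, std::fabs(heights[y * w + x] - base(u, v)));
        }

    // Assume each extra tessellation level roughly halves the mesh error;
    // pick the smallest level that brings it under the tolerance.
    int level = 0;
    float err = worst;
    while (err > maxError && level < maxLevel) { level++; err *= 0.5f; }
    return level;
}
```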


Run time:
  1. The .avst file is loaded into the library: the application can use its own file system to open the data, so the source can be a regular file, a memory stream, a web-based source or anything else.
  2. Terrain streaming camera(s) are created; they specify the data areas that will be visible and are therefore required for streaming. The application provides the threads used for streaming and decompression, so it keeps full control over CPU usage.
  3. When the selected area segments are loaded, the library provides vertex and index buffers, along with any embedded user data (textures, objects, etc.), to the application.
  4. The application defines visibility parameters and provides them to the API, getting back information on what to render and how.
  5. For collision detection, a similar selection system is used for streaming. The RayTriangle() or GetTriangles() methods can then be used to decompress and obtain the required collision triangles.
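To illustrate the kind of work a RayTriangle()-style query performs on the decompressed collision triangles, here is the standard Möller–Trumbore intersection test; this is a textbook version, not the library's implementation:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };
static V3    sub(V3 a, V3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3    cross(V3 a, V3 b) { return { a.y * b.z - a.z * b.y,
                                          a.z * b.x - a.x * b.z,
                                          a.x * b.y - a.y * b.x }; }
static float dot(V3 a, V3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Möller–Trumbore ray/triangle intersection: returns true on a hit and
// writes the distance along the ray into 't'.
bool RayTriangleHit(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, float& t)
{
    const float eps = 1e-7f;
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;        // ray parallel to triangle
    float inv = 1.0f / det;
    V3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                     // first barycentric coord
    if (u < 0.0f || u > 1.0f) return false;
    V3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                   // second barycentric coord
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;                          // distance along the ray
    return t > eps;
}
```

A GetTriangles()-style query would instead hand the decompressed triangles straight to the application, which can then run whatever intersection or physics tests it needs.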