Preparing 3D Assets for Real-time

Updated: 12 Jan 2026

Working in real-time allows for great speed, flexibility and experimentation compared with offline rendering to video. But projects designed to run in real-time have one key consideration that video doesn’t: performance.

Performance of most real-time 3D scenes is limited by a number of factors: the size of the canvas; the complexity of lighting, materials and shading; post processing; visual effects (particles, fields, etc.) in the scene; and the 3D assets themselves. The scene is always limited by the slowest thing in it - it’s no good optimising the assets but not the particles, for example.

Preparing 3D Assets for Real-time

Firstly, and obviously, the fastest thing to render is the thing that isn’t rendered at all. Graphics hardware still processes polygons through the pipeline even if they aren’t visible on screen. Notch culls meshes that lie outside the planes of the rendering camera, but can only do this on a per-mesh basis: if an object crosses the edge of the camera view, it will be rendered in its entirety. Therefore in some cases, e.g. flying down a tunnel, it may be worth cutting the mesh up into chunks, so that chunks which fall completely outside the camera view at some points are not processed at all. The same applies to polygons that face away from the camera or are hidden behind other meshes: a mesh that exists in the scene but is entirely hidden behind other objects will still be rendered completely and still cost time to process, so it should be removed manually by the artist.
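
To make the per-mesh culling behaviour concrete, here is a minimal sketch of the standard test - not Notch’s actual implementation - assuming axis-aligned bounding boxes and a frustum given as inward-facing planes:

```python
# A minimal per-mesh frustum culling sketch. A plane is (a, b, c, d)
# with ax + by + cz + d >= 0 meaning "inside"; mn/mx are the AABB corners.

def aabb_outside_plane(mn, mx, plane):
    """True if the whole box lies on the negative (outside) side."""
    a, b, c, d = plane
    # Pick the corner furthest along the plane normal (the "p-vertex");
    # if even that corner is outside, the entire box is outside.
    px = mx[0] if a >= 0 else mn[0]
    py = mx[1] if b >= 0 else mn[1]
    pz = mx[2] if c >= 0 else mn[2]
    return a * px + b * py + c * pz + d < 0

def mesh_is_culled(mn, mx, frustum_planes):
    # Culled only when the box is fully outside at least one plane.
    # A box that merely crosses a plane is kept and rendered whole -
    # which is exactly why splitting a long tunnel into chunks helps.
    return any(aabb_outside_plane(mn, mx, p) for p in frustum_planes)
```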

A 3D object is often rendered in multiple passes by the engine - for example, each shadow map rendered requires re-rendering the object. Often large parts of the scene have no real effect on the shadow map or can even adversely affect it, and it pays to turn “Cast Shadows” off on those. The floor is a particularly common example: if you have a flat floor in your scene, always disable cast shadows for it. It’ll improve the quality of the shadow map too.
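
As a back-of-envelope illustration - the numbers below are assumptions, not measured Notch figures - each shadow-casting light adds roughly one extra rendering pass over every mesh flagged as a shadow caster:

```python
# Rough per-frame mesh-render count, assuming each shadow-casting light
# re-renders every mesh flagged as a shadow caster (a simplification).
def mesh_renders(meshes, shadow_casters, shadow_lights):
    return meshes + shadow_casters * shadow_lights

print(mesh_renders(50, 50, 3))  # all 50 meshes cast shadows: 200 renders
print(mesh_renders(50, 30, 3))  # 20 meshes (floor etc.) opted out: 140
```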

What makes a 3D asset perform badly? The major reasons are, in order:

  • Number of batches
  • Material and texture complexity
  • Deformation, animation, skins
  • Polygon and vertex counts

The number of batches is by far the biggest factor in many scenes. A “batch” is a piece of a mesh that can be rendered in one go by the graphics hardware. Each separate object node, each separate mesh inside each object node, and each block of polygons using a different material inside a mesh, all result in a new batch. Graphics hardware likes to deal with a small number of large batches, and hates to deal with lots of small batches. In fact, with fewer than around 2,000 vertices in a batch the graphics hardware doesn’t even properly spin up - it wastes time processing anything smaller. That’s right - if you have a batch of fewer than 2,000 vertices, you could probably add more vertices and it would take the same amount of time to render! Worse than that, each batch that has to be processed adds a lot of overhead on both CPU and GPU. There’s a limit to the number of batches you can actually process in a frame before it impacts the frame rate heavily: more than a few hundred in total is an issue. Considering that a scene is often rendered multiple times - each shadow map rendered requires a full rendering pass of the scene - batch counts quickly add up. A badly optimised 3D file can easily destroy the frame rate.
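
A rough way to reason about an imported scene - a sketch assuming one batch per mesh-material pair, as described above - is simply to count those pairs and flag the undersized ones:

```python
# Estimate batch count for a scene, modelled as one batch per
# (mesh, material) pair. Each mesh is a dict of material and vertex counts.
def estimate_batches(scene):
    batches = sum(mesh["materials"] for mesh in scene)
    # Batches under ~2,000 vertices waste GPU time: they could hold
    # more geometry for roughly the same cost.
    undersized = sum(1 for mesh in scene
                     if mesh["vertices"] / max(mesh["materials"], 1) < 2000)
    return batches, undersized

# An innocent scene of 100 separate one-material cubes: 100 batches, all tiny.
print(estimate_batches([{"materials": 1, "vertices": 8}] * 100))  # (100, 100)
```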

How can you minimise batch counts? Firstly, 3D packages often favour creation of lots of small separate meshes - so an innocent-looking scene made of 100 cubes may contain 100 separate meshes, leading to a very slow real-time render. Notch does not merge meshes on load because the user often intends to keep objects separate in order to modify & animate them separately in Notch. Therefore, the first step to be taken is to merge as much of the scene as possible into one single mesh (even one with lots of materials). So important is this step that it’s often worth sacrificing other “optimisations” in order to achieve it: for example it can often be more efficient to take a scene made of lots of rigidly, independently animating meshes, merge them all and bake all the animation to a huge vertex cache animation - or a skin + bone animation - than it is to run the original scene. This also goes against the previous statement about splitting meshes into chunks for culling, and neatly demonstrates that there are few hard rules in 3D scene optimisation.
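
This merge is normally done in the 3D package before export. As a hedged example - Blender is just one option here, and this script is not part of any Notch workflow - joining every mesh object in a Blender scene into a single object looks like this:

```python
# Minimal Blender (bpy) sketch: join all mesh objects in the scene into
# one object. Materials are kept, giving one batch per material.
import bpy

meshes = [obj for obj in bpy.context.scene.objects if obj.type == 'MESH']
if meshes:
    bpy.ops.object.select_all(action='DESELECT')
    for obj in meshes:
        obj.select_set(True)
    bpy.context.view_layer.objects.active = meshes[0]
    bpy.ops.object.join()  # everything merges into the active object
```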

The next process is to merge materials. Take for example a cube where each face has a separate texture applied. This will be rendered as six batches of two triangles each - incredibly inefficient and slow. It would probably be faster to render a single 10,000-polygon mesh with one texture than to render the cube with six different ones. To reduce this cost, textures can be merged into a single atlas per mesh using functionality in the 3D software. One material per mesh is ideal.
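
As an illustration of the atlas idea, here is a hedged sketch using Pillow with a naive grid layout - real 3D packages pack far more cleverly - along with the transform needed to remap each face’s UVs into the atlas:

```python
# Build a simple grid atlas from equally sized square textures and report
# the UV remapping for each tile. Naive layout, for illustration only.
import math
from PIL import Image

def build_atlas(paths, tile=512):
    cols = math.ceil(math.sqrt(len(paths)))
    rows = math.ceil(len(paths) / cols)
    atlas = Image.new("RGB", (cols * tile, rows * tile))
    uv_tiles = []
    for i, path in enumerate(paths):
        col, row = i % cols, i // cols
        atlas.paste(Image.open(path).resize((tile, tile)),
                    (col * tile, row * tile))
        # An original UV (u, v) on this face maps into the atlas as:
        # u' = (col + u) / cols, v' = (row + v) / rows
        uv_tiles.append((col, row, cols, rows))
    return atlas, uv_tiles

# Usage (placeholder file names):
# atlas, uv_tiles = build_atlas(["face_%d.png" % i for i in range(6)])
```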

The next issue is material and texture complexity. Large textures are often considered to be a cause of poor performance, but this is not entirely true: what matters is actually texture density. A large texture squeezed on to a small polygon (small in output render size) may be costly. Notch auto-creates mipmaps from textures, which mitigates this issue, but it’s still worth ensuring - for quality of rendering and reduction of aliasing, not just performance - that your texture map density is in good relation to the size of the object on screen: a 1:1 ratio is the goal. A 5-pixel object on screen does not need a 512x512 texture. In olden days texture sizes had to be powers of 2 (512, 1024, 2048, etc.). This is no longer necessary, but sticking to it still aids mipmap generation. Texture formats also matter: a 16-bit-per-channel texture is twice as slow to render as an 8-bit-per-channel texture. Optimising textures to DXT formats where possible reduces their memory footprint and also greatly reduces their cost to render. In a PBR workflow, albedo (colour) maps can typically be compressed to DXT1 (RGB) without much noticeable difference; roughness and specular maps can be compressed to greyscale BC4. Normal maps generally should not be compressed below 8 bits per channel.
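
The memory difference is easy to quantify. A back-of-envelope calculation - assuming the usual bytes-per-texel figures for each format, and that a full mip chain adds about a third - shows why compression pays:

```python
# Approximate GPU memory for a texture. A full mip chain adds ~1/3.
# Bytes per texel: RGBA16 = 8, RGBA8 = 4, DXT1 and BC4 = 0.5.
BYTES_PER_TEXEL = {"RGBA16": 8.0, "RGBA8": 4.0, "DXT1": 0.5, "BC4": 0.5}

def texture_mib(width, height, fmt, mipmaps=True):
    base = width * height * BYTES_PER_TEXEL[fmt]
    return (base * 4 / 3 if mipmaps else base) / 2**20

for fmt in BYTES_PER_TEXEL:
    print(f"2048x2048 {fmt}: {texture_mib(2048, 2048, fmt):.1f} MiB")
# RGBA16 ~42.7 MiB, RGBA8 ~21.3 MiB, DXT1 ~2.7 MiB - a 16x saving
# from 16-bit uncompressed down to DXT1.
```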

However, material complexity is a much greater enemy than textures alone. A material is capable of using a number of textures at once: e.g. normal maps, roughness maps, displacement maps, etc. Each of those elements carries a cost: in simple terms, a material using four textures is four times as heavy to render as a material that uses one. Some of these stages - such as normal mapping - take additional processing beyond just reading the texture. Consider whether each texture channel really has an impact on the render: simple materials and materials without textures are faster to process. The cost of material evaluation also heavily impacts lighting. Lighting is often the most demanding part of the overall rendering pipeline, and options in the material such as reflections are costly. Those will be discussed later.
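
To make that concrete, here is a toy cost model - the channel weights are illustrative assumptions, not measured Notch costs - in which every texture adds a fetch and the heavier channels add extra processing:

```python
# Toy relative-cost model for a material: each channel costs one texture
# fetch, weighted up for channels that do extra work. Weights are invented
# for illustration and are not measured figures.
CHANNEL_WEIGHTS = {"albedo": 1.0, "roughness": 1.0, "specular": 1.0,
                   "normal": 1.5,        # extra tangent-space maths
                   "displacement": 3.0}  # extra vertex/tessellation work

def material_cost(channels):
    return 1.0 + sum(CHANNEL_WEIGHTS[c] for c in channels)

print(material_cost([]))                                   # plain: 1.0
print(material_cost(["albedo"]))                           # 2.0
print(material_cost(["albedo", "normal", "displacement"])) # 6.5
```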

This brings us to polygon and vertex counts. The biggest fallacy in 3D optimisation is the assumption that reducing the polygon count makes everything go faster; after the discussion of batches it should now be clear why this is not the case. GPUs are tremendously powerful nowadays, and if the mesh is properly set up and dispatched, millions of polygons can be rendered comfortably. However, ultimately and at extremes, polygon counts do matter - particularly when meshes are additionally processed with deformers, and especially if the object is cloned multiple times! As with everything else, polygon counts should be kept reasonable for the way the object will be seen on screen.
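
A quick sanity check when budgeting polygon counts - a hedged back-of-envelope, since real costs depend on the GPU and everything else in the frame - is to multiply vertices by clones and by rendering passes:

```python
# Total vertices submitted per frame, assuming every clone and every
# shadow pass re-submits the full mesh. Illustrative arithmetic only.
def vertices_per_frame(vertices, clones=1, shadow_passes=0):
    return vertices * clones * (1 + shadow_passes)

# A 200,000-vertex mesh cloned 50 times under 2 shadowed lights:
print(vertices_per_frame(200_000, clones=50, shadow_passes=2))  # 30,000,000
```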