Is Notch Multi-GPU compatible like …insert raytracing renderer… ?
The short answer: Notch currently only supports rendering a frame on a single GPU.
To understand why, it’s important to understand the difference between raytracing and raster rendering (which Notch uses). In raytracing, a light ray path is calculated from the camera through each pixel of the screen, to the geometry it hits and onward to the surfaces the ray bounces off. Raytracing is very computationally heavy, and while Notch does support various raytracing features, it is not the default rendering method, for performance reasons.
In a raster-based world, the scene is ‘assembled’: fluid / particle / cloner simulations are run, lights and shadow maps are calculated, and then the viewport is drawn on the basis of this ‘assembled’ environment. You can think of raster rendering as a set of batch jobs that assemble the scene. This method is incredibly fast, and advances in software have made high-end realtime rendering possible.
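The ‘batch jobs’ view can be sketched as a sequence of per-frame stages. This is a hypothetical outline only; the stage names are illustrative and do not reflect Notch’s actual internals:

```python
# Hypothetical outline of a raster frame as a sequence of batch jobs.
# Stage names are illustrative; they do not reflect Notch's internals.
def render_frame(scene):
    state = dict(scene)  # the 'assembled' environment being built up
    state["sims"] = "fluid / particle / cloner simulations"          # batch job 1
    state["shadow_maps"] = "one depth pass per shadow-casting light"  # batch job 2
    # Only once the environment is assembled is the viewport drawn:
    state["viewport"] = "rasterized image built from the state above"
    return state

frame = render_frame({"geometry": "...", "lights": "..."})
```

The point of the sketch is the ordering: the viewport draw depends on every earlier job having finished, which is what makes the frame a serial pipeline rather than independent work.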
Raytracing, while slow, does lend itself to multi-GPU processing, as each ‘ray’ can be processed by a different GPU. However, there are large per-frame penalties in setting up each GPU for the trace, which make the approach unsuitable for realtime. The process goes a little like this:
- Software uploads the full scene (geometry, materials and simulations) to each GPU (not very efficient)
- Each GPU renders a segment of the viewport
- Each GPU copies its rendered segment to the primary GPU (a very slow process despite technologies like SLI, as all GPUs have to stop and sync)
- The primary GPU pieces together the segments and displays the final image.
The inefficiencies above are not noticeable in a frame that takes 2 minutes to render, but they are unsustainable when you have to render within 16ms (realtime).
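A back-of-envelope calculation shows why a copy cost that is negligible offline dominates a realtime budget. The numbers below are illustrative assumptions, not measured Notch figures: a 4K RGBA16F framebuffer copied over a PCIe 3.0 x16 link at its theoretical ~16 GB/s:

```python
# Illustrative estimate of per-frame GPU-to-GPU copy cost.
WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 8                  # RGBA, 16-bit float per channel
PCIE_BANDWIDTH = 16e9                # bytes/second, theoretical peak

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL   # ~66 MB
copy_ms = frame_bytes / PCIE_BANDWIDTH * 1000    # one full-frame copy

realtime_budget_ms = 1000 / 60       # ~16.7 ms at 60 fps
offline_budget_ms = 2 * 60 * 1000    # a 2-minute offline frame

print(f"copy: {copy_ms:.1f} ms")                                        # copy: 4.1 ms
print(f"share of realtime frame: {copy_ms / realtime_budget_ms:.0%}")   # 25%
print(f"share of offline frame: {copy_ms / offline_budget_ms:.4%}")     # 0.0035%
```

One copy already eats roughly a quarter of a 60 fps frame budget, before any sync stalls, while the same copy is vanishingly small inside a 2-minute offline render.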
In raster rendering, it is possible to gain some performance by running on multiple GPUs, but only in the region of 0-20%. In this scenario the ‘batch jobs’ (cloner / particle / fluid simulations, or light and shadow calculations) would be scheduled on different GPUs. However, copying the resulting data between GPUs erodes much of the benefit gained by distributing the work.
At present, the work done to optimise Notch on a single GPU, combined with the pace of GPU hardware development, far outstrips the advantages of multi-GPU raster rendering. Once the pace of innovation in single-GPU rendering slows, our attention will turn to multiple GPUs.
Can my media server use an extra GPU to render Notch graphics?
Notch does allow media servers to render Notch frames on additional GPUs; however, most media servers have opted not to, for a few reasons.
In a media server, input-to-output latency is often a top priority. If Notch runs on a separate GPU, all live camera feeds have to be copied from the primary GPU to the generative GPU for processing, and the treated content then has to be transferred back again. These copies are slow, and their cost can outweigh the advantage of the additional GPU when live camera effects are used.
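Whether offloading pays off depends on the time saved versus the two copies (in and out). A hedged sketch with illustrative numbers, not measured media-server figures:

```python
# When does offloading a camera effect to a second GPU pay off?
# All numbers are illustrative assumptions.
def net_gain_ms(effect_ms_primary, effect_ms_secondary,
                frame_bytes, link_bytes_per_s):
    """Time saved by offloading, minus the two PCIe copies (in and out)."""
    copy_ms = frame_bytes / link_bytes_per_s * 1000
    return (effect_ms_primary - effect_ms_secondary) - 2 * copy_ms

HD_FRAME = 1920 * 1080 * 4       # 1080p RGBA8 camera frame, ~8.3 MB
LINK = 16e9                      # PCIe 3.0 x16, theoretical peak

# Offloading a 3 ms effect that runs in 1 ms on the spare GPU:
print(f"{net_gain_ms(3.0, 1.0, HD_FRAME, LINK):+.2f} ms")        # +0.96 ms
# The same effect on a 4K feed: the copies cost more than the offload saves.
print(f"{net_gain_ms(3.0, 1.0, 3840 * 2160 * 4, LINK):+.2f} ms")
```

The sign of the result flips as the feed grows, which is why vendors weigh this per effect and per resolution rather than offloading by default.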
Running two GPUs that require multiple copies and syncs per frame also introduces complexities in timing and render-sync between the GPUs (further complicated by genlock and multi-machine setups).
Again, this choice is media server vendor specific and may change over time. You may wish to talk further with your vendor.
Can you select which GPU Notch Builder is running on?
Yes (from 0.9.19). Go to File->Preferences->GPU.