Smart Tracer

The hybrid real-time and offline renderer.

Method

This node is our hybrid real-time renderer, utilising hybrid raytracing and deferred rendering to render a high-quality image in real time.

Suitable for rendering 3D content offline on any hardware, and in real time on sufficiently powerful hardware.

Parameters

Parameter | Details
Render Mode | Changes the method used for rendering the visibility buffer, which has a large impact on how the frame is generated.
  • Rasterise : The visibility buffer is rasterised, meaning each object is placed into the visibility buffer individually. This is faster for scenes with a lower polygon and object count.
  • Raytrace : The visibility buffer is raytraced directly. This is usually better for geometry-heavy scenes.
  • Refine : The visibility buffer is raytraced directly, and the scene is then refined over time. This is most appropriate when exporting video, as it gives extra improvements to anti-aliasing and temporal stability.
Max Refine Steps | The maximum number of refine steps per frame. More steps improve overall quality in the scene, but this only applies when exporting to video.
Quality | Controls the overall quality settings for the renderer, including shadows and sampling.
Antialiasing | These options provide different methods of reducing aliasing. For more about aliasing and anti-aliasing, see here.
  • DLSS : Uses Nvidia’s DLSS to resolve aliasing. Often works faster than FSAA, but can introduce artefacts.
  • Temporal : Applies a subpixel jitter to the camera and averages the result over time. Decent AA results for a relatively small cost, but can make scenes appear blurry or jittery in some cases (a sketch of this appears after the table).
  • FSAA : Samples every pixel multiple times per rendered frame, greatly improving AA and also smoothing out some temporal artefacts. Not suitable for live output, but perfect for video and stills.
  • Off : No anti-aliasing is applied.
DLSS Resolution | Changes the resolution the scene is rendered at before being passed to DLSS for anti-aliasing. Only functions when DLSS is selected as the anti-aliasing method.
  • 100% : The project stays at the same resolution, so DLSS is primarily used to reduce aliasing.
  • 50% : The project is rendered at half resolution, then passed to DLSS for upscaling and anti-aliasing. This can dramatically improve performance, but can introduce artefacts (a worked example appears after the table).
Depth Of Field | Controls how depth of field is generated in the scene.
  • Depth Of Field : Uses a variable blur based on the Z-depth of objects in the scene.
  • Off : No depth of field is generated, regardless of camera properties.
Emissive Lighting | Controls how emissive lights are handled in the scene.
Diffuse Bounces | Selects the method used to generate diffuse bounce lighting in the scene. These techniques simulate the way light bounces around the scene before reaching the camera, creating a much more realistic result.
  • Raytraced : The scene is raytraced with one ray per pixel and a single bounce, using light probes for subsequent bounces. ReSTIR is used to combine the resulting rays with rays from adjacent pixels and previous frames, giving a more accurate and stable result. Because this technique is somewhat dependent on light probes, artefacts from light probes such as light bleed and bad probe placement can appear in the final result, but they are far less noticeable. It’s also worth noting that this effect can take a moment to refine, so quick camera movement should be used sparingly.
  • Probes : Light probes are automatically generated on the surfaces of all geometry visible from the camera’s view. They are then used to sample the scene around them to add bounce lighting to nearby surfaces, giving efficient bounce lighting at high frame rates. However, this technique can produce artefacts around thin surfaces or with low probe densities. It’s also worth noting that this effect can take a moment to refine, so quick camera movement should be used sparingly.
  • Off : No diffuse bounces are generated. This is faster to process, but can result in very dark scenes without extra lighting to fill out the shadows.
Num Diffuse Bounces | Controls how many light bounces the probes will account for when sampling the scene. More bounces increase the quality of the bounce lighting up to a point, but can impact performance heavily. 1 is usually fine for outdoor scenes, 2-3 can fill out lighting in interior scenes, and 4+ is often overkill unless you are actively putting lighting in hard-to-reach areas (a worked example of the diminishing returns appears after the table).
Probe Separation Distance | The minimum distance between light probes in the scene. Higher probe densities can improve the quality of bounce lighting up to a point, but can impact performance and reduce lighting quality if pushed further than necessary.
Reflections | Controls how reflections are generated in the scene.
  • Raytraced : Uses raytracing to generate accurate reflections in the scene. Accurate, but can impact performance with lots of geometry.
  • Off : No reflections are generated from objects in the scene, but simple reflections from skylights and specular reflections will still be generated.
Refractions | Controls how refractions are rendered in the scene.
  • Multi Bounce : Rays are sent through the geometry and can bounce up to 8 times before any further refractive surfaces are disregarded.
  • Single Bounce : Rays are sent through the geometry and bounce once before any further refractive surfaces are ignored.
  • Simple : Refractions are generated from screen-space warping. Very fast, but inaccurate.
  • Off : No refractions are applied, so refractive surfaces won’t appear in the scene.
Refraction Max Distance | Sets a maximum amount a material can be refracted. Useful for limiting some of the artefacts from the simpler refraction modes.
Ambient Occlusion | Simulates the natural shadowing that occurs in corners and crevices of a scene by darkening those areas. Useful for exaggerating detail in a scene.
  • SSAO : Generates ambient occlusion using screen-space data from the G-Buffer. Fast and reasonably accurate, but can generate erroneous shading around object edges and thin structures.
  • Probes : Uses the probes generated for global illumination to also generate ambient occlusion. Better for wider ambient occlusion ranges than SSAO, but still relies on some screen-space methods to generate AO.
  • Raytraced : Raytraces AO with a few samples, then denoises for a smooth result. Gives great and accurate results, but slower than the other methods.
  • Off : No ambient occlusion is generated.
AO Blend Mode | Controls how the generated ambient occlusion is applied to the scene (a blend sketch appears after the table).
  • Multiply : The ambient occlusion is multiplied against the scene after all the lighting passes have been generated.
  • Replace : Replaces the current lighting with the ambient occlusion. Useful for getting a simple clay render of the scene.
  • Add : Adds the ambient occlusion lighting on top of the lighting of the scene, brightening all areas except those where the ambient occlusion is applied.
AO Blend Amount | How much the chosen blend mode is applied to the scene. Does not apply to the Replace blend mode.
AO Distance | Controls how wide the ambient occlusion radius is. Lower values tighten the ambient occlusion to smaller areas; larger values can end up darkening the whole object and, in some modes, add visible noise.
AO Falloff Curve | Controls how the weighting of ambient occlusion falls off over distance. Higher values will darken areas of ambient occlusion further, while lower values will lessen the ambient occlusion except in narrow crevices (a falloff sketch appears after the table).
Max Output Luminance | Sets a maximum output value when sampling light in the scene. Useful for reducing ‘firefly’ noise in the scene, as further light samples are likely to average out faster, but it can reduce the overall lighting accuracy in the scene (a clamping example appears after the table).
Diffuse Bounce Multiplier | Boosts the brightness of diffuse bounces in the scene. Useful for artificially brightening unlit areas of a scene without adding extra diffuse bounces, but it is not accurate, and if pushed too far it can introduce artefacting and blow up the lighting model as surfaces reflect more light than they receive.
Motion Blur | Controls how motion blur is applied to the scene.
  • Motion Blur : Applies a motion-vector-based blur to all objects in the scene, for both camera motion and object motion.
  • Off : No motion blur is generated from the scene.
Show Probes | Controls how probes are rendered in the viewport. Useful for evaluating how global illumination is being generated in the scene.
  • Probes : Renders the probes in the viewport as lit spheres.
  • Off : No preview of the probes is shown in the viewport.
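
The Temporal option works by offsetting the camera a subpixel amount each frame and averaging the result over time. The sketch below illustrates that idea in Python; the Halton jitter sequence and the toy one-pixel ‘renderer’ are assumptions made for the example, not details of Notch’s implementation.

    def halton(index, base):
        # Low-discrepancy sequence commonly used to generate subpixel jitter offsets.
        result, f = 0.0, 1.0
        while index > 0:
            f /= base
            result += f * (index % base)
            index //= base
        return result

    def sample_edge(x):
        # Toy stand-in for the renderer: a geometric edge covers the left 30% of this pixel.
        return 1.0 if x < 0.3 else 0.0

    history = 0.0
    for frame in range(63):
        jitter = halton(frame + 1, 2)                  # subpixel offset in [0, 1)
        current = sample_edge(jitter)                  # re-render the pixel with the jittered camera
        history += (current - history) / (frame + 1)   # average the result over time

    print(round(history, 2))   # 0.3 - the pixel settles at its true edge coverage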
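
For the 50% DLSS Resolution setting, a quick worked example (assuming a 1920×1080 output) shows where the performance gain comes from: halving the resolution on both axes shades only a quarter of the pixels before DLSS upscales the image back to the output size.

    # Worked example for the 50% DLSS Resolution setting (output size assumed).
    output_w, output_h = 1920, 1080
    internal_w, internal_h = output_w // 2, output_h // 2            # 960 x 540 at 50%
    saving = 1.0 - (internal_w * internal_h) / (output_w * output_h)
    print(internal_w, internal_h, f"{saving:.0%} fewer pixels shaded")   # 960 540 75% fewer pixels shaded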
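
The guidance on Num Diffuse Bounces follows from how quickly later bounces run out of energy. A rough illustration, assuming an average surface albedo of 0.5 (an assumed value, not something Notch exposes):

    # Each successive bounce can carry at most 'albedo' times the energy of the
    # previous one, so later bounces add progressively less to the image.
    albedo = 0.5
    total = sum(albedo ** n for n in range(1, 100))            # all bounces combined
    for bounces in range(1, 5):
        captured = sum(albedo ** n for n in range(1, bounces + 1))
        print(bounces, f"{captured / total:.0%} of the total bounce light")
    # roughly: 1 -> 50%, 2 -> 75%, 3 -> 88%, 4 -> 94%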
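
The AO Blend Mode options combine the occlusion term with the lit scene in different ways. The sketch below shows one common way such blends are defined; the exact formulas are assumptions, not Notch’s shader code. Here ao is 1.0 in open areas and approaches 0.0 in occluded corners.

    def apply_ao(lit_colour, ao, mode, amount=1.0):
        if mode == "multiply":
            # Darken the lit scene by the occlusion term, scaled by the blend amount.
            return lit_colour * (1.0 - amount * (1.0 - ao))
        if mode == "replace":
            # Show only the occlusion term: a simple clay render of the scene.
            return ao
        if mode == "add":
            # Brighten on top of the existing lighting everywhere except occluded areas.
            return lit_colour + amount * ao
        return lit_colour

    print(apply_ao(0.8, 0.25, "multiply"))   # 0.2 - a corner pixel is darkened
    print(apply_ao(0.8, 0.25, "replace"))    # 0.25 - clay-style output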
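
AO Distance and AO Falloff Curve interact: the distance sets the radius inside which occlusion is gathered, and the falloff curve shapes how the weighting drops off inside that radius. The weighting below is only an illustrative guess at how such parameters are commonly combined, not Notch’s actual curve.

    def ao_weight(sample_distance, ao_distance, falloff_curve):
        if sample_distance >= ao_distance:
            return 0.0                                    # beyond the AO radius, no contribution
        closeness = 1.0 - sample_distance / ao_distance   # 1.0 at the surface, 0.0 at the radius
        return closeness ** falloff_curve                 # the curve exponent reshapes the falloff

    for d in (0.1, 0.5, 0.9):
        print(d, round(ao_weight(d, 1.0, 2.0), 2))        # 0.1 0.81, 0.5 0.25, 0.9 0.01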
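
Max Output Luminance reduces ‘firefly’ noise by capping how bright any single light sample can be. A minimal sketch, assuming a simple per-sample clamp before averaging and made-up sample values:

    # One extremely bright sample (a 'firefly') dominates the unclamped average;
    # clamping it trades a little lighting accuracy for far less noise.
    samples = [0.4, 0.6, 0.5, 120.0, 0.5]
    max_output_luminance = 4.0

    unclamped = sum(samples) / len(samples)
    clamped = sum(min(s, max_output_luminance) for s in samples) / len(samples)
    print(round(unclamped, 2), round(clamped, 2))   # 24.4 vs 1.2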

Inputs

Name | Description | Typical Input
Max Output Luminance | TBC | TBC
Diffuse Bounce Multiplier | TBC | TBC
Probe Separation Distance | TBC | TBC
Motion Responsiveness | TBC | TBC
Refraction Max Distance | TBC | TBC
AO Distance | TBC | TBC
AO Blend Amount | TBC | TBC
AO Falloff Curve | TBC | TBC