Introduction

Notch can read both RGB and depth data from either the Kinect v1 or the Kinect v2. You can use this data in a whole variety of ways: generating meshes, emitting particles, generating fields, and using the depth and colour images in video processing chains.

Setup

You’ll need either a Kinect 1.0, or a Kinect 2.0 with a Kinect for Windows Adapter. Notch uses the official Microsoft SDKs to integrate with the Kinect cameras, so you may also need to download the relevant drivers from Microsoft (the Kinect v1 drivers v1.8 and SDK v1.8, and the Kinect v2 drivers).

Connect your Kinect to your PC and open Notch Builder. Go to Devices > VideoIn/Camera/Kinect Settings to find the Video In settings. Depending on which Kinect sensor you have, tick “Kinect Enabled” underneath the corresponding device and hit Apply; you should see your Kinect light up. To improve performance, you can read only the colour or only the depth channel from the Kinect camera if the other channel is not required.

Adding Kinect sources to your scene

Add a Depth Camera / Kinect Source node to the Nodegraph (Video Processing > Input Output). This is the source node for any Kinect-related outputs. The node contains both the RGB and depth images from the Kinect, and by default it generates an alpha channel which is clipped using the depth image and the near and far planes set on the node. You may also output depth from this node as luminance in the RGB channels. Connecting it to a Kinect Mesh node will allow you to visualize what the Kinect sensor can see in 3D space. You need one Kinect Source node for each physical Kinect camera in use.
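
Conceptually, the depth-based alpha clipping behaves like the sketch below. This is an illustrative approximation only, assuming 16-bit depth values in millimetres (as the Kinect SDKs provide) and near/far planes in metres; the function and parameter names are placeholders, not Notch’s actual implementation.

    #include <cstdint>
    #include <vector>

    // Illustrative sketch: derive an alpha mask from a depth image using
    // near/far planes. Depth values are 16-bit millimetres; 0 means no reading.
    std::vector<uint8_t> DepthToAlpha(const std::vector<uint16_t>& depthMm,
                                      float nearPlaneM, float farPlaneM)
    {
        std::vector<uint8_t> alpha(depthMm.size());
        for (size_t i = 0; i < depthMm.size(); ++i)
        {
            const float depthM = depthMm[i] * 0.001f;              // mm -> m
            const bool inRange = depthMm[i] != 0 &&
                                 depthM > nearPlaneM && depthM < farPlaneM;
            alpha[i] = inRange ? 255 : 0;                          // opaque inside the range
        }
        return alpha;
    }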

Nodes that use Kinect source

Video nodes can use the Kinect RGB or depth images as inputs. However, the following nodes are specifically made for use with the Kinect:

Using multiple Kinects with Notch

While Notch supports multiple Kinect v1.0 cameras (up to four), the PC hardware and USB buses limit how many can be used simultaneously. Users have successfully run four Kinect v1.0s on a single machine, but this depends on the PC hardware itself. Only one Kinect v1.0 camera may be used per USB bus lane; on a typical PC, multiple USB ports share a single lane, so additional USB add-on cards are usually necessary to make four Kinects work.

Only one Kinect v2.0 is supported per PC. This is a driver restriction, not a Notch restriction.

Use of Kinect v1 and Kinect v2 is mutually exclusive: only one or the other’s drivers may be installed on the PC at any one time, and only one or the other may be used in Notch at once, so it is not possible to mix Kinect v1 and Kinect v2 in one project.

Skeletal data

Notch supports Microsoft’s skeletal data from Kinect v1 and v2. However, we have not found it to be suitable for production environments. The conditions required for Microsoft’s skeletal tracking don’t lend themselves to the stage or to public spaces: specifically, it requires a control pose at the beginning and the user to remain in a controlled space in view of the sensor.

Recording Kinect data

Notch is able to record a stream of Kinect data for playback when no camera is connected. To do this, use the “Capture Kinect1/2 Data” option in the Devices menu (Builder Pro only). Recording begins immediately and continues until the option is used again to switch it off. The resulting file may be loaded into Notch as a “Kinect Stream” resource. The Kinect Source node has a parameter which takes a pre-recorded stream, which is then used as a substitute for the live Kinect camera feed.

Considerations for the use of Kinect in performance environments

Using Kinect cameras takes care and planning. The Kinect was designed as a gaming controller, not for performance work. If you are inexperienced with their use, we strongly recommend you engage an experienced interaction/integration specialist. Specific considerations:

  • IR interference: Incandescent light sources (including the sun, and some models of strobe) significantly degrade the Kinect depth image, making it unusable.
  • Signal extension: Extending Kinect v1.0 USB cabling is known to be achievable; certain models of powered USB-over-Ethernet extenders have been found to work over distances of more than 50 metres. However, users still report significant technical challenges in extending Kinect v2.0.
  • Field of view: Both versions of the Kinect have very particular fields of view, and each version’s depth and RGB fields of view differ (see the sketch after this list).
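
As a rough illustration of why the fields of view matter, the sketch below unprojects a depth pixel into a 3D point with a simple pinhole model. The FOV figures are approximate published values (Kinect v1 depth roughly 57°×43°, Kinect v2 depth roughly 70°×60°), and the maths ignores lens distortion and the colour/depth offset, so treat it as an approximation rather than either sensor’s true calibration.

    #include <cmath>
    #include <cstdint>

    struct Point3 { float x, y, z; };

    // Rough pinhole unprojection of a depth pixel to a 3D point in metres.
    // Approximate depth FOVs: Kinect v1 ~57x43 degrees at 640x480,
    // Kinect v2 ~70x60 degrees at 512x424. Lens distortion is ignored.
    Point3 UnprojectDepthPixel(int px, int py, uint16_t depthMm,
                               int width, int height,
                               float hFovDeg, float vFovDeg)
    {
        const float deg2rad = 3.14159265f / 180.0f;
        const float z  = depthMm * 0.001f;                                // mm -> m
        const float fx = (width  * 0.5f) / std::tan(hFovDeg * 0.5f * deg2rad);
        const float fy = (height * 0.5f) / std::tan(vFovDeg * 0.5f * deg2rad);
        return { (px - width  * 0.5f) / fx * z,
                 (py - height * 0.5f) / fy * z,
                 z };
    }

    // Example: a near-centre Kinect v2 depth pixel reading 2 metres.
    // Point3 p = UnprojectDepthPixel(256, 212, 2000, 512, 424, 70.0f, 60.0f);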

Troubleshooting

My Kinect camera doesn’t show up in Notch. What do I do?

  • Does the camera work outside of Notch? The Kinect SDKs come with a number of example programs that test the cameras; “Kinect Studio” for Kinect v1 is worth trying. A minimal SDK-level check for Kinect v2 is sketched after this list.
  • The status of the Kinect cameras is visible in Device Manager. They may be missing drivers, or the problem may be due to the USB connection or a lack of power.
  • Kinect v2 requires USB 3.0, a compatible USB bus and a suitable software USB stack, which is available only in Windows 8 and above. Windows 7 cannot support the Kinect v2. Older PCs may also be unable to support the device.
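
If you want to rule Notch out entirely, a minimal check against the Kinect for Windows SDK 2.0 (C++, linking kinect20.lib), such as the sketch below, reports whether the driver can see and open a Kinect v2 sensor. Availability is reported asynchronously, so the short wait after opening the sensor is deliberate.

    #include <windows.h>
    #include <Kinect.h>   // Kinect for Windows SDK 2.0; link against kinect20.lib
    #include <cstdio>

    int main()
    {
        IKinectSensor* sensor = nullptr;
        if (FAILED(GetDefaultKinectSensor(&sensor)) || !sensor)
        {
            std::printf("No Kinect v2 sensor found (SDK/driver not installed?)\n");
            return 1;
        }
        sensor->Open();
        Sleep(2000);                         // availability is reported asynchronously

        BOOLEAN available = FALSE;
        sensor->get_IsAvailable(&available);
        std::printf(available ? "Kinect v2 is connected and available\n"
                              : "Sensor opened, but no camera is available "
                                "(check the USB 3.0 connection and power)\n");

        sensor->Close();
        sensor->Release();
        return available ? 0 : 2;
    }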

The frame rate of the camera appears poor, even though Notch’s frame rate is fine.

  • The Kinect v1 and v2 are both capable of 30 Hz input. The Kinect v1 is only capable of 30 Hz at 640×480 and below.
  • The frame rate of the Kinect is limited by the speed of your USB bus. Multiple cameras and slow or busy buses will affect this. If you have multiple cameras connected and a low frame rate, try disconnecting some cameras (a sketch for measuring the delivered frame rate outside Notch follows this list).
  • If your scene only requires depth data from the Kinect, try disabling the colour channel in the Video In / Kinect Settings dialog. This will reduce the amount of USB bandwidth used.
  • If multiple programs try to use the Kinect camera at once, the frame rate seen in each program will be reduced. This is particularly relevant if running Kinects in Notch standalone at the same time as Builder, or running instances of the Notch plugin inside media servers. Only one active plugin may use the Kinect at any one time to achieve full frame rate.
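
To check whether the USB bus, rather than Notch, is the bottleneck, you can measure the delivered depth frame rate directly against the Kinect for Windows SDK 2.0. The sketch below polls the depth reader for five seconds and prints the frame rate it actually receives; it is a diagnostic aid, not part of Notch, and error handling is kept minimal.

    #include <windows.h>
    #include <Kinect.h>   // Kinect for Windows SDK 2.0; link against kinect20.lib
    #include <cstdio>

    int main()
    {
        IKinectSensor* sensor = nullptr;
        IDepthFrameSource* source = nullptr;
        IDepthFrameReader* reader = nullptr;

        if (FAILED(GetDefaultKinectSensor(&sensor)) || !sensor) return 1;
        sensor->Open();
        sensor->get_DepthFrameSource(&source);
        if (!source || FAILED(source->OpenReader(&reader)) || !reader) return 1;

        // Poll for five seconds and count the depth frames actually delivered.
        int frames = 0;
        const DWORD start = GetTickCount();
        while (GetTickCount() - start < 5000)
        {
            IDepthFrame* frame = nullptr;
            if (SUCCEEDED(reader->AcquireLatestFrame(&frame)) && frame)
            {
                ++frames;                    // a new frame arrived since the last poll
                frame->Release();
            }
            Sleep(1);                        // avoid a hot spin; 30 Hz is the maximum
        }
        std::printf("Delivered depth frame rate: %.1f fps\n", frames / 5.0f);

        reader->Release();
        source->Release();
        sensor->Close();
        sensor->Release();
        return 0;
    }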