Kinect Controller

Introduction

In this project, students will work with the Kinect sensor to detect a set of features of body movements.

Materials

Hardware

This is the basic hardware you will need:

  • A computer running Windows, Linux, or OS X.
  • A Kinect sensor, version 1.

Software

The software stack includes three layers: a driver, middleware, and a wrapper for the development environment used.

Drivers:

  • MS SDK v 1.8, which includes the Microsoft drivers. It works only on Windows, but it gives access to all of the sensor's capabilities.
  • libfreenect, from the OpenKinect project, which has to be compiled from source. This driver is used by the openkinect wrapper.
  • OpenNI Kinect drivers. Each OS has its own driver. This driver is used by the SimpleOpenNI wrapper.

Middleware:

  • With the OpenNI driver you can use the NITE middleware, which includes functionality such as skeleton detection and gesture identification.

Wrappers for Processing:

  • Kinect4WinSDK. This is the simplest way to use the Kinect with Processing, as it only needs the MS SDK 1.8. It works only on Windows.
  • SimpleOpenNI, an OpenNI and NITE wrapper; it only works with Processing versions earlier than 3.0.

Wrappers also exist for other environments.

Framework for this project (only for Windows):

  • Processing
  • Kinect4WinSDK.
  • MS SDK v 1.8 (includes the Microsoft drivers, as described above).

You will also need some libraries for Processing:

  • BlobDetection detects connected groups of pixels whose brightness is below a configurable threshold value; a short usage sketch follows this list.
  • Box2D is a library that simulates rigid bodies in 2D.
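
For orientation, this is a minimal sketch of the usual BlobDetection call sequence (the threshold value and the drawing code are just illustrative choices):

    import blobDetection.*;

    BlobDetection blobs;

    void setup() {
      size(640, 480);
      blobs = new BlobDetection(640, 480);
      blobs.setPosDiscrimination(false);  // false: find areas darker than the threshold
      blobs.setThreshold(0.4);            // brightness threshold in 0..1
    }

    // Run detection on a 640x480 image and draw the blobs' bounding boxes.
    void analyze(PImage img) {
      img.loadPixels();
      blobs.computeBlobs(img.pixels);
      for (int i = 0; i < blobs.getBlobNb(); i++) {
        Blob b = blobs.getBlob(i);
        // Blob coordinates are normalized to 0..1.
        rect(b.xMin * width, b.yMin * height, b.w * width, b.h * height);
      }
    }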

Kinect sensor

This section summarizes the main features of the Kinect (v1) sensor:

[kinect.jpg: the Kinect v1 sensor]
  • An RGB video camera that gives images of 640x480 or 1280x960 pixels, at 12, 15, or 30 fps depending on the resolution.
  • An IR projector and an IR camera that provide depth data of 80x60, 320x240, or 640x480 pixels at 30 fps.
  • An array of 4 microphones.
  • A tilt motor.

The Kinect depth sensor's range goes from about 60-80 cm to 4-5 m.

Coordinate system

There are two main coordinate systems that have to be taken into account. This figure shows the sensor coordinate system:

[IC534689.png: the sensor coordinate system]

But there is also a projective coordinate system, related to what Processing draws on the screen; a minimal conversion sketch follows.
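
As a rough illustration, here is a minimal pinhole-projection sketch from sensor coordinates (meters: x to the right, y up, z away from the sensor) to pixel coordinates in a 640x480 window. The focal-length value is an assumption, an approximate figure for the Kinect v1 depth camera; calibrate it for precise work:

    // Pinhole projection from sensor coordinates (meters) to pixels.
    // FOCAL is an assumed, approximate focal length in pixels.
    final float FOCAL = 525.0;

    PVector toScreen(PVector p) {
      float sx = 320 + (p.x / p.z) * FOCAL;  // origin moves to the image center
      float sy = 240 - (p.y / p.z) * FOCAL;  // y flips: screen y grows downward
      return new PVector(sx, sy, p.z);       // depth is kept in z
    }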

Skeleton

Kinect is able to give information about the position of each of the 20 joints of the user's skeleton:
[IC534688.png: the skeleton joint positions]

Exercises

  • Install all the necessary software components and check that they work.
  • Check out the tutorials to get started with the Processing language.
  • Run the Kinect4WinSDK example.
  • Get used to the different data the Kinect provides (see the first sketch after this list):
    • Kinect.GetImage(), Kinect.GetDepth() and Kinect.GetMask()
    • SkeletonData
  • Extract some features of the user. For example:
    • 3D vectors of the skeleton points.
    • The 3D vector of the center of mass of each user in the scene (see the second sketch after this list).
  • Calculate, as a normalized scalar value, the relative area of the body on the screen with respect to the maximum area, measured when the user is in the "Vitruvian pose" (value 1), so that the measure accounts for the user's scale (see the third sketch after this list).
  • Calculate the optical flow using any standard estimation method.
  • From the optical flow, calculate the 3D vector of the user's total movement.
  • From the optical flow, calculate a set of vectors by zones whose optical flow exceeds a configurable threshold. Each vector's origin should be the center of mass of its zone, and its value the sum of the optical-flow vectors of that zone's pixels (see the fourth sketch after this list).
  • Make an application that brings the Kinect into the Particles example of Processing, so that the particle source is at one hand. The Particles example is under File - Examples - Demos - Graphics (see the last sketch after this list).
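
A minimal sketch for the data-access exercise, following the structure of the example bundled with Kinect4WinSDK (the callback and constant names below come from that example; verify them against your version of the library):

    import kinect4WinSDK.Kinect;
    import kinect4WinSDK.SkeletonData;

    Kinect kinect;
    ArrayList<SkeletonData> bodies;

    void setup() {
      size(640, 480);
      kinect = new Kinect(this);
      bodies = new ArrayList<SkeletonData>();
    }

    void draw() {
      background(0);
      image(kinect.GetImage(), 0, 0, 320, 240);    // RGB stream
      image(kinect.GetDepth(), 320, 0, 320, 240);  // depth stream
      image(kinect.GetMask(), 0, 240, 320, 240);   // user silhouette
    }

    // Kinect4WinSDK reports skeletons through these callbacks.
    void appearEvent(SkeletonData s) {
      if (s.trackingState != Kinect.NUI_SKELETON_NOT_TRACKED) {
        bodies.add(s);
      }
    }

    void disappearEvent(SkeletonData s) {
      for (int i = bodies.size() - 1; i >= 0; i--) {
        if (bodies.get(i).dwTrackingID == s.dwTrackingID) bodies.remove(i);
      }
    }

    void moveEvent(SkeletonData b, SkeletonData a) {
      for (int i = 0; i < bodies.size(); i++) {
        if (bodies.get(i).dwTrackingID == b.dwTrackingID) {
          bodies.set(i, a);
          break;
        }
      }
    }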
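
For the skeleton features, a sketch along these lines could work. It assumes, as in the library's example, that SkeletonData exposes skeletonPositions as an array of PVector and skeletonPositionTrackingState as an array of ints, with 0 meaning "not tracked":

    // All joint positions of a skeleton, copied into fresh PVectors.
    PVector[] jointVectors(SkeletonData s) {
      PVector[] joints = new PVector[s.skeletonPositions.length];
      for (int i = 0; i < joints.length; i++) {
        joints[i] = s.skeletonPositions[i].copy();
      }
      return joints;
    }

    // A simple proxy for the center of mass: the average of all
    // joints that are currently tracked (state 0 = not tracked).
    PVector centerOfMass(SkeletonData s) {
      PVector com = new PVector();
      int n = 0;
      for (int i = 0; i < s.skeletonPositions.length; i++) {
        if (s.skeletonPositionTrackingState[i] != 0) {
          com.add(s.skeletonPositions[i]);
          n++;
        }
      }
      if (n > 0) com.div(n);
      return com;
    }

Averaging the joints is only a proxy; weighting them by approximate limb masses would be closer to the physical center of mass.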
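
For the relative-area exercise, one option is to count the user's pixels in the mask returned by Kinect.GetMask() and divide by a value calibrated in the Vitruvian pose. This sketch assumes background pixels in the mask are fully transparent; if your version returns a black background instead, test brightness() rather than alpha():

    float vitruvianArea = 0;  // calibrated maximum, in pixels

    // Counts the mask pixels that belong to the user.
    int bodyPixels(PImage mask) {
      mask.loadPixels();
      int count = 0;
      for (int i = 0; i < mask.pixels.length; i++) {
        if (alpha(mask.pixels[i]) > 0) count++;
      }
      return count;
    }

    void keyPressed() {
      // Press 'c' while the user holds the Vitruvian pose to calibrate.
      if (key == 'c') vitruvianArea = bodyPixels(kinect.GetMask());
    }

    // Normalized area: 1 in the Vitruvian pose, smaller otherwise.
    float relativeArea() {
      if (vitruvianArea == 0) return 0;
      return constrain(bodyPixels(kinect.GetMask()) / vitruvianArea, 0, 1);
    }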
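
For the optical-flow exercises, a simple per-zone Lucas-Kanade estimate can be computed directly on a downsampled grayscale copy of the RGB stream. This is a rough sketch of the idea (no pyramids, no smoothing, and the first frame produces garbage because the previous frame starts empty), not a robust implementation:

    int ZONES = 8;           // ZONES x ZONES grid of zones
    int W = 160, H = 120;    // working resolution
    float[] prev = new float[W * H];
    PVector[] zoneFlow = new PVector[ZONES * ZONES];

    PVector[] opticalFlowZones(PImage src) {
      PImage frame = src.get();   // copy, so the library's buffer is untouched
      frame.resize(W, H);
      frame.loadPixels();
      float[] cur = new float[W * H];
      for (int i = 0; i < cur.length; i++) cur[i] = brightness(frame.pixels[i]);

      int zw = W / ZONES, zh = H / ZONES;
      for (int zy = 0; zy < ZONES; zy++) {
        for (int zx = 0; zx < ZONES; zx++) {
          // Accumulate the Lucas-Kanade sums over the zone's pixels.
          float sxx = 0, sxy = 0, syy = 0, sxt = 0, syt = 0;
          for (int y = zy * zh + 1; y < (zy + 1) * zh - 1; y++) {
            for (int x = zx * zw + 1; x < (zx + 1) * zw - 1; x++) {
              int i = y * W + x;
              float ix = (cur[i + 1] - cur[i - 1]) * 0.5;  // horizontal gradient
              float iy = (cur[i + W] - cur[i - W]) * 0.5;  // vertical gradient
              float it = cur[i] - prev[i];                 // temporal gradient
              sxx += ix * ix; sxy += ix * iy; syy += iy * iy;
              sxt += ix * it; syt += iy * it;
            }
          }
          // Solve the 2x2 least-squares system for the zone's flow vector.
          float det = sxx * syy - sxy * sxy;
          PVector v = new PVector();
          if (abs(det) > 1e-4) {
            v.x = -(syy * sxt - sxy * syt) / det;
            v.y = -(sxx * syt - sxy * sxt) / det;
          }
          zoneFlow[zy * ZONES + zx] = v;
        }
      }
      arrayCopy(cur, prev);
      return zoneFlow;
    }

    // Total movement: the sum of the zone vectors above a threshold.
    PVector totalFlow(PVector[] zones, float threshold) {
      PVector total = new PVector();
      for (PVector v : zones) if (v.mag() > threshold) total.add(v);
      return total;
    }

Calling opticalFlowZones(kinect.GetImage()) once per draw() keeps the previous frame up to date; the z component of the exercise's 3D vectors could come from the change of the depth image over the same zones.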
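
Finally, rather than reproducing the full Particles demo, this fragment shows the key step: anchoring a minimal particle emitter to a hand joint. The joint index NUI_SKELETON_POSITION_HAND_RIGHT mirrors the MS SDK constant that Kinect4WinSDK exposes (check the exact name in your version), and the sketch assumes skeletonPositions holds coordinates normalized to 0..1:

    ArrayList<PVector> particles = new ArrayList<PVector>();
    ArrayList<PVector> velocities = new ArrayList<PVector>();

    // Spawn a few particles at the right hand of a tracked skeleton.
    void emitFromHand(SkeletonData s) {
      PVector hand = s.skeletonPositions[Kinect.NUI_SKELETON_POSITION_HAND_RIGHT];
      PVector src = new PVector(hand.x * width, hand.y * height);
      for (int i = 0; i < 5; i++) {
        particles.add(src.copy());
        velocities.add(PVector.random2D().mult(random(0.5, 2)));
      }
    }

    // Move, draw, and recycle the particles; call this from draw().
    void updateParticles() {
      for (int i = particles.size() - 1; i >= 0; i--) {
        PVector p = particles.get(i);
        PVector v = velocities.get(i);
        v.y += 0.05;                    // a little gravity
        p.add(v);
        ellipse(p.x, p.y, 4, 4);
        if (p.y > height) {
          particles.remove(i);
          velocities.remove(i);
        }
      }
    }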

Links

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License