Augmented Body


This project arises from the following questions:

What type of contribution can the controlled visual projection make to the poetry of the body?

Can a dance work itself generate a device, and not just a choreography for the stage?

The dialogue between dance and technology immediately raises one central concept: real-time feedback. On the visual side, this concept has one main consequence: the use of a sensor to capture the body's movements, which is why the project is based on a Kinect sensor. It also has consequences for the dramatic discourse: among all the body-language alternatives, improvisation becomes mandatory, since it is what provides interaction between the controlled visual projection and the choreographic composition in the present time.

These two main elements, the movement sensor and improvisation, generate a compositional framework that serves as the starting point for the rest of the work (the dialogue).

At the end of this page there is a video that shows the results of this DART project.

More information about the project can be found on the Aula de las Artes website (in Spanish).

To follow the evolution of this project, please visit the blog of the engineer Javier Picazas (in Spanish).

Voice-over Text and the Classical Myth of the Hero

The dancer knows the possibilities of the device that has been created in dialogue. Moreover, the dancer's body is just one possible, circumstantial body. This idea leads to a text that is read in voice-over, so that it does not depend on the body that is on stage.
This text contains the conceptual part of the work and establishes the name of the project: The Augmented Body (by Alfredo Miralles, dancer and writer). It is post-dramatic and therefore analogous to the abstract aesthetic of the visual part. It is influenced by the La Tristura collective, which works along a line where text and action configure two reading levels even though they are not in an obvious relationship. Without characters, actions or a narrative outcome, the Augmented Body text relates to the idea of generating a conceptual line out of the artistic direction, an improvisation (an a priori uncontrolled action) and a presence that is not directly linked to the project, since the interactive projection is later, once the improvisation finishes, offered to the public as an existential game.

The text revisits the figure of the classical hero and his belonging to the quotidian world. In this way the Augmented Body emerges both as a feasible piece of such a device (the controlled interactive projection)
and as a call for each individual to take power over his or her own life.

Technical Description

Hardware Description

Basically, the system consists of a Kinect sensor aimed at a big screen; the sensor covers the action of the dancer, who is situated between the sensor and the screen.

There is also a projector in front of the screen, on the same axis as the Kinect sensor. The projector has an ideal projection distance of between 16 m and 20 m; this distance is chosen depending on the environment where the piece is going to be performed.
On the other hand, the Kinect sensor works in a range between approximately 1 m and 12 m. Therefore the sensor and the projector are usually at different distances from the user, which can be a serious problem when trying to make the projections match the dancer's figure.
To solve this "offset problem", a calibration algorithm has been developed. The algorithm simply scales the projected image according to the dimensions of the whole projected screen. A manual scale factor can also be introduced during the calibration process, which is carried out by the technician before the show.
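The calibration can be pictured as a simple linear scaling from Kinect image coordinates to projector coordinates. The sketch below is only an illustration of that idea under assumed names and an assumed linear model; it is not the project's actual code:

```java
// Hypothetical sketch of the offset calibration described above.
// The class and parameter names are illustrative assumptions.
public class OffsetCalibration {
    private final double autoScaleX, autoScaleY; // derived from the projected screen size
    private double manualScale = 1.0;            // set by the technician before the show

    public OffsetCalibration(double screenWidthPx, double screenHeightPx,
                             double kinectViewWidthPx, double kinectViewHeightPx) {
        // Scale the Kinect image so it fills the whole projected screen.
        this.autoScaleX = screenWidthPx / kinectViewWidthPx;
        this.autoScaleY = screenHeightPx / kinectViewHeightPx;
    }

    public void setManualScale(double s) { manualScale = s; }

    // Map a pixel of the Kinect image to projector coordinates.
    public double[] map(double x, double y) {
        return new double[] { x * autoScaleX * manualScale,
                              y * autoScaleY * manualScale };
    }
}
```

With a 640x480 Kinect image and a 1920x1080 projected screen, the automatic factors would be 3.0 and 2.25, and the technician's manual factor would fine-tune the match on site.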

Control Description

The Control System follows a classical feedback loop.

There is a functional block responsible for reading data from the Kinect sensor in three structures: a depth image, an RGB image and an infrared image. The depth image gives the 3D points of the environment directly in front of the Kinect. Since on our stage the back of the environment is the screen, it is quite easy to separate the dancer from the environment. From the sensor readings a small set of high-level parameters is extracted, which the control block uses directly to drive the visual projection. The draw block is in charge of generating the visual elements.
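Because the screen forms the background at a known depth, separating the dancer can be as simple as a depth threshold: any valid pixel noticeably nearer than the screen belongs to the body. A minimal sketch of this idea (names and the margin value are assumptions):

```java
public class DepthSegmenter {
    // Pixels nearer than the screen (minus a safety margin) are assumed
    // to belong to the dancer; a depth of 0 is the Kinect's "no reading".
    public static boolean[] segment(int[] depthMm, int screenDepthMm, int marginMm) {
        boolean[] dancer = new boolean[depthMm.length];
        for (int i = 0; i < depthMm.length; i++) {
            int d = depthMm[i];
            dancer[i] = d > 0 && d < screenDepthMm - marginMm;
        }
        return dancer;
    }
}
```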

The setup block calibrates the projected images as a function of the disposition of the system's hardware. The system is therefore adaptable and quite easy to install in any space, which is an important requirement for the piece.

Input Elements: High-level Parameters

From such data the system abstracts the relevant information, extracting higher-level parameters such as the main 3D point where the dancer is centred, the mean velocity of all the points of the dancer's body, or the surface that the dancer takes up in the captured image.
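The three parameters named above can be computed in one pass over the body's points. The following is a minimal sketch under assumed data shapes (N position and velocity triples, plus the total pixel count of the captured image), not the project's real implementation:

```java
public class BodyParams {
    public final double[] center = new double[3]; // mean 3D point of the body
    public final double meanSpeed;                // mean magnitude of the point velocities
    public final double surface;                  // fraction of the image the body occupies

    // points: N x 3 positions; velocities: N x 3; totalPixels: size of the captured image
    public BodyParams(double[][] points, double[][] velocities, int totalPixels) {
        double speedSum = 0;
        for (int i = 0; i < points.length; i++) {
            for (int k = 0; k < 3; k++) center[k] += points[i][k] / points.length;
            double[] v = velocities[i];
            speedSum += Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        }
        meanSpeed = points.length == 0 ? 0 : speedSum / points.length;
        surface = (double) points.length / totalPixels;
    }
}
```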

The user's skeleton is also calculated using the OpenNI and NITE libraries. These data give a 3D point for each of the user's joints (11 different joints, including the head), as well as a 3D velocity vector for each joint.

Output Elements: The Scenes

The output of the system basically consists of the visual images that are projected over the dancer. These images are created in real time using the Processing framework. The main application is divided into several scenes with different aesthetic intentions. The movement from one scene to another is controlled by the dancer, who uses the interactive projection as an augmented device. Notice that each scene is also interactive, so interaction is present both within each scene and in the scene-changing process.
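The scene-changing process can be pictured as a small state machine driven by the high-level parameters. The sketch below follows the scene sequence described later on this page (lines, threads, points, particles, shrinking circle), but the threshold values and names are purely illustrative assumptions:

```java
public class SceneController {
    public enum Scene { LINES, THREADS, POINTS, PARTICLES, CIRCLE }
    private Scene current = Scene.LINES;

    // Advance the scene from the dancer's high-level parameters.
    // The thresholds here are illustrative, not the piece's real values.
    public Scene update(double meanSpeed, double surface) {
        switch (current) {
            case LINES:     if (meanSpeed > 1.0) current = Scene.THREADS;   break;
            case THREADS:   if (meanSpeed < 0.2) current = Scene.POINTS;    break;
            case POINTS:    if (meanSpeed > 1.0) current = Scene.PARTICLES; break;
            case PARTICLES: if (surface < 0.05) current = Scene.CIRCLE;     break; // crouching
            case CIRCLE:    break; // final scene of this part
        }
        return current;
    }
}
```

Because each transition reads the same parameters the scenes themselves react to, interaction is present both inside a scene and in the change between scenes, as noted above.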

Scenes are created from a basic set of visual resources that include, but are not limited to: drawing point-based images (such as snow), lines (such as lightning or threads), surfaces (such as a part of the user's body) or 3D objects (cubes or spheres); and changes to the user's figure or to background visual features such as texture or colour.

Scenes include "the tail", where any part of the dancer's body that moves faster than a threshold is kept in the image with a special colour, creating a kind of rainbow along the movement. There is also the snow scene, where snow particles in the background interact with the user's figure.
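The "tail" effect can be sketched as a persistent buffer into which fast-moving pixels are stamped with a slowly cycling hue. This is only an illustration of the idea under assumed names, not the scene's actual code:

```java
public class TailScene {
    // Persistent tail buffer: a hue per pixel, or -1 when empty.
    private final float[] tailHue;
    private float hue = 0;

    public TailScene(int pixels) {
        tailHue = new float[pixels];
        java.util.Arrays.fill(tailHue, -1f);
    }

    // Pixels moving faster than the threshold are stamped into the tail
    // with the current hue, producing the "rainbow" along the movement.
    public void update(double[] speed, double threshold) {
        hue = (hue + 1) % 360; // cycle colours over time
        for (int i = 0; i < speed.length; i++)
            if (speed[i] > threshold) tailHue[i] = hue;
    }

    public float hueAt(int i) { return tailHue[i]; }
}
```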

The Augmented Body on Stage

The final piece has been presented at several dance and arts festivals. The sequence of the piece is divided into two parts. In the first one, a dancer interacts with the created device (the interactive projections) while the main voice-over text sounds. The sequence of the scenes changes each time the piece is presented, but it is possible to find a video from the Oxifest.

In that video it is possible to see how the dancer seeks the vertical direction as a metaphorical symbol of individual freedom. The piece begins with movements in the horizontal direction, which are perceived and transformed into a velocity parameter that controls a scene articulating different lines in the background of the image.

The next scene is based on the projection of random threads in two directions, horizontal and vertical, which join at the centre of mass of the dancer's body. This high-level parameter is also used to control the following scene, where a pattern of points is projected onto the dancer.

In the next scene a set of particles likewise moves depending on its distance from the centre of mass. Finally, when the dancer minimizes the surface region in the captured image (e.g. by crouching), the scene changes to another where a circle shrinks towards the centre of mass in a loop.
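The final shrinking-circle animation can be sketched as a radius that decreases every frame and restarts once it vanishes, which produces the loop. The names and the linear shrink model below are assumptions for illustration only:

```java
public class ShrinkingCircle {
    private final double maxRadius;
    private double radius;

    public ShrinkingCircle(double maxRadius) {
        this.maxRadius = maxRadius;
        this.radius = maxRadius;
    }

    // Each frame the circle shrinks towards the centre of mass;
    // when it vanishes it restarts, producing the loop.
    public double step(double shrinkPerFrame) {
        radius -= shrinkPerFrame;
        if (radius <= 0) radius = maxRadius; // restart the loop
        return radius;
    }
}
```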

In the second stage of the piece, the public is invited to interact with the device, as a game and as an invitation to freedom, as described above. More scenes are added than in the previous stage. The scenes change following the same rules, but now the public has to play at discovering those rules: for example, crouching, or moving fast or slow (depending on the context).

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License