Tatsuya Nakatani @ Lyons Zemi

We’ll be wrapping up the last class of the year with a visit from master percussionist and improviser Tatsuya Nakatani. Tatsuya is a Kansai native who has been living in the USA and touring the world for just about as long as I’ve been living in Japan (and touring the world). This should be an exciting event, and Tatsuya has said we can invite anyone who may be interested. The more the merrier. So invite any of your friends who might enjoy having their ears and minds opened.

  • Tatsuya Nakatani, Guest Talk and Performance
  • Tuesday, January 15, 2013, 2nd period (10:40–12:10)
  • 充光館301

Clicking on either picture will take you to a Facebook page for this event.

Danse Neurale: NeuroSky + Kinect + OpenFrameworks

This performance makes use of the NeuroSky EEG sensor as well as the Kinect. Visuals and music are driven by the EEG and registered with the performer’s body using the Kinect. Their system appears to run under OpenFrameworks; in fact, I noticed this video in the OF gallery. The second half of the video is an interview with the technical team and the performer.

This performance uses off-the-shelf technology but is cutting edge in more than one sense. No one can accuse these guys of lacking commitment.

A project page may be found here: Danse Neurale.

They have generously posted the code used to acquire signals from the NeuroSky server in the OF forum. This part of the system is written in P5 (Processing).
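
I haven’t dug into their sketch, but for anyone curious what that acquisition step looks like: NeuroSky’s ThinkGear Connector exposes the already-parsed band powers as newline-delimited JSON on a local TCP socket (port 13854 by default). A minimal Processing sketch along the following lines can read it. The port, configuration string, and field names come from NeuroSky’s ThinkGear Socket Protocol documentation; the sketch itself is my own illustration, not their code.

```
// Minimal sketch: read parsed EEG data from a locally running ThinkGear Connector.
// Port, config string, and JSON field names follow NeuroSky's documented protocol.
import processing.net.*;

Client thinkGear;

void setup() {
  thinkGear = new Client(this, "127.0.0.1", 13854);  // default ThinkGear Connector port
  // Ask for parsed (non-raw) output in JSON format
  thinkGear.write("{\"enableRawOutput\": false, \"format\": \"Json\"}\n");
}

void draw() {
  while (thinkGear.available() > 0) {
    String line = thinkGear.readStringUntil('\n');
    if (line == null) break;
    JSONObject packet = parseJSONObject(trim(line));
    if (packet != null && packet.hasKey("eegPower")) {
      JSONObject bands = packet.getJSONObject("eegPower");
      println("alpha: " + bands.getInt("lowAlpha") +
              "  beta: " + bands.getInt("lowBeta"));
    }
  }
}
```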

Here are a few details on the technical background of the work, given by one of the creators in the OF forum:

Sensors:

– breath: it’s sensed with a wireless mic positioned inside Lukas’s mask; its signal goes directly through a mixer controlled by the audio workstation

– heart: it’s sensed with a modified stethoscope connected to a wireless mic; the signal works just like the breath (we’re not sure, but in the future we may decide to apply some DSP to it)

– EEG: we use the cheaper sensor from NeuroSky; it streams brainwaves (already split into frequency bands) via radio in a serial-like protocol; these radio packets arrive at my computer, where they are parsed, converted into OSC, and broadcast via wifi (we only have 2 computers on stage, but the idea is that if there is a kindred hacker soul in the audience, he/she can join the jam session 🙂 )

– skeleton tracking: it’s obviously done with ofxOpenNI (as you can see in the video we also stage the infamous “calibration pose”, because we wanted to let people understand as much as possible what was going on)

The audio part maps the brainwave data onto volumes and scales, while the visual part uses spikes (caused, for example, by the piercings and by the winch pulling on the hooks) to trigger events; so, conceptually speaking, the wings are a correct representation of Lukas’s neural response and they really lift him off the ground.
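
To make the OSC and mapping steps above a little more concrete, here is a rough Processing sketch of how band power might be rebroadcast as OSC and then, on the receiving side, mapped to a volume with a simple running-average spike detector standing in for the event triggers they describe. The oscP5 library, the address pattern, the broadcast address, and the threshold are all my own placeholders; the forum post doesn’t say what Danse Neurale actually uses.

```
// Rough sketch of the OSC leg: broadcast a band power value, then on the
// receiving end map it to a 0..1 "volume" and flag spikes. The address
// ("/eeg/alpha"), broadcast IP, and threshold are placeholders, not taken
// from the Danse Neurale code.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress broadcast;
float smoothed = 0;  // running average used as a baseline for spike detection

void setup() {
  osc = new OscP5(this, 9000);                        // listen on port 9000
  broadcast = new NetAddress("192.168.1.255", 9000);  // LAN broadcast address (assumed)
}

void draw() { }  // nothing to draw; oscP5 calls oscEvent() from its own thread

// Sender side: forward a band power value as an OSC message.
void sendAlpha(int lowAlpha) {
  OscMessage m = new OscMessage("/eeg/alpha");
  m.add(lowAlpha);
  osc.send(m, broadcast);
}

// Receiver side: map incoming power to a volume and detect spikes.
void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/eeg/alpha")) {
    float power = m.get(0).intValue();
    float volume = constrain(power / 100000.0f, 0, 1);  // crude normalisation
    if (smoothed > 0 && power > smoothed * 3) {         // arbitrary spike threshold
      println("spike -> trigger visual event, volume=" + volume);
    }
    smoothed = lerp(smoothed, power, 0.05f);
  }
}
```

In a real rig the sender and receiver would live in separate sketches (or in the openFrameworks app), but the shape of the data flow is the same.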

Singing with your Hands

Currently reported on Gizmodo: friend and collaborator Prof. Sidney Fels of the University of British Columbia and part of his team describe their work on using hand gestures to control speech and singing synthesis. Those interviewed in the video, including Sid, graduate student Johnty Wang, Prof. Bob Pritchard (School of Music, UBC), and professional classical vocalist Marguerite Witvoet, are some of the people I enjoy hanging out with when I attend the annual NIME conference, which Sid and I co-founded in 2001.

The video contains demos and an excerpt from a vocal performance by Marguerite.