Sonifying Tweets: The Listening Machine

Via: http://www.thelisteningmachine.org/

The Listening Machine
by Daniel Jones and Peter Gregson

The Listening Machine is an automated system that generates a continuous piece of music based on the activity of 500 Twitter users around the United Kingdom. Their conversations, thoughts and feelings are translated into musical patterns in real time, which you can tune in to at any point through any web-connected device.
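To make the general idea of sonifying a text stream concrete, here is a toy Python sketch of my own; it is not how The Listening Machine actually works, and the scale, word lists and mapping are all invented for illustration. Each word of a message becomes a pitch from a pentatonic scale, and a very naive sentiment score shifts the phrase up or down an octave.

```python
# Toy text-sonification sketch (my own illustration, NOT The Listening Machine's
# actual mapping): each word becomes a pentatonic pitch, and a naive sentiment
# score shifts the whole phrase up or down an octave.

PENTATONIC = [0, 2, 4, 7, 9]                     # C major pentatonic scale degrees
POSITIVE = {"love", "great", "happy", "good"}    # toy sentiment lexicons (assumed)
NEGATIVE = {"sad", "angry", "awful", "bad"}

def message_to_notes(text, base=60):
    """Return a list of MIDI note numbers derived from one message."""
    words = text.lower().split()
    sentiment = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    register = 12 * max(-1, min(1, sentiment))    # shift one octave up or down
    notes = []
    for w in words:
        degree = PENTATONIC[sum(map(ord, w)) % len(PENTATONIC)]  # deterministic word-to-pitch
        notes.append(base + register + degree)
    return notes

print(message_to_notes("what a great morning in london"))
```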

It is running from May until October 2012 on The Space, the new on-demand digital arts channel from the BBC and Arts Council England. The piece will continue to develop and grow over time, adjusting its responses to social patterns and generating subtly new musical output.

The Listening Machine was created by Daniel Jones, Peter Gregson, and Britten Sinfonia.

See also: The Listening Machine Converts 500 People’s Tweets into Music (Wired)

Kugelschwung – Pendulum-based Live Music Sampler

Kugelschwung is the result of a second-year Human-Computer Interaction project by six Computer Science students at the University of Bristol. These students should be roughly the same age as the students in our seminar. The interface is simple but works very well, and the concept is brilliant. The work has been accepted for presentation at NIME-12.

Motor Vehicle Sundown – George Brecht (dedicated to John Cage)

This is one of the events kicking off the annual NIME-12 conference, which will be held starting next weekend at the University of Michigan. Really looking forward to this and all the other exciting things that will be happening at NIME-12. Nice also that this performance is part of the centennial birthday celebrations for our patron saint, John Cage.

From the University of Michigan Museum of Art web site:

As the lights go down on UMMA’s exhibition Fluxus and the Essential Questions of Life, please join us for a rare performance of Motor Vehicle Sundown, written by Fluxus artist George Brecht and dedicated to the American composer John Cage. This performance by students and faculty from the University of Michigan is presented in conjunction with the annual International Conference on New Interfaces for Musical Expression (NIME), and in celebration of John Cage’s 2012 centennial. Motor Vehicle Sundown is written for any number of motor vehicles arranged outdoors. In true Cagean fashion, 22 timed auditory and visual events and 22 pauses written on randomly shuffled instruction cards are performed on each vehicle.

The performance will take place in parking Lot C-2 on the south side of N. University at Thayer, next to Kraus Natural Science Building.

This program is co-sponsored by NIME, the UM School of Music, Theatre, and Dance, the UM College of Engineering and UMMA. Fluxus and the Essential Questions of Life was organized by the Hood Museum of Art and was generously supported by Constance and Walter Burke, Dartmouth College Class of 1944, the Marie-Louise and Samuel R. Rosenthal Fund, and the Ray Winfield Smith 1918 Fund. UMMA’s installation is made possible in part by the University of Michigan Health System, the University of Michigan Office of the Provost, Arts at Michigan, and the CEW Frances and Sydney Lewis Visiting Leaders Fund.
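The chance procedure described in the excerpt above is easy to simulate. Here is a toy Python sketch (the card texts are placeholders, not Brecht's actual instructions): each vehicle is dealt its own shuffled sequence of 22 events and 22 pauses.

```python
# Toy simulation of the score's chance procedure (placeholder card texts,
# not Brecht's actual instructions): each vehicle performs its own randomly
# shuffled sequence of 22 event cards and 22 pause cards.
import random

EVENTS = ["event card %d" % (i + 1) for i in range(22)]
PAUSES = ["pause card %d" % (i + 1) for i in range(22)]

def vehicle_part(seed=None):
    """Return one vehicle's randomly ordered sequence of events and pauses."""
    rng = random.Random(seed)
    cards = EVENTS + PAUSES
    rng.shuffle(cards)
    return cards

for vehicle in range(3):  # e.g. a three-vehicle performance
    print("Vehicle %d:" % (vehicle + 1), vehicle_part(seed=vehicle)[:4], "...")
```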



Danse Neurale: NeuroSky + Kinect + OpenFrameworks

This performance makes use of the NeuroSky EEG sensor as well as the Kinect. Visuals and music are driven by the EEG signal and registered to the performer's body using the Kinect. It seems their system runs under openFrameworks; in fact, I noticed this video in the OF gallery. The second half of the video consists of an interview with the technical team and the performer.

This performance uses off-the-shelf technology but is cutting edge in more than one sense. No one can accuse these guys of lacking commitment.

A project page may be found here: Danse Neurale.

They have generously posted, in the OF forum, the code used to acquire signals from the NeuroSky server. This part of the system is written in P5 (Processing).

Here are a few details on the technical background of the work, given by one of the creators in the OF forum:

Sensors:

– breath: it's sensed with a wireless mic positioned inside Lukas' mask. Its signal goes directly through a mixer controlled by the audio workstation

– heart: it's sensed with a modified stethoscope connected to a wireless mic; the signal works just like the breath (we're not sure, but in the future we may decide to apply some DSP to it)

– EEG: we use the cheaper sensor from NeuroSky; it streams brainwaves (already split into frequency bands) via radio in a serial-like protocol; these radio packets arrive at my computer, where they're parsed, converted into OSC and broadcast via wifi (we only have 2 computers on stage, but the idea is that if there's a kindred hacker soul in the audience, he/she can join the jam session 🙂 )

– skeleton tracking: it’s obviously done with ofxOpenNI (as you can see in the video we also stage the infamous “calibration pose”, because we wanted to let people understand as much as possible what was going on)

The audio part maps the brainwave data onto volumes and scales, while the visual part uses spikes (caused, for example, by the piercings and by the winch pulling on the hooks) to trigger events; so, conceptually speaking, the wings are a correct representation of Lukas's neural response and they really lift him off the ground.
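The acquisition code they have shared is written in P5 (Processing), so that is the reference for the real pipeline. Purely as an illustration of the steps described above, here is a rough Python sketch: it broadcasts already-parsed EEG band powers as hand-encoded OSC messages over the local network, and maps a band power onto a volume and a scale degree. The band names, OSC addresses, port and scale are my own assumptions, not theirs.

```python
# Rough illustration only (the Danse Neurale code is written in Processing):
# broadcast already-parsed EEG band powers as OSC over UDP, and map a band
# power onto a volume and a scale degree.
import socket
import struct

def osc_pad(data):
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC 1.0 spec."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address, value):
    """Encode an OSC message carrying a single float32 argument."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

def broadcast_bands(bands, port=9000):
    """Send each band power as /eeg/<band> to everyone on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    for name, power in bands.items():
        sock.sendto(osc_message("/eeg/" + name, power), ("255.255.255.255", port))

SCALE = [0, 2, 3, 5, 7, 8, 10]  # natural minor; an arbitrary choice

def map_band(power):
    """Map a normalized band power (0..1) onto a MIDI volume and a scale degree."""
    p = max(0.0, min(1.0, power))
    return int(p * 127), SCALE[int(p * (len(SCALE) - 1))]

# Hypothetical band powers, as if already parsed from the headset's packets
broadcast_bands({"delta": 0.42, "theta": 0.18, "alpha": 0.63, "beta": 0.27})
print(map_band(0.63))
```

The broadcast step is what makes the "join the jam session" idea they mention possible: any machine on the same wifi network can listen for the /eeg messages.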

Elektronmusikstudion (EMS), Stockholm, 1974

This video offers a glimpse of an earlier era in electronic and computer music production, as well as of what it was like to use a computer in the early 1970s. My first experience with computers dates from around this time. It is interesting to reflect on what has and hasn't changed in the nearly 40-year interim.

The First NIME

The BBC News web site currently has a good article about the Theremin, an electronic instrument invented by the Russian Léon Theremin in 1919. Though it was not the first synthesizer, I consider the Theremin to have been the first NIME: earlier electronic synthesizers like the Telharmonium (1897) mainly used keyboard-like controllers. The Theremin, by contrast, is played with non-contact gestures (waving your hands in the air). It is a full-blown new electronic instrument, the product of a futurist/modernist outlook and cutting-edge technical mastery (for its day), which opened the potential for completely new forms of human artistic expression.

I am not going to introduce the Theremin here. Better to start by reading the BBC article or Wikipedia entry, both linked above. You can get an idea of how it can be played by watching any number of videos on YouTube. Here's Léon Theremin's niece, Lydia Kavina, playing Claude Debussy's "Clair de Lune":

The BBC article quotes an interesting statement from Theremin biographer Albert Glinsky which I would like to include here:

RCA felt this was going to replace the parlour piano and anyone who could wave their hands in the air or whistle a tune could make music in their home with this device. The Theremin went on sale in September 1929 at the relatively high price of $220 – a radio set cost about $30. It was also much more difficult to play than the advertising claimed. And just one month later came the Wall Street Crash. You took it home and found that your best efforts led to squealing and moaning sounds. So the combination of the fact that only the most skilled people could teach themselves how to play it and the fact that there was a downturn in the economy meant that the instrument really wasn't a commercial success.

This seems to be a somewhat common story with the most unusual/innovative new music technology: it is difficult to make a significant cultural impact.

The Theremin, however, has had good staying power. It is still a niche instrument with few virtuosi, but you can, in fact, purchase a Theremin and find someone to help you learn how to make music with it. High-quality Theremins are made by Moog Music. As it turns out, electronic instrument pioneer Robert Moog got started by selling Theremin kits while he was still an undergraduate physics student. Moog represents an important link between the earliest NIMEs and contemporary electronic music technology and culture. For this reason we invited Dr. Moog as a keynote speaker for NIME-04, which was held in Hamamatsu, Japan's mecca for music technology.

I had the good fortune to invite Dr. Moog to give a talk at the company where I used to work, ATR, in southern Kyoto prefecture. It was a great experience to spend time talking with Dr. Moog and to introduce him to a huge and enthusiastic audience who showed up from all over Japan to hear him speak. I will never forget what a kind-hearted person Bob Moog was.

 

NIME Zemi Basic Tools Part II: Max/MSP

While we are on the topic of basic tools for this seminar, I suspect everyone already knows about, or at least has heard of, Max/MSP, a multimedia visual programming environment sold by the company Cycling '74. If you have any intention of doing work in the media arts, and I expect that everyone who joins my seminar has such an interest, then you should try to develop some knowledge of P5, mentioned in the previous post, and of Max/MSP.

Our faculty does not presently offer any courses in either of these two tools; however, there are plenty of ways you can learn them through self-study. Just as with P5, there are many resources online for learning Max/MSP, including the built-in tutorials and help files. There are also introductory books; a popular one written in Japanese is 2061: A Max Odyssey. The book dates from 2006, and there have been some changes to Max/MSP in the meantime, but the basic way of working with Max/MSP has not changed and the book is still very useful.

Max/MSP can be even more fun than P5 because it uses a visual programming paradigm: you create a program by connecting little boxes, each with a dedicated function. The programs are called patches because the links between the objects resemble the patch cables of old analog sound synthesizers. Here's what a simple Max/MSP patch looks like:

[Screenshot of the patch. Click for larger image.]

This is a Max/MSP patch I created after a few hours of experimenting with percussion sequences based on the Fibonacci numbers. I've titled it 'Quasi-Periodic Drum Circle', but it's not really quasi-periodic, because the patterns eventually do repeat, and it's not really a drum circle, because some of the presets use other MIDI instruments such as whistles and a cuíca. A more accurate name would be 'Quasi-Quasi-Periodic Latin Percussion Circle'. This is what a few of the presets sound like:

One of the (many) nice things about the new version of Max/MSP, Max 6, is that it allows you to create standalone applications. If you’d like to try my Percussion Circle as a standalone application on Mac OS X, send me a quick email and I’ll reply with a download link. Because I’ve used several Fibonacci numbers to create the drum patterns, it will take a very long time before the pattern repeats (I’ll leave it as an exercise to calculate just how long it takes.) So this also functions as an ambient generative Latin Percussion app: you can run it as background music if you like that sort of thing.
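As a hint for that exercise, here is a small Python sketch with made-up loop lengths (not necessarily the ones used in the patch): if each voice loops with a period equal to a different Fibonacci number of sixteenth-note steps, the whole texture only repeats after the least common multiple of those periods.

```python
# Repeat-period arithmetic with hypothetical Fibonacci loop lengths
# (illustrative values, not the ones actually used in the patch).
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

loop_lengths = [13, 21, 34, 55, 89]   # sixteenth-note steps per voice (assumed)
steps = reduce(lcm, loop_lengths)     # for these values the lcm equals their product
print(steps, "steps before the combined pattern repeats")
print(steps / 8 / 86400, "days at 120 BPM (8 sixteenth notes per second)")
```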

Like P5, Max/MSP is multi-platform: there are versions for Windows and Mac. Unlike P5, Max/MSP is not free; however, it is very reasonably priced and there is a good student discount. Moreover, Cycling '74 allows you to download Max 6 and try the entire software package for free for one month. Cycling '74 is an excellent small company to deal with. Members of the company have also shown up at the annual NIME conference from time to time.

You might be wondering where the name Max/MSP comes from. MSP stands for 'Max Signal Processing', because MSP handles the audio signals. MSP also happens to match the initials of Miller S. Puckette, who first developed Max. Max is named for Max Mathews, who is considered the father of computer music, known for, amongst other things, the 1961 computer performance of "Daisy Bell" that inspired the song sung by HAL in the film 2001: A Space Odyssey. Unfortunately Dr. Mathews passed away last year. I have nice memories of meeting him at NIME and other conferences. Here is Max graciously acting as the MC during the first NIME conference concert in 2001.

Siggraph Asia 2012 Web Site uses NIME Images

While checking the web page with instructions for course proposals for Siggraph Asia 2012, I was pleasantly surprised to notice that the page uses images from the course Prof. Sidney Fels and I have been teaching at Siggraph recently.

From left to right:

  • The ReacTable, developed in Sergi Jordà's group in Barcelona.
  • Mari Kimura with Eric Singer's LEMUR GuitarBot during the concert program of NIME04 in Hamamatsu.
  • Former professional jazz musician turned linguistics researcher (and former ATR colleague) Dr. Ichiro Umata trying out the Mouthesizer, which I developed starting in 2000-2001.
  • Musicians Linda Kaastra and Sachiyo Takahashi playing the Tooka, a collaborative flute developed in Sid Fels' group. Picture from a performance at NIME05 in Vancouver.

Perhaps it is just a coincidence, but I suspect that what appeals to the organizers of Siggraph Asia 2012 about our images is the suggestion of East-West collaboration and combination of high-tech and high-culture, a mix that seems to suit Singapore, where the conference will be held.

Singing with your Hands

Currently reported on Gizmodo: friend and collaborator Prof. Sidney Fels of the University of British Columbia and part of his team describe their work on using hand gestures to control speech and singing synthesis. Those interviewed in the video, including Sid, graduate student Johnty Wang, Prof. Bob Pritchard (School of Music, UBC), and professional classical vocalist Marguerite Witvoet, are some of the people I enjoy hanging out with when I attend the annual NIME conference, which Sid and I co-founded in 2001.

The video contains demos and an excerpt from a vocal performance by Marguerite.