China My China (1974)

This pre-MTV music video shows Brian Eno performing the track China My China from his second solo album Taking Tiger Mountain (By Strategy) in front of a Nam June Paik-like television wall. Judy Nylon and Polly Eltes provide backing on guitars. Check out the use of a typewriter for percussion from about 1:40. This is post-punk from the period before punk. Performance artist/musician Judy Nylon looks really new wave, but this is 1974, not 1980. This is closer to video art than music video. Surely Eno’s concerns here are artistic, not aimed at gratifying a pop audience. As with other innovative Eno works, there seems to be a focus on process over product. Something to reflect on as we begin a new year.

Here’s another track from the same album. Third Uncle is considered notable as an example of proto-punk, but again this is really closer to post-punk. There’s a strong resemblance to Joy Division here; however, this was recorded a couple of years before Joy Division was formed.

StopMotion Recorder

This video is a first effort using the StopMotion Recorder app on the iPhone 4S. Playback frame rate was set to 4 fps. Images were captured manually at irregular intervals according to the movement of the subject. The ‘Vintage Green’ setting was selected in the app settings. This app is quite easy to use, but by the same token it’s fairly restrictive.
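For anyone curious about how this kind of clip could be put together outside the app, here’s a minimal sketch in Python with OpenCV (not part of StopMotion Recorder; the folder and file names are hypothetical) that strings a set of manually captured frames into a 4 fps video:

```python
import glob
import cv2

# Manually captured frames, sorted into capture order (hypothetical folder)
frames = sorted(glob.glob("captured_frames/*.jpg"))
first = cv2.imread(frames[0])
height, width = first.shape[:2]

# 4 fps playback, matching the setting used in the app
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("stopmotion.mp4", fourcc, 4.0, (width, height))

for path in frames:
    img = cv2.imread(path)
    if img is None:
        continue  # skip unreadable files
    writer.write(cv2.resize(img, (width, height)))  # keep a uniform frame size

writer.release()
```

Because the playback rate is fixed, the irregular spacing of the captures is what gives the motion its uneven, hand-made feel.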

Two more clips with manual, irregular frame acquisition and the same playback rate (4 fps):

Elektron Musik Studion 1974 Stockholm

This video offers a glimpse at an earlier era in electronic and computer music production – as well as what it was like to use a computer in the early 1970s. My first experience with computers dates from around this time. It is interesting to reflect on what has and hasn’t changed in the nearly 40-year interim.

Point cloud painting with the Kinect

This short video by Daniel Franke & Cedric Kiefer is one of the most aesthetically impressive uses of the Microsoft Kinect I have seen yet. Apparently they used three Kinects. It’s not clear whether the visuals could be rendered in real time, because there is clearly interpolation between the 3D views involved in producing this video. Real-time use would also probably involve programming in C++, or at least openFrameworks. For anyone interested in the Kinect, it’s worth trying to find out more about what went into producing the video. Some links are given below:

onformative.com
chopchop.cc

There’s a full-quality version of the video available online:

daniel-franke.com/unnamed_soundsculpture.mov

And a ‘making-of’ video on Vimeo:

Here is a statement by the artists:

The basic idea of the project is built upon the consideration of creating a moving sculpture from the recorded motion data of a real person. For our work we asked a dancer to visualize a musical piece (Kreukeltape by Machinenfabriek) as closely as possible by movements of her body. She was recorded by three depth cameras (Kinect), in which the intersection of the images was later put together to a three-dimensional volume (3d point cloud), so we were able to use the collected data throughout the further process. The three-dimensional image allowed us a completely free handling of the digital camera, without limitations of the perspective. The camera also reacts to the sound and supports the physical imitation of the musical piece by the performer. She moves to a noise field, where a simple modification of the random seed can consistently create new versions of the video, each offering a different composition of the recorded performance. The multi-dimensionality of the sound sculpture is already contained in every movement of the dancer, as the camera footage allows any imaginable perspective. The body – constant and indefinite at the same time – “bursts” the space already with its mere physicality, creating a first distinction between the self and its environment. Only the body movements create a reference to the otherwise invisible space, much like the dots bounce on the ground to give it a physical dimension. Thus, the sound-dance constellation in the video does not only simulate a purely virtual space. The complex dynamics of the body movements is also strongly self-referential. With the complex quasi-static, inconsistent forms the body is “painting”, a new reality space emerges whose simulated aesthetics goes far beyond numerical codes. Similar to painting, a single point appears to be still very abstract, but the more points are connected to each other, the more complex and concrete the image seems. The more perfect and complex the “alternative worlds” we project (Vilém Flusser) and the closer together their point elements, the more tangible they become. A digital body, consisting of 22 000 points, thus seems so real that it comes to life again.
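To make the technical part of that description a little more concrete, here is a minimal sketch in Python with NumPy (not the artists’ actual code) of the two steps the statement mentions: transforming the three depth-camera views into a common frame and stacking them into one point cloud, and using a random seed to produce reproducible variations. The calibration values, noise model, and point counts are assumptions for illustration only.

```python
import numpy as np

def merge_clouds(clouds, extrinsics):
    """Transform each camera's points into a shared world frame and stack them."""
    merged = []
    for points, (R, t) in zip(clouds, extrinsics):
        merged.append(points @ R.T + t)  # rigid transform per camera
    return np.vstack(merged)

def noisy_version(cloud, seed, amplitude=0.01):
    """Seeded random displacement: the same seed always yields the same variation."""
    rng = np.random.default_rng(seed)
    return cloud + rng.normal(scale=amplitude, size=cloud.shape)

# Three hypothetical clouds (one per Kinect), roughly 22,000 points in total
clouds = [np.random.rand(7000, 3) for _ in range(3)]
identity = (np.eye(3), np.zeros(3))           # placeholder extrinsics
world = merge_clouds(clouds, [identity] * 3)  # real values come from calibrating the cameras
variation_a = noisy_version(world, seed=1)
variation_b = noisy_version(world, seed=2)    # a different, but reproducible, composition
```

In an actual pipeline the extrinsics would come from calibrating the three Kinects against one another, and the displacement would likely be a coherent noise field rather than independent Gaussian jitter, but the principle of seed-controlled variation is the same.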

Project Glass @ Google

Google has posted information about Project Glass. The photos and video show a stylish, lightweight, eye-worn see-through display operated via a speech interface, which allows you to do many of the things you already do with your iPhone, but without manual interaction. This announcement has been expected for some time, and we mentioned it in an earlier post on HMDs titled Retro-Future. It will be increasingly difficult to justify the purchase of outrageously priced and bulky retro-HMDs when consumer products with superior form factor and functionality come onto the market. Current price estimates for the AR eyewear run from $200 to $600, but this is guesswork. No doubt Google will make APIs easily available for the eyewear, so that R&D can be conducted by anyone with modest means and sufficient motivation. The video, available via YouTube, is probably mostly a mock-up, and I’m wondering where the battery will be located.

Walking around in Nakanoshima

Here’s a recording I made in Nakanoshima on February 2, 2012, testing out a windshield for binaural microphones. If you listen with headphones you can hear some of the spatial qualities of the audio.

That’s a situationist yelp close to the beginning, when a somewhat rude older couple hogging the sidewalk nearly walked into me. Manners!

The section at the end, a close-up recording of some kind of noisy road-repair machine, gives a strong percept of space. Here’s that section by itself:

Use headphones (or, more likely, earbuds) to hear the binaural effect.