Interviews with the designers are also available (unfortunately not really a ‘making of’ video, since they do not reveal much about their methods):
This video is a first effort using the StopMotion Recorder app on the iPhone 4s. Playback frame rate was set to 4 fps. Images were captured manually at irregular intervals according to the movement of the subject. The ‘Vintage Green’ setting was selected in the app settings. The app is quite easy to use, but by the same token it’s fairly restrictive.
Two more clips with manual, irregular frame acquisition and the same playback rate (4 fps):
This short video by Daniel Franke & Cedric Kiefer is one of the most aesthetically impressive uses of the Microsoft Kinect I have seen yet. Apparently they used three Kinects. I’m not sure whether the visuals could be rendered in real time, since there is clearly interpolation between the 3D views involved in producing this video. Real-time use would also probably require programming in C++, or at least openFrameworks. For anyone interested in the Kinect, it’s worth trying to find out more about what went into producing the video. Some links are given below:
There’s a full-quality version of the video available online:
And a ‘making-of’ video on Vimeo:
Here is a statement by the artists:
The basic idea of the project is built upon the consideration of creating a moving sculpture from the recorded motion data of a real person. For our work we asked a dancer to visualize a musical piece (Kreukeltape by Machinenfabriek) as closely as possible by movements of her body. She was recorded by three depth cameras (Kinect), in which the intersection of the images was later put together to a three-dimensional volume (3d point cloud), so we were able to use the collected data throughout the further process.

The three-dimensional image allowed us a completely free handling of the digital camera, without limitations of the perspective. The camera also reacts to the sound and supports the physical imitation of the musical piece by the performer. She moves to a noise field, where a simple modification of the random seed can consistently create new versions of the video, each offering a different composition of the recorded performance.

The multi-dimensionality of the sound sculpture is already contained in every movement of the dancer, as the camera footage allows any imaginable perspective. The body – constant and indefinite at the same time – “bursts” the space already with its mere physicality, creating a first distinction between the self and its environment. Only the body movements create a reference to the otherwise invisible space, much like the dots bounce on the ground to give it a physical dimension.

Thus, the sound-dance constellation in the video does not only simulate a purely virtual space. The complex dynamics of the body movements are also strongly self-referential. With the complex quasi-static, inconsistent forms the body is “painting”, a new reality space emerges whose simulated aesthetics goes far beyond numerical codes. Similar to painting, a single point appears to be still very abstract, but the more points are connected to each other, the more complex and concrete the image seems.
The more perfect and complex the “alternative worlds” we project (Vilém Flusser) and the closer together their point elements, the more tangible they become. A digital body, consisting of 22 000 points, thus seems so real that it comes to life again.
I uploaded the same HD video file (1280×720 progressive) to YouTube and Vimeo. Watch these on full screen, with resolution set to 720p on YouTube and ‘HD Mode’ selected on Vimeo.
This is an unusual video in that it consists only of moving black lines on a pure white background, so compression artifacts are quite noticeable. The raw data (1800 png image files) is over 200MB, but artifacts were barely visible in the ~100MB H.264-compressed mp4 file I uploaded to both Vimeo and YouTube. That noted, it seems clear that YouTube has the advantage in terms of quality. Since I’ve read otherwise in informal reports on the web, I’m not sure whether the quality might improve with Vimeo Plus, a paid upgrade currently available at the discounted price of US$60/year.

Overall, Vimeo offers a calmer, more pleasant user experience than YouTube. The user interface is nicely designed, and the online help files are easy to navigate and genuinely helpful. The content and the community are generally more edifying: there’s no denying there is a lot of trash on YouTube. The advertising on YouTube is also more obtrusive and distracting.

As for raw performance: the time to upload and process a video is much shorter with YouTube. Upload is slower with Vimeo, and non-paying users must also wait at least 30 minutes before a video goes online. Moreover, there are weekly limits on total data, and only one HD video may be uploaded per week. Third-party advertising is less intrusive, but lately Vimeo has been pushing the paid subscription fairly hard.
Addendum: Vimeo offers a slightly better experience when browsing from iOS devices, in that it’s easier to open the 720p viewer directly. With this file, however, compression artifacts are still more noticeable than on YouTube. Note also that YouTube allows upload of higher-resolution videos such as ‘Full HD’ (1080p).
This is a simple, lightweight P5 sketch captured to a low-res (320×240), compact video just to illustrate that it is easy to create animations with Processing.
Uploaded to Vimeo as well, but Vimeo makes non-paying users wait.
Here is the same P5 sketch rendered at 720p (watch full screen):
The lower-resolution video was made using the built-in MovieMaker class. The 720p version was made by saving each frame as a png image file with the saveFrame() method; the frames were then concatenated and encoded as mp4 using FFmpeg.
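Since the sketch itself isn’t included in this post, here is a minimal sketch of the saveFrame() workflow. The drawing code and file names are hypothetical, not those used in the actual video:

```processing
// Minimal illustration of the saveFrame() approach (not the original sketch).
void setup() {
  size(1280, 720);  // 720p canvas
  stroke(0);
  noFill();
}

void draw() {
  background(255);
  // Hypothetical drawing code: a black line oscillating on a white ground.
  float y = height/2 + sin(frameCount * 0.05) * 200;
  line(0, y, width, y);
  // Writes frames/f-0001.png, f-0002.png, ...
  // (#### is replaced by the zero-padded frame count)
  saveFrame("frames/f-####.png");
  if (frameCount == 1800) exit();  // stop after 1800 frames, as in the post
}
```

The resulting png sequence can then be encoded with FFmpeg along the lines of `ffmpeg -framerate 30 -i frames/f-%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4`. The frame rate and codec options here are assumptions; the post doesn’t specify the exact FFmpeg settings used.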