Finally a bit of time to breathe. A lot happening lately, again. Just finished 2 assignments, 1 more to go for now.
This time I'd like to share some of the things I study in Creative Computing at Goldsmiths, University of London.
I am handing in the project for Audio Visual Processing. The task given was to "implement interactive real-time audio
and video systems using the programming tool of your choice (either maxMSPJitter, C++ using provided libraries, or Java/Processing)."
There was plenty of information to absorb in a short amount of time, but it was great fun. What I've noticed so far: the simpler, the better. What I mean is, if I map the raw numbers from the video or audio processing directly onto the visuals, the results can be a bit 'jittery' and fail to convey a theme that a bit of cheating would convey more easily. The brain does a lot of the work for us: with video we only need about 24 frames per second to be fooled into seeing continuous motion, and once the images are linked to sound they gain value, feeding information (confirmation) back to the sound (the source) in a strange but effective loop. I haven't yet managed to strike the right balance between the data being processed and a mapping that makes it instantly and easily perceivable.
Well, enough talk for now. Here is an attempt to modify 3d geometry (a plane) using video (colour) and audio (peak amplitudes sampled every 40 milliseconds) as sources. For some reason it makes me think it could be a quick'n'dirty technique for achieving an effect similar to Michel Gondry's video for The White Stripes' "Fell in Love with a Girl". The source video is a performance by Les Elephants Bizarres, talented friends from back home.
The colour channels from the video control the height of each grid square, each corresponding to a group of polygons in the 3d plane. The resolution of both the 3d plane and the source video can be altered in real time. The inputs can trigger a change continuously, or only when a change in the peaks occurs, as in this video. The height can also be controlled by the audio, though that isn't fully demonstrated here.
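The patch itself isn't textual, but the core colour-to-height mapping could be sketched roughly like this in Python. The grid resolution, channel choice, and `max_height` scale are hypothetical stand-ins for the patch's controls, and a frame is assumed to be a plain 2d grid of (r, g, b) tuples:

```python
# Sketch of the colour-to-height mapping used for the 3d plane.
# frame: list of rows, each a list of (r, g, b) tuples with 0..255 values.

def frame_to_heightmap(frame, grid_rows, grid_cols, channel=0, max_height=1.0):
    """Average one colour channel over each grid cell and scale it to a
    vertex height for the corresponding group of polygons."""
    rows, cols = len(frame), len(frame[0])
    cell_h, cell_w = rows // grid_rows, cols // grid_cols
    heights = []
    for gr in range(grid_rows):
        row = []
        for gc in range(grid_cols):
            total, count = 0, 0
            for y in range(gr * cell_h, (gr + 1) * cell_h):
                for x in range(gc * cell_w, (gc + 1) * cell_w):
                    total += frame[y][x][channel]
                    count += 1
            # Normalise the 0..255 average into a 0..max_height displacement.
            row.append(total / count / 255.0 * max_height)
        heights.append(row)
    return heights
```

Changing `grid_rows`/`grid_cols` on the fly is the equivalent of altering the plane resolution in real time.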
Click here for the Max/MSP/Jitter patches used to generate the (heavily compressed) videos.
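The audio-side trigger described above could be sketched like this: sample the peak amplitude over each 40 ms window and fire only when the peak moves by more than a threshold. The threshold value here is an assumption, not taken from the patch:

```python
# Sketch of the "change in peaks" trigger: one peak reading per 40 ms
# window, firing only when the peak differs enough from the last one.

def peak_triggers(samples, sample_rate, window_ms=40, threshold=0.05):
    """Return a (peak, fired) pair per window; fired is True when the
    peak moved by more than `threshold` since the previous window."""
    window = max(1, int(sample_rate * window_ms / 1000))
    out = []
    prev = 0.0
    for start in range(0, len(samples) - window + 1, window):
        peak = max(abs(s) for s in samples[start:start + window])
        fired = abs(peak - prev) > threshold
        out.append((peak, fired))
        prev = peak
    return out
```

Gating updates on `fired` rather than updating every window is what keeps the geometry from jittering on every frame.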
Here is a simpler attempt: modifying a procedural 2d texture, which in turn modifies the geometry of a 3d plane, using sound alone as input. This time I used the pfft~ object to isolate three frequency ranges (low, medium, high), though they're not very cleverly mapped to the texture inputs: the low frequencies control the scale of the procedural map, peaks in the medium frequencies switch the procedural map, and the high frequencies alter the weight applied to the maps.
Music by Valentin Leonat. Be sure to check out more tunes on Valentin's MySpace page.
The audio analysis method is good enough, but the way it controls the visuals isn't. The inputs might work better with a glitchy tune. Here is the Ghost of 3.13 with Orchids and Lilacs.