At Disturb Media we wrote a detailed tutorial on using the Sound and SoundMixer classes in ActionScript for something other than fancy sound visualizers.
The tutorial made it into issue 169 of WebDesigner Magazine. It's a four-page feature with lots of details covered, so do check it out!
Click on the image to see a demo.
The "heaps awesome patrol captain" Chris designed the character and set up the assets, and Alex performed and produced the sound. Big thanks!
A great exhibition ended this Sunday: the Decode gallery at the Victoria & Albert Museum. I saw a lot of great projects from the likes of John Maeda, Casey Reas, Marius Watz, Golan Levin, Jonathan Harris, Joshua Davis, Robert Hodgin, Chris O'Shea, et al. It was the first time I'd seen so much technology in a museum at this scale. More interesting than the technology itself was the chance to see the reactions of the other visitors.
An interesting touch is that the source code for the Decode identity by Karsten Schmidt was released on Google Code, so people could submit recoded versions. I am way behind with my main project, but I spent a bit of Sunday tinkering with the code. I did run into trouble when I had to put the images together as a video. I used After Effects for the first time just to import a sequence of images and lay down a soundtrack, but for some reason it took quite some time to get used to the basics. The sound is a fragment from a song by Valentin Leonat, performed back in 2007. I used the Minim library, mostly its FFT, to turn the sound into a trigger for changes in the video. The drums control the mesh colour a bit (the intensity of the blue-ish tint), while the guitar controls the background colour, the camera position on higher peaks, and the distortion of the mesh throughout the length of the video.
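The FFT-as-trigger idea can be sketched outside Processing/Minim as well. Here is a minimal Python/NumPy version of the concept, assuming a mono frame of samples; the band edges and threshold are illustrative guesses, not the values from my sketch:

```python
import numpy as np

def band_energy(samples, sample_rate, lo_hz, hi_hz):
    """Average magnitude of the FFT bins between lo_hz and hi_hz."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    return float(spectrum[mask].mean())

def tint_from_drums(samples, sample_rate, threshold=5.0):
    """Map low-frequency (drum-ish) energy to a 0..255 blue tint value."""
    energy = band_energy(samples, sample_rate, 40, 200)
    return min(255, int(energy / threshold * 255)) if energy > 0 else 0

# Synthetic test frame: a 100 Hz 'kick' plus a quieter 1 kHz tone.
rate = 44100
t = np.arange(1024) / rate
frame = np.sin(2 * np.pi * 100 * t) + 0.2 * np.sin(2 * np.pi * 1000 * t)

low = band_energy(frame, rate, 40, 200)
high = band_energy(frame, rate, 800, 1200)
print(low > high)  # the kick dominates the low band
```

In a sketch you would call something like `tint_from_drums` once per analysis frame and feed the result into the mesh colour.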
The video doesn't feel quite the same, as recording a sequence of images from Processing seems to take a lot of resources, but I encourage you to download my version of the source code and have a look for yourselves.
Finally, a bit of time to breathe. A lot has been happening lately, again. I just finished two assignments, with one more to go for now. This time I'd like to share some of the things I study in Creative Computing at Goldsmiths, University of London.
I am handing in the project for Audio Visual Processing. The task given was to "implement interactive real-time audio and video systems using the programming tool of your choice (either Max/MSP/Jitter, C++ using provided libraries, or Java/Processing)."
There was plenty of information to sponge up in a short amount of time, but it was a lot of fun. What I've noticed so far: the simpler, the better. What I mean is, if I use the raw numbers from processing video or audio, the results can be a bit 'jittery' and might not convey a theme that could be conveyed more easily with a bit of cheating. The brain does a lot of things for us: when it comes to video, we only need 24 frames per second to be fooled into seeing continuous animation, and when linked to sound, the images gain value and feed information (confirmation) back to the sound (the source) in a strange but effective loop. I haven't yet managed to find the right balance between the data processed and a mapping that makes it instantly and easily perceivable.
Well, enough talk for now. Here is an attempt to modify 3D geometry (a plane) using video (colour) and audio (peak amplitudes sampled every 40 milliseconds) as sources. For some reason it makes me think it could be a quick-and-dirty technique for achieving an effect similar to Michel Gondry's video for The White Stripes' "Fell in Love with a Girl". The source video is a performance by Les Elephants Bizarres, talented friends from back home.
The colour channels from the video control the height of each grid square, corresponding to a group of polygons in the 3D plane. The resolution of the 3D plane and of the source video can be altered in real time. The inputs can trigger a change continuously, or only when a change in peaks occurs, as in this video. The height can also be controlled by the audio, though that is not fully demonstrated in the video. Click here for the MaxMSP/Jitter patches used to generate the videos (the videos are heavily compressed).
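The colour-to-height mapping and the peak-gated updates can be sketched in Python/NumPy. This is an illustration of the idea, not the actual Jitter patch; the grid size, the choice of the red channel, and the gating threshold are made-up values:

```python
import numpy as np

def frame_to_heightmap(frame, grid_w, grid_h, max_height=100.0):
    """Average each grid cell's red channel and scale it to a height.

    frame: (H, W, 3) uint8 RGB image; returns a (grid_h, grid_w) float array.
    """
    h, w, _ = frame.shape
    cell_h, cell_w = h // grid_h, w // grid_w
    heights = np.zeros((grid_h, grid_w))
    for gy in range(grid_h):
        for gx in range(grid_w):
            cell = frame[gy*cell_h:(gy+1)*cell_h, gx*cell_w:(gx+1)*cell_w, 0]
            heights[gy, gx] = cell.mean() / 255.0 * max_height
    return heights

def gated_update(heights, new_heights, peak, prev_peak, threshold=0.1):
    """Only accept the new heightmap when the audio peak changed enough."""
    return new_heights if abs(peak - prev_peak) > threshold else heights

# A frame whose left half is bright red and right half black:
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[:, :32, 0] = 255
hm = frame_to_heightmap(frame, grid_w=8, grid_h=8)
print(hm[0, 0], hm[0, 7])  # left cells raised to 100.0, right cells at 0.0
```

Sampling the audio peak every 40 ms and passing consecutive peaks into `gated_update` gives the "only on a change in peaks" behaviour described above.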
Here is a simpler attempt to modify a procedural 2D texture that in turn modifies the geometry of a 3D plane, using sound as input. This time I used the pfft~ object to narrow down three ranges of frequencies (low, medium, high), but they're not very cleverly mapped to the texture inputs. The low frequencies control the scale of the procedural map, peaks in the medium frequencies change the procedural map, while the high frequencies alter the weight applied to the maps.
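Outside Max, the same three-band split and mapping can be sketched with a plain FFT. The band edges and scaling constants below are illustrative guesses, not the ones from my patch:

```python
import numpy as np

def three_band_levels(samples, sample_rate):
    """Average magnitudes in three rough bands (stand-ins for the pfft~ ranges)."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    bands = {"low": (20, 250), "mid": (250, 2000), "high": (2000, 8000)}
    return {name: float(spectrum[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in bands.items()}

def texture_params(levels, prev_mid, peak_jump=2.0):
    """Map band levels to the three texture controls described above."""
    return {
        "scale": 1.0 + levels["low"] * 0.01,            # low band -> map scale
        "reseed": levels["mid"] > prev_mid * peak_jump,  # mid peak -> new map
        "weight": min(1.0, levels["high"] * 0.02),       # high band -> map weight
    }

rate = 44100
t = np.arange(2048) / rate
samples = np.sin(2 * np.pi * 120 * t)  # low-frequency test tone
levels = three_band_levels(samples, rate)
params = texture_params(levels, prev_mid=levels["mid"])
print(params["scale"] > 1.0)  # the low band dominates, so the scale grows
```

A smarter mapping would probably smooth each band over a few frames before driving the texture, which might fix the glitchiness mentioned below.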
The method for analysing the audio is good enough, but the method for controlling the visuals is not. The inputs might work better with a glitchy tune. Here is the Ghost of 3.13 with "Orchids and Lilacs".