You can see a little demo of the project here:
Cheap Tricks from dizzy pete.
It uses a Kinect and OpenCV to find blobs (in this case people, and my Ikea furniture), then turns them into audio-reactive triangles, with some visual effects thrown in for no real purpose other than to look cool.
Reading from the Kinect is relatively straightforward, and more or less lifted straight from the Kinect examples that come with oF. It puts the depth information into a grayscale OpenCV image and does some simple thresholding based on the depth settings.
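As a rough sketch of that thresholding step, in plain C++ rather than the ofxCvGrayscaleImage calls the project actually uses (the `lo`/`hi` parameter names are mine, standing in for the near/far depth settings):

```cpp
#include <cstdint>
#include <vector>

// Keep only depth pixels between two thresholds, producing the
// black-and-white image handed to the blob finder. This is just the
// idea; in the project it's done via OpenCV on a grayscale image.
std::vector<std::uint8_t> thresholdDepth(const std::vector<std::uint8_t>& depth,
                                         std::uint8_t lo, std::uint8_t hi) {
    std::vector<std::uint8_t> out(depth.size(), 0);
    for (std::size_t i = 0; i < depth.size(); ++i) {
        if (depth[i] >= lo && depth[i] <= hi) out[i] = 255;
    }
    return out;
}
```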
This grayscale image is passed to the stock OpenCV blob finder, and the blob information is then passed to ofxTriangle to get a set of triangles representing each blob. I initially used ofxDelaunay but found ofxTriangle just worked a little better for what I was after.
The audio reactive part is done via ofxFFT by julapy (there are a few out there, so make sure you have the right one if you’re taking a look). I originally wanted to do beat detection and have the figures pulse in time with the beat, but that was a little too involved given the time constraint, so I decided just to stick with the very low end.
I experimented with various numbers of FFT bins but ended up using the default buffer size of 512 with a bin size of 256, averaging the lowest three bins. This average is the variable ‘low’ used throughout the code.
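The averaging itself is trivial; a sketch, assuming ofxFFT hands back normalized bin magnitudes:

```cpp
#include <cstddef>
#include <vector>

// Average the lowest n FFT bins into a single low-end energy value.
// n = 3 matches the three-bin average described above; bins are
// assumed to be normalized magnitudes in 0..1.
float lowEnd(const std::vector<float>& bins, std::size_t n = 3) {
    if (bins.empty()) return 0.0f;
    if (n > bins.size()) n = bins.size();
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) sum += bins[i];
    return sum / static_cast<float>(n);
}
```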
Ideally I would have had a proper audio device feeding in from the DJ’s mixer, but as it turned out I was using the default microphone on my MacBook for audio input, which worked much better than I expected.
Based on the current amount of low-end audio detected, each vertex of each triangle is displaced from the triangle’s center by some amount. Triangle sizes vary (the torso of a detected body gets larger triangles than a hand or arm), so the amount of displacement applied is a function of the triangle’s area.
To be precise, it is the square root of the triangle’s area, and it can be tweaked via a parameter called jitterSize in the GUI. I don’t actually displace every vertex, just two out of three. The code to do all three is there but commented out; I just liked the way it looked doing only two.
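Here is a standalone sketch of that displacement idea; the `low` and `jitterSize` names follow the post, but the exact formula in the project may differ:

```cpp
#include <array>
#include <cmath>

struct Vec2 { float x, y; };

// Push two of a triangle's three vertices away from its centroid,
// by sqrt(area) * low * jitterSize. The third vertex stays put,
// mirroring the two-out-of-three look described above.
std::array<Vec2, 3> displaceTriangle(std::array<Vec2, 3> v,
                                     float low, float jitterSize) {
    // centroid of the triangle
    Vec2 c { (v[0].x + v[1].x + v[2].x) / 3.0f,
             (v[0].y + v[1].y + v[2].y) / 3.0f };
    // area via the shoelace formula
    float area = std::fabs((v[1].x - v[0].x) * (v[2].y - v[0].y)
                         - (v[2].x - v[0].x) * (v[1].y - v[0].y)) * 0.5f;
    float amount = std::sqrt(area) * low * jitterSize;
    for (int i = 0; i < 2; ++i) {   // only the first two vertices move
        float dx = v[i].x - c.x, dy = v[i].y - c.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len > 0.0f) {
            v[i].x += dx / len * amount;
            v[i].y += dy / len * amount;
        }
    }
    return v;
}
```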
Once the vertex displacement has been calculated, the triangle is added to a mesh along with texture coordinates, which are used for coloring. But more on that in a second.
|The basic triangle form|
By this stage the basic audio-reactive triangle people thing I was after was done, but it still looked a bit flat.
Poking around the OpenCV examples, I came across some motion detection that looked kinda cool.
First it takes the difference between the current frame and the last one, which gives the areas where motion has occurred. This difference is scaled up as a function of the low-end audio and the GUI parameter glowExtra, then added to a buffer image. The buffer image decays over time, so things fade out as the motion fades.
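One update step of that buffer can be sketched like so (the decay constant here is an assumption, not the project’s actual value):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One motion-buffer update: add the scaled frame difference, then
// decay everything so old motion fades out. Pixels are floats in
// 0..255; low and glowExtra follow the names used in the post.
void updateMotionBuffer(std::vector<float>& buffer,
                        const std::vector<float>& curFrame,
                        const std::vector<float>& prevFrame,
                        float low, float glowExtra,
                        float decay = 0.95f) {
    for (std::size_t i = 0; i < buffer.size(); ++i) {
        float diff = std::fabs(curFrame[i] - prevFrame[i]);
        buffer[i] = std::min(255.0f, buffer[i] * decay + diff * low * glowExtra);
    }
}
```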
This looked ok, but I wanted it a bit smoother, so I used ofxFboBlur to add an excessive amount of Gaussian blur to the motion buffer image. I had to change its source slightly to play nicer with FBO sizes and drawing, which is why it appears in the source folder and not as an addon.
|The blurred motion detection background|
Coloring was probably the hardest part for me, and even then it was pretty easy.
There are lots of different ways this could be done, but I settled on using a two-color gradient rendered to an FBO, using a mesh with vertex colors. Each frame, the mesh vertex colors are updated by cycling through the Hue component in HSB space, then the updated mesh is rendered to the FBO.
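A standalone sketch of the hue cycling; in oF this would be something like ofColor::fromHsb, but here it is spelled out for full saturation and brightness:

```cpp
#include <cmath>

struct Rgb { float r, g, b; };

// Minimal HSB-to-RGB for full saturation and brightness, hue in [0,1).
// Cycling the hue argument each frame sweeps the gradient's endpoints
// through the color wheel, as described above.
Rgb hueToRgb(float hue) {
    float h = (hue - std::floor(hue)) * 6.0f;           // wrap into [0,6)
    float x = 1.0f - std::fabs(std::fmod(h, 2.0f) - 1.0f);
    switch (static_cast<int>(h)) {
        case 0:  return {1, x, 0};
        case 1:  return {x, 1, 0};
        case 2:  return {0, 1, x};
        case 3:  return {0, x, 1};
        case 4:  return {x, 0, 1};
        default: return {1, 0, x};
    }
}
```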
This FBO gets used in two places. First it is blended with the grayscale blurred motion detection background using a multiplicative blend mode, which takes care of coloring in the swishy aura type effect in the background.
Then the FBO texture reference is bound and the mesh containing our displaced triangularized blobs is drawn as a wireframe. This is where having texture coordinates for each vertex in the mesh is important, though while fiddling around I noticed I don’t need tex coords if ofBackground() has been called. Who knows.
Regardless, when the wireframe is drawn, it is colored based on the bound FBO texture reference, which takes care of coloring in our triangle people.
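The multiply blend used for the background amounts to per-channel multiplication; with values normalized to 0..1, a sketch:

```cpp
struct Rgb { float r, g, b; };

// Multiplicative blend of a grayscale motion-buffer value with the
// gradient color: black stays black, and bright motion areas pick up
// the gradient's color. In the project this runs on the GPU via a
// multiply blend mode; this just shows the arithmetic.
Rgb multiplyBlend(float gray, const Rgb& color) {
    return { gray * color.r, gray * color.g, gray * color.b };
}
```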
|The two layers colored and drawn together|
So that is just a quick run-through of how it works. I used a lot of addons, and all up it took me about two days: I started on the evening of the 29th and was finished by the afternoon of the 31st, just in time. Without the oF addons it would’ve taken me well over a week, possibly many weeks.
People at the party really seemed to enjoy it as well, which is always a great feeling.
I used Soundflower during development (and oF 0.7.3, eep), but have since updated the project to oF 0.8, which is why you see ofxSoundStream in the code.
If you don’t have a Kinect it will fall back to the device camera, though it doesn’t work as well; in that case you can use the far threshold setting for the OpenCV threshold.
The source is up here.
I have also put the source for Particles up here.
You can find me on Twitter here!
Particles 1 from dizzy pete.