So after the last post, I started investigating the best ways to go about my ideas. It gets deep quite fast: it turns out there is no clear way to use pre-existing Max objects to build what I need, certainly not on the PC, and I don't have a Mac, though getting one for the project is obviously on my mind.
I found that there are a host of Max objects on the Mac version that could be used to grab data suitable for MIDI conversion. Here is a list with a short explanation of each, and a link to the site they are available from.
sigmund~ - a sinusoidal analysis and pitch tracking object.
fiddle~ - a pitch following and sinusoidal decomposition object.
bonk~ - described as a "percussion follower", presumably a form of transient detection.
centroid~ - computes the spectral centroid; I need to research how this may be useful.
More details about the creators along with download links can be found here -
VUD - Max
As I said, these do not exist in the PC domain of Max and, to be frank, I am not enthused by the idea of learning an actual coding language. Max is learnable in the time frame, but not if I have to start coding objects for it. I have emailed the people behind these objects to ask if/when they will be available for the PC, or for suggestions otherwise.
I did find a tool called peaks, though, which is available for PC. It simply measures the volume of the incoming audio and spits out corresponding MIDI data at the other end. This is obviously useful for making lights pulse to audio, which is part of the battle I am trying to win here. What sinusoidal analysis gives me is the ability to track frequency neatly, but it isn't the only way. If I split the audio into 7/8 bands using that number of channels with band-pass filters, then put a "peaks" unit after each one, I would effectively have the data I require, if in a convoluted way.
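To sanity-check that band-splitting idea outside Max, here is a minimal Python sketch. It cheats by measuring band energy from an FFT rather than chaining real band-pass filters and peak followers, and the seven band edges and the 0-127 scaling are my own assumptions, not anything from the tools above:

```python
import numpy as np

def band_levels_to_midi(samples, sample_rate, bands):
    """Measure the energy in each frequency band and scale the result
    to a MIDI-style velocity (0-127), one value per band."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    levels = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        levels.append(spectrum[mask].sum())
    peak = max(levels) or 1.0  # avoid dividing by zero on silence
    return [int(round(127 * level / peak)) for level in levels]

# Seven bands, roughly one per octave from 60 Hz up (an assumption,
# not a fixed standard).
BANDS = [(60, 120), (120, 250), (250, 500), (500, 1000),
         (1000, 2000), (2000, 4000), (4000, 8000)]

# One second of a 440 Hz test tone: the band containing 440 Hz
# (index 2, 250-500 Hz) should dominate.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
velocities = band_levels_to_midi(tone, sr, BANDS)
```

The real patch would do this continuously on short windows of live audio, but the principle is the same: seven numbers out, one per band, ready to drive seven lights.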
I love the internet.
During my many searches on Google, I have turned up some really cool projects, and it turns out I might be going about this all the wrong way. I have yet to find anybody who has implemented my idea in the way I am thinking, so I am pretty sure the project still has academic validity, but I have found people thinking along the same lines, just for different reasons and ends.
I found this guy last night
This uses a chip called the MSGEQ7, a seven-band graphic equalizer IC. When I searched for it, Google spewed out a whole host of YouTube videos, which led me to two clear conclusions. The first is that this is the way forward for a simple implementation of the colour separation on the front end of the device I am looking to make. The second is that people need to learn how to name YouTube videos better.
So I now need to think about how best to chain this all up. The MSGEQ7 almost negates the need for any other information to map a live audio spectrum to colour. It would still be worth adding a layer of live control on top, letting you define the mapping you find most useful: perhaps some people would prefer to work with blue at the bass end of the spectrum, while others would prefer red. That would be an excellent parameter to expose.
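A sketch of that user-definable colour mapping, again in Python for clarity rather than on the actual hardware. It blends one RGB colour per band, weighted by that band's level; the two palettes (blue-at-the-bass versus red-at-the-bass) and the example levels are illustrative assumptions:

```python
def mix_colour(levels, palette):
    """Blend per-band RGB colours weighted by each band's level,
    giving one overall colour for the light output."""
    total = sum(levels) or 1  # avoid dividing by zero on silence
    r = sum(lv * c[0] for lv, c in zip(levels, palette)) // total
    g = sum(lv * c[1] for lv, c in zip(levels, palette)) // total
    b = sum(lv * c[2] for lv, c in zip(levels, palette)) // total
    return (r, g, b)

# Two user-definable palettes for the MSGEQ7's seven bands
# (63 Hz up to 16 kHz): one with blue at the bass end, one with red.
BLUE_BASS = [(0, 0, 255), (0, 128, 255), (0, 255, 255), (0, 255, 0),
             (255, 255, 0), (255, 128, 0), (255, 0, 0)]
RED_BASS = list(reversed(BLUE_BASS))

# Example: a bass-heavy moment, levels falling off towards the treble.
bass_heavy = [255, 180, 60, 20, 10, 5, 0]
print(mix_colour(bass_heavy, BLUE_BASS))  # blue-dominant colour
print(mix_colour(bass_heavy, RED_BASS))   # red-dominant colour
```

Swapping the palette is the whole user control: the same seven band levels come out of the chip either way, and the preference is applied purely at the mapping stage.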
I also need to think about how I am going to implement the lights themselves. As I mentioned in my first post, I love the idea of remixing. This whole process has got me thinking about remixing ideas, and I now think that may be an interesting take on the question, perhaps something relating to remixing ideas about musical hardware together.
I now have this fantasy of owning a pair of monitor speakers that not only sound fantastic but also give you built-in visual feedback. Imagine Perspex speakers with this idea inside, automatically changing colour and pulsing with the music. They would give you automatic visual feedback about the spectral content, volume and stereo balance of the audio being played through them.
You could also implement this idea in acoustic treatments. Because of the grid-like layout of diffusers, they could effectively be turned into a device that also gives you spectral information. I love the idea of a light-reactive diffuser that is all frosty, diffusing light and sound in one device.
Both of these are remixes of two ideas that already exist but have yet to be combined, so I feel this is a strong avenue to pursue.
I have also had a feeling in the back of my mind about another passion of mine and the possibility of pursuing that as my project, with the idea of remixing being what ties all of this together.