For some typically obscure reason, my phone has decided not to record video anymore, which is annoying as I wanted to use that in place of screengrab software for now. Anyway, this is the most advanced and relevant of the tutorials featured in Max 7.
Here we really start to see the power of using the audio stream for control over visual parameters. It again uses the "p turn" subpatchers mentioned in the last post, but with a different path for the VIZZIE data created from the amplitude and timbre of the audio.
This utilises the "jit.world" and "jit.mesh" objects within the Jitter category of Max 7. jit.world simply creates the window and container environment for the visuals to be created within; without it we wouldn't see anything, making it a vital part of the patch.
The "floating" and "erase_colour" are best left alone when you are playing, however you need to activate the patch to work by first clicking the "X" in the top left corner, then setting the audio to play as in the last tutorial.
The two green devices you see within the patch are again BEAP objects, "INTERPOL8R" and "SLIDR". These control parameters within the "jit.mesh" object below them (which you cannot see in the screen grab); basically, they control the grid mesh graphics that can be created within jit.mesh.
Also controlling the jit.mesh object is another object, "jit.gl.material", which has many functions; in this case it is used to control the colour palette of the grid mesh.
When I took the screen grab I had just started to iterate on this tutorial device, using the amplitude data to control the colour palette. I will follow up on this development in further posts, but once again, open this patch up and have a play.
The controls you are looking to use in the patch are those contained in the INTERPOL8R and SLIDR objects. For some reason the interp mode on the INTERPOL8R object is a bit temperamental. It starts on an interpolation mode that is guaranteed to create visuals. Some others work too, but some just make all the visuals disappear. Being honest, I don't know why... What can I say, I'm still learning!
https://www.dropbox.com/s/k8wg3nh8tjw4b7a/Visualising%20Music%203.2.maxpat?dl=0
Sunday 7 December 2014
Max 7 Visualising Music 2
In this next tutorial patcher, we learn about the two subpatchers - "p turn timbre into VIZZIE data" and "p turn amplitude into VIZZIE data".
This is extremely useful knowledge in terms of this project, as timbre and amplitude are two of the most useful data streams from audio for controlling visuals.
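To make the idea concrete, here is a rough sketch (in Python rather than Max, and with my own scaling choices, not the subpatchers' actual internals) of the kind of analysis those two subpatchers perform: RMS level for amplitude, and spectral centroid as a simple stand-in for timbre.

```python
import numpy as np

def amplitude_and_timbre(buffer, sample_rate=44100):
    """Extract two control streams from one audio buffer:
    amplitude (RMS level) and a crude timbre proxy (spectral centroid)."""
    # Amplitude: root-mean-square of the time-domain samples.
    rms = np.sqrt(np.mean(buffer ** 2))

    # Timbre: spectral centroid, the "brightness" of the sound,
    # computed from the FFT magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(buffer))
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)

    # Scale both into 0..1, the sort of range VIZZIE modules expect.
    amp_ctl = min(rms * 4.0, 1.0)                 # rough headroom scaling
    timbre_ctl = min(centroid / (sample_rate / 2), 1.0)
    return amp_ctl, timbre_ctl

# A 440 Hz test tone: low centroid, steady amplitude.
t = np.linspace(0, 0.1, 4410, endpoint=False)
amp, timbre = amplitude_and_timbre(np.sin(2 * np.pi * 440 * t))
```

The `* 4.0` headroom factor is an assumption of mine; in practice you would tune the scaling by ear against real material.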
So the music loop this time routes along two paths. The first goes straight to a device called Stereo from the BEAP library, which is essentially a DAC for routing audio out of Max.
The second path is more convoluted. It first routes into the subpatchers mentioned above. You can see the change in data types: as an audio signal in Max, yellow-and-black striped cables are used; as soon as it becomes a numerical data stream, the cable changes to a solid grey colour.
The device they feed is called Patterniser. This is the device that gives shape to the graphics you can see at the bottom of the patch. You have choices for many of the parameters of the graphics, including shape position, size, pixel seeds, etc.
The device you see in between is called MAPPR. This device allows you to control the levels of RGB saturation within the visualisations.
Run the Drum Loop at the top, and then simply play with the parameters. You can also draw your own saturation curves on the RGB MAPPR device. Have a play about!
And here is a link to the project itself.
https://www.dropbox.com/s/rdbiw96m7q2in9w/Visualising%202.maxpat?dl=0
Max 7 Visualising Music 1
In the first tutorial, we learn how to use a combination of the Jitter, MSP and new BEAP subcategories within Max 7.
Here is a screen grab from the finished article.
On the left above, you can see a drum loop being routed into two devices. On the left you have a four-way splitter, with VU-style meter visualisations to show how hot each of the four frequency bands is. On the right we have a sonogram, which shows an FFT-based visual spectrum of the sound, with high frequencies at the top of the graph scaling down to the lowest frequencies at the bottom.
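For anyone curious what the four-way splitter is doing under the hood, here is a hedged sketch in Python: take the FFT of a buffer, group the bins into four bands, and report the energy in each. The band edges below are my own guesses, not the patch's actual crossover points.

```python
import numpy as np

def four_band_levels(buffer, sample_rate=44100,
                     edges=(200.0, 800.0, 3200.0)):
    """Split a signal into four frequency bands (the edges are an
    assumption, not the tutorial's exact crossovers) and return one
    level per band, as a VU-style meter would display."""
    spectrum = np.abs(np.fft.rfft(buffer)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / sample_rate)

    bands = []
    lo = 0.0
    for hi in list(edges) + [sample_rate / 2]:
        mask = (freqs >= lo) & (freqs < hi)
        bands.append(float(np.sum(spectrum[mask])))
        lo = hi
    total = sum(bands) or 1.0
    return [b / total for b in bands]                  # normalised 0..1

# A 100 Hz tone should light up only the lowest band.
t = np.linspace(0, 0.1, 4410, endpoint=False)
levels = four_band_levels(np.sin(2 * np.pi * 100 * t))
```

The sonogram is the same data viewed over time: stack one spectrum per buffer and you get the scrolling image described above.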
On the right of the picture, you see another patch which allows for a different view of spectral analysis. When you open the patch, use the drop-down menu with "drum" written on it; select any of the samples there and the device will work. Below is a link to the patch via Dropbox.
https://www.dropbox.com/s/xrwsnxcguyoh94m/Visualising%20Music%20%601.maxpat?dl=0
Introduction to Max 7
So it's been a while since my last post because of the fairly large amount of paperwork needing to be done; however, I have been working in Max in my spare time. So rather than going over the breadth of devices available, I am going to focus now on the devices I am using for development.
I am going to use my digital camera to record the screen rather than use screen grab software at this point as I am pushed for time. This is mostly for the benefit of the Pre-Production document so I can disseminate where I am at in the project properly.
So I touched on the fact that Max 7 had been released. I knew this was coming for a while, and I actually stopped the work in Max 6 that I was doing over the summer to wait for the release, as it is a huge platform upgrade. One of the main advancements is being able to develop and/or use Max for Live devices within Max, without Ableton actually being open. There have also been massive upgrades to the API and search index that make it massively easier to use and understand, which, as someone who doesn't understand programming at a deep level, was very appealing to me!
If you have ever used Max 6 or previous versions, then the picture above will probably look quite unfamiliar. In order to make a project look like this in Max 6, you would have had to spend a massive amount of time. As of Max 7, these modular objects are all drag and drop, contained within a neat search system with great tagging and innovative ways of file searching.
They have also added a frankly incredible tutorial system that is built into the program. As time goes on this tutorial section will be expanded, but it ships with 5 tutorials that show you a massive amount of what is new to Max 7.
It is encouraging for this project that, with the exception of one tutorial, they are all aimed at exploring the new ways Max 7 allows the user to manipulate sound and visuals. (Someone is looking out for me!)
At this point I would say it's advisable to go and get the 30 day free and fully working version of Max 7 from cycling74.com if you don't have it already, as this will allow you to inspect and play with the projects I have made through following tutorials. (Kenny, I'm looking at you)
Next post please....
Wednesday 26 November 2014
Max for Live & Max 7
So what exactly is Max? Well, it's a programming language. It's a visual programming language that allows users to connect together elements of code and processing in a modular way.
"Max - Visual programming language for media" is how the creators, Cycling 74, descriptively open their product page. I suppose their focus is on clarity and ease of use....
Taken from their product page, above we see the key features that Max offers, with details of the updated capabilities of the platform. The beauty of using this platform is the way it integrates with one of the most commonly used studio DAWs, the one I use: Ableton Live.
Because Ableton Live has deep roots in Max, Cycling 74 worked with Ableton's creators to create something truly innovative: a DAW that allows the user to edit its capabilities, its instruments, its routing, directly from within the UI. It is safe to say that the Max/Ableton relationship is a game changer; the number of unique and fascinating projects being born from the abilities it gives users is large. Very large.
Although not fully relevant to this project, the following shows its power in the extreme. It actually uses an amazing graphical interface to show what is happening - Ableton 4D.
That's enough for today, tomorrow I shall go more in depth into devices and projects directly relevant to this one.
"Max - Visual programming language for media" is how the creators, Cycling 74, descriptively open their product page. I suppose their focus is on clarity and ease of use....
Taken from their product page, above we see the key features that Max offers, with details of the updated capabilities of the platform. The beauty of using this platform is the way it integrates with one of the most commonly used studio DAW's, the one I use, Ableton Live
Because Ableton Live was built within Max, Cycling 74 worked with Ableton's creators in order to create something truly innovative - a DAW that allows the user to edit it's capabilities, it's instruments, it' routing, directly from within the UI. It is safe to say that the Max/Ableton relationship is a game changer; the number of unique and fascinating projects that are being born from the abilities it has allowed users is large. Very large.
Although not fully relevant to this project, this project shows its power in the extreme. It does actually user an amazing graphical interface to show what is actually happening - Ableton 4D
That's enough for today, tomorrow I shall go more in depth into devices and projects directly relevant to this one.
Platforms that use Colour to Describe Sound
I have already been over the theory behind colour and sound and showed how they are in fact naturally related.
It is important at this stage to say that just because we can prove certain frequencies are related to certain colours doesn't mean we have to stick firmly to this structure every time we choose to describe sound with colour. It is also important to state that, from now on, readers can assume we are using FFT analysis and processing as the underlying means for everything being generated, i.e. waveforms, spectral images, metering, colour, etc.
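As a concrete illustration of that natural relationship, the usual trick is to transpose an audio frequency up by octaves (repeated doubling) until it lands in the visible-light band, then read off a wavelength. A small sketch, purely illustrative rather than a claim about perception:

```python
def audio_to_light_wavelength(freq_hz):
    """Transpose an audio frequency up by octaves (doubling) until it
    falls inside the visible-light band, then return the wavelength in
    nanometres. The 40-odd-octave transposition is the usual trick for
    relating pitch to colour; treat it as an illustration, not physics."""
    C = 299_792_458.0                            # speed of light, m/s
    VISIBLE_LOW, VISIBLE_HIGH = 4.0e14, 7.9e14   # Hz, roughly red..violet

    f = freq_hz
    while f < VISIBLE_LOW:
        f *= 2.0                                 # up one octave
    wavelength_m = C / f
    return wavelength_m * 1e9                    # nanometres

wl_a440 = audio_to_light_wavelength(440.0)       # concert A lands near orange
```

Because doubling from below the visible band can never overshoot it by more than an octave, the loop always terminates inside (or just within) the visible range.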
A platform I use regularly called Traktor (DJ software) has four different choices for colouring the audio waveform that it displays; obviously they feel that certain people will prefer different colour schemes from others.
I am going to assume that anyone reading this uses DAW software; again, these commonly allow the user to change the colour of tracks and groups within the software, in order to visually keep track of where you are.
Image-Line's FL Studio takes this to the next level, with some fantastic plugins that show the user, with graphics and colour, exactly what is going on with the device.
First off is the "Z Game Imager", which allows you to visualize in a creative and intuitive way what is happening in the music. Other than simple velocity mapping to the size scale (X, Y, Z planes) of the image and a frequency analyser, there doesn't seem to be much controlled by the music itself; there is a certain amount of predefined graphical movement regardless of the music. However, it is still enough to add something to the experience, in my opinion.
Although this is a fairly new feature, there are plugins that use other graphical feedback. In this tutorial by "Seemless" on YouTube, we see how their most powerful synthesiser is actually capable of taking any image and turning it into sound. It can then in turn display that info on the right-hand side as a spectral image.
In this next one, we can see it again uses a spectral image, in a pinky-orange hue, to show the harmonic content of the audio the equaliser is editing. The actual content of the tutorial has no meaning for this project; it just shows the EQ nicely.
In the next post I will look at Max for Live, the programming language that Ableton Live is built upon. This allows for far more interesting and detailed analysis and graphical reconstruction of audio signals.
Concept Development
So for the next two weeks the focus is on concept development. First off, I think it would be useful to outline here what I identified in my research proposal as the main areas I need to focus on for completion.
Max 7 and Resolume are the platforms I have chosen for primary development of the system. There are a number of reasons for this:
- I own a projector that I can use in my room for development.
- I own Ableton, Max & Resolume.
- Max & Resolume have better online learning resources than the other platforms identified in research.
- The LED system requires many more parts, making for a more cumbersome end product.
- LEDs do not allow for complex graphical output except in extremely expensive cutting-edge systems.
- You can emulate the effect of LEDs within Max for Live.
I have found a number of projects on the Max for Live website that display similar characteristics to the ones I have previously identified as needed within this project, which is a great start. In the next weeks I will outline the different devices that generate graphics from audio.
The main issue I have found so far is that no one seems to have scaled the colour wheel to audio parameters within Max yet. As my research identified, this is easily achievable (with some clever maths). When it comes to said clever maths I might not be the man to implement it in Max; hopefully once I know who my project supervisor is they can help or point me in the right direction. I have already started to make contact with certain practitioners within the scene, with little luck to date; however, I have started early, so I am confident that I will have success there.
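For what it's worth, here is one way the clever maths could go, sketched in Python: wrap each octave once around the 360-degree hue wheel, so a note and the same note an octave up share a colour. The reference pitch (middle C at hue 0) is an arbitrary choice of mine, not an established mapping.

```python
import colorsys
import math

def freq_to_rgb(freq_hz, ref_hz=261.63):
    """Wrap an audio frequency onto the colour wheel: each octave is one
    full turn of hue, anchored so ref_hz (middle C here, an arbitrary
    choice) sits at hue 0. Returns 8-bit RGB. A sketch of the idea, not
    a finished Max device."""
    octaves = math.log2(freq_hz / ref_hz)
    hue = octaves % 1.0                      # position within the octave
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return tuple(int(round(c * 255)) for c in (r, g, b))

# Middle C and the C an octave above land on the same colour.
c4 = freq_to_rgb(261.63)
c5 = freq_to_rgb(523.26)
```

In a Max patch the same arithmetic would be a handful of expr and scale objects feeding a swatch or jit.gl.material colour.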
During the next few weeks I will also be putting together posts detailing case studies and similar systems from YouTube, Vimeo, etc. This is in order to identify ideas that I like and ideas I don't; this will aid me in the process of developing the most effective way of implementing the system in a room, i.e. would projection mapping the generative graphics onto the speakers be more effective than a traditional 2D screen?
So in summary, for the next two weeks, expect posts on the following topics.
- Generative graphic devices within Max for Live and Ableton
- Projection mapping examples using Resolume Arena
- Music that has been created with the visuals being considered as an integral part of the production and performance
- Studio scenarios where the aesthetic has been strongly considered
Saturday 22 November 2014
So Max 7 is out and this is going to be the focus of my ongoing project dev.
I keep up to date with a lot of what is going on in the world of music tech, and funnily enough, this new device developed by Francesco Grani popped up today on my newsfeed. Though I have had no part to play in it to date, he does mention on the download page for the M4L device that he is interested in working with people, and mentions students in particular. So I have contacted him to ask if he is comfortable with my use of his model for development of my own.
Here is a link to the page for the device:
http://www.maxforlive.com/library/device.php?id=2665#LastComments
I have yet to have a chance to play with it; that is my plan for tonight. I shall make a post after a few hours of play, and hopefully get a video out if I can find some nice free cam software.
It's great that so many things seem to be happening in the world of generative graphics from audio just now; as the year moves on, hopefully I can become an active member of the community of builders.
Friday 14 November 2014
Powering LEDs from a modular synth?
Something that had crossed my mind a while ago, though I went no further with it, was the idea of using the control voltages of analogue modular synthesizers to power LED lights. Both work around a 5-volt architecture, so it seemed logical that the numerous ways of mangling electrical waveforms in modular synths could be applied to the colour of LEDs.
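The core maths is pleasingly simple. Here is a sketch of the mapping, with my own assumptions about range and resolution (0-5 V in, 8-bit PWM duty out) rather than any particular hardware's spec:

```python
def cv_to_pwm(volts, v_max=5.0, resolution=255):
    """Map a modular-synth control voltage (0..v_max) onto an 8-bit PWM
    duty value for driving an LED. Clamps out-of-range voltages, since
    modular CV can easily swing outside 0-5 V."""
    clamped = max(0.0, min(volts, v_max))
    return int(round(clamped / v_max * resolution))

duty_half = cv_to_pwm(2.5)    # mid-scale CV -> half brightness
duty_over = cv_to_pwm(7.2)    # hot signal, clamped to full brightness
```

Run one of these per colour channel and an LFO on each CV input becomes a slowly evolving RGB blend.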
Caspers Light Synth
Above is the incarnation of my idea before I had it. Strange how this keeps happening, but anyways...
Though it has a very "mod-synth-geek" look about it, in the setting of a darkened bedroom it is nothing short of mesmerizing.
I'm sure you will agree these are both quite fascinating videos. I certainly feel more confident that the idea of using evolving light colours that correspond to sound will be something very appealing to the bedroom producer. Bearing in mind my strong feeling is that this will improve the creative process, this is a great step in the right direction in terms of research, and opens up a whole new avenue of possibility and reference for my project.
Update
So it's been a while since my last post, and within a few days I will be back in earnest; however, I thought I should post an update on some developments.
I have been working hard on my research proposal, which has consisted of a lot of the information detailed in other posts here; I will be sending a draft to Kenny in the near future.
Fortunately for me, there have been some technical developments which I previously mentioned I was not sure of the time scale on, namely Max 7. November 10th was a happy day for audiovisual programmers worldwide!
For now, all I need say is that there have been some massive technical upgrades made to Max and its interface, basically making the use of any other programming platform redundant for me.
Once I have finished my research proposal, I shall be doing a series of posts on Max 7 and the reasons behind why it has suddenly become the most suitable platform by far.
I should say that Resolume, another platform mentioned before, will still be used, as it can be controlled from within Max and Ableton.
Thursday 30 October 2014
Introduction to Processing, Developing Focus
I am finding now that I am making massive jumps from one idea to another again, simply because of the amount of new information that I am finding out. However today I feel like I have turned a corner in terms of the direction that I want to take.
Processing is a programming language primarily aimed at creating visuals and graphics, though this is certainly not all you can do with the platform. I feel this is a strong platform for me to work from, based upon the research I have done on it.
During my research I have stumbled upon people using a library within Processing called Minim, which enables the capture of FFT data for use in Processing. I also managed to find a short series of tutorials from a user named "Switchboard" on Vimeo, which show how to get some basic functionality from within Processing.
Switchboard's Vimeo Channel
Over the next few days I am going to work through his tutorials and try to get a better grasp of what is happening in the programming language itself.
I feel like my "Scope" is narrowing now. I certainly feel like between Processing and M4L I will be able to achieve the creation of graphics from audio, which was has been the main aim of my project for some time now.
From here on in, I will be concentrating on music visualization using M4L and Processing, with the aim being to create an application for studio use, both in a creative sense and for use as a clinical audio measurement device, using FFT spectral analysis.
Monday 27 October 2014
Technical Research Contact
In my hunt for any good explanation of how FFT works, I stumbled upon this page...
Gamma Devices - Metering
I have contacted the man behind the devices via Blogger; hopefully he will be able to give me some advice on where to gather the information needed for me to build a device capable of converting frequency to colour.
Above is the video link for the "Gamma Devices M-Series". I am curious as to the build of the graphic-EQ-style display, as this may be the best way to gather individual frequency data and send it out to be processed into colour.
Wednesday 22 October 2014
History of the "Colour Organ"
As suspected, not only am I not the first person to have conceived of the idea of generating colour from sound, the idea has in fact been around for centuries.
Colour Organs through the Ages
"Around 1742, Castel proposed the construction of a clavecin oculaire, a light-organ, as a new musical instrument which would simultaneously produce both sound and the "correct" associated color for each note."
The first point of interest for me is that the concentration was not on the complete keyboard. Without exception, the men behind the various incarnations of colour organs through the ages have concentrated on mapping the colours to key (C is always red, for example), rather than my former idea, where the lowest register was entirely represented by one colour, blending as we move up through the registers.
As we can see from the graphic above, no clear consensus has ever been reached on which note should fit which colour. We can also see that some pretty big names have had a shot at it, which I did not expect to see! Namely Newton & Helmholtz, two men without whom I certainly would not be sitting here writing this!
Ill save my explanations for anotherr post, but lets just say just now I have those who I can understand and those who I can't. Or perhaps it's those who's colour choices I like and dislike... hmmm. I will say this though, from a logical implementaion standpoint, Vishnogradsky's spectrum stands above the rest, simply by using the sharps as gradient keys between the whole tone colours.
Having thought about it, certainly from an accuracy standpoint, this could be a better idea for me. It is also encouraging to me that this was something of great fascination in days gone by. I can imagine in the 1700's this would have been pretty cool! There is also a trend for taking idea from the past and reinventing them for today. These were intended for beauty; if I can add accuracy to that equation then I feel I'm onto a winning idea.
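That gradient idea is easy to prototype: give each natural note a fixed colour and derive each sharp as the midpoint blend of its two neighbours. The RGB values below are placeholders of my own, not Vishnogradsky's actual choices:

```python
# Naturals get fixed colours; sharps are blended from their neighbours.
naturals = {            # placeholder RGB values, not a historical mapping
    "C": (255, 0, 0), "D": (255, 165, 0), "E": (255, 255, 0),
    "F": (0, 255, 0), "G": (0, 255, 255), "A": (0, 0, 255),
    "B": (128, 0, 255),
}
order = ["C", "D", "E", "F", "G", "A", "B"]

def note_colour(note):
    """Colour for a natural, or the midpoint blend for a sharp like 'C#'."""
    if not note.endswith("#"):
        return naturals[note]
    lower = naturals[note[0]]
    upper = naturals[order[(order.index(note[0]) + 1) % len(order)]]
    return tuple((a + b) // 2 for a, b in zip(lower, upper))

print(note_colour("C#"))  # midpoint of C's red and D's orange: (255, 82, 0)
```

Swapping in any of the historical palettes from the graphic is then just a matter of editing the dictionary.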
"And dark and light colors do actually have effects which are comparable to low and high musical tones. Dark colors are sonorous, powerful, mighty like deep tones. But light colors, like those of the Impressionists, act, when they alone make up a whole work, with the magic of high voices: floating, light, youthful, carefree, and probably cool too." Karl Gerstner, The Forms of Color, 1986.
Monday 20 October 2014
Colour to Frequency - The Light Side
So there is another side to this equation, one that I touched on earlier. Time to get into it a little bit deeper.
Wavelength to Colour Converter
Website - Colour Science
The first link is pretty much what it says on the tin. The science behind it is explained in the next link; all you need to know to play is this: enter a value between 380nm and 780nm...
The second is some pre-existing science, explaining the mapping of colours through their frequency spectrum.
As with audio, light and the colours contained within it exist in their own spectrum. We can think of this as a little window of perception for us. Vibrations with a frequency between 20Hz and 20kHz are known to us as sound, which we perceive with our ears (mostly). In exactly the same way, we perceive light with our eyes, with each colour separated by its wave frequency. Now, there is "slightly" more to it all than I am getting into just now, and I am certainly not a physicist. The fact that light exists as a wave and a particle at the same time would be one example! Anyway, this is not relevant just now.
The colours separate out between 380nm and 780nm; nm stands for nanometres. When you feed a wavelength into the equation below, you can work out the frequency of the light wave. Pretty cool, huh!?
The Relationship Between Speed, Frequency and Wavelength
As the article points out, this equation shows that, in order for us to perceive a medium red colour, 448,000,000,000,000 photon wavecrests pass over our retinas every single second. 448 TRILLION hertz. That's 448 billion every millisecond, and 448 thousand every nanosecond.
Almost too much to take in, huh? Well, not for your brain. In fact, our brains decode that, along with every other frequency of colour hitting our eyes, with amazing accuracy, all while running your whole audio perception system, your body, and themselves. Unbelievable...
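The conversion behind those numbers is just frequency = speed of light / wavelength. A quick sketch (the 670nm figure for "medium red" is my own assumption, picked because it lands close to the 448 THz mentioned above):

```python
# Convert a light wavelength to its frequency: f = c / wavelength
SPEED_OF_LIGHT = 299_792_458  # metres per second, exact by definition

def wavelength_to_frequency(wavelength_nm):
    """Return the frequency in Hz for a wavelength given in nanometres."""
    return SPEED_OF_LIGHT / (wavelength_nm * 1e-9)

f_red = wavelength_to_frequency(670)  # a medium red, roughly
print(f"{f_red / 1e12:.0f} THz")  # around 447 THz, close to the article's 448
```

Run it with 380 and 780 instead and you get the two ends of the visible band.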
Anyway, the bottom line is this: frequency and vibration are integral to everything in the Universe. Sound and light are one. As with sound, light has lower and upper registers, harmonics and octaves; transpose a bass note like 20Hz up enough octaves and you arrive at visible light.
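That octave idea can be sketched directly: keep doubling an audio frequency until it falls inside the visible band. The band limits below are approximate assumptions, and which colour you land on depends entirely on how many doublings it takes:

```python
# Transpose an audio frequency up by octaves until it reaches visible light.
VISIBLE_LOW = 4.0e14   # ~750nm, deep red end (approximate)
VISIBLE_HIGH = 7.9e14  # ~380nm, violet end (approximate)

def transpose_to_light(audio_hz):
    """Double the frequency until it falls in the visible band."""
    f, octaves = audio_hz, 0
    while f < VISIBLE_LOW:
        f *= 2
        octaves += 1
    return f, octaves

f, octaves = transpose_to_light(20.0)
print(octaves, f)  # 20 Hz takes 45 doublings, arriving around 7.0e14 Hz
```

Interestingly, 20Hz lands near the violet end under these assumptions; the exact colour any note maps to is very sensitive to where you draw the band edges.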
We exist in a constantly evolving orchestra of light; my mission for this project is to try and reinvigorate the project studio, in a universal way!
Spectrum Lab
So, as a continuation from last night, I found this fantastic little program called "Spectrum Lab", which takes real-time audio and runs an FFT process on it in order to generate a colour-mapped frequency analyser. Furthermore, you can adjust the colour spectrum, threshold points and frequency response, allowing a pretty large range of displays and graph types. Here is a link to the website, with a couple of screen grabs so you get the idea.
Spectrum Lab
If nothing else, this program demonstrates clearly that others have had similar thought processes (and have been capable of actually making them happen). It is also going to be invaluable in helping me explain the project to people I may have to consult in future; I can easily convey the ways in which I would ideally change this system in order to do what I want to do in the studio environment.
I have already touched on the fact that Max/MSP can do FFT processing, along with modernised variants, so the question now is: how can I utilise pre-existing Max/MSP objects to recreate a slicker, graphically focused version of what you see above? Also, how can I get it out of the computer in real time? As cool as Spectrum Lab is, it is old. Their idea of real time back then is perhaps not quite what I understand real time to mean, so streamlining the process would seem to be important.
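The core of what Spectrum Lab does can be sketched in a few lines: FFT a block of audio, measure the energy in a handful of bands, and map those energies to a colour. The band edges and the simple bass-red/mid-green/treble-blue mapping below are my own assumptions, not Spectrum Lab's:

```python
import numpy as np

SAMPLE_RATE = 44100
BANDS = [(20, 250), (250, 4000), (4000, 20000)]  # assumed low/mid/high split, Hz

def block_to_rgb(samples):
    """Map one block of audio to an RGB triple via FFT band energies."""
    windowed = samples * np.hanning(len(samples))  # window to limit leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1 / SAMPLE_RATE)
    energies = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS]
    peak = max(energies) or 1.0  # normalise so the loudest band maps to 255
    return tuple(int(255 * e / peak) for e in energies)

# A pure 100 Hz tone sits in the bass band, so it should come out mostly red.
t = np.arange(4096) / SAMPLE_RATE
rgb = block_to_rgb(np.sin(2 * np.pi * 100 * t))
print(rgb)
```

A real-time version would just run this on each incoming audio block and push the RGB triple out to whatever is drawing or lighting.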
At this point, I think I can safely say that no matter what path my research leads me down in terms of production, I will be using Ableton as the hub. There are many reasons behind this. It is the "Musician's DAW" (though a lot of others would lay claim to this, too). It has seamless Max/MSP integration, and I can feed a range of software from it very simply by comparison to others. Finally, it runs on both Mac and PC, though I will be building this project on the Mac side first, as there are some objects that only exist in the Mac version of Max/MSP.
Sunday 19 October 2014
Researching Alternatives to Arduino
I have found out something very important this week - YouTube is not the place to look for exciting new artistic projects. It seems like Vimeo is the place to be for them!
What exciting artistic project am I talking about, I hear you cry... Well, this.
Madmapper to DMX/LED
There might be some interesting developments around the corner for splitting out audio frequency information neatly and quickly, but for the time being we are just going to have to stick with filterbanks splitting it all up. In short, I am not going to attempt things in Max for Live that nobody has ever tried before, simply because I don't have the skill and it would take the whole year just to learn enough to start.
I am very comfortable with signal flow processing, if not coding, however. As you can see from the link above, this process is mostly in the box and predefined in how to set up; I just want to expand the idea.
It clearly allows me to use MadMapper to set up a means of splitting out the audio frequencies to control separate sets of LEDs, which is what I have been looking for the whole time.
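The filterbank idea itself is simple to sketch: run the signal through basic one-pole low-pass filters to split it into bass, mid and treble, then use the level of each band to drive a set of LEDs. The crossover frequencies below are my own placeholder choices, not MadMapper's:

```python
import math

SAMPLE_RATE = 44100

def one_pole_lowpass(samples, cutoff_hz):
    """A simple one-pole low-pass filter."""
    a = math.exp(-2 * math.pi * cutoff_hz / SAMPLE_RATE)
    out, y = [], 0.0
    for x in samples:
        y = (1 - a) * x + a * y
        out.append(y)
    return out

def split_bands(samples, low_cut=250.0, high_cut=4000.0):
    """Split a signal into (bass, mid, treble) using two crossover points."""
    low = one_pole_lowpass(samples, low_cut)
    below_high = one_pole_lowpass(samples, high_cut)
    mid = [b - l for b, l in zip(below_high, low)]
    treble = [x - b for x, b in zip(samples, below_high)]
    return low, mid, treble

def band_level(band):
    """RMS level of one band, e.g. to set an LED strip's brightness."""
    return math.sqrt(sum(x * x for x in band) / len(band))

# A 60 Hz tone should show up mostly in the bass band.
tone = [math.sin(2 * math.pi * 60 * n / SAMPLE_RATE) for n in range(4096)]
bass, mid, treble = (band_level(b) for b in split_bands(tone))
print(bass, mid, treble)
```

One-pole filters have soft, overlapping slopes, so some energy bleeds between bands; a real rig would use steeper crossovers, but the signal flow is the same.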
I figure that if I get this working, I will have a much stronger understanding, along with all the components I need, to experiment with finding a smart means of integration into a working environment.
This is on top of the fact that MadMapper can also run my projector, so I can perhaps experiment with more generative graphics running onto a screen, along with the LEDs. I found this the other day, which blew my mind!
Sound Reacting 3D Waveform Generator.
If I could develop a means of having it change its colour spectrum in tandem with the LEDs, whilst at the same time generating a waveform representation, I feel it would really bring the idea of metering music alive. Something that is normally such a boring process being turned into something integral and enjoyable can only be a positive thing, surely!?
Not to mention beautiful, which can be inspirational in itself. Hopefully the outcome will be a room that you want to create music in more, a room you want to be in all the time.
2D Standing Wave Visualiser
Something I may touch on later in the year, once I have the techy part of the project sorted, is other forms of audio reactivity, as you can see above.
Sunday 12 October 2014
Contextualising my Project.
So, in light of the fact that I have now decided which path I am going to follow, I feel it's important to shed some light on why I am choosing to do what I am doing and what benefit I feel can be gained from it.
I would say it is fairly obvious that anyone on this course is interested in audio and music; we wouldn't be here otherwise. However, we all have different motivations in being here, different paths that we want to follow afterwards.
In understanding the path I want to follow afterwards, it's important to understand the one I have been walking up to this point. Through school I played the piano lightly, the tuba to grade 4 (weird, I know) and tried my hand at most others. However, as with most kids of my age at that time, computer games were becoming more and more prevalent in my life, not to mention the push at my school to do sports and my love of tennis. Though both my grandparents are concert-grade pianists, neither of my parents plays anything, despite loving music itself. So my intent to learn an instrument was easily ignored for addictive games, and other than the occasional "How is your piano practice coming along?" from my grandparents, there really wasn't much push to keep it up from anywhere else.
Fast forward 7 or 8 years and school is nearly over, with adolescence well under way. I discover DJing with my friends from school and, frankly, all the culture that goes with it. Music is in my family, even if it skipped a generation. When I discovered that you didn't have to "get grades" in order to perform and, in a certain sense, create music, my mind was blown. I was obsessed with DJing and dance music in a way I had never been about anything in my life. This was before I had ever even set foot in a nightclub...
Needless to say, the deal was sealed on my first clubbing experience. To this day, it is still the clearest image I have of "clubbing" in my mind, the purest it has ever been. Now don't get me wrong, I was wasted; some of the grandeur could have been imagined. It is a clear image though, even if partially imagined, associated with a feeling that is only describable as love.
Fast forward again to today. Here I am sitting at my computer, in my bedroom. Though I am writing my blog just now, later this evening I will turn to my DAW to start writing electronic dance music. For a while though, making music has felt like a sterile process... It is extremely easy to become distracted (surf the web, in other words). It certainly feels nothing like a club, regardless of how loud I turn it up. A club is all about sensory saturation. It's a mixture of frequencies coming together, across both the light and audio spectrums, in order to transport you to another place. A place that is a million miles away from where I am sat just now.
So the question is: how can you bring the club vibe into the studio, in a way that doesn't require everybody to be a lighting engineer/VJ? In the same way that the Subpac (3rd blog post) has brought the club into the studio for the audio spectrum, I am looking to answer the question for the lighting side of the equation. At the Honours presentation, I want people to be able to play their favourite tune in a darkened room which reacts with light to what they put into the system. Light that is constantly changing colour in tandem with the music's frequency spectrum, in order to bring a far more immersive experience to the listener.
From a testing standpoint, I want to see if it can be developed to the accuracy required to be useful as a mixing and engineering tool, and to find the most useful implementation of audio-reactive lights. By examining producers' excitement and drive to produce more music, along with their opinions on its usefulness as a tool, I should be able to determine whether or not light can become an integral part of the music-making experience for producers.
Friday 10 October 2014
First Progress Meeting
As I mentioned yesterday, I was looking forward to having a chance to chat with James. I wasn't disappointed tbh!
I have certainly come away with some confidence that wasn't there before. I also got a clear impression of which project he thought had legs, and tbh it was the one I knew it would be.
So back to audio-reactive lights I go. However, I realised today that I have been missing a whole range of possible applications and methods for achieving the project, and uses for the system.
Certainly, testing musicians in reactive environments is the most obvious, however, dependent on my progress, I could push the idea further.
Generative graphics were a topic raised during the meeting, and they were certainly a way I had thought of achieving the colour-to-frequency mapping. This technique can be extended to generate a whole range of parameters for randomised patterns, colours and so on. You can have image-change parameters shift on transient detection; in fact, you can change many more parameters when using a program such as Resolume Arena to control your projections. There are too many to mention here, frankly.
So I am going to do a rough draft of the aims of the project as I see them just now, so I have a reference point for the future.
1. To create an environment that is capable of reacting to audio dynamically, through the use of FFT/spectral analysis techniques, in order to map the colour balance, hue and intensity of LED lights, or the visual content of a projector(s) mapped within a room.
2. To test both common platforms of audio analysis currently in use, Max/MSP/M4L & Arduino, in order to find the most suitable platform for deconstruction of audio information into usable data for control of lighting/projection systems.
3. To find the most efficient way of transmitting the information needed to activate the lights. Possible methods include direct micing of the environment, line level signal transmitted from the audio stream, MIDI to DMX conversion.
4. To test a range of scenarios for possible use of the system, with focus on studio production, mixing and mastering scenarios, in order to establish a consensus on its use for the purpose intended; there is a possibility that in the studio, environmental activity is not wanted/useful. It could be that it is more applicable to certain types of music than others. These can all be tested using double-blind testing on the producer community already existing within Abertay.
5. To test the positioning of dynamic lighting within the environment. Can splitting the stereo positioning of the image in the room aid/engage the producer or mix engineer in a more pronounced way with the audio? Dependent on the results, this could prove to be a mix-enhancing tool.
There are a number of other applications for which this could be useful, including nightclubs and festivals, theatres and bars. You could have your bathroom lights react dynamically to the sound of splashing water, or it could just be a great addition to a bedroom soundsystem for listening....
So now it's time to get started properly. Over the weekend I am going to find a range of texts that I can get on both Max and Arduino. This is the most pressing issue I have, as my knowledge base of both these platforms needs to expand fast. At the start of the week I will also be buying my Arduino; I understand there are a fair few, and some reviews need to be watched to make the best choice on which one to buy. Dependent on what my research over the weekend yields, I may also go ahead and buy a breadboard, a strip of LEDs and the basic tools I will need. There is no point in holding back with this; the sooner I get this stuff, the sooner I will know how to use it. I have found over my time in education that immediate and practical application of the knowledge I have been given/found is the way I learn best.
Thursday 9 October 2014
More Ideas - Acoustic/Musical Sculpture.
So over the last while I have done yet more thinking, in particular about what would be useful after university; I want my project to be appealing to employers or relevant to self-employment.
I also was reminded of this video that I saw way back at the start of my time at Abertay, something that blew my mind & that I was thinking about for some time after.
This falls into a fair number of my categories. It's musical, and it involves building and art. It would also be fantastic for the Honours presentation if I could design a few instruments over the year. I would have to research lots of maths and acoustics, not to mention the history of musical instruments and indeed what defines a musical instrument. In this sense I think it would be a fascinating and worthwhile project.
A few of the main obstacles I may face with this are more practical than anything, though. I already know that I will need tools and space to work, which can be tricky and costly at best. Some of the techniques I would have to undertake, such as welding, I believe actually require licences. Obviously, crafting instruments from wood and other materials is possible. I have a meeting to discuss possibilities with James tomorrow, so I will wait to see what he thinks of this path.
I also just tonight found this amazing company that specialises in interesting acoustic designs for diffusion panels, which once again brought me back to the interactivity between art and sound.
ZR - Sample Rate 8 Bit Diffuser
I have also been continuing my research and practice with M4L as, regardless of whether I deem it a wise choice/necessary for my Honours, I want to learn it, and I will need this knowledge if I do end up going down that route. I am now at the stage where I can move around the program with ease and set up very basic signal flow, however there are still some aspects that are unknown and rather daunting, to say the least. Onwards and upwards, so they say....
Sunday 5 October 2014
Other ideas
It's funny how doing the research on this is starting to make me rethink everything, again and again - not just in the sense of the project, but in terms of what I want to do with the rest of my life. Profound stuff indeed!
I think it would be useful to clarify in my own mind the things I love doing. I am a strong believer that if you love what you do you will be successful in life. Moreover, you will be useful and productive.
It's this ethos that led me to the idea of colour-to-frequency conversion as a project, as it could be useful on the festival circuit that I am a part of. However, the reality is that, thinking slightly into the future, running about sorting out rigs and lights whilst getting messy in fields is perhaps something I want to be part of on the performance side rather than the organisation side.
The whole reason behind me getting involved with a festival crew was so that I would get to play out on a big system to lots of people.
Anyways, back to the list.
Loves/Passions
- Studio Work - Production Skills, DAW work, Music Production
- Synthesis
- Electronic Performance
- Critical Listening
- Music Business
From an inspirational point of view, this week I did see a few things that have given me some ideas, but nothing concrete.
This is a very cool idea. Though I think it would be unachievable for me to undertake anything like this, the idea of surround sound in dance music is obviously being taken a lot more seriously.
Anyways, enough rambling for today. I haven't really got anywhere with this today, but I'm just having one of those days.
Saturday 27 September 2014
Don't be too hasty...
So after the last post I made, I started investigating the best ways to go about my ideas. It gets deep quite fast - it turns out that there is no clear way to use pre-existing Max objects to build what I need, certainly not on the PC, and I don't have a Mac, though getting one for the project is obviously on my mind.
I found that there is a host of Max objects on the Mac version that could be used to grab data suitable for MIDI conversion. Here is a list with explanations, along with a link to the site they are available from.
sigmund~ - a sinusoidal analysis and pitch tracking object.
fiddle~ - a pitch following and sinusoidal decomposition object.
bonk~ - described as a "percussion follower"; presumably a form of transient detection.
centroid~ - computation of the spectral centroid. I need to research more how this may be useful.
More details about the creators along with download links can be found here -
VUD - Max
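To get my head around what centroid~ actually measures, here is a rough Python sketch of the spectral centroid - the amplitude-weighted average frequency of a spectrum. This is purely my own illustration of the maths, not the object's actual code:

```python
def spectral_centroid(magnitudes, sample_rate):
    """Amplitude-weighted mean frequency of a magnitude spectrum.

    `magnitudes` is one half-spectrum (e.g. from an FFT of a mono
    buffer); the bins are assumed to span 0 Hz up to sample_rate / 2.
    """
    n = len(magnitudes)
    bin_hz = sample_rate / (2 * n)  # width of one frequency bin
    total = sum(magnitudes)
    if total == 0:
        return 0.0  # silence has no meaningful centroid
    return sum(k * bin_hz * m for k, m in enumerate(magnitudes)) / total

# A spectrum with all its energy in one bin has its centroid at that bin.
print(spectral_centroid([0, 0, 1, 0], 8000))  # → 2000.0
```

A bright, hissy sound pushes the centroid up; a bassy one pulls it down - which is presumably why it could be useful as a "brightness" control signal.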
As I said, these do not exist in the PC domain of Max and, to be frank, I am not enthused by the idea of learning an actual coding language. Max is learnable in the time frame, however not if I have to start coding objects for it. I have emailed the people behind this to ask if/when it will be available for the PC, or for suggestions otherwise.
I did find a tool called peaks though, which is available for PC. This simply takes the incoming audio, measures the volume, and spits out corresponding MIDI data at the other end. This is obviously useful for making lights pulse to audio, which is part of the battle I am trying to win here. What the sinusoidal analysis gives me is the ability to track frequency neatly. However, it isn't the only way. If I was to split the audio into 7 or 8 bands, using that number of channels with band-pass filters, then put a "peaks" unit after each, then effectively I would have the data that I require, if in a convoluted way.
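To sketch out that convoluted band-split idea in plain Python - the band centres and function names here are entirely my own illustration, not how peaks or Max actually work. Instead of real band-pass filters I'm using the Goertzel algorithm, which cheaply measures signal power near one frequency:

```python
import math

# Hypothetical centre frequencies for an 8-band split (one per octave).
BAND_HZ = [63, 125, 250, 500, 1000, 2000, 4000, 8000]

def goertzel_power(samples, target_hz, sample_rate):
    """Signal power near one frequency (a cheap stand-in for a band-pass)."""
    w = 2 * math.pi * target_hz / sample_rate
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

def bands_to_midi(samples, sample_rate):
    """Scale each band's power to a 0-127 MIDI controller value."""
    powers = [goertzel_power(samples, hz, sample_rate) for hz in BAND_HZ]
    peak = max(powers) or 1.0
    return [min(127, int(127 * p / peak)) for p in powers]

# A 1 kHz test tone should light up the 1 kHz band hardest (index 4).
sr = 44100
tone = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(1024)]
values = bands_to_midi(tone, sr)
print(values.index(max(values)))  # → 4
```

Eight numbers per audio buffer, each ready to drive a light - which is exactly the data the band-pass-plus-peaks chain would hand me.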
I love the internet.
During my many searches with Google, I have turned up some really cool projects and, as it turns out, I might be going about this all the wrong way. I have yet to find anybody who has implemented my idea in the way I am thinking, so I am pretty sure I still have academic validity in doing this; however, I have found people thinking along the same lines, just for different reasons and ends.
I found this guy last night
This uses a chip called the MSGEQ7 - a seven-band graphic equaliser IC rather than a microprocessor. When I searched for it, it spewed out a whole host of videos on YouTube, which led me to two clear conclusions. The first is that this is the way forward for simple implementation of the colour separation on the front end of the device I am looking to make. The second is that people need to learn how to name YouTube videos better.
So I now need to think about how best to chain this all up. The MSGEQ7 almost negates the need for any other information to create live audio spectrum mapping with colour. It does add an extra level of live control over what is going on with the colour, though. It would be great to be able to define the spectrum mapping you find most useful - perhaps some people would prefer to work with blue at the bass end of the spectrum, while others would prefer red. That would be an excellent parameter for control.
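As a rough sketch of that user-definable colour mapping - the band list is the set of centre frequencies the MSGEQ7 datasheet gives, but the red/blue sweep itself is just my own hypothetical example:

```python
# The MSGEQ7's seven analysis bands (centre frequencies from its datasheet).
MSGEQ7_BANDS_HZ = [63, 160, 400, 1000, 2500, 6250, 16000]

def band_colour(band_index, bass_is_red=True):
    """Map one of the seven bands onto a simple red-to-blue sweep.

    `bass_is_red` is the user preference discussed above: flip it and
    the low end comes out blue instead. Returns an (R, G, B) tuple, 0-255.
    """
    n = len(MSGEQ7_BANDS_HZ) - 1
    t = band_index / n              # 0.0 at the bass band, 1.0 at the top
    if not bass_is_red:
        t = 1.0 - t
    return (int(255 * (1 - t)), 0, int(255 * t))

print(band_colour(0))                      # bass band: (255, 0, 0)
print(band_colour(6))                      # top band:  (0, 0, 255)
print(band_colour(0, bass_is_red=False))   # flipped:   (0, 0, 255)
```

One boolean (or in practice a full user-editable palette) is all it takes to let each person define the spectrum they find most useful.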
I also need to think about how I am going to implement the lights themselves. As I mentioned in my first post, I love the idea of remixing. This whole process has got me thinking about remixing ideas, and I am now thinking it may be an interesting take on the question - perhaps something relating to remixing ideas about musical hardware together.
I now have this fantasy of owning a pair of monitor speakers that not only sound fantastic, but also give you visual feedback, built in. Imagine perspex speakers with this idea built in, automatically changing colour and pulsing with the music. They would give you automatic visual feedback about the spectral content, volume and stereo balance of the audio being played through them.
You could also implement this idea in acoustic treatments. Because of the grid-like layout of diffusers, they could effectively be turned into a device that also gives you spectral information. I love the idea of a light-reactive diffuser that is all frosty, diffusing light and sound in one device.
Both of these are remixes of two ideas that already exist themselves but have yet to be combined, so I feel this is a strong avenue to pursue.
I have had this feeling in the back of my mind about another passion of mine and the possibility of pursuing that as my project, with the idea of remixing being what ties all of this together.
Wednesday 24 September 2014
Initial Ideas
I am really keen to do something this year that is going to serve me well in the years to come; something that I can hopefully turn into a business, or that aids and advances my skillset in the world of sound design and music composition.
I am also keen to build something, as for the last four years I have been almost solely sitting in front of screens, hands confined to a keyboard of one description or another. I want to have something solid to show for my time spent in education.
With that in mind, I spent the summer considering everything that I have done over the last four years. I tried to think about what had inspired me the most during this time - the solid moments in my mind where I felt I turned a corner, or made progress in one area or another. Though in the end there were quite a few, there are some I am going to share just now that have stuck with me and kept running around in my head.
Acoustics
Since the first class I had in college about acoustics, I have been fascinated by the topic. I would go as far as to say that it has become an obsession in my life, certainly in the practical sense - I am constantly now striving to make my listening experience better & more accurate.
This has led me on a journey to discover a wealth of knowledge that I would have otherwise overlooked. I have rediscovered a passion for numbers and shapes, one lost since my earlier years of school: the ancient history of Pythagoras & Fibonacci; the "Golden Ratio"; how places of worship were designed with this in mind hundreds of years ago, achieving perfect acoustic characteristics without the aid of computer screens and machines doing all the work.
Just writing here makes me want to go and spend the rest of the day reading and watching things about the topic, maybe move my speakers a bit or change the placement of my acoustic treatments to try and get that stereo image a bit more poppin'... God I'm sad.
I really do that, though. Aaaaaand it really does make a difference. You will hear all sorts of things online on both sides: that it is either all you need in your life and you should sell your clothes for it, right through to it being a pointless waste of time that doesn't make a difference. Well, both are clearly bollocks; however, I have in fact ended up selling some (fairly old and fairly useless) property in the chase for a better sounding room. I guess this means I err on the side of, well... bollocks? Could never have won that one anyways...
Seriously though, I ended up building these
8 broadband absorbers, 3 corner traps & 1 slatted Helmholtz absorber/diffuser. Obviously this is something I like doing, so my first consideration for my honours, in a nutshell, is acoustic treatment R&D: looking at how it affects making music, what the main treatment types are just now, and how they could perhaps be improved upon. Something in this field anyways...
Speakers
Before I realised how important acoustic treatment is, I, like most, used to think that speakers were the be-all and end-all of the resulting sound that we hear. Rarely do you find someone who has even so much as thought about the bearing your environment has on the sound you are hearing. Around the time I started higher education in audio, my friends finished a pretty high-end DIY sound system build. Though I had no part in the build itself, this was also the time that I bought my monitor speakers. So these situations combining, at the same time as acoustics becoming a thing in my life, kinda piqued my interest. I would now say that speakers are equally as important as acoustic treatment. They are like sinks and taps - you shouldn't have one without the other. Certainly not if you are serious about sound.
Though you can only see one above, I, like many, have a pair... a big pair (no pun intended). They are definitely one of the loves of my life and I seriously use them all the time; I am addicted to listening to music through them. I would add that their full potential wasn't realised until I got a proper audio interface, however that is a whole other kettle of fish.
The point is, I have been strongly considering going down the route of speaker design. Is it possible to create a "studio accurate" pair of speakers at home? It's a question that keeps coming to mind, though I'm not sure about the academic validity of such a project.
Lights
Bear with me. This post has been composed over the first few weeks, simply because, though I had thought a lot about what I wanted to do, I hadn't really got any closer to making my mind up by the start of play.
Ableton Live is my tool of choice for creating music & sound. It is built around a programming language known as Max/MSP, which can also be accessed through an applet within Ableton, cleverly named "Max for Live", or M4L as it is more affectionately known.
It is something that I have wanted to get a lot more in-depth with; it is viewed as somewhat of a dark art in the world of bedroom producers. I'm sure to a hardcore programmer it is basic as hell, but there you go. I started wondering about ways that I could incorporate all of my wants for this year into one, using this magical piece of software...
Well, the common theme throughout all of what I have said just now is audio reactivity - at least in its analogue sense so far. Both speakers and acoustic treatments are audio reactive. Max for Live opens a world of possibility when it comes to audio reactivity, this time from within the digital domain. Like I said though, I want to BUILD something this year. Then it hit me.
Until I am told it is a bad idea, or that a good research question can't be made of it, this is the plan: real-time, audio-reactive lights, where colour is mapped to the frequency spectrum, and the user can choose between Amplitude, Peak Detection or RMS data to scale the dimming effect. In the box, my aim is to build a Max for Live app that sits on the master bus and splits the incoming audio into 8 bands, representative of 8 audible octaves of sound. Each of the bands (filters) can then be modulated on the way in.
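As a quick sketch of what those 8 octave bands could look like - the 31.25 Hz starting point is my own assumption, not a settled design decision:

```python
def octave_bands(low_hz=31.25, count=8):
    """Edges and centre frequencies for `count` octave-wide bands.

    Each band is one octave wide: its upper edge is double its lower
    edge. Starting at 31.25 Hz, eight octaves span roughly 31 Hz to
    8 kHz, covering most of the musically useful spectrum.
    """
    bands = []
    lo = low_hz
    for _ in range(count):
        hi = lo * 2
        centre = (lo * hi) ** 0.5  # geometric mean of the edges
        bands.append((lo, centre, hi))
        lo = hi
    return bands

for lo, centre, hi in octave_bands():
    print(f"{lo:8.2f} Hz - {hi:8.2f} Hz  (centre {centre:7.1f} Hz)")
```

Each of those (lo, hi) pairs would become one band-pass filter in the M4L patch, with one light colour hanging off each.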
Though the aim will be to keep the centre point of each band-pass stable, two modulation parameters will be useful at the input stage. The Q point can be modulated to give a more specific band reaction, and each band can also be given a reaction threshold - if it's not loud enough, it won't activate anything. The system is much the same throughout as a classic vocoder, however I am looking for other data at the output stage. This data can be scaled and spat out as MIDI data, which in turn can run DMX.
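The reaction threshold idea might look something like this in rough Python - the smoothing coefficients and threshold value are purely illustrative, not tuned numbers:

```python
def gated_envelope(samples, threshold=0.1, attack=0.2, release=0.01):
    """One-pole envelope follower with a reaction threshold.

    Tracks the rectified signal level of one band; any value below
    `threshold` is forced to zero, so a quiet band won't activate
    anything. `attack`/`release` are smoothing coefficients (not
    times) - purely illustrative values.
    """
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        coeff = attack if level > env else release
        env += coeff * (level - env)
        out.append(env if env >= threshold else 0.0)
    return out

# A loud burst followed by near-silence: the gate opens, then shuts.
signal = [0.9] * 50 + [0.01] * 500
env = gated_envelope(signal)
print(env[49] > 0.0, env[-1] == 0.0)  # → True True
```

Running one of these after each band-pass gives exactly the "only react when it's loud enough" behaviour described above.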
In parallel I will run a chain that collects the Amp, Peak & RMS data from the overall mix, compensates for the inevitable delay incurred by the chain above, and hopefully then combines to spit out 11 glorious channels of MIDI data, ready to run the second stage of the project.
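That parallel chain boils down to two simple per-buffer measurements (delay compensation left out here; the 0-127 scaling is my own illustrative choice):

```python
import math

def peak_and_rms(samples):
    """Peak (largest absolute value) and RMS level of one audio buffer."""
    peak = max(abs(x) for x in samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return peak, rms

def to_midi(level):
    """Scale a 0.0-1.0 level to a 0-127 MIDI value."""
    return min(127, int(level * 127))

# A full-scale square wave has peak == RMS == 1.0.
square = [1.0, -1.0] * 256
peak, rms = peak_and_rms(square)
print(to_midi(peak), to_midi(rms))  # → 127 127
```

Peak and RMS diverge on anything less square-shaped than this test signal, which is precisely why offering the user a choice between them for the dimming is worthwhile.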
DMX and MIDI, sitting in a tree, K-I-S-S-I-N-G... Shit joke. Anyways, they do love each other, and there are a lot of programmers and electricians who have built a wealth of great toys to play with. So I am going to buy four reels of RGB LEDs, a power supply and a DMXis MIDI-to-DMX converter in order to build the reactive lights themselves.
The real idea behind this is to hopefully create the feeling of sitting within a metering system. I want to explore the possibilities of calibrating a system like this in such a way that it serves to benefit a music producer's working environment. I have spent a lot of time thinking about this and, though technically challenging in a few ways, I believe it is not un-doable. It is also something I am passionate about, or certainly have become so in the last few weeks. Here are a few rudimentary examples of what I am talking about in video form, however they don't fully get across what I have in my head. They will certainly give you a glimpse of the possibilities...
Colour Organ
Peak Reactive.
Anyways, this has been a long post so I will leave it there for now. To summarise, I am starting my research into the use of RGB LED lights as a full-room metering system for audio production, using Max for Live and outboard MIDI/DMX converters and LEDs that will sit behind my acoustic treatments. A fully audio-reactive room is the goal - fun times ahead!