Loading Multiple Files Into Buffers

The other day I was helping a friend with some SuperCollider code involving buffers. Normally in SuperCollider you can load a buffer with code like the following:

b = Buffer.read(s, "path/to/soundfile.wav");

This simply loads a sound file into a buffer on the server and assigns it to b. Then we can use PlayBuf to play it back:

SynthDef("buffer", { arg bufnum, rate = 1, trigger = 1, startPos = 0, loop = 0;
    var a;
    // PlayBuf's channel count must be a fixed integer at SynthDef time,
    // so a mono (1) buffer is assumed here
    a = PlayBuf.ar(1, bufnum, rate, trigger, startPos, loop, doneAction: 2);
    Out.ar(0, a);
}).play(s, [\bufnum, b]);

In the previous code, bufnum is the buffer number assigned to the file. At the end of the SynthDef we pass \bufnum, b to assign the buffer's number to the argument bufnum. doneAction is an argument that controls when the synth gets freed from the server. Look at the doneAction help file to read about the options; there are 14 of them.


If only one sound file is being used, loading the buffer is very easy, but if we have multiple files this method is not going to work. To load a whole folder of sounds, the Buffer object is not even used. Instead SoundFile is used, which is a little counterintuitive. The following code shows how to load many sounds from a folder into individual buffers:

b = SoundFile.collect("filepath/*", s);

This in turn will load every sound in the folder into a separate buffer and give each one a number. It is important to add the * at the end of the path, otherwise SuperCollider will load a blank array. Then we can say

b.inspect;

to open a window with each buffer's information.

Using the inspect window we can then get all of the buffer numbers to be used in the PlayBuf code:


b = SoundFile.collect("filepath/*", s);

SynthDef("buffer", { arg bufnum, rate = 1;
    var a;
    // mono (1) buffers assumed; PlayBuf's channel count must be a fixed integer
    a = PlayBuf.ar(1, bufnum, rate, loop: 1, doneAction: 2);
    Out.ar(0, a);
}).add;

a = Synth("buffer", [\bufnum, 1, \rate, 1.3]);



A Fun Little Supercollider Patch

The other day I was playing around with sine tones and arrays in SuperCollider and came up with this little program that creates partials from a fundamental frequency. Nothing special really, but when the volume of the harmonics is controlled by a max expression and a sine tone, things get interesting. Here is what the code looks like:

SynthDef("sinetone", { arg fund = 100, osc = 100;
    var sound;
    sound = Mix.ar(
        Array.fill(12, { arg count;
            var harm;
            harm = (count + 1) * fund;
            SinOsc.ar(harm, mul: max(0, SinOsc.kr(osc)))
        })
    ) / 12; // scale the 12 summed partials down to avoid clipping
    Out.ar(0, sound);
}).add;

a = Synth("sinetone");

The code gives you control of the fundamental frequency with the argument fund and the frequency of the volume modulation with the argument osc. To make the sound more interesting, SinOsc.ar can be replaced with any kind of generator. Sine tones give a more "harmonious" sound, while UGens like LFNoise0.ar give a more helicopter-like sound.




Creating L-Systems in Supercollider

L-systems, also known as Lindenmayer systems, are rule-based parallel rewriting systems. Every L-system has an axiom (starting point) and a series of rules. The rules get substituted into one another to grow the L-system; in other words, it works on the ideas of formal grammars. L-systems were first introduced by Aristid Lindenmayer to describe growth models in plants. Since then they have evolved further into fractal-like approaches, turtle graphics, art and music.
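As a concrete illustration of axiom-plus-rules rewriting, here is Lindenmayer's classic two-rule "algae" system sketched in Python. The rules A -> AB, B -> A are the textbook example, not the rules used later in this post:

```python
# Classic "algae" L-system: axiom "A", rules A -> AB, B -> A.
rules = {"A": "AB", "B": "A"}

def rewrite(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        # every character is rewritten in parallel on each pass;
        # characters with no matching rule are copied through unchanged
        s = "".join(rules.get(c, c) for c in s)
    return s

# The string grows: A, AB, ABA, ABAAB, ABAABABA, ...
print(rewrite("A", rules, 4))  # -> ABAABABA
```

Each pass rewrites every character at once, which is what makes L-systems "parallel" rewriting systems rather than sequential ones.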

I first discovered the musical aspects of L-systems when I was getting my master's degree, and I ended up writing my thesis about ways of interpreting the resultant string of characters as well as using the images as guides for music composition. When I was researching this, I found it very difficult to produce a complex L-system whose output string I could use directly in a synthesis environment. Max/MSP has certain capabilities, but it can only hold a maximum number of values in a table or collection, and they all have to be indexed, which is not efficient if the string is 4000 characters long. SuperCollider, though, has the ability to generate the string as well as refer back to it for controlling parameters or musical rules.

The following code shows how I put together an L-system string calculator. For this code to work the wslib Quark has to be installed; to do this, evaluate Quarks.gui and select wslib. I have also used this L-system string calculator in two sound installations titled Reflective Tunnel and Reflective Catacombs. Reflective Tunnel was set in the tunnels at Wesleyan University in Connecticut and used the harmonics of a cello split across different rules in the L-system. The L-system was then played back in the tunnel with pseudo-random envelopes on all the frequencies to create layers of sound that phased and reflected off the concrete walls. In Reflective Catacombs the space was sonically mapped to obtain its resonant frequencies, which were assigned to rules in an L-system with pseudo-random envelopes for playback in the space. Both installations created shifting atmospheres and interesting beating patterns.
// boot the server if needed
s = Server.default.boot;

// function that substitutes rules into a string
~substitute = { |string, dict|
    var str = "";
    string.do { |c|
        var val = dict[c.asSymbol] ? c;
        str = str ++ val;
    };
    str
};

// fill a dictionary with rules
// (the keys are uppercase so the characters the rules produce
// can themselves be looked up on the next iteration)
~myDictionary = Environment.new;
~myDictionary[\A] = "B-CG";
~myDictionary[\B] = "A*F-D";
~myDictionary[\C] = "E/G+F";
~myDictionary[\D] = "G-BC";
~myDictionary[\E] = "G/E-BC";
~myDictionary[\F] = "A-BC/E";
~myDictionary[\G] = "A-DF/G";

a = "A"; // axiom

// change a to whatever the starting rule is.
// The number before collect is the number of iterations.
~history = 5.collect { a = ~substitute.value(a, ~myDictionary).postln };


Once this code is run, each iteration's string gets collected into ~history and the fully expanded string remains in a; either can be assigned to a variable and put into a routine, or printed as a long list.

k = ~history.last; // the fully expanded string from the final iteration

t = Routine({
    k.do { |c|
        // assumes a SynthDef("FM") with a \carrierfreq argument is already defined
        if(c.asString == "F", {
            Synth.new("FM", [\carrierfreq, [34, 100, 673, 589, 10, 2, 7, 45, 234, 45, 4357, 1239, 3240].choose.postln]);
        });
        0.35.wait;
    };
});

t.play;


This routine reads one character of the L-system every 0.35 seconds and checks whether it is an F. If it is, it selects a frequency with equal probability from the list and assigns it to the carrier frequency of the FM synth definition.


If you are interested in my L-system interpretations and mappings to compositions, my thesis can be found at https://www.academia.edu/24235602/THE_MUSICAL_MAPPINGS_OF_L-SYSTEMS



Scaling Numbers for the Axoloti

The Axoloti does not have a built-in scale object the way Max/MSP or Pd does; however, that doesn't mean you are stuck with 0 to 64 or -64 to 64. Good old-fashioned math goes a long way when using the Axoloti and trying to control different parameters.

Unipolar dials go from 0 to 64, which can be used unaltered for a variety of things. To expand or scale the range, dividing by 64 and then multiplying by the highest number you want will scale the dial to the range 0 to N. From here we can use a convert i object to convert from a fractional number (frac32) to an integer for different kinds of control.

The idiosyncrasy with this method is that the larger the range, the bigger the step. This means that if the range is 0 to 1000, the step is 7-8; when the range is smaller, let's say 0 to 100, the step size is 1. There is a point at which the numbers start to turn negative. I do not know why this happens; it seems to cap out at 999. If I figure it out, I will update this post.

For bipolar dials the range is -64 to 64. This can increase the range of numbers or make the step size a little smaller. To work with these dials, adding 64 takes the range to 0 to 128. Then, by dividing by 128 and multiplying by the maximum number you want, you get an appropriate output range.
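The two scaling recipes above are just arithmetic, so they can be sketched in a few lines of Python; the function names are my own, not Axoloti objects:

```python
def scale_unipolar(dial, n_max):
    """Map a unipolar dial value (0..64) onto the range 0..n_max."""
    return dial / 64 * n_max

def scale_bipolar(dial, n_max):
    """Map a bipolar dial value (-64..64) onto the range 0..n_max."""
    return (dial + 64) / 128 * n_max
```

For example, a unipolar dial at 32 scaled to a ceiling of 100 lands at 50, and a bipolar dial at its center (0) does the same.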

In the Axoloti environment many of the inputs get added to the dial built into the object. This means the number the object receives is potentially higher than you think. To compensate, I leave the object's built-in dial at 0 so that most of the control comes from the external dial, and then use the object's own dial for fine-tuning different parameters.




I recently bought an Axoloti from axoloti.com. It is a small (3×7 inch) piece of hardware that renders audio from a dedicated patching program similar to Pd. The product is cheap (65 euros) and you can build an enclosure for it. The environment is different from Pd or Max/MSP because everything is ranged from 0 to 64 or -64 to 64. Inside the environment is a code editor that allows the user to edit an object and save an instance of it. Objects can also be created in a program like TextWrangler. Most algorithms work the same way in Axoloti as they do in Max/MSP (ring modulation is still two signals being multiplied together). That being said, certain objects don't exist, such as clip~, pong~, wrap~, scale and others. This makes using the Axoloti as much about composing and making sound as about experimenting with ways of doing things.


On the left is a stereo input (no phantom power) and a stereo output. Then there is a headphone jack, micro USB to power the board and transfer data, an SD card slot, USB host, and MIDI in and out. The SD card is used to load patches onto the Axoloti and run it in standalone mode.

[Screenshot: ring modulation patch in the Axoloti editor]

Here is basic ring modulation of two sine tone signals. The math/*c object is a gain control. The red lines indicate audio rate, the blue float values, the green integer numbers, and the yellow booleans. The program allows for immediate understanding of the type of signal that is being used to control different parameters.
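Ring modulation itself is nothing more than multiplying the two signals sample by sample. A minimal sketch in plain Python (not Axoloti code), assuming the board's 48 kHz sample rate:

```python
import math

def ring_mod(freq_a, freq_b, sr=48000, n_samples=48000):
    """Multiply two sine waves sample by sample (ring modulation)."""
    out = []
    for i in range(n_samples):
        t = i / sr
        out.append(math.sin(2 * math.pi * freq_a * t)
                   * math.sin(2 * math.pi * freq_b * t))
    return out
```

The product of the two sines contains the sum and difference frequencies (freq_a + freq_b and freq_a - freq_b) rather than the originals, which is what gives ring modulation its metallic character.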


The sample rate is fixed at 48 kHz. There is only 8 MB of SDRAM, so doing anything spectral or FFT-based is difficult. The buffer size is also fixed, at 16 samples.

While 8 MB of RAM is a big limitation, the sound that can be produced has a certain analog feel, and the overall capabilities can create complex sounds and sequences.

More Axoloti patches to come…

Allpass Filter from 2 Comb Filters

An allpass filter passes all frequencies with equal gain but changes their phase relationships. This might not sound like much; however, allpass filters are used in reverbs, vocoders and different kinds of delays. They are also very easy to make in Max/MSP using Gen~.

This little post shows how to make an allpass filter in which the delay time is not fixed, meaning any number of samples of delay can be used. This allows the user to get various types of effects from one allpass filter. Besides serving as a traditional allpass, it can create echo with some feedback, as well as reverb and echo where each echo has reverb. This is all controlled by the delay time.

[Screenshot: the allpass filter built in Gen~]

In the above patch an input is added to a delayed version of itself. The delay @feedback 1 object takes the signal and delays it by a certain number of samples; the @feedback 1 argument allows feedback to occur. Out of the delay, the signal gets multiplied by a gain of -1, which makes the feedback gain and feedforward gain inverses of one another (a crossfade between clean and effect). The signal plus the feedback gets multiplied by the gain and added back into the delay.

The description might be a little confusing, but to sum it up: a delayed signal is fed back into itself and added to the feedforward signal. The flow chart is as follows:


[Screenshot: allpass filter flow chart]

It is important to note that the feedforward tap is taken after the input is multiplied by the feedback gain, and the feedback is taken before it is added to the feedforward. This small patch can also be duplicated in series or parallel to create interesting reverb and echo. Also, if the feedforward is left out it turns into a plain feedback comb filter.
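The same structure can be written as the Schroeder allpass difference equation, y[n] = -g*x[n] + x[n-D] + g*y[n-D], where D is the delay in samples and g the gain. A minimal Python sketch (my own helper, not the Gen~ patch itself):

```python
def allpass(x, delay, g):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0  # feedforward tap
        yd = y[n - delay] if n >= delay else 0.0  # feedback tap
        y[n] = -g * x[n] + xd + g * yd
    return y
```

Feeding in an impulse shows the behavior: the dry signal comes out scaled by -g, followed by echoes at multiples of the delay that decay by g each pass, while the magnitude response stays flat at unity gain.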


Calculating Spectral Flux

Spectral flux is a measurement of how much the spectrum changes from one frame to the next. Flux can be developed further into transient detection or onset detection. To calculate the flux of a spectrum, the current frame and the previous frame are compared (subtracted), and then, in a program such as Max/MSP, the square root is taken to approximate a logarithmic scale.
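Outside of Max, the calculation is simple enough to sketch in a few lines of Python. Note that keeping only the positive differences (half-wave rectification) is a common onset-detection choice I have added here, not necessarily something the patch below does:

```python
def spectral_flux(prev_frame, curr_frame):
    """One flux value from two magnitude spectra (lists of bin amplitudes).

    Subtract the previous frame from the current one, keep only the
    increases, sum across bins, and take the square root to
    compress the scale.
    """
    diff = sum(max(c - p, 0.0) for p, c in zip(prev_frame, curr_frame))
    return diff ** 0.5
```

Calling this once per FFT frame gives a single number per frame that spikes whenever new spectral energy appears.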

The following short tutorial shows how to calculate spectral flux using objects native to Max/MSP. To start, we need to make a subpatch to use within the pfft~ object. In this subpatch, fftin~ 1 will be used to do the analysis and obtain the real and imaginary numbers. These numbers are connected to cartopol~ to be converted to phase and amplitude information. fftin~ into cartopol~ is used a lot for spectral processing in Max/MSP, so remembering these two objects is useful and important.

The vectral~ object is used at this point to hold the amplitude information from the previous frame. Its output is then subtracted from the current amplitude values and passed through sqrt~ to make everything "logarithmic".

The last part of the process is getting the analysis information out of the subpatch as a number. If fftout~ 1 is used, the program will think you are resynthesizing a sound; instead, fftout~ 1 nofft is used to create an outlet for pfft~ and output the analysis number. The nofft flag skips the IFFT resynthesis and acts as a regular output.

[Screenshot: the spectral flux subpatch in Max/MSP]