Creating L-Systems in SuperCollider

L-systems, also known as Lindenmayer systems, are rule-based parallel rewriting systems.  Every L-system has an axiom (a starting string) and a series of rules.  At each iteration, every symbol in the string is replaced according to the rules, growing the L-system; in other words, it works on the ideas of formal grammars.  For example, with the axiom A and the rules A → AB and B → A, successive rewrites give A, AB, ABA, ABAAB, and so on.  L-systems were first introduced by Aristid Lindenmayer to describe growth models in plants.  Since then they have been extended to fractal-like structures, turtle graphics, visual art, and music.

I first discovered the musical aspects of L-systems while working on my master's degree, and I ended up writing my thesis about ways of interpreting the resultant string of characters, as well as using the resulting images as guides for music composition.  When I was researching this, I found it very difficult to produce a complex L-system whose output string I could use directly in a synthesis environment.  Max/MSP has certain capabilities, but it can only hold a maximum number of values in a table or collection, and they all have to be indexed, which is not efficient if the string is 4000 characters long.  SuperCollider, though, can generate the string as well as refer back to it for controlling parameters or musical rules.

The following code shows how I put together an L-system string calculator.  For this code to work, the wslib Quark has to be installed: type Quarks.gui and select wslib.  I have used this L-system string calculator in two sound installations, titled Reflective Tunnel and Reflective Catacombs.  Reflective Tunnel was set in the tunnels at Wesleyan University in Connecticut and split the harmonics of a cello across the different rules of the L-system.  The L-system was then played back in the tunnel with pseudo-random envelopes on all the frequencies, creating layers of sound that phased and reflected off the concrete walls.  In Reflective Catacombs, the space was sonically mapped to obtain its resonant frequencies, which were assigned to rules in an L-system and played back in the space with pseudo-random envelopes.  Both installations created shifting atmospheres and interesting beating patterns.

UPDATE:  The code that was previously posted did not work with the current version of SuperCollider.  I updated it and it seems to work now.  If it doesn't work for you, I would love to hear the error you received so I can update it again.

(
// system to substitute rules
~substitute = { |string, dict|
    var str = "";
    string.do { |c|
        // look the character up in the rules; keep it unchanged if no rule exists
        var val = dict[c.asSymbol] ?? c;
        str = str ++ val;
    };
    str.post;
    ~route = str.asStream;
    str;
};

// fill a dictionary with entries
~myDictionary = Environment.new;
~myDictionary[\A] = "B--CG";
~myDictionary[\B] = "A*F-D";
~myDictionary[\C] = "E/G+F";
~myDictionary[\D] = "G--BC";
~myDictionary[\E] = "G/E--BC";
~myDictionary[\F] = "A-BC/E";
~myDictionary[\G] = "A-DF/G";
)

// axiom: change the letter in "" based on the rule you want to start with
a = "A";

// the number before collect is the number of iterations
~history = 7.collect { a = ~substitute.value(a, ~myDictionary) };

Once this code is run, ~history holds the string from each iteration.  The final string can be assigned to a variable and read character by character in a routine, or simply printed as one long list.

k = ~history.last; // the final, fully expanded string

t = Routine({
    k.do { |c|
        0.35.wait;
        if(c.asString == "F") {
            // assumes a SynthDef named "FM" with a \carrierfreq argument exists
            Synth.new("FM", [\carrierfreq, [34, 100, 673, 589, 10, 2, 7, 45, 234, 45, 4357, 1239, 3240].choose.postln]);
        };
    };
});

t.play;


This routine reads one character of the L-system string every 0.35 seconds and checks whether it is an F.  If it is, it selects a frequency with equal probability from the list and assigns it to the carrier frequency of the FM synth definition.
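The routine assumes a SynthDef named "FM" with a \carrierfreq argument has already been defined, which this post does not include.  Here is a minimal sketch of what such a definition could look like; the modulator ratio, index, and envelope are placeholders of mine, not the ones used in the installations.

(
SynthDef("FM", { |carrierfreq = 440|
    // simple two-operator FM: one modulator driving one carrier
    var mod = SinOsc.ar(carrierfreq * 0.5, 0, carrierfreq * 0.3);
    var car = SinOsc.ar(carrierfreq + mod, 0, 0.1);
    // short percussive envelope that frees the synth when done
    Out.ar(0, car.dup * EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2));
}).add;
)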


If you are interested in my L-system interpretations and mappings to compositions, my thesis can be found at https://www.academia.edu/24235602/THE_MUSICAL_MAPPINGS_OF_L-SYSTEMS


Scaling Numbers for the Axoloti

The Axoloti does not have a built-in scale object the way Max/MSP or Pd does; however, that doesn't mean you are stuck with 0 to 64 or -64 to 64.  Good old-fashioned math goes a long way when using the Axoloti to control different parameters.

Unipolar dials go from 0 to 64, which can be used unaltered for a variety of things, but to expand or scale the range, dividing by 64 and then multiplying by the highest number you want will scale the dial to the range 0 to N.  From there, a convert i object can convert the fractional number (frac32) to an integer for different kinds of control.

The idiosyncrasy with this method is that the larger the range, the bigger the step.  If the range is 0 to 1000, the step is 7 to 8; with a smaller range, let's say 0 to 100, the step size is 1.  There is also a point at which the numbers start to turn negative; it seems to cap out around 999.  I do not know for certain why this happens (my best guess is the fixed-point frac32 value overflowing and wrapping around), but if I figure it out I will update this post.

Bipolar dials range from -64 to 64, which can increase the range of numbers or make the step size a little smaller.  To work with these dials, adding 64 takes the range to 0 to 128; then dividing by 128 and multiplying by the maximum number you want gives an appropriate output range.
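Both scalings are plain arithmetic.  Here is a minimal sketch of the two formulas in SuperCollider, just to check the numbers; on the Axoloti itself this is patched with math objects, and the function names here are mine.

// unipolar dial: 0..64 -> 0..max
~scaleUnipolar = { |dial, max| dial / 64 * max };

// bipolar dial: -64..64 -> 0..max (shift up by 64, then scale)
~scaleBipolar = { |dial, max| (dial + 64) / 128 * max };

~scaleUnipolar.(32, 1000); // -> 500.0
~scaleBipolar.(0, 100);    // -> 50.0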

In the Axoloti environment, many inputs get added to the dial that is built into the object, which means the number the object actually receives can be larger than you think.  To compensate, I leave the object's built-in dial at 0 so that most of the control comes from the incoming signal, and then use the built-in dial for fine-tuning different parameters.



Axoloti

I recently bought an Axoloti from axoloti.com.  It is a small (3×7 inch) piece of hardware that renders audio from a dedicated patching program similar to Pd.  It is cheap (65 euros), and you can build an enclosure for it.  The environment differs from Pd or Max/MSP in that everything is ranged from 0 to 64 or -64 to 64.  Inside the environment is a code editor that lets the user edit an object and save an instance of it; objects can also be created in a program like TextWrangler.  Most algorithms work the same way in Axoloti as they do in Max/MSP (ring modulation is still two signals being multiplied together).  That said, certain objects don't exist, such as clip~, pong~, wrap~, scale, and others.  This makes using the Axoloti as much about composing and making sound as about experimenting with ways of doing things.

[Photo of the Axoloti board]

On the left are a stereo input (no phantom power) and a stereo output.  Then there is a headphone jack, a micro-USB port for power and data transfer, an SD card slot, a USB host port, and MIDI in and out.  The SD card is used to load patches onto the Axoloti and run it in standalone mode.

[Screenshot: ring modulation patch in the Axoloti editor]

Here is basic ring modulation of two sine tone signals.  The math/*c object is a gain control.  The red lines indicate audio rate, while the blue ones carry floats; green carries integers and yellow booleans.  The program gives an immediate understanding of the type of signal being used to control each parameter.
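For comparison, the same algorithm is a one-liner in SuperCollider; ring modulation is nothing more than two signals multiplied together (the frequencies here are arbitrary).

// ring modulation: multiply two sine tones, scaled down and sent to both channels
{ (SinOsc.ar(440) * SinOsc.ar(110) * 0.2) ! 2 }.play;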

Limitations

The sample rate is fixed at 48 kHz.  There is only 8 MB of SDRAM, so doing anything spectral or FFT-based is difficult.  The buffer size is also fixed at 16 samples.

While 8 MB of RAM is a real limitation, the sound that can be produced has a certain analog feel, and the overall capabilities allow for complex sounds and sequences.

More Axoloti patches to come…

Allpass Filter from 2 Comb Filters

An allpass filter passes all frequencies with equal gain but changes their phase relationships.  That might not sound like much; however, allpass filters are used in reverbs, vocoders, and different kinds of delays.  They are also very easy to make in Max/MSP using gen~.

This little post shows how to make an allpass filter in which the delay time is not fixed, meaning any number of samples of delay can be used.  This lets the user get several types of effects from one allpass filter: besides serving as a traditional allpass, it can create echo with some feedback, reverb, and echo where each echo has reverb.  This is all controlled by the delay time.

[Screenshot: allpass filter patch in gen~]

In the above patch, the input is added to a delayed version of itself.  The delay @feedback 1 object delays the signal by a certain number of samples, and the @feedback 1 argument allows feedback to occur.  Coming out of the delay, the signal is multiplied by a gain of -1 so that the feedforward and feedback gains are inverses of one another (a crossfade between clean and effect).  The signal plus feedback is multiplied by the gain and added back into the delay.

The description might be a little confusing, but to sum it up: a delayed signal is added to itself (feedback) and then added to the feedforward signal.  The flow chart is as follows.


[Flow chart of the allpass signal path]

It is important to note that the feedforward tap is taken after the input is multiplied by the feedback gain, and the feedback is taken before it is added to the feedforward.  This small patch can be duplicated in series or parallel to create interesting reverb and echo.  Also, if the feedforward path is left out, it turns into a feedback comb filter (a one-pole filter in the single-sample-delay case).
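To make the signal flow concrete, here is a minimal sketch of the same difference equation, y[n] = -g * x[n] + x[n - M] + g * y[n - M], written as a plain SuperCollider function over an array of samples.  The patch above does this per sample in gen~; the names and default values here are mine.

(
~allpass = { |input, m = 100, g = 0.5|
    var out = Array.fill(input.size, 0);
    input.do { |x, n|
        // feedforward tap (delayed input) and feedback tap (delayed output)
        var xDel = if(n >= m) { input[n - m] } { 0 };
        var yDel = if(n >= m) { out[n - m] } { 0 };
        // feedforward gain (-g) and feedback gain (g) are inverses of one another
        out[n] = (g.neg * x) + xDel + (g * yDel);
    };
    out;
};
)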


Calculating Spectral Flux

Spectral flux is a measurement of how much the spectrum changes from one frame to the next.  Flux can be developed further into transient detection or onset detection.  To calculate the flux of a spectrum, the current frame and the previous frame are compared (subtracted), and then, in a program such as Max/MSP, the square root is taken to approximate a logarithmic scale.

The following short tutorial shows how to calculate spectral flux using objects native to Max/MSP.  To start, we need to make a subpatch to use inside the pfft~ object.  In this subpatch, fftin~ 1 will be used to do the analysis and obtain the real and imaginary numbers.  These are connected to cartopol~ to be converted to phase and amplitude information.  fftin~ into cartopol~ is used constantly for spectral processing in Max/MSP, so remembering these two objects is useful and important.

The vectral~ object is used at this point to track the amplitude information frame by frame.  Its output is then subtracted from the current amplitude values and run through sqrt~ to make everything “logarithmic”.

The last part of the process is getting the analysis information out of the subpatch as a number.  If fftout~ 1 is used, the program will assume you are resynthesizing a sound.  Instead, fftout~ 1 nofft is used to create an outlet for pfft~ and output the analysis value; the nofft argument skips the IFFT and acts as a regular outlet.
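The math inside the subpatch reduces to a per-bin subtraction and square root.  Here is a minimal language-side sketch of it in SuperCollider with two made-up amplitude frames; the absolute value (to keep the square root real) and the final sum into a single number are my additions.

// flux = sum over all bins of sqrt(|current - previous|)
~flux = { |current, previous|
    (current - previous).abs.sqrt.sum;
};

~flux.([0.5, 0.9, 0.1, 0.0], [0.2, 0.4, 0.1, 0.3]); // larger value = more spectral change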

[Screenshot: spectral flux subpatch inside pfft~]


Real-Time Granular Synthesis

Granular synthesis is a tool that takes a sound and chops it into little pieces.  Each piece has its own window and can potentially have its own panning, duration, silence, and transposition.  Granular synthesis in music is analogous to pointillism in painting; it can also be seen as a cloud of small sound fragments, akin to throwing a handful of sand into the air where each grain of sand is a sound.  There are many ways to design a granular synthesizer.  In this short post I will demonstrate one simple way to achieve granular synthesis in real time.

In this patch, the incoming sound will be windowed, with the length of the window set by the user or randomly chosen.  To start, we make a buffer called window.  A message box with the statement sizeinsamps 1024, fill 1, apply hamming will create a Hamming envelope in the buffer; any envelope can work, and the user can substitute their own.  We will be using index~ to read back this buffer.  Connected to index~ is a line~ object with a message of 0, 1024 $1.  The 1024 is the number of samples and can be user-defined; the $1 is the ramp time and can be set in a variety of ways.

I am using tapin~ and tapout~ to add a little bit of delay to the signal, but this step is not necessary.  To trigger a grain I am using a metro connected to a random object.  Every time the random object picks a number, that number sets both the metro time in milliseconds and the window time.  The random can have any range, or a controllable one, but a + 1 is needed to prevent 0 from occurring.

In the example below I am also panning using two scale objects and sqrt.  The metro additionally triggers a random value that controls where in the stereo field each grain is played, so every grain occupies a different position in space.

[Screenshot: real-time granular synthesis patch]

In this example, feedback can be added after the *~ that index~ is connected to.  This creates a more complex sound by feeding granulated iterations back into the signal being granulated.
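The same idea can be sketched quickly in SuperCollider with the GrainIn UGen, which windows live input into grains.  As in the patch above, each grain here gets a randomly chosen duration and stereo position; all the ranges are arbitrary placeholders.

(
{
    var trig = Dust.kr(25);               // randomly timed grain triggers
    var dur  = TRand.kr(0.01, 0.1, trig); // random window length per grain
    var pan  = TRand.kr(-1.0, 1.0, trig); // random stereo position per grain
    GrainIn.ar(2, trig, dur, SoundIn.ar(0), pan);
}.play;
)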

Looking at Sound with Jitter

When I am designing sound I use a sonogram and an oscilloscope a lot.  The sonogram gives me the spectrum of the sound, while the oscilloscope gives me the waveform.  I mainly use the oscilloscope to look for DC offset or, in the case of noisy sounds, periodicity.  Max/MSP has a built-in scope (scope~), but the user cannot pause or freeze it, so I made my own.

To do this I used jit.catch~ to grab audio and turn it into a matrix.  If you don't know what that is, think of it as an xy plane with a certain number of planes of data.  The image below is how Cycling '74 illustrates a Jitter matrix: as you can see, it has ARGB planes with dimensions of 8×6, and each cell holds a value between 0 and 255.

[Image: Cycling '74 diagram of a Jitter matrix]

Once the sound is converted into a matrix, we can use jit.graph to graph the information from the matrices.  jit.graph has four different modes for visualizing the waveform in different ways, and by adding @frgb you can change the color of the waveform.  The last argument we have to add to the jit.graph object is @clearit 1, which refreshes the screen every frame so the wave does not draw over itself.  Lastly, a qmetro is used to trigger each new frame to appear.  This slows down the frames or freezes one, and it is the entire reason I made my own scope: it gives me the power to pause and look at an instant of the sound.

[Screenshot: the Jitter oscilloscope patch]

Here is the Max patch that I made.  I am using basic ring modulation and running it through jit.catch~ in mode 1 (the patch says @mode 0, but @mode 1 is the default).  qmetro 2 sends a bang every 2 milliseconds, making jit.catch~ release a frame every 2 milliseconds.

*If you like these little tutorials, or if there is something you want to see, please let me know.