TidalCycles

I am always looking for new ways of producing sound and music. As part of my recent search I stumbled upon a fun and powerful program called TidalCycles. It is a text-based live coding environment that runs alongside a Supercollider quark called SuperDirt. This means that the sounds and effects are processed with scsynth, but the programming is done elsewhere. TidalCycles uses Haskell as its main language; however, it is very easy to pick up the basics and get started.

TidalCycles works via loops: one cycle is one loop, so at 1 cycle per second you get one loop per second. The loop is always the same duration, so as more sounds get put into the sequence they happen faster. Here is a little code that shows the basics of making a sound:

d1 $ sound "bd"

This code plays a bass drum sound once per cycle on d1 (Dirt 1). In TidalCycles the d stands for Dirt, but it is really like saying play a sound on voice 1. There are 9 possible voices.

To double the beat the code would change to

d1 $ sound "bd*2"

and says play the bass drum sample twice per cycle.

d1 $ sound "bd/2"

will play the bass drum sound once every 2 cycles, or half as often. These are all very easy little one-liners. The following example shows something a little more complex.

d1 $ every (irand 4) (fast 10) $ every (irand 10) (fast (irand 2)) $ sound
"[808bd*2 bd*9 [808bd*10 bd*17 ]] [808bd*5 [sn sn*6 [bd*3 bd*6] sn*134 ] sn cp*9]"

To break down this example:

Every random number of cycles the rhythm is triggered to play 10 times faster; in this case it will randomly select between 1 and 4 cycles. Likewise, every random number of cycles up to 10, the rhythm will be played either at normal speed or twice as fast.

For this sequence there are 2 steps. The first one is [808bd*2 bd*9 [808bd*10 bd*17 ]] while the second is [808bd*5 [sn sn*6 [bd*3 bd*6] sn*134 ] sn cp*9].

The first step can be broken down into 3 smaller steps, and the third of those has 2 smaller steps because of the []. If we disregard the *n part of the sounds and turn it into [808bd bd [808bd bd]], the rhythm is kind of like a triplet with [8th 8th [16th 16th]].

In the second main step, "[808bd*5 [sn sn*6 [bd*3 bd*6] sn*134 ] sn cp*9]", disregarding the *n again, there are 4 steps, with the second step having a nested rhythm. In terms of rhythmic notation this collection would be kind of like 16th [32nd 32nd [64th 64th] 32nd] 16th 16th.

When we add the *n after the sample, the rhythm technically remains the same in terms of sound groupings; however, because of the fast repetition of the samples, the timbre changes. On top of that, the randomized speed quickly changes the overall speed of the cycle, adding more unpredictability.

Note: While I tried to write out the rhythmic durations, the way that TidalCycles makes all the samples fit into one cycle makes the rhythm a little wobbly when you try to notate it. The main point is that TidalCycles scales the sounds faster and slower to make everything fit. As someone who likes finding odd ways of generating rhythm, TidalCycles' approach to rhythm opens up some possibilities that I had not thought about before.

If you want me to write about something involving TidalCycles, please email me and I will try my best. Also, if you enjoy reading my posts and my research, or looking at and listening to my music and photography, please consider donating to my GoFundMe on my main bio page.

Loading Multiple Files Into Buffers

The other day I was helping a friend with some Supercollider code involving buffers. Normally in Supercollider you can load a buffer with code that looks like the following:

b = Buffer.read(s, "path/to/soundfile.wav");

This simply loads the sound file into a buffer on the server and assigns it to b. Then we can use PlayBuf to play it back:

SynthDef("buffer", { arg bufnum, rate = 1, trigger = 1, startPos = 0, loop = 0;
    var sig;
    // PlayBuf needs a fixed number of channels (1 = mono here)
    sig = PlayBuf.ar(1, bufnum, rate, trigger, startPos, loop, doneAction: 2);
    Out.ar(0, sig);
}).send(s);

In the previous code bufnum is the buffer number assigned to the file. The buffer (or its number) is passed to the bufnum argument when the synth is created. doneAction is an argument that controls when the synth gets freed from the server. Look at the doneAction help file to read about the different values; there are quite a few of them.
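As a quick usage sketch (assuming b still holds the buffer loaded with Buffer.read above):

x = Synth("buffer", [\bufnum, b]); // a Buffer can be passed directly; it is converted to its buffer number

x.free; // free it by hand if needed; doneAction: 2 also frees it when playback ends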

If only 1 sound file is being used, loading the buffer is very easy. If we have multiple files, this method does not scale well. To load a folder full of sounds the Buffer object is not even used. Instead SoundFile is used, which is a little counterintuitive. The following code shows how to load many sounds from a folder into individual buffers.

b = SoundFile.collect("filepath/*", s)

This in turn will load every sound in the folder into a separate buffer and give it a number value. It is important to add the * at the end of the path, otherwise Supercollider will load an empty array. Then we can say

b.inspect;

to open a window with each buffer's information.

Using the inspect window we can then get all of the buffer numbers to be used in the PlayBuf code.

s.boot;

b = SoundFile.collect("filepath/*", s);

SynthDef("buffer", { arg bufnum, rate = 1;
    var sig;
    // looped mono playback; PlayBuf needs a fixed number of channels (1 = mono here)
    sig = PlayBuf.ar(1, bufnum, rate, loop: 1, doneAction: 2);
    Out.ar(0, sig);
}).send(s);

a = Synth("buffer", [\bufnum, 1, \rate, 1.3]);

a.free;
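Assuming the collected b comes back as an array of Buffer objects (which is what the inspect window suggests), a little routine can step through and audition each one. This is just a sketch:

(
Routine({
    b.do { | buf |
        var syn = Synth("buffer", [\bufnum, buf.bufnum]);
        buf.duration.wait; // let the sample play through once
        syn.free;          // free it by hand since the SynthDef loops
    };
}).play;
)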

A Fun Little Supercollider Patch

The other day I was playing around with sine tones and arrays in Supercollider and came up with this little program that creates partials from a fundamental frequency. Nothing special really, but when the volume of the harmonics is controlled by a max statement and a sine tone, things get interesting. Here is what the code looks like:

SynthDef("sinetone", { arg fund = 100, osc = 100;
    var sound;
    sound = Mix.ar(
        Array.fill(12, {
            arg count;
            var harm;
            harm = (count + 1) * fund; // partial frequencies: fund, 2*fund, 3*fund, ...
            // max clips the control sine at 0 so each partial fades in and out;
            // the [0, 0] expands everything to two identical channels
            SinOsc.ar(harm, mul: max([0, 0], SinOsc.kr(osc)))
        })
    ) * 0.7;
    Out.ar(0, sound);
}).send(s)

a = Synth("sinetone")

The code gives you control of the fundamental frequency with the argument fund and the rate of the volume-controlling oscillator with the argument osc. To make the sound more interesting, SinOsc.ar can be replaced with any kind of generator. Sine tones give a more "harmonious" sound, while UGens like LFNoise0.ar give a more helicopter-like sound.
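Once the synth is running, both arguments can be changed on the fly. A quick example:

a.set(\fund, 60);  // drop the fundamental to 60 Hz
a.set(\osc, 0.25); // slow the amplitude oscillator to one swell every 4 seconds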

Creating L-Systems in Supercollider

L-systems, also known as Lindenmayer systems, are rule-based parallel rewriting systems. Every L-system has an axiom (a starting point) and a series of rules. The rules get substituted into one another to grow the L-system; in other words, it works with the ideas of formal grammars. For example, with the axiom A and the rules A -> AB and B -> A, the first few generations are A, AB, ABA, ABAAB. L-systems were first introduced by Aristid Lindenmayer to describe growth models in plants. Since then they have evolved further into fractal-like approaches, turtle graphics, art and music.

I first discovered the musical aspects of L-systems while getting my masters degree and ended up writing my thesis about ways of interpreting the resultant string of characters, as well as using the image as a guide for music composition. When I was researching this, I found it very difficult to produce a complex L-system whose output string I could use directly in a synthesis environment. Max/MSP has certain capabilities, but it can only hold a maximum number of values in a table or collection, and they all have to be indexed, which is not efficient if the string is 4000 characters long. Supercollider, though, has the ability to generate the string as well as refer back to it for controlling parameters or musical rules.

The following code shows how I put together an L-system string calculator. For this code to work the wslib Quark has to be installed; to do this, type Quarks.gui and select wslib. I have also used this L-system string calculator in 2 sound installations titled Reflective Tunnel and Reflective Catacombs. Reflective Tunnel was set in the tunnels at Wesleyan University in Connecticut and used the harmonics of a cello split across different rules in the L-system. The L-system was then played back in the tunnel with pseudo-random envelopes on all the frequencies to create layers of sound that phased and reflected off the concrete walls. In Reflective Catacombs the space was sonically mapped to obtain its resonant frequencies, which were assigned to rules in an L-system and played back in the space with pseudo-random envelopes. Both installations created shifting atmospheres and interesting beating patterns.
// boot server if needed
Server.default = s = Server.internal.boot
(

// system to substitute rules
~substitute = { |string, dict|
var str = "";
string.do { | c |
var val = dict[c.asSymbol] ? c;
str = str ++ val;
};
str;
};

// fill a dictionary with entries
~myDictionary = Environment.new;
// keys are uppercase so they match the letters that appear in the rule strings
~myDictionary[\A] = "B - CG";
~myDictionary[\B] = "A*F-D";
~myDictionary[\C] = "E/G + F";
~myDictionary[\D] = "G - BC";
~myDictionary[\E] = "G/E - BC";
~myDictionary[\F] = "A -BC/E";
~myDictionary[\G] = "A -DF/G";
)

a = "A" // axiom

//change a to whatever the starting rule is. The number before collect is the number of iterations

~history = 5.collect { a = ~substitute.value(a, ~myDictionary).postln };

Once this code is run, the string from each iteration is collected into ~history, and the final string can be assigned to a variable and put into a routine or printed as a long list.

k = ~history.last; // the final, longest string

t = Routine({
    k.do { | c |
        if(c.asString == "F", {
            Synth.new("FM", [\carrierfreq, [34, 100, 673, 589, 10, 2, 7, 45, 234, 45, 4357, 1239, 3240].choose.postln]);
        });
        0.35.wait;
    };
});

t.play;

This routine reads one character of the L-system string every 0.35 seconds and looks to see if it is an F. If it is, it selects a frequency with equal probability from the list and assigns it to the carrier frequency of the FM synth definition.
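The FM SynthDef itself is not shown in this post. Purely as a hypothetical sketch, an FM synth with a carrierfreq argument that the routine above could trigger might look like this:

SynthDef("FM", { arg carrierfreq = 440, modfreq = 200, index = 3, amp = 0.2;
    var mod, env, car;
    mod = SinOsc.ar(modfreq, 0, modfreq * index);        // modulator, scaled by the index
    env = EnvGen.kr(Env.perc(0.01, 0.4), doneAction: 2); // short percussive envelope that frees the synth
    car = SinOsc.ar(carrierfreq + mod, 0, amp * env);
    Out.ar(0, car);
}).send(s);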

If you are interested in my L-system interpretations and mappings to compositions, my thesis can be found at https://www.academia.edu/24235602/THE_MUSICAL_MAPPINGS_OF_L-SYSTEMS

Scaling Numbers for the Axoloti

The Axoloti does not have a built-in scale object the way Max/MSP or Pd does; however, that doesn't mean you are stuck with 0 to 64 or -64 to 64. Good old-fashioned math goes a long way when using the Axoloti and trying to control different parameters.

Unipolar dials go from 0 to 64, which can be used unaltered for a variety of things. To expand or scale the range, dividing by 64 and then multiplying by the highest number you want will scale the dial to a range of 0 to N; for example, with N = 1000 a dial value of 32 becomes 32/64 * 1000 = 500. From here we can use a convert i object to go from a fractional number (frac32) to an integer for different kinds of control.

The idiosyncrasy with this method is that the larger the range, the bigger the step. This means that if the range is 0 to 1000 the step is 7 or 8, while for a smaller range, let's say 0 to 100, the step size is 1. There is a point at which the numbers start to turn negative. I do not know why this happens; it seems to cap out at 999. If I figure it out, I will update this post.

Bipolar dials range from -64 to 64. This can increase the range of numbers or make the step size a little smaller. To work with these dials, adding 64 takes the range to 0 to 128; then dividing by 128 and multiplying by the maximum number you want gives an appropriate output range.
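For reference, here is the same scaling math written out as two small SuperCollider functions (purely illustrative; on the Axoloti this is done with the math objects):

~scaleUnipolar = { | dial, top | dial / 64 * top };          // 0..64   -> 0..top
~scaleBipolar  = { | dial, top | (dial + 64) / 128 * top };  // -64..64 -> 0..top

~scaleUnipolar.value(32, 1000); // -> 500
~scaleBipolar.value(0, 100);    // -> 50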

In the Axoloti environment many of the inputs get added to the dial that is built into the object. This means that the number the object receives is potentially more than what you think. To compensate for this, I leave the dial that is built into the object at 0 so most of the control comes from the external input, and then use the object's dial for fine-tuning different parameters.


Axoloti

I recently bought an Axoloti from axoloti.com. It is a small (3x7 inch) piece of hardware that renders audio from a dedicated patching program similar to Pd. The product is cheap (65 euros) and you can build an enclosure for it. The environment is different from Pd or Max/MSP because everything is ranged from 0 to 64 or -64 to 64. Inside the environment is a code editor that allows the user to edit an object and save an instance of it; objects can also be created in a program like TextWrangler. Most algorithms work the same way in Axoloti as they do in Max/MSP (ring modulation is still two signals being multiplied together). That being said, certain objects don't exist, such as clip~, pong~, wrap~, scale and others. This makes using the Axoloti as much about composing and making sound as experimenting with ways of doing things.

[Photo: the Axoloti board]

On the left is a stereo input (no phantom power) and a stereo output. Then there is a headphone jack, micro USB for power and data transfer, an SD card slot, USB host, and MIDI in and out. The SD card is used to load patches onto the Axoloti so it can be used in standalone mode.

[Screenshot: ring modulation patch in the Axoloti editor]

Here is a basic ring modulation of two sine tone signals. The math/*c object is a gain control. The red lines indicate audio rate, blue is float, green is integer, and yellow is boolean. The program allows for an immediate understanding of the type of signal that is being used to control different parameters.
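For comparison, the same idea, two sine tones multiplied together with a gain control, is a one-liner in SuperCollider (the frequencies here are arbitrary examples):

{ SinOsc.ar(440) * SinOsc.ar(317) * 0.2 }.play; // ring modulation: multiply the two signals, then scale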

Limitations

The sample rate is fixed at 48 kHz. There is only 8 MB of SDRAM, so doing anything spectral or FFT-based is difficult. The buffer size is also fixed at 16 samples.

While 8 MB of RAM is a big limitation, the sound that can be produced has a certain analog feel, and the overall capabilities can create complex sounds and sequences.

More Axoloti patches to come…

Allpass Filter from 2 Comb Filters

An allpass filter is a filter that passes all frequencies with equal gain but changes their phase relationships. This might not mean a lot on its own; however, allpass filters are used in reverbs, vocoders and different kinds of delays. They are also very easy to make in Max/MSP using gen~.

This little post shows how to make an allpass filter in which the delay time is not fixed, meaning any amount of delay in samples can be used. This allows the user to get various types of effects from one allpass filter. Besides serving as a traditional allpass, it can create echo with some feedback, as well as reverb and echo with each echo having reverb. This is all controlled by the delay time.

[Screenshot: gen~ allpass filter patch]

In the above patch an input is added to a delayed version of itself. The delay @feedback 1 object takes the signal and delays it by a certain number of samples; the @feedback 1 argument allows feedback to occur. Out of the delay, the signal gets multiplied by a gain amount of -1, which makes the feedback gain and feedforward gain inverses of one another (a crossfade between clean and effect). The signal plus feedback gets multiplied by the gain and added back into the delay.
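Written as a difference equation, the structure described above is essentially the standard Schroeder allpass, where x is the input, y the output, D the delay in samples and g the gain:

y[n] = -g * x[n] + x[n - D] + g * y[n - D]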

The description might be a little confusing, but to sum it up: a delayed signal is added to itself (feedback) and added to the feedforward signal. The flow chart is as follows:

[Flow chart: allpass filter signal flow]

It is important to note that the feedforward takes place after the input gets multiplied by the feedback gain, and the feedback happens before it gets added to the feedforward. This small patch can also be duplicated in series or parallel to create interesting reverb and echo. Also, if the feedforward path is left out, it turns into a feedback comb filter (a one-pole filter when the delay is a single sample).
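For experimenting with the same behaviour outside of gen~, SuperCollider ships a comparable structure as a built-in UGen; a quick sketch:

// an impulse through an allpass: short delay times colour the sound,
// longer ones turn into discrete echoes
{ AllpassN.ar(Impulse.ar(1), 0.5, 0.05, 2) }.play;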