Why is KARMA different from an arranger keyboard, or an algorithmic composer?
(Stephen Kay explains - from a forum discussion)
KARMA (Kay Algorithmic Realtime Music Architecture) is a parameter-based approach to generating musical effects, rather than a musical-data-based approach. It is not a system that plays back prerecorded MIDI phrases through Note Transposition Tables, like most arranger systems (or phrase generation systems). However, it is also not a system where you set up parameters, press a button, and sit back and listen to the "music" it generates. It is not the type of system that is supposed to "evolve" a piece all by itself, according to parameter settings.
One of the ideas behind KARMA is that the user is always in control of how the music is generated. If you want the rhythmic complexity to increase at a certain point, you hook up the parameter(s) that can cause that to happen to a real-time control, and you twist it at the moment you want it to happen. Although a timeline kind of input control could be a possibility in the future, the basic idea of KARMA at present is real-time. (And you can actually achieve the timeline kind of thing simply by recording your control movements into a sequencer. But again, the user determines what changes and when it changes.)
You cause it to start by triggering it; you determine which notes it plays and in which key by the input notes you give it. You determine when it changes activity or settings by changing the parameters at a certain point, be it a scene change (which can change a whole slew of parameters at the same time) or the movement of a single parameter.
In its simplest form, this "parameter-based approach" is similar to what we have come to expect from the term "arpeggiator": you play some notes on a keyboard, you have a few parameters you can adjust, and notes come out in some kind of pattern. For those who must compare KARMA to an arpeggiator, I like to say that KARMA is like "multiple arpeggiators on steroids, after being given adrenaline injections, and doing LSD." Really, I feel it has very little in common with any conventional arpeggiators, and I think you will agree after reading the architecture description that follows. But if it makes you comfortable to think of KARMA as "an arpeggiator on steroids", then by all means do so. It can certainly arpeggiate, in just about any flavor known to man, if that’s what you’re looking for.
Theory
First, a little of the theory behind KARMA: A performance of a musical phrase can be thought of as having many different "attributes" that determine the overall effect of the resulting music. For example, you might say a musical phrase has a "rhythm" attribute, which is the rhythm with which the notes are being played; a "duration" attribute, which is the length of the notes (independent of the rhythm); a "cluster" attribute, which is the number of notes being played at the same time in various places of the musical phrase ("chords"); a "velocity" attribute, which is the volume/accent with which the notes are played; a "pan" attribute, which is the spatial location of the notes in a stereo or three-dimensional field, etc.
Typically, music that has been recorded or sequenced has all of these attributes pre-determined and fixed in relation to each other. A specific note is to be played with a specific rhythmic value for a specific length of time, at a specific volume/velocity level, at a specific location in a stereo field, with the sound of a specific musical instrument, and these relationships remain fixed. For example, in most if not all auto-accompaniment instruments, to achieve a variation in the accompaniment pattern the instrument essentially switches to a different pre-recorded sequence of musical events (again with specific relationships that are fixed in the data).
In KARMA, the various aspects of a musical phrase have been divided into separately controllable attributes. Each of these attributes is controlled by a separate group of parameters, which can be individually varied or changed in groups by the user in real-time as the music is being generated. A grouping of all the parameters related to generating a single musical effect is referred to as a "Generated Effect," or "GE." A single GE comprises over 400 separate parameters that essentially specify a configuration of the musical algorithms in the core engine. Another way of looking at it is that a GE is a "blueprint" for a type of musical effect, such as a guitar riff, a sax riff, a bass line, or a burst of impossible-to-play notes repeating and transposing as they go. It specifies the rhythm the notes will be generated with, the general direction of movement, the types of intervals to be played sequentially, the range and length of the phrase, etc. There are groups of parameters that control the rhythm of the notes, the movement of the pitches of the notes, the velocity of the notes, the panning (which is simply CC 10; in fact, any CC can be specified and controlled), the number of notes at a time to generate, and the duration of the notes; there are parameters that allow generated notes to be repeated and melodically transposed; there are parameters for controlling automatic pitch bending effects; there are parameters that control three different envelopes which can be applied to tempo, velocity, pitch, duration, and any MIDI control change message, and more. Many parameters or groups of parameters may be controlled by uniquely musical randomizations.
However, it is important to note that the actual pitches of the notes themselves are not specified (in most cases); they are supplied or determined by the user’s real-time input, which may be provided in one of several ways, such as by a MIDI instrument and a musician in real-time, or by a MIDI sequence or guide track played through KARMA. Therefore, a chord voiced one way can produce a completely different result than the same chord voiced a different way.
Back to arrangers
So, an arranger keyboard generally has this sort of architecture: It has a bunch of musical data patterns, written in a certain key/chord (without getting into a whole technical discussion of how this works, Cmaj7 is typical, because data fitting a maj7 chord has notes in all positions of the diatonic scale, and is easily transposed to other chord types). The patterns (with the exception of drums and percussion) are run through Note Transposition Tables (NTTs) in real-time. The results of chord analysis performed on the chord you play in the "control" area of the keyboard determine which table(s) are used to modify the notes before they come out. In other words, say you play a chord that is determined through chord analysis to be "Fmin7". The musical pattern data (in the key of Cmaj7) first has the notes run through an NTT for min7, which maps all major thirds to minor thirds, and all major sevenths to dominant 7ths (among perhaps others). The data comes out of that table sounding like Cmin7. Then, because the analyzed chord is Fmin7, all of the notes are transposed up by +5 (or down by –7, depending on a "wrap point"), yielding the pattern playing in Fmin7.
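The two-stage process described above (remap the chord type, then transpose the root) can be sketched in a few lines of Python. This is a toy illustration of the general NTT idea, not any vendor's actual table data; the function names and table contents are invented for the example.

```python
# Hypothetical sketch of arranger-style Note Transposition Tables (NTTs).
# Table contents and names are illustrative only.

# Source pattern recorded against Cmaj7 (MIDI note numbers): C4 E4 G4 B4
pattern = [60, 64, 67, 71]

# NTT for "min7": remap pitch classes relative to the pattern's key of C.
# Major third (pc 4) -> minor third (pc 3); major 7th (pc 11) -> dominant 7th (pc 10).
NTT_MIN7 = {4: 3, 11: 10}

def apply_ntt(note, table, key_root=0):
    pc = (note - key_root) % 12        # pitch class relative to the pattern's key
    shift = table.get(pc, pc) - pc     # remap only if the table has an entry
    return note + shift

def arrange(pattern, table, root_offset):
    # Stage 1: run the Cmaj7 data through the chord-type table -> Cmin7.
    # Stage 2: transpose by the analyzed root, e.g. +5 for F.
    return [apply_ntt(n, table) + root_offset for n in pattern]

print(arrange(pattern, NTT_MIN7, 5))  # Fmin7: F4 Ab4 C5 Eb5 = [65, 68, 72, 75]
```

The point is that the recorded data never changes; only this fixed mapping and offset are applied to it.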
The key word here is "pattern" – it’s a pattern, and it never changes, and if you sit on a chord for 30 minutes, it’s going to play that same pattern over and over and over. Some of the more sophisticated systems have different patterns for different chord types. So when you play a min7 vs. a maj7, for example, not only does it transpose the data, but it picks a different pattern first. But regardless of how clever they are, it’s still just a fixed MIDI recording playing back over and over, in an endless loop.
One of the KARMA GE Types: Generated-Riff
KARMA takes a completely different approach. I’m going to describe one of the GE Types (there are currently four):
When the GE Type = Generated-Riff, a "Note Series" is the heart of the effect. A Note Series is a collection of notes that are supplied by the user as input notes, further refined and extrapolated based on the Note Series parameters. To take a simple example, you play a chord in a control area of the keyboard: {C,E,G,B} (Cmaj7). You have provided four notes as input to the system. If Note Series "Replications" is 3, those 4 notes are replicated 3 times, yielding 12 notes. If Note Series "Interval" is +12, then each Replication is transposed by +12, yielding {C, E, G, B, C+12, E+12, G+12, B+12, C+24, E+24, G+24, B+24}, or a 3-octave arpeggio. However, these are not generated (yet). This is simply the creation of the Note Series. Those 12 notes (pitches and original supplied velocities) are simply arrayed in memory without any associated rhythmic or duration values (or anything else, for that matter), where they can be indexed according to the rest of the GE parameters.
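The Note Series construction just described might be sketched like this. The parameter names (Replications, Interval) come from the text above; the code itself is an assumed illustration, not KARMA's actual implementation.

```python
# Toy sketch of building a Note Series from input notes.
# "replications" and "interval" follow the parameter names in the text.

def build_note_series(input_notes, replications=3, interval=12):
    # Each replication restates the input notes, transposed by a cumulative interval.
    series = []
    for r in range(replications):
        series.extend(n + r * interval for n in input_notes)
    return series

# Play Cmaj7 in the control area: {C, E, G, B} = MIDI 60, 64, 67, 71
series = build_note_series([60, 64, 67, 71])
print(series)   # 12 pitches spanning 3 octaves; no rhythm or duration attached yet
```

Note that the result is just an indexed array of pitches (and, in the real system, their original velocities); nothing has been generated at this stage.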
As you will notice from the screenshot below taken from a recent development version of KARMA Software, the KARMA GE editing window has a row of triangular buttons down the left side, each of which select a "panel" of editing parameters, corresponding to a particular attribute of the phrase, or other particular feature. These separate groups of parameters operate independently to contribute to the generation of the phrase or effect as a whole. Shown in this screen shot is the Note Series panel, and it is displaying the example that I discussed above – a 3 octave arpeggio created from 4 input notes {C,E,G,B}. Of course, way more interesting and complex Note Series can be created with all the parameters shown – this is purposely a simple example.
More screen shots of the older 1.x version of KARMA MW can be found [here].
Next, the parameters of the Rhythm Panel specify at what musical intervals a note will be selected from the Note Series. You use the Rhythm Panel’s Rhythm Pattern grid to set up a pattern of rhythmic values (e.g. {8th, 16th, 16th}), which can include randomizations (more on all of that another time). Other parameters in the Rhythm Panel may modify it or further qualify its operation. But the Rhythm Pattern specifies when the next note will be selected from the Note Series. Note that it does not specify which note.
The Index Panel parameters control the specific pitch/velocity that is selected from the Note Series. An Index Pattern may be constructed, with the values indicating movement of the index from one Note Series note to another; so an Index Pattern of {1,1,1,-2} indicates "from the first index specified, move to the next index (+1), then the next index (+1), then the next index (+1), then jump back two indexes (–2), and repeat." In KARMA-speak, this is referred to as indexing. Very complex patterns can therefore be specified with rather minimal Index Patterns. Options for randomly selecting indexes are also provided. But note that the Index Parameters specify movement within the current Note Series, which may change at any time as the user provides new input notes. So different input notes, yielding a different Note Series, with the same Index Parameters, may yield something completely different.
At this point, the Cluster Panel parameters come into play: we’re positioned at a certain index in the Note Series, but how many notes to generate? A Cluster Pattern provides the answer: it specifies an independently looping pattern of "cluster sizes", ranging from 1 to 10, along with the usual KARMA random options. So a Cluster Pattern of {1} simply generates a string of single notes; a Cluster Pattern of {4,1,1,1} generates a cluster (chord) of 4 notes followed by 3 single notes. Of course, this only specifies the number of simultaneous notes; the Rhythm Pattern and Index Pattern are doing the work of when and which notes will be generated.
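The interplay of the Rhythm, Index, and Cluster Patterns might be sketched as follows. This is a deliberately simplified toy model, assuming the three patterns loop independently as described; it is in no way KARMA's actual engine, and all names are taken from the text or invented for the example.

```python
# Toy model: Rhythm Pattern says WHEN, Index Pattern says WHICH note of the
# Note Series, Cluster Pattern says HOW MANY notes at once.
from itertools import cycle

note_series = [60, 64, 67, 71, 72, 76, 79, 83, 84, 88, 91, 95]  # 3-octave Cmaj7

rhythm    = cycle([0.5, 0.25, 0.25])   # {8th, 16th, 16th}, in beats
index_pat = cycle([1, 1, 1, -2])       # movement through the Note Series
clusters  = cycle([4, 1, 1, 1])        # a 4-note chord, then 3 single notes

idx, time = 0, 0.0
events = []
for _ in range(8):                     # generate 8 rhythmic steps
    size = next(clusters)
    notes = [note_series[(idx + k) % len(note_series)] for k in range(size)]
    events.append((time, notes))
    time += next(rhythm)               # advance by the next rhythmic value
    idx = (idx + next(index_pat)) % len(note_series)  # then move the index

for t, notes in events:
    print(f"t={t:4.2f}  notes={notes}")
```

Even in this crude sketch you can see the division of labor: changing only the `rhythm` list alters when notes fire without touching which pitches are chosen, and vice versa.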
So we’re indexing around to different locations inside the Note Series according to the Index Pattern, at rhythmic intervals determined by the Rhythm Parameters, and choosing one or more indexes to be generated according to the Cluster Parameters. Other parameter groups then come into play:
The parameters of the Duration Panel specify the length of the note that will be generated, independent of the rhythmic value (if desired). A Duration Pattern may be constructed of rhythmic values, or other options may be used.
The Velocity Panel parameters control the velocities of the notes as they are generated. The original velocities of the notes that make up the Note Series can be utilized or ignored, and then a Velocity Pattern of additive/subtractive velocity offsets can be applied to the notes, yielding accents in the generated notes, making some notes louder or softer, making notes disappear altogether, and other options.
The parameters of the CCs Panel allow one or two CCs to be defined (e.g. pan, filter frequency, resonance), and then a value specified for each note or cluster. A CCs Pattern containing the usual KARMA random options can be defined, controlling two different CCs in a variety of ways. For example, using CC 10 (pan), each note as it is generated can be placed at a specific location in a stereo field.
New for the OASYS, the WaveSeq parameters allow up to 16 waveforms (selected from the thousands of internal waveforms) to be specified as a waveform pattern, along with other options, and change the synth’s waveform with each note or cluster. One note can be an electric piano, one note a trumpet, followed by 3 notes of a bass synth, then a cymbal hit followed by 2 marimba notes…
It is useful to note that all of the patterns mentioned here can be different numbers of steps, and loop independently of each other, so you can have a 4 step Rhythm Pattern with a 5 step Velocity Pattern with a 7 step Cluster Pattern, etc. – great for getting some cyclic things going in and out of phase with each other. Furthermore, each pattern has the ability to be randomized in unique and useful ways, while it is playing in real-time.
So, in quick form:
- GE Type Generated Riff: the input notes create an extrapolated Note Series;
- The Rhythm Parameters determine when an index of the Note Series will be selected and a note will be generated;
- The Duration Parameters determine the length of the generated notes that result;
- The Index Parameters determine which indexes of the Note Series will be operated upon;
- The Cluster Parameters determine how many indexes will be generated at a given moment;
- The Velocity Parameters determine the velocities of the generated notes;
- The CCs Parameters may output one or two CCs per note that do things like pan the notes around, change the filter frequency, etc.;
- The WaveSeq Parameters (new for KARMA 2 as implemented in the OASYS and M3) allow a different waveform to be specified for each note as it is generated, so one note can be a bass, one note can be a piano, one note can be a cymbal hit.
And then, on top of that, there are some other things that operate sort of independently, or in addition to the above basic algorithm:
- 3 different envelopes can be set up to run on any CC, in addition to non-CC operations such as tempo, velocity, duration, and repeat time. The envelopes can be triggered when the GE itself is triggered (e.g. by the hand hitting the keyboard), can be triggered at different parts of the phrase (e.g. every 4 beats), or can be triggered by every single note that is generated.
- Melodic Repeat can be added to the phrase, which allows complex patterns of repeated notes to be generated from each note that is generated according to the above explanation.
- Pitch bending effects can be added to each note, in a variety of shapes and options.
- Note Remapping (new for OASYS) can be applied at "the end of the note generation chain," and remap any note to any other note (or filter them out), turning drum grooves into different drum grooves, turning major riffs into minor riffs, etc.
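Note Remapping at the end of the chain is conceptually a simple lookup applied to every generated note. The sketch below is an assumed illustration (the table contents are made up); it shows how remapping a couple of pitches can turn a major riff into a minor one without touching the generation logic upstream.

```python
# Toy sketch of end-of-chain note remapping: any generated note can be
# mapped to another note, or filtered out entirely. Table is illustrative.

remap = {64: 63, 71: 70}   # E -> Eb, B -> Bb: flatten the 3rd and 7th

def remap_note(note, table):
    return table.get(note, note)   # unmapped notes pass through unchanged

generated = [60, 64, 67, 71]       # a Cmaj7 figure coming out of the engine
print([remap_note(n, remap) for n in generated])  # Cmin7: [60, 63, 67, 70]
```

A mapping to `None` (and a filtering step) would model the "filter them out" option; drum-groove remapping works the same way, with drum-map note numbers instead of pitches.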
But wait, there’s more! There’s the Phase Pattern: take all of what I mentioned above, all of the various patterns and parameters for the different musical attributes, and call that "Phase 1", then duplicate it all and call that "Phase 2", so that you can set up two completely different sets of parameters. The Phase Pattern controls a timed switching between these two sets of patterns, so you can specify something like {1,1,1,2} and then have it play Phase 1 for three bars of 4/4 followed by Phase 2 for 1 bar, in a repetitive loop. Each Phase can have its own time signature, can trigger envelopes at the start, can apply pitch bending and melodic repeat independently to that occurrence of the Phase, and more.
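The Phase Pattern's bar-by-bar switching can be sketched in a couple of lines. "Phase 1" and "Phase 2" here are just labels standing in for the two complete parameter sets described above; the scheduling logic is the illustrative part.

```python
# Toy sketch of a Phase Pattern {1,1,1,2}: each entry picks which Phase's
# complete parameter set governs the next bar, looping indefinitely.
from itertools import cycle

phase_pattern = [1, 1, 1, 2]
picker = cycle(phase_pattern)

schedule = [next(picker) for _ in range(8)]   # plan 8 bars ahead
for bar, phase in enumerate(schedule, 1):
    print(f"bar {bar}: Phase {phase}")        # 3 bars of Phase 1, then 1 of Phase 2
```

In the real system each Phase carries its own time signature, envelope triggering, pitch bend, and melodic repeat settings, so the switch at bar 4 can change far more than a pattern swap on an arranger would.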
So that’s a quick overview of one of the GE types. There are three others: GE Type = Generated-Gated uses some of the same algorithms, but is made for creating the "chopped and sliced pad" effects; GE Type = Generated-Drum has special options for drum patterns, while GE Type = Realtime is a completely different beast that requires its own explanation.
The end result of all that can be configured to do a lot of things. Of course, it can be configured to create full "grooves" consisting of drums, bass, keys, guitar, etc. Some people equate this to "auto-accompaniment", but it’s not, really. Some of the patches in the KARMA Music Workstation were simply 4 complex arpeggiations playing off of each other – that doesn’t sound like auto-accompaniment at all. You can really set it up to do anything you want. You can use just one KARMA module at a time and get some really great techno/dance/electronic effects, for example.
Now, if you read all that, you deserve an award of some kind. Congratulations!