Tag Archives: Musical expression

Logic Pro Scripter editor

Feels So Good on LinnStrument with Logic Pro Scripter

Exploring polyphonic expressiveness

In the Domo Arigato Tempo Rubato article we discussed that each note pad on the LinnStrument playing surface has three dimensions of musical expression:  Moving your finger along the X-axis varies pitch, moving it on the Y-axis influences timbre, and varying its pressure on the Z-axis controls loudness.  Given that each note pad has three dimensions of control, and each dimension has a resolution of 128 values, there is much expressiveness to be explored in each note being played.  Because LinnStrument is a polyphonic instrument, you can play several notes simultaneously which further increases the potential for expressiveness.  The trade-off is that the more fingers you’re simultaneously employing, the less focused you can be on the expressiveness of a given note.

As part of my musical journey with LinnStrument, I’m exploring ways to exploit more fully both its expressive and polyphonic capabilities.  One tool that I’m using for this purpose is the Logic Pro Scripter MIDI plug-in.  Scripter enables a developer to write Logic Pro extensions in JavaScript that process MIDI events as well as generate them.  To help me grok the Logic Pro Scripter API I created the quick reference located in the following section.

Logic Pro Scripter API quick reference

The tables in this quick reference include information gleaned from the Apple Logic Pro Effects manual, from example scripts such as Guitar Strummer included with Logic Pro, and from the following location in the Logic Pro X installation on my Mac.

/Applications/Logic Pro X.app/Contents/Frameworks/
Scripter  – Global attributes and functions
NeedsTimingInfo:boolean Defining NeedsTimingInfo as true at the global scope enables the GetTimingInfo() function
ResetParameterDefaults:boolean Sets UI controls to default values
HandleMIDI(Event) This function is called each time a MIDI event is received by the plug-in, and is required to process incoming MIDI events. If you do not implement this function, events pass through the plug-in unaffected.
ProcessMIDI() This function is called once per “process block,” which is determined by the host’s audio settings (sample rate and buffer size). This function is often used in combination with the TimingInfo object to make use of timing information from the host application. To enable the GetTimingInfo feature, add NeedsTimingInfo = true at the global script level.
ParameterChanged(integer, real) This function is called each time one of the plug-in’s parameters is set to a new value. It is also called once for each parameter when you load a plug-in setting.
Reset() This function is called when the plug-in is reset
Trace(value) Prints a message to the console that represents the supplied value of any type
GetTimingInfo():TimingInfo Retrieves a TimingInfo object, which contains timing information that describes the state of the host transport and the current musical tempo and meter.
GetParameter(string):real Returns a given parameter’s current value. GetParameter() is typically called inside the HandleMIDI() or ProcessMIDI() functions.
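
A minimal Scripter script ties several of these globals together. The sketch below is my own illustration, not code from Apple's examples; the TRANSPOSE constant and clampPitch helper are names I made up for the sketch. It transposes incoming notes, logs them with trace(), and forwards them:

```javascript
// Minimal Scripter sketch: transpose incoming notes up an octave.
// HandleMIDI is called by the Scripter host for each incoming MIDI event.

var TRANSPOSE = 12;  // semitones (illustrative constant, not a Scripter built-in)

// Pure helper: keep a computed pitch inside the legal MIDI data range
function clampPitch(pitch) {
  return Math.max(0, Math.min(127, pitch));
}

function HandleMIDI(event) {
  if (event instanceof NoteOn || event instanceof NoteOff) {
    event.pitch = clampPitch(event.pitch + TRANSPOSE);
  }
  event.trace();   // log the (possibly modified) event to the plug-in console
  event.send();    // forward it to the next stage
}
```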

Event – Base class for all events
send() Send the event
sendAfterMilliseconds(ms:real) Send the event after the specified value has elapsed
sendAtBeat(beat:real) Send the event at a specific beat in the host’s timeline
sendAfterBeats(beats:real) Similar to sendAtBeat(), but uses the beat value as a delay in beats from the current position.
trace() Prints the event to the plug-in console
toString() Returns a string representation of the event
channel(integer) Sets the MIDI channel, 1 to 16. Note: Event.channel is an event property rather than a method, so it may be used in expressions such as (evt.channel == 1), where evt is an instance of Event.
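
The send methods make delayed effects easy to sketch. Here is an illustrative echo that resends each note one beat later at half velocity; ECHO_BEATS, ECHO_VELOCITY_SCALE, and echoVelocity are my own names for the sketch, not part of the Scripter API:

```javascript
// Sketch: echo each note one beat later, quieter.

var ECHO_BEATS = 1.0;            // delay of the echo, in beats
var ECHO_VELOCITY_SCALE = 0.5;   // echo loudness relative to the original

// Pure helper: compute the echoed velocity, kept at least 1 so the
// echo isn't interpreted as a note off (velocity 0 means note off)
function echoVelocity(velocity) {
  return Math.max(1, Math.round(velocity * ECHO_VELOCITY_SCALE));
}

function HandleMIDI(event) {
  event.send();  // pass the original event through unchanged
  if (event instanceof NoteOn) {
    var echo = new NoteOn(event);              // copy pitch/velocity/channel
    echo.velocity = echoVelocity(event.velocity);
    echo.sendAfterBeats(ECHO_BEATS);
  } else if (event instanceof NoteOff) {
    new NoteOff(event).sendAfterBeats(ECHO_BEATS);  // end the echoed note too
  }
}
```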

Note – Base class for note events
Note() Constructor
toString() Returns a String representation of the Note event.

NoteOn – Represents a note on event
NoteOn(Event) Constructor
pitch(integer) Pitch from 1–127
velocity(integer) Velocity from 0–127. A velocity value of 0 is interpreted as a note off event, not a note on.

NoteOff – Represents a note off event
NoteOff(Event) Constructor
pitch(integer) Pitch from 1–127
velocity(integer) Velocity from 0–127
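
Because a NoteOn with velocity 0 is interpreted as a note off, scripts that track which notes are sounding must treat both forms as releases. The bookkeeping below is my own sketch (ACTIVE_NOTES, isRealNoteOn, and trackPitch are illustrative names, not API members):

```javascript
var ACTIVE_NOTES = [];  // pitches currently sounding (illustrative bookkeeping)

// Pure helper: a true note-on is a NoteOn with velocity greater than 0
function isRealNoteOn(isNoteOn, velocity) {
  return isNoteOn && velocity > 0;
}

// Pure helper: add or remove a pitch from the active list
function trackPitch(active, pitch, on) {
  var i = active.indexOf(pitch);
  if (on && i === -1) active.push(pitch);
  else if (!on && i !== -1) active.splice(i, 1);
  return active;
}

function HandleMIDI(event) {
  if (event instanceof NoteOn) {
    trackPitch(ACTIVE_NOTES, event.pitch, isRealNoteOn(true, event.velocity));
  } else if (event instanceof NoteOff) {
    trackPitch(ACTIVE_NOTES, event.pitch, false);
  }
  event.send();
}
```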

PolyPressure – Represents a Polyphonic aftertouch event
PolyPressure(Event) Constructor
pitch(integer) Pitch from 1–127
value(integer) Pressure value from 0–127
toString() Returns a String representation of the PolyPressure event.

ControlChange – Represents a ControlChange event
ControlChange(Event) Constructor
number(integer) Controller number from 0–127.
value(integer) Controller value from 0–127.
toString() Returns a String representation of the ControlChange event.

ProgramChange – Represents a ProgramChange event
ProgramChange(Event) Constructor
number(integer) Program change number from 0–127
toString() Returns a String representation of the ProgramChange event.

ChannelPressure – Represents a ChannelPressure event
ChannelPressure(Event) Constructor
value(integer) Aftertouch value from 0–127
toString() Returns a String representation of the ChannelPressure event.

PitchBend – Represents a PitchBend event
PitchBend(Event) Constructor
value(integer) 14-bit pitch bend value from -8192–8191. A value of 0 is center.
toString() Returns a String representation of the PitchBend event.

Fader – Represents a Fader event
Fader(Event) Constructor
value(integer) Fader value from 0–127
toString() Returns a String representation of the Fader event.

TimingInfo – Contains timing information that describes the state of the host transport and the current musical tempo and meter
playing:boolean Value is true when the host transport is running
blockStartBeat:real Indicates the beat position at the start of the process block
blockEndBeat:real Indicates the beat position at the end of the process block
blockLength:real Indicates the length of the process block in beats.
tempo:real Indicates the host tempo.
meterNumerator:integer Indicates the host meter numerator
meterDenominator:integer Indicates the host meter denominator.
cycling:boolean Value is true when the host transport is cycling
leftCycleBeat:real Indicates the beat position at the start of the cycle range
rightCycleBeat:real Indicates the beat position at the end of the cycle range
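
Pulling these fields together, here is a sketch that logs once whenever a whole beat falls inside the current process block. Remember that NeedsTimingInfo must be set to true at global scope; the beatsInBlock helper is my own illustration, not part of the API:

```javascript
var NeedsTimingInfo = true;  // required to enable GetTimingInfo()

// Pure helper: count the whole-beat positions within [startBeat, endBeat]
function beatsInBlock(startBeat, endBeat) {
  return Math.floor(endBeat) - Math.ceil(startBeat) + 1;
}

function ProcessMIDI() {
  var info = GetTimingInfo();
  if (info.playing && beatsInBlock(info.blockStartBeat, info.blockEndBeat) > 0) {
    Trace("beat crossed; tempo " + info.tempo + ", meter " +
          info.meterNumerator + "/" + info.meterDenominator);
  }
}
```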

MIDI – Contains class-level variables and functions (you don’t instantiate MIDI).
_noteNames:string[] Contains names such as C and G# for all 128 MIDI notes
_ccNames:string[] Contains names such as Expression and Sustain for all 128 MIDI controller numbers
noteNumber(string) Returns the MIDI note number for a given note name. For example: C3 or B#2. Flats not permitted.
noteName(real) Returns the name for a given MIDI note number.
ccName(real) Returns the controller name for a given controller number
allNotesOff() Sends the all notes off message on all MIDI channels
normalizeStatus(real) Normalizes a value to the safe range of MIDI status bytes (128–239)
normalizeChannel(real) Normalizes a value to the safe range of MIDI channels (1–16)
normalizeData(real) Normalizes a value to the safe range of MIDI data bytes (0–127)
_sendEventOnAllChannels(Event) Sends a given event to all MIDI channels
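
The MIDI object is handy for readable logging. In this sketch, incoming notes are logged by name; clampData is my own helper mirroring what MIDI.normalizeData does, written out so its behavior is explicit:

```javascript
// Sketch: log each note by name, clamping pitches into the legal data range.

// Pure helper: force a value into the MIDI data byte range 0-127
function clampData(value) {
  return Math.min(127, Math.max(0, Math.round(value)));
}

function HandleMIDI(event) {
  if (event instanceof NoteOn) {
    event.pitch = clampData(event.pitch);
    Trace(MIDI.noteName(event.pitch) + " velocity " + event.velocity);
  }
  event.send();
}
```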

Leveraging Logic Pro Scripter for accompaniment


Using the Guitar Strummer script that comes with Logic Pro as a starting point, I made modifications that achieve behavior including the following:

  • Allow selection of a music key signature and keyboard split point (see image of UI nearby)
  • Allow chord mode selection via a switch on LinnStrument (or pedal) that maps to a control change message.  Primary modes currently consist of vanilla (major/minor/dim) vs. jazzy (maj7/min7/dom7/half dim7) chords.
  • When a single note below the split point is pressed, that note is output.  In addition, a chord is output whose root is that note and appropriate to the chosen key signature.  The chord is voiced (inversion, etc.) in a manner that assures minimal movement from the previous chord.
  • When two notes in the same octave below the split point are pressed, the higher note is the root of the chord output, and the lower note is output as well.  This technique facilitates playing so-called slash chords.
  • When two notes an octave apart below the split point are pressed, the tonality toggles in most cases from major to minor and minor to major.
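
The chord construction in the first bullet can be sketched as a pure function. The interval table and function below are my own simplified illustration, not code from the actual script, which also handles jazzy 7th chords, voicing, and inversions:

```javascript
// Diatonic triad for a note in a major key (simplified sketch).
// Intervals above the root, keyed by scale degree in semitones from the tonic:
var TRIAD_QUALITY = {
  0:  [0, 4, 7],  // I   major
  2:  [0, 3, 7],  // ii  minor
  4:  [0, 3, 7],  // iii minor
  5:  [0, 4, 7],  // IV  major
  7:  [0, 4, 7],  // V   major
  9:  [0, 3, 7],  // vi  minor
  11: [0, 3, 6]   // vii diminished
};

// pitch and keyRoot are MIDI note numbers; returns the chord's pitches,
// or null if the pressed note isn't diatonic to the chosen key.
function diatonicTriad(pitch, keyRoot) {
  var degree = ((pitch - keyRoot) % 12 + 12) % 12;
  var intervals = TRIAD_QUALITY[degree];
  if (!intervals) return null;
  return intervals.map(function (i) { return pitch + i; });
}
```

With key C (keyRoot 60), pressing D below the split point would yield a D minor triad, matching the "appropriate to the chosen key signature" behavior described above.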

Anyway, that’s some of the functionality that currently exists.  To give you a feel for the JavaScript code used in this script, here are most of the contents of the HandleMIDI() function, which is called at runtime whenever a MIDI event is received:

function HandleMIDI (event) {
  if (event instanceof NoteOn) {
    LAST_NOTE_EVENT = event;
    if (event.pitch <= KEYBOARD_SPLIT_POINT && chordsEnabled) {
      // ... generate and send the accompanying chord ...
    }
    else {
      // ... pass the melody note through ...
    }
  }
  else if (event instanceof NoteOff) {
    if (event.pitch <= KEYBOARD_SPLIT_POINT ||
        ACTIVE_NOTES.indexOf(event.pitch) !== -1) {
      // ... send note off events for the chord ...
    }
    toggleTonality = false;
    play7th = false;
  }
  else if (event instanceof ControlChange &&
           event.number === 64) {
    if (event.value <= 63) {
      if (PEDAL_A_ENABLED) {
        PEDAL_A_ENABLED = false;
      }
    }
    else {
      if (!PEDAL_A_ENABLED) {
        PEDAL_A_ENABLED = true;
        if (ACTIVE_NOTES.length > 0) {
          // ... retrigger the active chord in the new mode ...
        }
      }
    }
  }
  else {
    // ... pass all other events through ...
  }
}

As a demonstration of the features outlined above, here’s a one-minute video of the first few measures of Feels So Good by Chuck Mangione.  The LinnStrument is split, with a grand piano synth on the left and a trumpet synth on the right.


James Weaver
Twitter: @JavaFXpert

Why Y?

Guest article by Roger Linn about exploring LinnStrument Y-axis

Inventor of LinnStrument: Roger Linn

James Weaver’s most recent article entitled Domo Arigato Tempo Rubato contains an overview of musical expression and some corresponding expressive capabilities of LinnStrument.  That article includes a brief discussion about making expressive variations in timbre on LinnStrument by moving your fingers along the Y-axis.  James reached out to me to shed additional light on Y-axis expressiveness.

For LinnStrument and other expressive instruments, the value of sensing finger pressure (Z-axis) and left/right (X-axis) movement is pretty clear: pressure controls note loudness and left/right movement controls pitch variations.  However, many people are somewhat flummoxed by the concept of controlling timbre via forward/backward finger movements (Y-axis) within one of LinnStrument’s 200 note pads.

What’s timbre? Pronounced tam-ber, it is defined by Oxford Dictionaries as…

“the character or quality of a musical sound or voice as distinct from its pitch and intensity”

In the context of LinnStrument, timbre refers to variations in tone, all of which are musically useful at any note loudness or pitch.  For example, bowing a violin near the bridge results in a sharper tone than bowing near the neck.  Similarly, the tone of a flute can be changed by mouth position, or that of a sax by bite pressure.  Taken together, a skilled performer’s subtle control of loudness, pitch and timbre is a big part of what makes a great instrumental solo great.

Here’s a video I made that demonstrates real time variation in loudness, pitch and timbre, using the Polysynth instrument in the new version of Bitwig Studio coming this summer:

In this video, finger pressure controls a combination of volume and filter frequency, left/right movement controls pitch, and forward/backward movement controls the timbre of the sound source, which in this case is a pulse wave oscillator.  Notice how the timbre changes from thin to full as I move my finger forward and backward, and how the combined variation in loudness, pitch and timbre makes the sound very expressive.  Now consider that what you’re hearing is the simplest synthesizer possible, consisting merely of an oscillator, filter and volume control and nothing else. This would sound roughly like an old telephone dialtone when played from a regular MIDI keyboard’s on/off switches.

So what can you control with the Y-axis?  Ideally you’ll want to use it to vary the fundamental timbre of the source waveform.  For those who know a little about MIDI and synthesis: LinnStrument normally sends Y-axis information using MIDI Control Change 74 messages.  Here are some ideas for how to control timbre in your sound generator from these CC74 messages:

  • For basic analog synthesis, modulate the pulse width of a pulse oscillator.  This changes the harmonic content of the pulse waveform between a thin and full tone.  If you have Logic Pro X, you can hear what this sounds like.  Download our LinnStrument project file from the LinnStrument Support > Getting Started page. Set your LinnStrument to the “One Channel” settings described in section 4 of that page, then select the track in the Logic file entitled “Simple 3D Pulse Synth”.
  • Also for basic analog synthesis, modulate the level of hard oscillator sync, which creates dramatic changes to the timbre.
  • Additionally for basic analog synthesis, assuming you’re using pressure to modulate the filter frequency, use the Y-axis to modulate the filter resonance.
  • For sampling, you can’t change the fundamental timbre of a sample, but you can use the Y-axis to vary the balance between two or more source samples. For example, one could be a soft sax tone and the other a harsh sax tone. Or one could be a sax sound and another a violin sound.
  • For FM (frequency modulation) synthesis, use the Y-axis to vary the frequency of the modulating oscillator, which changes the timbre of the carrier oscillator.
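
Tying this back to Scripter: the CC74 stream can also be shaped before it reaches the synth. The sketch below narrows the Y-axis range so full forward/backward travel sweeps only part of the destination parameter; Y_MIN, Y_MAX, and rescaleY are illustrative names and values I chose for the example:

```javascript
// Sketch: rescale incoming Y-axis (CC74) values into a narrower band.

var Y_MIN = 32;   // illustrative output floor
var Y_MAX = 96;   // illustrative output ceiling

// Pure helper: map the 0-127 input range linearly into [Y_MIN, Y_MAX]
function rescaleY(value) {
  return Math.round(Y_MIN + (value / 127) * (Y_MAX - Y_MIN));
}

function HandleMIDI(event) {
  if (event instanceof ControlChange && event.number === 74) {
    event.value = rescaleY(event.value);
  }
  event.send();
}
```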

In summary, using the Y-axis to vary timbre during performance adds a lot of expression and emotion to your musical performance. Given that nature has graciously given this particular universe three dimensions, why not use them all?


Roger's signature

Roger Linn
Roger Linn Design

Domo Arigato Tempo Rubato

An overview of musical expression, and some corresponding expressive capabilities of LinnStrument.

Image from wikipedia.org

“The bow can express the affections of the soul: but besides there being no signs that indicate them, such signs, even were one to invent them, would become so numerous that the music, already too full of indications, would become a formless mass to the eyes, almost impossible to decipher.”

-Giuseppe Cambini

One of the great joys of playing an instrument is expressing thoughts and feelings through it. When played in solitude, a musical instrument can act as a relief valve for accumulated stress. When played in public, it can serve as a medium for expressing musical ideas and emotions. Music played expressively can even elicit emotional responses in the listener.

Of course, instruments vary in their capabilities for expressiveness, and there are many facets of musical expression. Let’s take a look at some of these facets.

Understanding musical expression

As Giuseppe Cambini articulated in the quote cited previously, it’s not practical or even desirable to notate all the “affections of the soul” in written music. I would add that expressing one’s own soul, and often the soul of the composer, is the joy and privilege of the performer. Facets of this expression include pitch variations, dynamics, timbre, and phrasing.

Pitch variations

On instruments that support it, one very effective means of expression is the act of varying of a note’s pitch while it’s being played. Common examples of this include portamento (pitch bending and sliding), and vibrato (pulsating change of pitch). In a recent conversation with Roger Linn (inventor of LinnStrument) the discussion turned to musical expression as it relates to pitch variations. Here is an excerpt of what he had to say on the subject:

“Subtle variations in pitch are, I think, the largest part of how we identify a particular performer’s style. If someone’s playing, for example, a guitar, the volume and timbre of a note can’t be changed by the performer after it’s plucked. The only thing that can be changed is the pitch. Most people that are familiar with rock guitar music would be able to identify the vibrato of Jimi Hendrix from Eric Clapton or Jeff Beck as being unique and different after only a couple of notes. But the truth is that they were all playing the same guitar, a Fender Stratocaster, through the same amplifier, a Marshall. What makes them unique and different is a particular style, in large part characterized by subtle pitch variations produced with string bends and vibratos.”

On LinnStrument, pitch variations are accomplished very naturally by moving your fingers along the X-axis as if each row is a string. To perform vibrato, wiggle your finger horizontally. To bend or slide a note, move your finger along the row to the desired ending pitch.  The following brief video of the flute solo intro in The Marshall Tucker Band “Can’t You See” demonstrates slight pitch variations and vibrato on LinnStrument:

Let’s move from discussing pitch variations to examining the use of volume variations, more formally known as dynamics, for musical expression.


Dynamics

Another very effective means of musical expression is to vary the volume (loudness) of notes, which is often referred to in musical terms as dynamics. Some dynamics such as pianissimo (very soft), and sforzando (forceful accent) are concerned with the relative volume of notes when first played. Other dynamics such as crescendo (gradually becoming louder), and tremolo (pulsating change of volume) indicate changes in volume while a note is being played.

Most instruments allow you to play a note at a desired volume, but not all of them allow you to vary the volume of a note as it is being played. For example, you can vary the initial volume of a note on the piano with the velocity of a key press, but the subsequent volume of the note is not usually under your control. By contrast, most wind instruments give you continuous control of volume.

On LinnStrument, volume variations are accomplished very naturally by varying the downward (Z-axis) pressure of your fingers. To increase the loudness of a note, press harder on the note pad. To perform tremolo, repeatedly increase and decrease pressure on the note pad.

Now that we’ve discussed varying pitch (X-axis), and volume (Z-axis), we’ll move on to varying timbre (Y-axis) for a third dimension of musical expression.


Timbre

A sort of catch-all category, timbre is what makes two notes that have the same pitch and volume distinguishable from each other. For example, a note played on a trumpet has a much different timbre than that same note played on a cello.  Timbre is often referred to as tone color or texture, and characterized by terms such as bright, warm, and harsh.

Varying the timbre of notes is a very effective means of musical expression, as evidenced by many of the flute solos that Ian Anderson of Jethro Tull has recorded, such as his flute solo from “My God.”

On LinnStrument, timbre variations are accomplished very naturally by rolling your fingers forward or backward on the Y-axis. The resulting variation in sound is dependent upon the corresponding feature in the synthesized instrument.

Please take a look at this video of Roger Linn demonstrating on LinnStrument the three dimensions of expression (pitch, volume, and timbre) discussed to this point.

There are, of course, more facets of musical expressions than just varying pitch, volume, and timbre. One of these facets, known as phrasing, is concerned with varying note durations:


Phrasing

One of the most natural ways to express oneself musically is to intuitively vary the duration of notes, shaping the notes in time. This concept is known as musical phrasing, and one of the core ideas is to use “stolen time” (tempo rubato in Italian) from some notes and give it to other notes.

So, domo arigato tempo rubato, portamento, vibrato, pianissimo, sforzando, crescendo, tremolo, timbre, et cetera, for enabling musical expression!

James Weaver
Twitter: @JavaFXpert