Redesigning how to perform using Ableton Live
Both projects are focused on live improvisation: I like to let Shoeg gigs evolve with my momentum and the surrounding mood, and Ensemble gigs with the feedback from the other two members, Andreu and Anna.
For the last three years, I've played all my gigs with most of the sounds coming from my laptop, generated in the Ableton Live DAW (I have been using this setup since version 6, released in 2007). Ableton Live is an excellent tool for live sets, and this ability is what sets it apart from similar products (remember: "Live"). Its Session View (matched only by Bitwig's equivalent) is systematic and easy to use: it hosts samples, virtual synths, inputs and outputs in tracks, laid out in a matrix much like a spreadsheet, where every cell houses a clip (an audio or MIDI sequence).
On the other hand, managing ('playing' doesn't quite fit here) that many buttons and sliders in Ableton Live while performing is almost impossible with the laptop's own tools, a trackpad and a keyboard. You can easily verify this by trying to modify two sliders or activate several clips at the same time, only to realize that the mouse pointer can *point at* just one screen pixel at a time. This is why control surfaces emerged: hardware tools like Push or Launchpad were born to emulate the Session View grid. Combined with other devices doing the same for faders and knobs, hardware control surfaces let the user modify many live set parameters simultaneously, in real time, with their fingers (something any analog synth has allowed since the pipe organs of the 15th century).
The next technological leap is apps for touch screens (tablets, smartphones, phablets and other mutations), which push past the possibilities of hardware control surfaces by letting the user build a tailored environment. These clearly have some limitations, mainly because some gestures don't work properly on tactile surfaces (rotary controls, for instance; there are workarounds, but they bring further complications), but also because of the tablets' inherent CPU and memory constraints. The big winner in this field is Apple: with the iPad it has established a platform offering the simplicity, flexibility and power needed to build apps like Conductr or TouchAble.
That being said, CPU and memory limitations exist in laptops too. Even though they're more powerful than tablets, they can only manage a finite amount of virtual synth polyphony, effects and samples at once (running at the same time and loaded into RAM, which allows real-time access to these items… more or less). Nothing is more treacherous in a live set than hitting your laptop's CPU limit and having to take the resulting crackles and glitches with a poker face. Lastly, I want to point out another, less obvious limitation related to the screen (especially noticeable on laptops): having too many clips and tracks means lots of arranging work and a complex live set map to dive through, which ends up restricting certain live choices.
"Less is more" is a good principle to follow, and limitations spur us creatively in some perverse ways, but achieving your studio sound, with variations, in a live set is by no means a far-fetched scenario with today's computers. We need tools to organize live sets in DAWs in wiser ways and to optimize resources.
Recycling ideas from spreadsheets and from tools for live video mixing and music editing.
Subdivisions have always been helpful for storytelling: scenes, chapters, movements… Without any intention of imposing structures, live sets can also be divided into parts that aren't necessarily concatenated. We could implement this idea in Ableton Live by enabling multi-session projects, with one Session View corresponding to each part. This concept is not only suitable for live sets, but also for studio projects where you need to separate elements and split the project into parts. Accordingly, a folder system for bigger/longer sections would also be useful.
In short, I propose organizing Ableton Live projects with a two-level subdivision: folders, corresponding to big/long sections or songs, and parts, corresponding to shorter sections.
In Ableton Live, each part would be laid out in the UI like the sheets of an Excel workbook. We go from one very complex live set to a subdivided live set with fewer elements on screen.
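The two-level organization could be sketched as a tiny data model (all names and fields here are hypothetical, just to make the idea concrete; no such structure exists in Ableton Live today):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Part:
    """A short section with its own Session View-style grid of clips."""
    name: str
    clips: List[str] = field(default_factory=list)

@dataclass
class Folder:
    """A big/long section or song, grouping several parts."""
    name: str
    parts: List[Part] = field(default_factory=list)

# A live set becomes a list of folders instead of one giant clip matrix.
live_set = [
    Folder("Song A", [
        Part("Intro", ["pad", "kick"]),
        Part("Drop", ["bass", "kick", "lead"]),
    ]),
    Folder("Song B", [Part("Ambient", ["field-rec"])]),
]

# Only the current part's elements need to be on screen at any moment.
current = live_set[0].parts[1]
print(current.name, len(current.clips))  # Drop 3
```

Each `Part` is the analogue of one sheet in the workbook metaphor: small enough to fit on a single screen.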
In my live sets I usually use an iPad running Conductr, which lets me create control modules for each part. It's a useful, quick and light way to play live. Extending this idea, we can add a second tablet to the system, enabling features built around the project organization described above.
The second tablet hosts the preview (following the concept of previews in a live video mixer), where we can select the next part in our set and check information related to the parts, a cue monitor and transition settings. Thus, the left tablet holds the current part (present), and the right one, the preview (future). When a transition is performed, the left tablet updates its controls to those of the incoming part.
This way of performing forces Ableton to load and unload channels and plugins from memory while playing, so it's important to know how much CPU power the next part will require and how long it will take for the loading part to be ready. Other helpful data could be the BPM, time signature or key (metadata?) associated with each part, in order to execute an interesting transition between parts. It also requires some preparation work on the parts beforehand.
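The preview's readiness check could look something like this (a minimal sketch; the metadata fields, the CPU estimates and the headroom threshold are all assumptions of mine, not anything Ableton exposes):

```python
from dataclasses import dataclass

@dataclass
class PartInfo:
    """Hypothetical per-part metadata shown on the preview tablet."""
    name: str
    bpm: float
    signature: str
    key: str
    est_cpu_pct: float   # estimated CPU load while this part plays
    load_seconds: float  # estimated time to load its devices and samples

def can_transition(current_cpu_pct: float, nxt: PartInfo,
                   headroom_pct: float = 90.0) -> bool:
    """During the crossover both parts run at once, so their combined
    CPU estimate must stay under the chosen headroom to avoid glitches."""
    return current_cpu_pct + nxt.est_cpu_pct <= headroom_pct

nxt = PartInfo("Drop", bpm=128, signature="4/4", key="F minor",
               est_cpu_pct=35, load_seconds=4.2)
print(can_transition(40, nxt))  # True: 40 + 35 <= 90
```

The same metadata (BPM, signature, key) is what would let the performer judge whether a transition will sound interesting, not just whether the machine can survive it.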
The possible transitions have also been inherited from the field of video editing: fade/crossfade, cut (quantized or not) and link (quantized or not), which consists of inserting a clip between parts (as happens at 1:51 in Oneohtrix Point Never's Americans).
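The transition vocabulary is small enough to enumerate (again a hypothetical sketch of how the tool might model it, with quantization as an option on cuts and links):

```python
from enum import Enum

class Transition(Enum):
    """Transition types borrowed from video editing."""
    FADE = "fade/crossfade"
    CUT = "cut"    # jump straight to the next part
    LINK = "link"  # insert a bridging clip between parts

def describe(t: Transition, quantized: bool = False) -> str:
    """Label a transition; only cuts and links can be quantized."""
    q = " (quantized)" if quantized and t in (Transition.CUT, Transition.LINK) else ""
    return t.value + q

print(describe(Transition.CUT, quantized=True))  # cut (quantized)
```

On the preview tablet, this choice would sit next to the cue monitor as one of the transition settings.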
To sum up, I propose an Ableton Live tool that lets users arrange their live sets wisely while leaving the door open to altering their structure in real time.
While writing this text, some ideas were left by the wayside, like reusing common channels across different parts (saving redundant memory loads), using tablet CPUs (as DSPs or for other features) to save laptop CPU power, or even more bizarre ideas, like implementing content-aware scaling to extend or shorten parts quickly.
Revision of the text by Zach Campbell