Hi. I am assuming you are familiar with digital audio workstations. They have mostly replaced large studio installations used to create produced music. The fundamental model that VST reproduces is that of its namesake, a “virtual studio” where you send sequence data or audio into modules which output audio.
In a physical studio, audio is recorded to tracks. Instruments are things held by humans, or modules which sit in racks. Audio is carried along cables through other modules and into a mixing board, which feeds the tape machine tracks.
In a computer, there are no such limits. When VST was created, its designers realized that the effect modules, which used to sit in racks, could simply be placed directly onto the tracks (which were themselves modeled after the mixer and tape machine configuration in studios) without needing an explicit modular system to wire them together.
At this point, the line between discrete sound modules, tape tracks, and audio sources was removed. However, most audio software packages have not taken any steps to further the move away from the traditional studio model. Perhaps people are slow to change, or the creators of audio software are afraid of models which challenge the existing paradigm. Perhaps they are afraid of looking like an academic research project instead of a useful tool to musicians.
The paradigm adopted by nearly every digital audio workstation is as follows:
A track is an abstract carrier for a signal of sequence data (in the form of discrete messages, usually MIDI or a generalization of such used internally by the sequencer) or a stream of audio (a continuous buffer of sampled wave data.) A clip is an encapsulation of sequence or audio data, which resides in a track or an abstract storage device (such as an object browser or media pool.) A clip in a track signifies what data will be enumerated by the track over a period of time.
Waves and messages in clips, clips in tracks.
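If you like to think in code, here is that paradigm as a rough sketch. Everything in it is hypothetical, my own made-up names, not any sequencer’s actual API:

```python
# A hypothetical sketch of the clip/track paradigm, not any DAW's real API.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class MidiMessage:               # discrete sequence data
    time: float                  # position within the clip
    data: bytes                  # e.g. a note-on message

@dataclass
class AudioBuffer:               # a continuous buffer of sampled wave data
    sample_rate: int
    samples: List[float]

@dataclass
class Clip:                      # encapsulates sequence or audio data
    start: float                 # where the track enumerates it in time
    length: float
    content: Union[List[MidiMessage], AudioBuffer]

@dataclass
class Track:                     # an abstract carrier for a signal
    name: str
    clips: List[Clip] = field(default_factory=list)
```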
Now things diverge somewhat among digital audio workstations. In some, the tracks themselves represent a modular or directed acyclic routing of the contents they enumerate. Cubase, Sonar, REAPER, Ableton Live, Podium, Studio One, Renoise and most other packages follow this concept to some degree, some more powerfully than others. The flow of data through instruments and effects is controlled by the placement and routing of track objects. A track with an instrument plugin and then an effect plugin inside of it represents the flow of data from the clips in the track to the instrument plugin and then to the effect plugin, and finally to the mixer.
The routing may be illustrated as a tree structure of tracks, or a directed acyclic graph of modules with wires, or a matrix of routing points. Though it can take on different visual representations, they are all functionally the same. A song is a tree structure where, as the playhead (time) advances, data is injected into nodes by the tracks that represent them.
Sequence data and audio data is arranged temporally by discrete clips on tracks. Moving and inserting clips changes where in time the data will be injected into the signal tree through the tracks which hold the clips. If you want to logically associate clips together (like if you want them to stay relative to each other when you move one), you must use a grouping function provided by the sequencer interface. In effect, the way in which you may arrange the sequence is physically tied to the routing of the tracks.
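The tree form of this routing can be sketched too. Again the names are invented for illustration; the point is only that each node renders its children, mixes in whatever its own clips inject at the playhead, and passes the result up its plugin chain toward the root:

```python
# A hypothetical sketch of the track-tree routing model, not a real engine.
class Plugin:
    def process(self, signal):
        return signal                   # stub: an instrument or effect transforms this

class TrackNode:
    def __init__(self, name, plugins=(), children=()):
        self.name = name
        self.plugins = list(plugins)    # instrument/effect chain on this track
        self.children = list(children)  # child tracks feeding into this one

    def enumerate_clips(self, playhead):
        return 0.0                      # stub: data injected by this track's clips

    def render(self, playhead):
        # Children render first; their output flows into this node, then
        # through this node's plugin chain, toward the root (the mixer).
        signal = sum(c.render(playhead) for c in self.children)
        signal += self.enumerate_clips(playhead)
        for plugin in self.plugins:
            signal = plugin.process(signal)
        return signal

master = TrackNode("master", children=[TrackNode("drums", plugins=[Plugin()])])
print(master.render(playhead=0.0))      # walks the whole tree for one instant
```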
There is another, less common model. In fact, most people are not aware of it, even those who use the software every day. This is the model that Logic and FL Studio adopt. Though Logic 8 and newer have added features which attempt to hide its existence from the user, this model remains in place behind the scenes.
In Logic, tracks do not represent objects in a graph at all. You can create multiple tracks which all cause a single plugin, with a single routing, to inject data into that routing. What is a track in Logic, then? How do you place audio wave data onto a track in Logic?
To answer this question, we must first look at the part of Logic that fewer and fewer of its users know about: the Environment. The Logic Environment is an editor separate from the rest of the application, and it is a part of every project. It is a modular routing of objects which carry discrete messages and audio streams. If you open the Environment editor, it looks something like Reaktor, Max/MSP, Microsoft Visio, or another data-flow application. When you insert an instrument plugin in Logic, it creates a node in this graph, which you must connect to a mixer or another object with wires. Eventually the signal will flow to the output of your audio interface, where the audio it carries can be heard. Effect plugins have input and output ports for routing audio through them. There are objects for modifying MIDI messages, creating faders which control arbitrary parameters, and more.
What is the difference between this modular routing and that of Cubase, REAPER, and others?
The objects in Logic’s Environment do not represent tracks. You cannot create a track from within the Environment. In other words, the Logic Environment is unable to produce any sound on its own. There is no concept of time in the Environment, only potential signal flow.
To make Logic produce sound, you must create a track in the sequencer.
A track in Logic has no modular routing at all. There is no signal flow between tracks. You cannot route a track to another track, because tracks have no inputs and no concept of receiving data. A track has only a single assignment: any one object in the Environment.
A track in Logic contains clips, like many other sequencers. If you create a track and do not assign it to an object in the Environment [1], then the clips in the track will have no effect on the output of the song.
If you assign the track to an instrument plugin and place a clip which contains MIDI data on the track, then the plugin will receive that MIDI data when the song plays over the clip, and sound generated by the instrument plugin will come out of your speakers or headphones.
If you place a clip of audio data on a track which is assigned to an object that only knows how to work with MIDI, you will not hear the audio. If you place the audio clip on a track which is assigned to the fader on a mixer, you will still hear nothing. You must create an ‘audio object’ in the Environment, connect it to the input on a mixer or other object, and assign the track with the audio clip to it. Then, the audio object will play the audio clip on the track at the appropriate time.
What is the point of this? It seems like a needless complication.
It is true that a track may be assigned to only one object. However, one object need not be associated with only a single track. You can freely create as many tracks as you like, all assigned to the same object. If the object is a plugin, and the tracks all contain MIDI clips, then the plugin will receive the MIDI from all of these clips.
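Sketched the same way (hypothetical names again), the Logic model boils down to this: a track is nothing but clips plus one assignment, and any number of tracks may share that assignment:

```python
# A hypothetical sketch of Logic's track model, not Logic's actual internals.
from dataclasses import dataclass, field
from typing import List

class EnvironmentObject:              # a node in the Environment graph
    def __init__(self, name):
        self.name = name
    def receive(self, event):
        pass                          # an instrument object would synthesize here

NULL_OBJECT = EnvironmentObject("null")   # footnote [1]: it does nothing

@dataclass
class LogicTrack:
    clips: List[object] = field(default_factory=list)
    assignment: EnvironmentObject = NULL_OBJECT   # exactly one object, no routing

# Many tracks, one object: the plugin receives the MIDI from all of them.
piano = EnvironmentObject("piano plugin")
melody = LogicTrack(assignment=piano)
harmony = LogicTrack(assignment=piano)
```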
In Cubase, REAPER, Podium and most others, you can do the same thing by creating tracks which have MIDI sends to a single track with the instrument plugin on it. In REAPER and Podium, you can create the tracks as children of the track which contains the plugin, and you would not need to explicitly use sends to pipe MIDI data across. The signal would flow towards the root of the tree, through its parents.
So far, the two models of our digital audio world are distinct but functionally capable of the same things. In a moment, they will no longer be.
In music recorded from live musicians in a traditional setup, most of the performance and nuance comes from the players themselves. That is to say, the complexity is inherent in the recordings. If a recording is bad, it is recorded again, or parts are dubbed over. Skilled performers can create interest and complexity quickly and naturally. Rock, classical, and many other types of music have been and will continue to be produced this way to good effect. A guitarist can play an emotive and powerful line from his instrument right into a single recording in a single clip.
In much of electronic music, the complexity comes not from a performer with a physical instrument or a controller, but the sequencing in a digital audio workstation (or years ago, sequencer modules.) In a complex song with lots of edits, sequences of data, audio clips, effect automation and other stuff, the complexity stored as discrete resources in the digital audio workstation quickly outgrows what you would typically find in a rock project. A song I have worked on for a month could have several hundred tracks, thousands of edits, and too many routings to count by hand or even understand at once.
Dealing with this complexity becomes important when working on music. A musician wants to stay nimble and creative so he can make music, and the complexity wants to make him a janitor of his own project file in the software.
As the amount of stuff in a project increases, it becomes more and more difficult to work with. Creating a chorus section which is repeated in two places in a song, but not identically, usually means copying the clips from one place to the next, and then changing them as necessary in the second location. [2] Now we have a problem — if the musician later finds something that must be changed in all of the choruses, like a mistaken popping noise from a bad edit, she has to remember, somehow, to change it in both places.
Now imagine this problem, but a thousand times worse, and with every single action you perform in the digital audio workstation. This is what an electronic musician faces when working. He sweats it like a guitarist sweats his fingering and picking.
Most digital audio workstations provide ways to reduce the pain. Grouping clips, grouping tracks, grouping parameters. Takes, templates, savestates, presets. Bouncing, freezing, track-to-track recording, render targets. [3] They all seek to reduce the problems and complexity introduced by associating the expression of a song’s signal routing with the semantics of its sequencing.
Why has Logic, which has been technically inferior to many other digital audio workstations for years, remained in use, often begrudgingly, by so many electronic musicians? Why do several notable popular dance musicians use a tool like FL Studio, which is a toy in most regards, when more stable, technically competent and less juvenile alternatives exist, and for cheaper?
Logic has track clips. Clips which contain tracks.
In Logic, you can create a track which is not assigned to any object, and create a type of clip which, when opened, shows you your project again, but in a new window. If you look at the Environment in this new track clip, it is the same Environment as outside of the clip. Change the Environment in the track clip and it changes outside of the track clip, because it is the same Environment as the rest of your project. But in this track clip, all of your existing tracks are gone. You can create a new track, and assign it to anything in the Environment, because it’s the same Environment. But the tracks outside of the clip are not present. It’s as if you have a new project in your existing project, but they share the same routing. Whatever you put in this track clip will be played just like the rest of your project whenever the playhead reaches it, even though the track that the track clip is on is not assigned to any object, because the tracks within that clip are indeed assigned to objects.
This makes no sense.
What on Earth is the sequencer in Logic?
It is the sequencer. There is no other way to define it, because it follows no model consistent with placing objects physically in a routing. The Environment is a thing which represents some concrete notion of routing. The sequencer is something entirely separate from the Environment, which represents something bizarre, without the rules we are used to dealing with intuitively in the physical world.
I don’t think I could overstate how useful this is. It allows for a real separation of data which is related to the semantics of time (the musical sequence, edits, etc.) and data which is related to signal flow and audio routing (the connections between audio objects, plugins, effects, etc.)
If I have a melody and a harmony which play on separate instruments, I can simply create two tracks next to each other, assign them to the two separate instrument objects wherever they may be in the routing, and put the sequence data clips on the tracks. The tracks can be put inside of a single track clip, so that it contains semantically related musical data, and none of the routing, which is useless complexity when arranging data temporally.
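As a sketch, with the same made-up names as before: a track clip is simply a clip whose content is more tracks, and there is only ever one Environment, which every track at every depth assigns into:

```python
# A hypothetical sketch of Logic's track clips (folder clips).
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicTrack:
    assignment: str                   # stand-in for an Environment object
    clips: List[object] = field(default_factory=list)

@dataclass
class FolderClip:
    # A clip whose content is a new set of tracks. Those tracks assign
    # into the same single Environment as everything else: a folder clip
    # nests the sequencer, never the routing. A FolderClip can itself sit
    # in a track's clip list, so the nesting is unbounded.
    start: float
    tracks: List[LogicTrack] = field(default_factory=list)

# The melody/harmony example: two tracks, assigned to two instrument
# objects wherever they sit in the routing, wrapped up as one unit.
verse = FolderClip(start=0.0, tracks=[
    LogicTrack(assignment="melody instrument"),
    LogicTrack(assignment="harmony instrument"),
])
```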
In FL Studio, replace Logic’s track (folder) clips with patterns, and you have a similar model. The routing is set up in the mixer and channel editor, and patterns contain the sequence data. Two patterns or more can be played at the same time in the playlist editor (the thing which holds patterns), and can contain any arbitrary sequence data. Though they cannot be nested like in Logic, even this incomplete version is enough to allow for very fast editing and arranging of electronic music.
Many people complain that FL Studio’s pattern/playlist concept makes no sense. They are correct. It does not make any sense. That is why it is powerful.
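A sketch of that model, with invented names: patterns hold nothing but sequence data, and the playlist is free to stack them in time:

```python
# A hypothetical sketch of FL Studio's pattern/playlist model.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Pattern:                        # sequence data only; no routing, no nesting
    events: List[object] = field(default_factory=list)

@dataclass
class Playlist:
    # (bar, pattern) placements. Entries may freely overlap in time,
    # so two or more patterns can sound at once.
    entries: List[Tuple[float, Pattern]] = field(default_factory=list)

drums, lead = Pattern(), Pattern()
song = Playlist(entries=[(0.0, drums), (0.0, lead), (8.0, drums)])
```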
In Cubase, REAPER etc., a song is the sequence data itself in the routing.
What is a song in Logic and FL Studio? It is the Cartesian product of the sequencer and the routing data.
If Cubase, REAPER, Sonar, Podium, Live and Studio One allow you to build a song in one dimension, then Logic and FL Studio allow you to build in two dimensions.
[1] Rather, you assign it to the null object, which does nothing, as all tracks must be assigned to one object.
[2] If you are a programmer, this should have made you cringe.
[3] To their credit, some sequencers (Cubase and Samplitude in particular) have gotten quite good at this, at the price of nearly incomprehensible feature bloat.
(I am cross-posting this to the REAPER forums.)
That’s a long read. :)
I don’t know either of those two programs inside-out, so I can’t comment much on it, apart from these two things:
@tumult wrote:
In Cubase, REAPER, Podium and most others, you can do the same thing by creating tracks which have MIDI sends to a single track with the instrument plugin on it. In REAPER and Podium, you can create the tracks as children of the track which contains the plugin, and you would not need to explicitly use sends to pipe MIDI data across. The signal would flow towards the root of the tree, through its parents.
It’s the other way around in Podium, though – the signal flows upwards, so MIDI or audio data does not get passed from parent to child. But some kind of feature to easily layer synths, or otherwise send MIDI data to multiple tracks has been requested.
@tumult wrote:
It allows for a real separation of data which is related to the semantics of time (the musical sequence, edits, etc.) and data which is related to signal flow and audio routing (the connections between audio objects, plugins, effects, etc.)
As the amount of stuff in a project increases, it becomes more and more difficult to work with. Creating a chorus section which is repeated in two places in a song, but not identically, usually means copying the clips from one place to the next, and then changing them as necessary in the second location. [2] Now we have a problem — if the musician later finds something that must be changed in all of the choruses, like a mistaken popping noise from a bad edit, she has to remember, somehow, to change it in both places.
How do these observations relate? You can separate devices (e.g., routings, instruments, effects) from events (e.g., MIDI, automation) in many programs, like Energy XT or Mu Lab, but how would that help with the “bad edit” problem… Usually you’d have linked events (phantom copies in Podium), but if every chorus part should be a little different, I don’t see a way around keeping separate events.
Could be great to have a more pattern-oriented mode in Podium, like we just discussed in another thread.
@thcilnnahoj wrote:
It’s the other way around in Podium, though – the signal flows upwards, so MIDI or audio data does not get passed from parent to child. But some kind of feature to easily layer synths, or otherwise send MIDI data to multiple tracks has been requested.
Yes, that’s what I was implying :) It flows from the children through their parents, towards the root, just like Podium.
@thcilnnahoj wrote:
How do these observations relate? You can separate devices (e.g., routings, instruments, effects) from events (e.g., MIDI, automation) in many programs, like Energy XT or Mu Lab, but how would that help with the “bad edit” problem… Usually you’d have linked events (phantom copies in Podium), but if every chorus part should be a little different, I don’t see a way around keeping separate events.
Because sequences aren’t tied physically to the track layout, you can do things that do not make intuitive sense when laying them out. Let me give you an example. Let’s say I have two chorus parts that are slightly different, like I mentioned before. In Podium, because I have to have the sequence events (audio clips, MIDI clips, automation) on the actual tracks that need to play them, there is no way to encapsulate the entire chorus into one clip. I have to group all of the related clips together (which may be split up across lots of tracks) and then copy all of them as individual copies. Even if I have made a grouping out of the clips in Podium, I am not copying the chorus as a single object; I have to copy it as a collection of individual objects. I can make phantom copies of them individually, but I can’t make a phantom copy of all of the clips at once as a single object. So if there is only one clip in the second copy of the chorus that I am changing, like a single drum fill, I have to copy the entire chorus and then change the one clip with the drum fill. Now I have to worry about keeping my two copies of the chorus in sync.
In Logic, I can put the entire chorus (even though it’s made up of multiple clips across multiple, unrelated tracks) inside of a single clip, except for the one part that will be different between the two copies, the drum fill. So picture this: the entire chorus inside of one clip, and the two drum fills that are different outside of that clip in their own separate clip. Now I can make a phantom copy of the entire chorus, except for the drum fill, which is exactly what I want.
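Maybe a bit of pseudo-Python makes it clearer (all names made up): a phantom copy is just a second placement pointing at the same object.

```python
# A hypothetical sketch: phantom copies as shared references.
chorus = {"tracks": ["synths", "drums"], "edits": []}    # one folder clip object

arrangement = [
    {"bar": 16, "content": chorus},          # first chorus
    {"bar": 48, "content": chorus},          # phantom copy: the same object
    {"bar": 30, "content": "drum fill A"},   # unique fill, outside the folder
    {"bar": 62, "content": "drum fill B"},   # a different fill, also outside
]

chorus["edits"].append("fix bad edit")       # change it once...
assert arrangement[0]["content"] is arrangement[1]["content"]   # ...both hear it
```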
Does that make sense? It’s kind of hard to explain with words.
Thanks for the input, tumult.
@tumult wrote:
In Logic, I can put the entire chorus (even though it’s made up of multiple clips across multiple, unrelated tracks) inside of a single clip, except for the one part that will be different between the two copies, the drum fill. So picture this: the entire chorus inside of one clip, and the two drum fills that are different outside of that clip in their own separate clip. Now I can make a phantom copy of the entire chorus, except for the drum fill, which is exactly what I want.
Does that make sense? It’s kind of hard to explain with words.
I think I understand. Basically you want a phantom copy feature for a selection of track events within a section of the timeline.
As I understand it: The feature you talk about here does not actually reduce the number of tracks needed in the arrangement. Instead it requires an additional track for placing the folder clips/events that will be the link to the other layer of clips. It is not a tool to play layered sequences, as overlapping sequences will cut each other off. It is an organizational tool for syncing the arrangement of clips within a section on the timeline, and having changes made in one section appear automatically in phantom sections.
About the track overview: If your composition style involves having hundreds of tracks that each contain microscopic components, then I think the Podium track tags and nestable group tracks are a helpful tool for keeping an overview of the tracks.
To make something like this fit into the Podium track structure, I would experiment with extending the use of marker events, instead of adding a new folder event type. I could add a feature to link marker events together, so that any edits made within one marker section will automatically be replicated in the other linked marker sections. For example: if you have four repeated sections, each divided with a marker at the start of the section, you’d select all four markers, use the link command (button or menu), and then any edit to content that is identical in the other sections will be replicated. This means edits to phantom sequence events (the actual sequence event, and not just the contents of the phantom sequence) and additions of new events will be replicated. Changes made to unique events that are not part of the other sections will not be replicated. That should allow you to make synced edits to section repetitions, while still having unique fills in the individual sections that will not be affected by the edits.
That could actually work quite well, I think :-k . And it would not require changes to the Podium engine in any way. This is entirely a UI tool for managing repeated patterns on the timeline.
Oh, and while I remember it. Speaking about markers:
Some day I’ll extend the markers to have a collapse button, like the circular one used on group tracks. That can be used to collapse part of the timeline that you don’t work on currently.
@Zynewave wrote:
I think I understand. Basically you want a phantom copy feature for a selection of track events within a section of the timeline.
As I understand it: The feature you talk about here does not actually reduce the number of tracks needed in the arrangement. Instead it requires an additional track for placing the folder clips/events that will be the link to the other layer of clips. It is not a tool to play layered sequences, as overlapping sequences will cut each other off. It is an organizational tool for syncing the arrangement of clips within a section on the timeline, and having changes made in one section appear automatically in phantom sections.
Yes! That is exactly what it does. Because Logic tracks actually do not have any signal flow (they only send instructions to the objects in the Environment, which do have signal flow), you can create these track folders which hold a whole new set of tracks within them. So you can use it to hide parts of your arrangement from other parts, or make phantom copies of parts from one place to another. But the actual signal-carrying objects might not support polyphony (like audio objects, which only play one audio clip at a time), so if you end up playing two folder clips at once where there are tracks sending to the same objects at the same time, then events will start getting cut off.
@Zynewave wrote:
To make something like this fit into the Podium track structure, I would experiment with extending the use of marker events, instead of adding a new folder event type…
This sounds cool. I think I understand how it works. When editing inside of linked marker regions, any edits to events that line up identically with events in the other regions will have their edits propagated. That would definitely go a long way towards making it easier to deal with complex stuff.
Would it be possible to associate events to markers, so that when I click a marker, it automatically selects some events in the timeline? One of the benefits of clip folders from Logic (and regular patterns from FL Studio) is that I can treat them as a single logical unit when arranging, without having to worry about making individual selections. So if I have a phrase I need to copy around a bunch, but it’s made up of stuff on a bunch of different tracks, I can just grab the clip folder that contains it and phantom copy it around as much as I need to without having to worry about what’s actually inside of it.
You would have to be able to overlap marker regions for this to work, though.
Actually, here is what I am picturing now: what if there was a separate mini-timeline for marker regions? So for example, if I have a chorus made up of synths and drums, and I want to separate the synths and drums into separate whole units that I can move around or phantom copy, I would make two overlapping marker regions over the chorus. Then, I would select all of the synth clips in the chorus and link them to one of the markers. I would do the same for the drum clips, and link them to the other marker. Now, if I move either of the marker regions, the clips that were linked to them move as well. And I can phantom-copy the marker regions. This also has the added benefit of making it visibly obvious when you overlap two regions which have conflicting event data (trying to play two MIDI leads into an instrument, for example). In Logic and FL Studio, it can be hard to tell where two conflicting events are located.
So that sounds a lot harder to implement. :) You’d need a marker editor lane that expands to show marker regions that are stacked on top of each other.
Ok, sorry for ranting. :) But I think you are on to something with using markers as logical units to work with. I think it has the potential to match the weird abstractions you can make with Logic clip folders.
And I think your idea for linked markers is great already. I know I could do a lot of stuff with it, and it would cover a lot of the kinds of edits I already do.
Also, here’s what the current project I’m working on looks like in REAPER:
http://dl.dropbox.com/u/2316004/reaproj.PNG
I pretty much have to copy and paste everything around manually. It’s hard to work with.
@tumult wrote:
Would it be possible to associate events to markers, so that when I click a marker, it automatically selects some events in the timeline?
You can do that already with the bundle events command. Select all events you want linked (you can include the marker event if you like), and then “Bundle events” (Ctrl+N). Dragging any of the events in the bundle will then drag all the bundled events, and you can use the copy drag to create phantom copies. The dropped copy will become a new bundle. Use Ctrl+N again to unbundle events.
The bundling feature also allows you to create separate bundles for handling synths and drums, as in your example. Each event can only be included in a single bundle though, so the marker event cannot be included in more than one bundle.
To further explain the idea with marker linking:
Say you have 4 copies of a 4-bar section, each with a marker event at the start. You can link the 4 marker events to sync the 4-bar patterns. You could however instead link only the first and third marker, and by doing so sync two 8-bar sections. In other words, the length of the synced sections is not determined by the distance between markers, but by the minimum distance between linked markers.
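In rough pseudo-code (hypothetical, nothing of this is implemented), the rule would be:

```python
# Hypothetical sketch only; nothing of this is implemented in Podium.
def synced_length(linked_marker_positions):
    """The replicated section length: the minimum distance between
    the linked markers, not the distance between all markers."""
    pts = sorted(linked_marker_positions)
    return min(b - a for a, b in zip(pts, pts[1:]))

# Markers at the start of four 4-bar sections, at bars 0, 4, 8 and 12:
print(synced_length([0, 4, 8, 12]))  # -> 4: all four linked, 4-bar sections
print(synced_length([0, 8]))         # -> 8: first and third linked, 8-bar sections
```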
Yes, bundling is helpful, like grouping. But I cannot phantom copy an entire bundle at once, only the individual pieces of it. If I phantom copy a bundle, I cannot edit the second copy and have it apply to the first, since it individually phantom-copied the bundle’s contents, not the bundle as a whole (you already know this though.)
http://forum.cockos.com/showpost.php?p=509232&postcount=52
This person made an insightful post.
@tumult wrote:
@thcilnnahoj wrote:
It’s the other way around in Podium, though – the signal flows upwards, so MIDI or audio data does not get passed from parent to child. But some kind of feature to easily layer synths, or otherwise send MIDI data to multiple tracks has been requested.
Yes, that’s what I was implying :) It flows from the children through their parents, towards the root, just like Podium.
Ah, yes, I misread. You wrote about feeding multiple MIDI events to one instrument – I thought it was about sending MIDI data from one track to multiple instruments, which can’t be done currently.
@tumult wrote:
In Logic, I can put the entire chorus (even though it’s made up of multiple clips across multiple, unrelated tracks) inside of a single clip, except for the one part that will be different between the two copies, the drum fill. So picture this: the entire chorus inside of one clip, and the two drum fills that are different outside of that clip in their own separate clip. Now I can make a phantom copy of the entire chorus, except for the drum fill, which is exactly what I want.
Does that make sense? It’s kind of hard to explain with words.
Okay, I think I get it too. So I guess Logic would display this chorus as a single meta-event, which you can then “open” to edit its contents? All the events inside it (if they’re phantom copies) would still remain linked with their counterparts on the normal arrangement level. Plus, the meta-events themselves can be phantom copied… sounds good.
@Zynewave wrote:
Some day I’ll extend the markers to have a collapse button, like the circular one used on group tracks. That can be used to collapse part of the timeline that you don’t work on currently.
I hope Some Day arrives soon. 8)
@tumult wrote:
http://www.propellerheads.se/reason5/index.cfm?fuseaction=get_article&article=blocks
Sound familiar?
Yes. It has already been discussed over in this topic:
http://www.zynewave.com/forum/viewtopic.php?t=2312&postdays=0&postorder=asc&start=15