Frits, it’s your own fault for mentioning it ;)! So now I want to share my suggestions for audio-stretching. I don’t know how practical they would be to implement, or how useful they would be to users, but I still wanted to mention them.
– Pitch-shifting
– Audio-stretching based on markers
First of all, this might go without saying, but it would be good to have not only audio-stretching but also pitch-shifting. I know that some people consider these the same, or something like that… I don’t really know the mechanics, but my point is that it would be good to be able not only to stretch audio out or shrink it, but also to keep it the same length and shift its pitch.
Audio-stretching would be more useful to me if markers could be attached to audio events and moved around as well, instead of just the start and end of the audio event. A marker would effectively be the end of one segment and the start of the next, but would allow both to be moved simultaneously (more or less).
Why? Let’s say a vocal track was put in, but the timing wasn’t quite right. Apart from the general wisdom of just rerecording it :P, placing markers at certain points would mean that the ends of the audio event could stay in the same position, while one side is stretched and the other shrunk. If you insert lots of markers, you can see how you could shift the timing of individual portions of a track!
I suppose a visual analogy would be a gradient? In Gimp or Photoshop, you can set multiple stops along a gradient and move them, which stretches the gradient between them.
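To make the gradient analogy concrete: the markers define a piecewise-linear time map, where each marker pins an original position to a new position, and everything between neighbouring markers gets a constant stretch ratio. This is just a sketch of the idea with made-up names, not anything from Podium itself:

```python
# Sketch: map an original sample position to its stretched position,
# given markers as (original_pos, new_pos) pairs. Hypothetical model
# for illustration, not Podium's actual implementation.

def warp_position(pos, markers):
    """Piecewise-linear time map defined by marker pairs.

    markers: sorted list of (orig, new) tuples; the first and last
    markers pin the event start and end in place.
    """
    for (o0, n0), (o1, n1) in zip(markers, markers[1:]):
        if o0 <= pos <= o1:
            ratio = (n1 - n0) / (o1 - o0)  # local stretch ratio
            return n0 + (pos - o0) * ratio
    raise ValueError("position outside the event")

# Event start/end stay fixed; a marker at sample 1000 is dragged to
# 1200, so the first half is stretched and the second half shrunk.
markers = [(0, 0), (1000, 1200), (2000, 2000)]
print(warp_position(500, markers))   # 600.0 (stretched by 1.2x)
print(warp_position(1500, markers))  # 1600.0 (shrunk by 0.8x)
```

Just like moving a gradient stop, dragging one marker only changes the ratios of the two segments it touches.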
Does that make sense? I can try harder to explain it if not. Anyway, I think this would be useful! I’d love to hear others’ thoughts on these two points.
@druid wrote:
Does that make sense?
Yes, I think this feature will be expected. I believe most other hosts have this feature as well.
What I have been thinking about is whether it’s best to implement this as markers inside the sound, or as multiple sliced sound events on the track. When the mouse is placed over one of these slices, a new drag box (in addition to the existing four drag boxes in each corner) could be shown in the middle of two joined events. Dragging this would resize/stretch both events. This would also work for note and curve sequence events.
I’ve added something atop my post here (not the original one though) since the other stuff is longer and possibly not useful.
From a usability perspective, I think it would be better to move the split in the middle, meaning that no new functions have to be memorised by the user. It seems more intuitive to me. Markers inside the sound event are something new to learn, after all.
I’m not sure if there would be advantages between one way or another for sound quality or anything else though, and on that note, my original post:
Obviously it would be easier to split the sound files (I mean within the engine, not how the user handles it), but would there be an audio advantage in not splitting them internally and treating them separately, and instead trying to calculate it as one smooth process? Is that even possible? What I mean is, if they’re treated as separate, couldn’t there be cases where the last sample of one piece doesn’t match up with the first sample of the next, so the audio wouldn’t continue seamlessly?
If that is the case, and you programmed it to make sure it was seamless, it would make more sense, I think, for markers inside the sound. Otherwise, if internally it was being split anyway, there’s nothing wrong with your idea about stretching the middle split at all.
I’m not sure if this makes sense as I think, practically speaking, my concern is about comparison of internal function to visual application, but that may have no bearing on general usability.
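For what it’s worth, one generic way the seam problem gets papered over (not necessarily what any particular engine does) is a short equal-gain crossfade where two independently stretched segments join, so a one-sample mismatch at the boundary doesn’t produce an audible click:

```python
# Sketch: blend the end of one stretched segment into the start of the
# next with a short linear crossfade. Generic audio technique for
# illustration, not tied to any particular stretching engine.

def join_with_crossfade(a, b, fade=32):
    """Concatenate sample lists a and b, crossfading over `fade` samples."""
    fade = min(fade, len(a), len(b))
    out = a[:len(a) - fade]
    for i in range(fade):
        t = (i + 1) / (fade + 1)  # ramp from 0 towards 1
        out.append(a[len(a) - fade + i] * (1 - t) + b[i] * t)
    out.extend(b[fade:])
    return out

# Two segments whose boundary samples don't line up at all:
left = [0.0] * 100
right = [1.0] * 100
joined = join_with_crossfade(left, right, fade=4)
print(len(joined))  # 196: the fade region overlaps 4 samples
```

The cost is that the fade region overlaps the two segments, so the result is slightly shorter than simple concatenation.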
I think I’d like it better if it was done by setting ‘stretch markers’ in the sound editor rather than with separate sequences, mostly because you wouldn’t have to zoom in on the whole arrangement for a little delicate editing.
Live has done it like this since version 8, though I haven’t tried it. I don’t know about other hosts.
But I’m not averse to the idea Frits suggested either.
Are you planning to include time-stretching now (I imagine that’d be a pretty huge task) or are you just collecting ideas? :)
@druid wrote:
Obviously it would be easier to split the sound files (I mean within the engine, not how the user handles it), but would there be an audio advantage in not splitting them internally and treating them separately, and instead trying to calculate it as one smooth process? Is that even possible? What I mean is, if they’re treated as separate, couldn’t there be cases where the last sample of one piece doesn’t match up with the first sample of the next, so the audio wouldn’t continue seamlessly?
It depends on the type of time-stretching algo applied. Some algos do not allow sample-precise repositioning, because the algo processes the audio in small, variable-sized buffers of samples. So I guess that would make the splitting of sound events on tracks difficult to implement without getting gaps in the sound. Using markers inside a sound allows for a rough, estimated, non-sample-precise stretching.
One advantage with the splitting of events on tracks, is that you can use the same sound/loop several places and stretch it differently. If the stretching is done with markers inside the sound, then that stretching will be applied whenever you use the sound.
@thcilnnahoj wrote:
Are you planning to include time-stretching now (I imagine that’d be a pretty huge task) or are you just collecting ideas? :)
Just collecting ideas.
Weeelllll…. I don’t know. :) So are the algorithms that allow markers in audio (in order to stretch sections of the audio file differently) only good for rough estimates, or can they be production quality?
I’m not familiar with the algorithms out there, nor the practicality and quality of them, so I can’t really comment further…
@druid wrote:
Weeelllll…. I don’t know. :) So are the algorithms that allow markers in audio (in order to stretch sections of the audio file differently) only good for rough estimates, or can they be production quality?
Often the best quality is gained if you allow rough stretching to markers. Note that it’s often only a few milliseconds of variance, so it’s barely audible. If you try to force the stretching to sample-precise boundaries, you can get reduced quality.
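A toy illustration of why marker positions end up only roughly honoured: if the algorithm works in fixed analysis frames, a marker can only land on a frame boundary. The frame size and names below are made up for illustration:

```python
# Sketch: many stretching algorithms process audio in analysis frames,
# so a marker position can only be honoured to the nearest frame
# boundary. Frame size and names are hypothetical.

SAMPLE_RATE = 44100
FRAME_SIZE = 1024  # hypothetical analysis frame length in samples

def snap_to_frame(sample_pos, frame_size=FRAME_SIZE):
    """Round a marker position to the nearest frame boundary."""
    return round(sample_pos / frame_size) * frame_size

marker = 44100  # the user asks for a marker at exactly 1.0 s
snapped = snap_to_frame(marker)
error_ms = abs(snapped - marker) / SAMPLE_RATE * 1000
print(snapped, f"{error_ms:.2f} ms")  # 44032, about 1.54 ms early
```

With a 1024-sample frame at 44.1 kHz, the worst-case error is about half a frame, roughly 12 ms, and typically much less, which matches the "few milliseconds, barely audible" point above.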
Frits,
I really appreciate all the work you have been doing on the application; it really is becoming easier to work with and looking great! I say this because the ever-discontent masses must wear your good nature thin. Of course, I am bumping this thread to wear another layer off. The activity that takes me out of Podium is correcting the timing of performances using the stretching functionality of another application. It would be a great benefit to me to have this capability in Podium.
Thanks for your time,
Paul
@pernst wrote:
Frits,
I really appreciate all the work you have been doing on the application; it really is becoming easier to work with and looking great! I say this because the ever-discontent masses must wear your good nature thin. Of course, I am bumping this thread to wear another layer off. The activity that takes me out of Podium is correcting the timing of performances using the stretching functionality of another application. It would be a great benefit to me to have this capability in Podium.
Thanks for your time,
Paul
Exquisite wording :)
I’m working my way towards it. Before I start on time-stretching, I’d like to finish some loose ends I have with the track management. I can’t give a time estimate yet.