Can you add a small attack fade in and release fade out to the beginning and end of the slices to correct clicks?
Crossfading is applied to the detected build-up of the transient. However, a minimum crossfade time would be good to avoid clicks with abrupt transients. Maybe I’ll add a “Minimum slice crossfade time” user setting to the beat slice dialog.
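The exact fade shape and length Podium uses aren’t specified above, but the idea of a short attack/release fade per slice can be sketched roughly like this (the `fade_ms` argument stands in for the hypothetical “Minimum slice crossfade time” setting):

```python
def apply_slice_fades(slice_samples, sample_rate, fade_ms=2.0):
    """Apply short linear fade-in and fade-out to a slice to avoid clicks.

    fade_ms is a stand-in for the proposed "Minimum slice crossfade time"
    setting; the actual fade curve Podium applies is not documented here.
    """
    n = len(slice_samples)
    # Never let the two fades overlap, even on very short slices.
    fade_len = min(int(sample_rate * fade_ms / 1000.0), n // 2)
    out = list(slice_samples)
    for i in range(fade_len):
        gain = i / fade_len          # ramps from 0.0 toward 1.0
        out[i] *= gain               # attack fade-in at the start
        out[n - 1 - i] *= gain       # release fade-out at the end
    return out
```

A linear ramp is the simplest choice; an equal-power (sine/cosine) curve is another common option when slices crossfade into each other.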
Testing the waters here as I don’t know how far you want to go with it but… might there be some way to rearrange slices as well?
The individual slices are just ordinary sound events, so you can use the usual tools and commands for arranging sound events. I don’t intend to add any special editing commands for beat sliced sounds. Not at this point at least.
Yes. Just locate the already imported plugin in the device list on the project start page. Right click, and select “New Instance” for each additional instance you want.
Please, Frits, could you change that in the following releases?
I’ve put it on my notepad, so I’ll look at it when I’m done with the major audio stuff.
One of the next major features that I’ll be looking at is MIDI remote controller support. This includes the possibility to start/stop Podium playback with a user-definable MIDI message. Hopefully I’ll find time to look at this some time this summer.
No. But I may add that feature in the future.
@acousmod wrote:
Concerning the quality of time-stretching algos, it seems that there is a free Dirac LE version available.
See this thread :
http://www.cockos.com/forum/showthread.php?t=6272&highlight=diracle
Perhaps it could be another option for non-realtime rendering?
I did look at Dirac LE a long time ago, but it only supports 44.1 or 48 kHz samplerates, and a maximum of 2 channels, which are not even phase-locked. The licensed Dirac versions are not cheap either.
The audio input/output mapping objects will not automatically adjust to changes in interface selection. I could add a menu command to the project page that will modify the audio mapping objects to the current interface selection. Something to do after time-stretching is implemented. For the time being you’ll have to manually create new mapping objects for the additional channels in your Echo Audio Fire. All you need to edit in the mapping properties is the name and the starting channel number.
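Podium’s internal representation of mapping objects isn’t shown here, but the two fields mentioned above (name and starting channel number) are enough to sketch what creating mappings for a larger interface amounts to. All names and types below are hypothetical illustrations, not Podium’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class ChannelMapping:
    """Hypothetical stand-in for an audio input/output mapping object."""
    name: str
    start_channel: int       # first hardware channel this mapping uses
    num_channels: int = 2    # stereo pair by default

def mappings_for_interface(total_channels, width=2):
    """Create stereo-pair mappings covering all channels of an interface."""
    return [
        ChannelMapping(f"In {c + 1}/{c + 2}", c, width)
        for c in range(0, total_channels, width)
    ]
```

The point of the proposed menu command would be to regenerate such a list automatically from the current interface selection instead of editing each mapping by hand.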
@darcyb62 wrote:
If you have the ability to select the level of quality, latency should be less of an issue. As an example, if you are doing a quick sketch and don’t need the quality, turn the quality down and go for the reduced latency. I could see this as being useful when laying down the initial tracks. When mixing, latency is less of an issue and you can turn the quality back up.
What I have done in zPitch now is to check whether the plugin is running in offline processing mode (bounce render), and if so use the best quality possible. Latency is not an issue when rendering. That way the plugin can still be used in realtime with a relatively low latency.
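The decision described above boils down to one branch per render pass. This is a minimal sketch, not zPitch’s actual code; `is_offline` stands in for whatever offline-mode query the host API provides, and the quality levels are made-up values:

```python
def select_quality(is_offline, realtime_quality=2, offline_quality=5):
    """Pick the processing quality for the current render pass.

    Offline bounce: latency is irrelevant, so use the best quality.
    Realtime playback: keep quality (and thus latency) low.
    Both quality values are illustrative, not real zPitch settings.
    """
    return offline_quality if is_offline else realtime_quality
```

The benefit is that the user never has to touch a quality knob: the same plugin instance is low-latency while tracking and full-quality in the bounced result.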
@Zynewave wrote:
I don’t intend to add more parameters to zPitch…
I take that back. I’ve now added a mix parameter, which controls the ratio of dry/wet signal. Set it at ~50% and you can sing harmony with yourself, with the pitch parameter controlling the harmony voice.
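A dry/wet mix parameter is typically a simple linear blend of the unprocessed and processed signals. A minimal sketch (the parameter name and scaling are assumptions, not zPitch internals):

```python
def mix_dry_wet(dry, wet, mix):
    """Blend the dry (original) and wet (pitch-shifted) signals.

    mix = 0.0 -> only dry, mix = 1.0 -> only wet.
    mix = 0.5 -> equal blend: the original voice plus the
    pitch-shifted copy, i.e. singing harmony with yourself.
    """
    return [(1.0 - mix) * d + mix * w for d, w in zip(dry, wet)]
```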
@acousmod wrote:
Will the pitch and the time-stretching also work with multichannel files in future versions?
If you mean the time-stretch that is going to be integrated in Podium, then yes. Up to the full 32 channels, with all channels phase-locked.
I also plan to extend both zPEQ and zPitch to be multi-channel capable, but for that I’m waiting for Steinberg to release the VST3 spec.
