import of mp3 would be great too…
@Zynewave wrote:
That is too many options in my opinion. Being presented with a dialog with that many options/decisions would possibly discourage a lot of users. I think it makes sense to always adjust to zero crossing if not too far away from the detected transient point.
yes, you’re absolutely right. The slices’ start/end points can be altered manually anyway. At least the automatic fade-in/out curve on/off option might be “needed”…
THE Killer Feature?
what about automatic offline rendering/bouncing, selectable on/off per track? If on, a bounce-enabled track gets rendered offline as long as all tracks in the chain serving as sources for this track remain unaltered. An open plugin panel, for example, means “gets altered”, and thus the track is processed in realtime. When the panel is closed, the track could be rendered offline… You’d need to scan the processing chain down to the sources and flag the sinks up the chain as to whether they must be processed in realtime, or could be rendered offline (because no source can be altered) and switched to bounce mode automatically, if so desired… This could be extended to only those regions of a track that need to be rendered again, if something gets changed in a source within that same region…
thus constantly decreasing CPU load on the fly, relying more on disk streaming performance … *ahem*
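Roughly, I imagine something like this (just an illustrative Python sketch of the dependency scan; Track, alterable, bounce_enabled etc. are made-up names, nothing from Podium itself):

```python
# Hypothetical model: a track can be rendered offline only if neither it nor any
# track feeding into it can currently be altered (e.g. has an open plugin panel).

class Track:
    def __init__(self, name, sources=None, alterable=False, bounce_enabled=False):
        self.name = name
        self.sources = sources or []   # tracks whose output feeds this track
        self.alterable = alterable     # e.g. an open plugin editor panel
        self.bounce_enabled = bounce_enabled
        self.offline = False

def can_render_offline(track):
    """Scan the chain down to the sources: offline rendering is possible
    only if nothing in the dependency chain can still be altered."""
    if track.alterable:
        return False
    return all(can_render_offline(src) for src in track.sources)

def update_bounce_modes(tracks):
    """Flag each bounce-enabled sink for offline rendering or realtime processing."""
    for t in tracks:
        t.offline = t.bounce_enabled and can_render_offline(t)
```

Closing a plugin panel would clear `alterable`, and the next call to `update_bounce_modes` would switch the track back to offline rendering.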
But, to be honest, the bounce feature in its current form is already a work of genius 🙂
@Zynewave wrote:
the audio event itself is cut a specified number of samples/ms _before_ that snap point. A fade-in curve is applied to the region from the start of the new audio event (slice) to the snap point. This is the way it’s done now, right?
Yes. I’ll consider adding zero-crossing alignment. Should both the fade-in start and the snap position be adjusted to nearest zero-crossing?
as the user chooses?
1) slice start at
I) cut point
a) “radiobutton”: fixed negative offset of “(inputfield)S” samples left of the detected transient
b) “radiobutton”: zero crossing left of the detected transient, or “(inputfield)S” if no zero crossing is found within “(inputfield)S” samples left of the transient (see the sketch after this list)
II) fade in
fade in range “(inputfield)F” (F <= S)
a) “checkbox”: fade in when the slice was cut at a zero crossing
b) “checkbox”: fade in when the slice was not cut at a zero crossing
(neither checked means no fade in)
III) Snap Point
a) “radiobutton”: set at the detected transient
b) “radiobutton”: set at the slice start point
IV) fade out
fade out range “(inputfield)O” (%-value of slice length?)
a) “checkbox”: fade out when the next slice was cut at a zero crossing
b) “checkbox”: fade out when the next slice was not cut at a zero crossing
(neither checked means no fade out)
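To illustrate options I b) and II (just a sketch of what I mean, not how Podium actually does it; the names wave, transient, S and F are placeholders, and I’m assuming a float numpy array):

```python
import numpy as np

def slice_start(wave, transient, S):
    """Slice start: the rightmost zero crossing within S samples left of the
    detected transient, or a fixed offset of S samples if none is found."""
    lo = max(0, transient - S)
    segment = wave[lo:transient]
    crossings = np.where(np.diff(np.sign(segment)) != 0)[0]
    if crossings.size:
        return lo + crossings[-1] + 1   # cut just after the rightmost crossing
    return max(0, transient - S)        # fallback: fixed negative offset S

def apply_fade_in(wave, start, F):
    """Apply a linear fade-in over F samples from the slice start (in place)."""
    F = min(F, len(wave) - start)
    wave[start:start + F] *= np.linspace(0.0, 1.0, F)
```

The fade-out case (IV) would be the mirror image, applied over the last O samples before the start of the next slice.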
what about an RMS curve of the material, automatically generated and inserted as a sequence on a (new) curve track? You’re calculating transients anyway… might come in handy when controlling, for example, a plugin’s filter cutoff by the RMS of the audio material? 😉
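Something like this is all I mean by an RMS curve (window and hop sizes are arbitrary choices here, not anything Podium defines):

```python
import numpy as np

def rms_curve(wave, window=1024, hop=512):
    """Windowed RMS of an audio signal, usable as curve/automation points
    (e.g. to drive a plugin's filter cutoff)."""
    points = []
    for start in range(0, max(1, len(wave) - window + 1), hop):
        frame = wave[start:start + window]
        points.append((start, float(np.sqrt(np.mean(frame ** 2)))))
    return points   # list of (sample position, RMS value)
```

Each point could then become a node of the generated curve sequence.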
@Zynewave wrote:
I’m not sure I follow your thoughts about the snap point behaviour in beat slicing. Let’s discuss this when I have implemented the feature and you’ve had a chance to try it out.
I guess I have too much time, so I’ll try to explain before I’ve even had a chance to try anything out 😉
ok
1) Slicing Method
now, a wave is automatically cut based on the transients of the audio material and a sensitivity slider.
Maybe Podium could first indicate where it would cut the wave before it is actually sliced, with the user able to adjust the cut indicators beforehand, to
– deactivate single cuts, so the wave isn’t cut at that detected transient
– drag the cut indicators within the material etc.
– create cut indicators manually
– automatically create cut indicators based on a groove template, for example
– save/load groove templates so they can be used for slicing any other audio material, which might not even be percussive material 😉
– restrict transient detection to a certain range around the quantize positions of a groove template (see the sketch after this list)
– etc…
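For the groove-template restriction, something like this is what I have in mind (the template format and the tolerance are my own assumptions, purely to illustrate):

```python
def restrict_to_groove(transients, groove_positions, tolerance):
    """Keep only detected transients (sample positions) that lie within
    `tolerance` samples of a quantize position of the groove template."""
    if not groove_positions:
        return list(transients)
    kept = []
    for t in transients:
        nearest = min(groove_positions, key=lambda g: abs(g - t))
        if abs(nearest - t) <= tolerance:
            kept.append(t)
    return kept
```

Creating cut indicators from a template directly would then just be the template’s quantize positions mapped into the audio’s time range.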
2) Resulting slices
– are audio events themselves; all audio events get a Snap Point
– the Snap Points of the generated audio events (slices) are set as close to the transient as possible
– the audio event itself is cut a specified number of samples/ms _before_ that snap point. A fade-in curve is applied to the region from the start of the new audio event (slice) to the snap point. This is the way it’s done now, right?
– what about the cut being made not at a fixed number of samples before the snap point, but dynamically at the rightmost zero crossing of the wave before the snap point, with a fade-in curve applied to the resulting audio event as well, or not, as the user desires…
or, better named, Quantize Points are absolutely necessary when quantizing the slices, or audio events in general.
What about an extension for the start of a slice:
Currently, there’s an adjustable fade-in time before the sync point that is fixed for all created slices, right?
This could be extended with an option to set the start point of a slice to the first zero crossing just before the sync point (possibly without a fade-in), or, if no such zero crossing can be found within a specified time (possibly the fade-in time?), the start point of the slice would be the “fade in time” before the sync point.
also, an option to slice an audio event at given quantize values would be handy. Or creating a quantize pattern from the sliced material for quantizing other (MIDI) sequences, etc etc etc…
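For that last point, a quantize pattern could simply be the slice snap points converted to beat positions (assuming a known sample rate and tempo; purely illustrative):

```python
def snap_points_to_groove(snap_points, sample_rate, bpm):
    """Convert slice snap points (sample positions) into beat positions,
    i.e. a groove/quantize pattern extracted from the sliced audio."""
    samples_per_beat = sample_rate * 60.0 / bpm
    return [sp / samples_per_beat for sp in snap_points]

# e.g. at 44100 Hz and 120 bpm, a snap point at sample 22050 lands on beat 1.0
```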
First post here on the board in a long time. I am constantly looking at Podium as a replacement for my Cubase SX3 and I feel like I’m about to switch… 😉
ok, I see, object-oriented this is *ggg* Would it be possible to drag and drop/move objects from one folder to another?
guess I should dig a bit deeper then *g*
Another question: say I have a keyboard controller and want to control/play plug #1 and plug #2 with it… how is this done? Do I have to drag the MIDI mapping from track to track? Or do I have to create two MIDI mappings for my keyboard, assign them to the two plugs and mute the one I don’t want to control? Are there mute groups? *g*
…using HALion as a multitimbral plugin… in the wizard, creating a track with bounce for HALion C01, for example, won’t create a track with a bounce switch… Why is this? Do I have to wrap it in a new bounce track?
what’s more, the available base channels should reflect the activated hardware channels, shouldn’t they? What about the drop-down box showing the textual description of the output channels?
in the properties I had to adjust the target base channel from #1 to #25 (the SPDIF)… boy, this is complex… Is there no help available that describes the dialog windows in a bit more detail?