Topic: MIDI Recording Jitter Bug

  • #2143
    zamrate
    Participant

    When I let my song play and record some MIDI notes, what is played back does not sound the same as what I’ve recorded. This is a problem that seems to affect many sequencers on the PC (I tried Presonus Studio One, and the MIDI was completely off after recording, a bad joke considering they want to be “the new kid on the block”).

    Ok, well I did some tests. I’ll send you the WAV file so you can see for yourself (left channel = original, right channel = played back).

    When playing back (compared to the original):
    1st note: 217 samples too early
    2nd note: 167 samples too early
    3rd note: 160 samples too early
    4th note: 192 samples too early

    So it’s not just a “(negative) latency”; there is really a big jitter involved too. Optimally, the notes should be placed where they were heard, don’t you agree?

    PS: I just want to add that I’ve programmed my own mini MIDI sequencer, and in this sequencer the notes are played back sample-accurately (at the position one heard them when playing the keys) after recording. I tried with the same soundcard, same latency, same VSTi plugin, to make sure the test conditions are equal. So it’s not a matter of my system being badly tuned or anything like that; it’s really a bug in Podium’s code.

    #17448
    Zynewave
    Keymaster

    It will depend on the ASIO buffer size and the sample rate.

    Windows provides MIDI messages with 1 ms accuracy, so that’s about 44 samples of jitter (at a 44100 Hz sample rate), compared to what you play on the MIDI keyboard. On top of that is all the latency added by the MIDI controller itself and the transmission to the PC.

    Some hosts will try to play received MIDI messages as fast as possible, so they will queue up the events to be played at the start of the next ASIO buffer. This method introduces, for each note, a jitter of up to the ASIO buffer size. An ASIO buffer size of e.g. 4096 will add up to 93 ms of jitter. Totally unusable. In Podium I add a latency of the ASIO buffer size to the monitored playback. That means the timing is steady and the jitter is minimal.

    That latency is only added for the realtime-playback. The MIDI messages are still recorded with the timing that the MIDI events arrived at, which explains why you see a negative latency when you compare realtime-monitoring with playback. It can be argued whether the recorded events should be placed as they are performed or heard. Some may perform on their MIDI controller as a response to the timing of the arrangement playback, and others may perform on the MIDI controller in anticipation of what their instrument will sound like.
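
    To illustrate the difference between the two approaches, here is a minimal sketch (Python, not Podium’s actual code; the buffer size and timestamps are just example numbers):

        # Two ways a host can place an incoming MIDI event in the audio stream.
        # All times are in samples; the event timestamp is assumed to already be
        # converted from the OS millisecond clock to a sample position.

        BUFFER_SIZE = 128   # ASIO buffer size in samples (example value)

        def next_buffer_start(event_time):
            """Start of the first ASIO buffer that begins after the event arrived."""
            return (event_time // BUFFER_SIZE + 1) * BUFFER_SIZE

        def play_asap(event_time):
            """'As fast as possible': queue the event at the start of the next buffer.
            Where the note lands depends on where inside the previous buffer it
            arrived, so note-to-note timing jitters by up to BUFFER_SIZE samples."""
            return next_buffer_start(event_time)

        def play_with_constant_latency(event_time):
            """Delay every event by exactly one buffer. Latency is constant, so the
            distances between notes are preserved and the jitter stays minimal."""
            return event_time + BUFFER_SIZE

        # Two notes played exactly 100 samples apart:
        for t in (1000, 1100):
            print(t, play_asap(t), play_with_constant_latency(t))
        # play_asap places them 128 samples apart (1024 vs 1152): the timing is distorted.
        # play_with_constant_latency places them 100 samples apart (1128 vs 1228).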

    #17449
    zamrate
    Participant

    Well, you say the jitter is “minimal”, but there should not be any at all. As we see in my case, there was 217 - 160 = 57 samples of jitter. That’s quite considerable, so why do you call it minimal after I took the time to collect and present this data?

    If the delta of the 1st, 2nd, 3rd and 4th notes had been a constant 128 samples (the size of the ASIO input buffer), there would have been 0 jitter and I’d say OK; it’s a philosophical question whether you want the incoming events played with the soundcard’s input latency added and the notes played back without it. But having jitter is just cheap, and I can’t even understand why it is introduced: at the time you get your MIDI messages (before you even hear them play), they have been given a timestamp (by your code), so where does the jitter come from anyway?

    Believe it or not, I made this MIDI timing test because I could hear with my own ears that something was wrong when the MIDI notes were played back after recording. The timing is really bad! I think that if my ears can notice it, you should maybe reconsider your position.

    As I previously said, in my mini sequencer there is 0 samples of jitter and 0 samples of difference between played and recorded MIDI notes (and it just sounds “right” to my ears, too). If you want, I can send it to you, so you can do some tests and then decide for yourself what’s better.

    #17450
    Zynewave
    Keymaster

    How did you make the recording of the realtime monitoring vs. playback?

    Is the latency/jitter the same with different VSTi’s? Some instruments may have an internal buffering where they process notes and LFO’s etc. at certain sample intervals, for reasons of CPU optimization. The fact that a constant latency is added for the realtime monitoring may cause the notes to fall in different processing intervals within the synth, resulting in jitter.
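
    As a sketch of what such internal buffering can do (a hypothetical synth with a 32-sample internal control block, not a claim about any specific plugin): note offsets inside the ASIO buffer get rounded to the synth’s own block grid, and that rounding error varies from note to note, i.e. jitter.

        # Hypothetical synth that only starts notes on its internal block boundaries.
        INTERNAL_BLOCK = 32   # samples per internal control block (assumed value)

        def rendered_offset(note_offset_in_buffer):
            """Offset at which such a synth would actually start the note."""
            return (note_offset_in_buffer // INTERNAL_BLOCK) * INTERNAL_BLOCK

        for offset in (5, 37, 70, 100):
            # 5 -> 0, 37 -> 32, 70 -> 64, 100 -> 96: the error is 5, 5, 6 and 4 samples,
            # so it varies per note (up to INTERNAL_BLOCK - 1 samples of jitter).
            print(offset, "->", rendered_offset(offset))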

    #17451
    zamrate
    Participant

    >How did you make the recording of the realtime monitoring vs. playback?

    I just used my Delta 1010LT monitoring input (sent you the WAV via email, did you get it?) and recorded it in Cool Edit. I pressed record and play in Podium while having an audio loop as reference. Then I played the notes during the recording of one bar and let Podium play that same bar another time. Then, in Cool Edit, I laid the second bar “under” the first bar (on the second channel), adjusted them so the beat samples were 100% aligned to each other, and then checked the sample offsets of the MIDI notes.

    >Is the latency/jitter the same with different VSTi’s? Some instruments may have an internal buffering where they process notes and LFO’s etc. at certain sample intervals, for reasons of CPU optimization. The fact that a constant latency is added for the realtime monitoring may cause the notes to fall in different processing intervals within the synth, resulting in jitter.

    I used the Discovery2 VSTi. I specifically tested whether this synth is 100% sample-accurate, and it is (accurate to within 4 samples at least).

    If my procedure had not been accurate, I could not have verified that my sequencer is sample-accurate anyway.

    #17452
    zamrate
    Participant

    Frits, I did some further research on the subject and the results I got are more than worrying. I can’t believe my own eyes:

    Out of 3 commercial “big boys”, only one sequencer has no MIDI recording jitter (on the sequencer side! Of course Windows’ own jitter is always involved, but that’s the case for everybody) and sample-accurate MIDI recording (sample-accurate = the note starts at the position it was heard when playing the keyboard while recording).

    – worst was Presonus Studio One (completely off, not usable, but they know it and are working on a fix, if I believe their forum)
    – 2nd worst was Cubase SX 3 (100 samples of jitter + 400 samples off timing… who would have thought it? Such a classic is not even able to record MIDI properly! Shame on Steinberg!)
    – 3rd worst of the 4 I’ve tested was Podium (see results above)

    And the only sequencer (of those 4) which did the job well was Ableton Live 7: 100% sample accuracy.

    FYI, tests were performed at 128 samples latency on Windows XP.

    PS: I also checked FL Studio 9 now. Interestingly, FL Studio has no jitter, but a constant negative delay (notes come too early after recording) of 588 samples. Even that constant delay is very bad, and I don’t see why FL Studio is moving everything 588 samples (= much more than the output latency of 128 samples) back. Weird.

    #17453
    UncleAge
    Participant

    zamrate, I’d like to duplicate your test. Can you give a detailed signal chain for me to reproduce?

    #17454
    zamrate
    Participant

    It’s very easy to do this MIDI playback vs. recording jitter/latency test with Podium or any other app, but you need a soundcard with an internal input (like my Delta 1010LT’s “monitoring input”):

    1) In Podium, on a first track put a short click sound on every beat of one bar and set your latency to, let’s say, 128 samples with the ASIO driver

    2) In Podium, loop that bar at 125 BPM

    3) Create another track with a VSTi that has sample-accurate timing (!!) (I used Discovery2, but there are many synths that do not have sample accuracy, so watch out)

    4) In Discovery2, make a high-pitched, short sawtooth sound that also sounds like a short click (attack set to 0)

    5) Now load an audio recording app, such as Cool Edit, and start recording the virtual input of your soundcard (so you are actually recording what comes out of Podium)

    6) Back in Podium, enable REC and press Play, so you now hear your audio click playing; let it play for one bar. After the start of the second bar, play Discovery2 notes between the audio clicks for one bar, so you have: click, note, click, note, click, note, click, note. Then let Podium play one more bar (so it plays back the bar you just recorded) and press stop.

    7) Back in the audio editing app which recorded the audio: delete everything from the beginning of the recording up to the start of the first click of the “recording bar”, where you started hitting/recording notes. Also delete the right channel’s waveform (so everything on the right channel is zero). Now cut the “playback bar” audio (the one Podium played directly after you recorded your bar) from the left channel and put it at the start of the right channel, SO THE 4 CLICKS NOW MATCH SAMPLE-ACCURATELY ON THE LEFT & RIGHT CHANNELS (USE ZOOM TO CHECK!).

    8) Still in your audio editing app, check the distances between the 4 recorded notes and the 4 played-back notes (a small script that automates this check follows below). Just as the 4 reference clicks match, the notes should also match, because what you heard during recording should also be what you hear during playback, no matter what anyone might tell you. It is the only way you can achieve what you want.
    Here’s why what you heard during recording should match what you hear during playback: if, while playing/recording a drum groove in a looped bar, you thought “wow, that was a good groove”, and then on playback your drum groove notes are suddenly not placed exactly as you heard them at recording time, then it’s frankly impossible to ever achieve what you want. In that case it is more or less a matter of luck whether the final result is what you wanted or not. For drums, even 1 ms of displacement (earlier or later) sometimes makes a difference, and 1 ms is “just” 44 samples at a 44.1 kHz sample rate!
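
    If you don’t want to count samples by hand in step 8, a small script can do it. This is only a sketch: it assumes numpy/scipy are available, that you saved the aligned result from step 7 as a stereo WAV (here called “test.wav”, left channel = recording pass, right channel = playback pass), and that every click/note has a sharp attack.

        # Sketch: measure sample offsets between onsets on the left (recorded pass)
        # and right (playback pass) channels of the test WAV prepared in step 7.
        import numpy as np
        from scipy.io import wavfile

        rate, data = wavfile.read("test.wav")        # hypothetical filename; shape (num_samples, 2)
        left = data[:, 0].astype(float)
        right = data[:, 1].astype(float)

        def onsets(channel, threshold=0.2, hold=2000):
            """Return the sample index of each onset: the first sample above
            `threshold` (relative to the channel's peak), then skip `hold`
            samples so each click/note is only detected once."""
            level = threshold * np.abs(channel).max()
            hits, i = [], 0
            while i < len(channel):
                if abs(channel[i]) > level:
                    hits.append(i)
                    i += hold
                else:
                    i += 1
            return hits

        rec, play = onsets(left), onsets(right)
        # Click onsets should come out at (close to) 0 offset; the note onsets
        # show the playback latency/jitter relative to what was heard while recording.
        for n, (r, p) in enumerate(zip(rec, play), 1):
            print(f"onset {n}: playback is {p - r:+d} samples relative to recording")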

    #17455
    michi_mak
    Participant

    Regarding your 1 ms on drums theory: you are aware that 1 ms equals a distance of about 34 cm / 14″ in air, are you? So sitting not properly centered in front of your monitors does the same harm!!
    :-k

    #17456
    zamrate
    Participant

    If you don’t believe me, try putting the same bass drum sample sample-accurately on top of itself, and compare that to the same two bass drum samples layered with the second one delayed by 1 ms relative to the first. You’ll hear the difference, trust me, and with some dynamics (a compressor) added to the whole thing, you’ll hear the difference even more.
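
    You can also show it with numbers instead of ears. This is only a sketch with a synthetic click generated in numpy (not the actual bass drum test, but the same kind of effect): layered sample-accurately the sound simply doubles, while a 1 ms offset comb-filters it, so some frequencies partly cancel.

        # Layer a short broadband click on itself: once sample-accurately, once with
        # a 1 ms (44 samples at 44.1 kHz) delay on the second copy.
        import numpy as np

        RATE = 44100
        click = np.zeros(RATE // 20)     # 50 ms of silence...
        click[:32] = 1.0                 # ...with a 32-sample broadband click at the start

        aligned = click + click                                        # sample-accurate layering
        shifted = np.concatenate([np.zeros(44), click])[:len(click)]   # same click, 1 ms late
        delayed = click + shifted

        # A 1 ms delay puts the first comb-filter notch near 500 Hz (1 / (2 * 0.001 s)).
        freqs = np.fft.rfftfreq(len(click), 1 / RATE)
        spec_aligned = np.abs(np.fft.rfft(aligned))
        spec_delayed = np.abs(np.fft.rfft(delayed))
        bin_500 = np.argmin(np.abs(freqs - 500))
        print("energy near 500 Hz:", spec_aligned[bin_500], "vs", spec_delayed[bin_500])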

    #17457
    UncleAge
    Participant

    I got home a bit late last night and couldn’t try this out. But I did a similar test for the devs at Presonus. I’ll get to it tonight, I’m sure.

    In the meantime, I believe that you are discussing two different issues. MIDI jitter and recording “what you hear” instead of “what you played” are two separate issues. And really there is a third issue of latency introduced at the host level, if any.

    MIDI jitter I’ve seen discussed many, many times over at Gearslutz. It’s a fact of life for most of us unless you can get the really good hardware and get everything synced to a good clock. I’ve even noticed different timestamps on the MIDI data depending on whether I use the USB port or the MIDI input on my Pro40. It really is an issue that I should give more attention to.

    Recording “what you hear” instead of “what you played” has been discussed to death over at the Ableton forums. This is because the Abes made a decision a long time ago to have the app record what you hear. And as far as I know, they are the only major/minor host that does that. A lot of users don’t agree, but the Abes went to great lengths to explain why they had to do it that way to make the Live app work, well, “live”.

    Now if Podium is introducing delay (not the plugins) then that would be an issue to address on its own as well.

    I’ll give your test a shot tonight. Thanks for the reply.

    #17458
    UncleAge
    Participant

    And I would also add that yes, I do have problems with the MIDI timing in S1 as well. I am still working with the devs to iron it out.

    #17459
    zamrate
    Participant

    >MIDI jitter I’ve seen discussed many, many times over at Gearslutz. It’s a fact of life for most of us unless you can get the really good hardware and get everything synced to a good clock. I’ve even noticed different timestamps on the MIDI data depending on whether I use the USB port or the MIDI input on my Pro40. It really is an issue that I should give more attention to.

    I am not talking about the jitter or latency introduced by the hardware or the drivers etc. There is no way to get rid of that. I’m talking about sequencer-introduced jitter.
    When, during recording, you heard the sound of the VSTi you played coming out of the speakers at song time T, it should also come out of the speakers at song time T again upon playback.
    Some sequencers will play it back at T-k (k being a constant negative latency), but others (like Cubase) will play it back at T-k-r, with r being a random number that may vary by a few hundred samples. So r is a jitter introduced by the sequencer, and I’d really like to know the reason for it being introduced! If you don’t believe me, do the tests.

    At the time the ASIO processing picks up the MIDI events that came in, they have already been given a timestamp (by the OS or by yourself), be it accurate or not, and that timestamp should be the one and only reference for the placement of the notes in the audio stream; and that placement should be the same during recording and playback. This has nothing to do with factors such as USB latency/jitter, soundcard quality etc. The timestamp of the event MUST be the one and only reference, and it should be used as a sample-accurate reference in a good sequencer. If it was heard at sample X in the song during recording, it should also be heard at sample X during playback. Period.
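
    As a tiny sketch of the rule I mean (illustrative only; the constant monitoring latency of 128 samples is just an example value):

        # Placement rule: store the note at the song position where it was heard,
        # and reuse exactly that position on playback.
        MONITOR_LATENCY = 128   # constant monitoring latency in samples (example value)

        def record(event_timestamp):
            """The note was heard at its timestamp plus the constant monitoring latency."""
            return {"position": event_timestamp + MONITOR_LATENCY}

        def playback_position(recorded_event):
            """Playback uses the stored position as-is: no extra offset, no jitter."""
            return recorded_event["position"]

        # Whatever the timestamp was, playback reproduces the position it was heard at.
        assert playback_position(record(5000)) == 5000 + MONITOR_LATENCY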

    #17460
    zamrate
    Participant

    Or to put it differently: the wave coming out of your speakers when playing it back should be exactly the same wave that came out of your speakers when you recorded.

    #17621
    zamrate
    Participant

    Any forecast for when this will be fixed? Ableton & EnergyXT are so far the only sequencers I’ve tried that don’t mess up the MIDI timing.
