[LAD] on the soft synth midi jitters ...

Jeff McClintock jef at synthedit.com
Wed Oct 6 21:12:32 UTC 2010

> I'm also a bit puzzled by people complaining about jitter. I don't have
> any exceptional kit, but in reality I can't say I've ever noticed it.
> Latency yes, but that's easily corrected with a bit of post record
> nudging.

Cubase is particularly bad when playing a soft-synth live, especially with
larger audio buffer sizes: even though VST supports sample-accurate MIDI,
all note-ons are sent with a timestamp of zero (the exact start of the
audio block). It's like trying to play drunk, like glue in the keys; I keep
looking at my fingers thinking "did my finger slip off that note?".

Playing a pre-recorded MIDI track is different: timestamps are then
honoured, so the timing comes out accurate.

 Why did Steinberg implement it like this? I think it's a misguided attempt
at reducing latency. It doesn't: the worst-case notes are still delayed by
exactly one 'block' period. There's no upside.
 It's far better to have a small latency and no jitter, because your brain
compensates very accurately for a consistent latency: you instinctively hit
the keys a fraction early, and everything sounds fine.
 Jitter is baked-in timing error; once it's in your tracks you can't get it
out. Latency can always be compensated for and eliminated later.

The right way is to timestamp the MIDI and send it to the synth delayed by
one block period. Since the audio is already buffered with the same delay,
you get perfect audio/MIDI sync.
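A sketch of that scheme (hypothetical names, not any real host API): live
events arriving during block N are stamped with their absolute arrival
sample and handed to the synth during block N+1, re-expressed as offsets
into that block. Latency is a constant one block; jitter is zero.

```cpp
#include <vector>

struct TimedEvent {
    long time;  // absolute arrival sample when pending; block offset on delivery
    int  note;
};

class MidiDelayLine {
public:
    explicit MidiDelayLine(long blockSize) : blockSize_(blockSize) {}

    // MIDI side: stamp each event with its absolute arrival sample.
    // (A real implementation would need a lock-free queue here.)
    void push(long arrivalSample, int note) {
        pending_.push_back({arrivalSample, note});
    }

    // Audio side, once per block: emit events that arrived during the
    // previous block, re-expressed as sample offsets into the current block.
    std::vector<TimedEvent> collect(long blockStartSample) {
        std::vector<TimedEvent> out;
        std::vector<TimedEvent> keep;
        const long prevStart = blockStartSample - blockSize_;
        for (const TimedEvent& ev : pending_) {
            if (ev.time >= prevStart && ev.time < blockStartSample)
                out.push_back({ev.time - prevStart, ev.note});  // jitter-free
            else
                keep.push_back(ev);  // not yet due
        }
        pending_.swap(keep);
        return out;
    }

private:
    long blockSize_;
    std::vector<TimedEvent> pending_;
};
```

An event arriving at sample 300, with 256-frame blocks, is delivered in the
block starting at sample 512 with an offset of 44 frames: exactly one block
late, every time.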

IMHO - having written my own plugin standard, I can say sample-accurate MIDI
is no more difficult to support than block-quantized MIDI.
Jeff McClintock

> Message: 8
> Date: Tue, 5 Oct 2010 21:22:23 +0100
> From: Folderol <folderol at ukfsn.org>
> Subject: Re: [LAD] on the soft synth midi jitters ...
> To: linux-audio-dev at lists.linuxaudio.org
> Message-ID: <20101005212223.5a7fbb61 at debian>
> Content-Type: text/plain; charset=US-ASCII
> On Tue, 5 Oct 2010 22:00:11 +0200
> fons at kokkinizita.net wrote:
> > On Tue, Oct 05, 2010 at 02:50:10PM +0200, David Olofson wrote:
> >
> > > Not only that. As long as the "fragment" initialization overhead can
> > > be kept low, smaller fragments (within reasonable limits) can also
> > > improve throughput as a result of smaller memory footprint.
> >
> > 'Fragment initialisation' should be little more than
> > ensuring you have the right pointers into the in/out
> > buffers.
> >
> > > Depending on the design, a synthesizer with a large number of voices
> > > playing can have a rather large memory footprint (intermediate buffers
> > > etc), which can be significantly reduced by doing the processing in
> > > smaller fragments.
> >
> > > Obviously, this depends a lot on the design and what hardware you're
> > > running on, but you can be pretty certain that no modern CPU likes the
> > > occasional short bursts of accesses scattered over a large memory area
> > > - especially not when other application code keeps pushing your synth
> > > code and data out of the cache between the audio callbacks.
> >
> > Very true. The 'bigger' the app (voices for a synth, channels for
> > a mixer or daw) the more this will impact the performance. Designing
> > the audio code for a fairly small basic period size will pay off.
> > As will some simple optimisations of buffer use.
> >
> > There are other possible issues, such as using FFT operations.
> > Calling a large FFT every N frames may have little impact on
> > the average load, but it could have a big one on the worst case
> > in a period, and in the end that's what counts.
> >
> > Zyn/Yoshimi uses FFTs for some of its algorithms IIRC. Getting
> > the note-on timing more accurate could help to distribute those
> > FFT calls more evenly over Jack periods, if the input is 'human'.
> > Big chords generated by a sequencer or algorithmically will still
> > start at the same period, maybe they should be 'dispersed'...
> >
> > Ciao,
> I'm all in favour of a bit of dispersal.
> When I started out with a Yamaha SY22 and Acorn Archimedes it was all
> too easy to stuff too much down the pipe at once. However, doing some
> experimenting, I was surprised at how much you could delay or advance
> Note-On events undetectably although it depended to some extent on the
> ADSR envelope.
> I don't need to do that any more, but old habits die hard, so if I'm
> copy-pasting tracks I tend to be deliberately a bit sloppy.
> I'm also a bit puzzled by people complaining about jitter. I don't have
> any exceptional kit, but in reality I can't say I've ever noticed it.
> Latency yes, but that's easily corrected with a bit of post record
> nudging.
> --
> Will J Godfrey
> http://www.musically.me.uk
> Say you have a poem and I have a tune.
> Exchange them and we can both have a poem, a tune, and a song.

More information about the Linux-audio-dev mailing list