I suspect that LMMS needs more buffering than is provided by
'typical' Jack period settings, which could be anything between,
say, 64 and 1024 samples. Now if LMMS does have its own buffer
between the processing code and the Jack interface, then things
should work fine *IF* that buffer is used correctly. This means
not only that it has to be at least the size LMMS needs in order
to work at ease, but also that it needs to be pre-filled (with
silence) up to that size before starting. If this is not done,
the buffer is in effect useless. It could be that this is exactly
what is missing.
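
To make the idea concrete, here is a minimal sketch of that scheme
using Jack's own ring buffer: an intermediate buffer between the
mixer thread and the Jack process callback, pre-filled with silence
before jack_activate() so the callback never starts out starved.
This is not LMMS's actual code; the client name, port name and the
4096-frame size are just my own placeholders.

#include <jack/jack.h>
#include <jack/ringbuffer.h>
#include <string.h>

#define INTERNAL_FRAMES 4096  /* guess at what LMMS needs to work at ease */

static jack_ringbuffer_t *rb;
static jack_port_t *out_port;

static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    size_t want = nframes * sizeof(jack_default_audio_sample_t);
    size_t got  = jack_ringbuffer_read(rb, (char *)out, want);
    if (got < want)                    /* underrun: pad with silence, never block */
        memset((char *)out + got, 0, want - got);
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("buffer-sketch", JackNullOption, NULL);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);

    rb = jack_ringbuffer_create(INTERNAL_FRAMES * sizeof(jack_default_audio_sample_t));

    /* The step argued for above: fill the buffer with silence up to its
       full size *before* starting, so the process callback has a whole
       buffer's worth of headroom from the very first cycle. */
    jack_default_audio_sample_t silence[256] = { 0 };
    while (jack_ringbuffer_write_space(rb) >= sizeof(silence))
        jack_ringbuffer_write(rb, (const char *)silence, sizeof(silence));

    jack_activate(client);
    /* ... the mixer thread then keeps feeding rendered audio into rb
       with jack_ringbuffer_write() ... */
    return 0;
}

The point of padding with silence on underrun (rather than blocking)
is that the callback stays realtime-safe; the pre-fill is what keeps
underruns from happening in the first place.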
I am told that if I were willing to set Jack to 2048 frames/period,
LMMS would work fine with it, and indeed it does, as long as live work
is entirely out of the picture. But for live work, that much buffering
adds far too much latency to be usable.
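
Roughly, assuming the usual two periods and a 48 kHz sample rate:

  latency ~ periods x frames/period / sample rate
          = 2 x 2048 / 48000 ~ 85 ms

which is well beyond what playing live can tolerate.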
I do understand how quality Jack interfacing could be a surpassingly
difficult challenge for a developer, and especially for a group of
developers, on an insular yet modular project like LMMS. I could
imagine that it might require reworking the audio output code of
each module, given that the whole was never designed for Jack in the
first place; and there are a lot of modules. Culturally speaking,
within many FOSS developer groups, such changes are very difficult to
enact short of a fork. I have also heard tell that Qt4 is unhelpful
latency-wise in general, but that does not explain why ALSA output
works well.
J.E.B.