[LAU] rtirq

Len Ovens len at ovenwerks.net
Thu Mar 19 20:37:18 UTC 2015


On Thu, 19 Mar 2015, Ralf Mardorf wrote:

> On Thu, 19 Mar 2015 09:07:34 -0700 (PDT), Len Ovens wrote:
>> So I have to ask myself if what you are hearing is just the
>> effects of a slow standard MIDI transport before the info even gets to
>> the computer.
>
> You can do an experiment, assuming you still own old computers and tapes.
>
> Tape synced to the computer by SMPTE (Atari) or by click (C64). Record
> a MIDI synth, and after that record the same synth on another tape track.
> This doubles the synth sound; all you get is a phasing that
> doesn't move.
>
> Do the same with a Linux or Windows PC. Record a track with Qtractor or
> Cubase, and after that record the same external synth on another
> Qtractor or Cubase track. The sounds do not start at the same time;
> there's always an audible shift, comparable to slow early reflections,
> and the phasing moves.

That is the first explanation of this that makes sense to me. Thank you. I 
do not know if it is possible to fix this in Linux, or at least in the 
sequencing SW we have. In a machine that only deals with MIDI (Atari or 
C64), each MIDI event has its own time stamp or position based on the OS 
clock (whatever the sequencing program is using for a time base). In a 
machine that deals with audio, that time base is an audio buffer length, 
which may contain more than one MIDI event, but may not contain all MIDI 
events that are meant to be together. Not only that, but when the same 
MIDI goes in a cycle, there is no guarantee that the events that fell 
within one buffer length will again land within the same buffer length, 
and this may drift in and out of sync as the MIDI timing beats against 
the audio media clock. This would be the moving phase you 
hear.
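
To illustrate what I mean (toy numbers, a 128 sample period at 48k, the 
event times are made up), per-event times collapse onto period 
boundaries something like this:

    /* Toy illustration: events with free-running microsecond
     * timestamps get quantized to audio period boundaries. */
    #include <stdio.h>

    int main(void)
    {
        const double rate   = 48000.0;  /* sample rate */
        const int    period = 128;      /* samples per audio buffer */
        const double period_us = period / rate * 1e6; /* ~2667 us */

        /* Arrival times (us) of three events meant to sound together */
        double events[] = { 1000.0, 2500.0, 3200.0 };

        for (int i = 0; i < 3; i++) {
            /* The period an event falls in decides when it plays */
            int p = (int)(events[i] / period_us);
            printf("event at %6.0f us -> period %d (plays at %6.0f us)\n",
                   events[i], p, (p + 1) * period_us);
        }
        return 0;
    }

The first two events land in period 0 and the third in period 1, so a 
"chord" gets split across buffers.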

Perhaps setting jack up for 16/3 at 48k would solve that. 16 samples is 
close to the time of one MIDI byte, and most events are three bytes... 
though with a chord, running status turns what would be 9 bytes into 7, 
or possibly even 6 (I don't remember if active sensing resets running 
status). In any case each MIDI byte should be aligned with the number of 
samples that best fits that one byte. I don't know if 16/2 would be 
better or not (read: I don't wish to spend the brain power thinking 
about it).
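
For anyone checking my arithmetic: MIDI is 31250 baud with 10 bits on 
the wire per byte (start + 8 data + stop), so:

    #include <stdio.h>

    int main(void)
    {
        double byte_us = 10.0 / 31250.0 * 1e6;    /* 320 us per byte */
        double samples = 48000.0 * byte_us / 1e6; /* 15.36 samples   */
        printf("%.0f us/byte = %.2f samples at 48k\n", byte_us, samples);
        return 0;
    }

That is where the "16 samples is close to one MIDI byte" comes from.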

> It's not a limitation of MIDI.

It is some of both. A faster MIDI would not solve things unless Linux 
audio was done differently. Requiring time stamping to be done at each 
byte would fix this. Using the media clock still seems like the right 
thing to do because, in general, the media clock goes with a project 
from computer to computer. Basically, you are telling me that an audio 
buffer of 128 samples is too long for good MIDI sync. This can be fixed 
in two ways: use a short audio buffer, or decouple MIDI from the audio 
buffer completely and run MIDI processing and time stamping separately. 
The first can be done by anyone who has a machine that can run at 16/2 
or 16/3 (maybe even 32/2 would be OK) xrun free. The second would 
require redoing the sequencer SW and possibly the ALSA MIDI drivers (I 
don't know enough about them to 
say).
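
For the first option, the jackd line would be something like this 
(assuming the alsa backend, and assuming your interface will even run 
xrun free down there):

    jackd -R -d alsa -r 48000 -p 16 -n 3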

> However, I won't discuss this again, I just want to know if RTC is
> needed in the rtirq config for audio (ALSA/jackd), assuming jackd
> doesn't start with "--clocksource OR -c h(pet)". And should the rtirq
> config include an entry for HRTIMER/HPET? Is it possible to add
> HRTIMER/HPET to the rtirq config (e.g. in addition to RTC) and what is
> the name of such an entry?

I can't answer whether RTC is needed. HRTIMER is probably snd_hrtimer, 
but I don't know if elevating the priority of that alone would help, 
because of all we discussed above. Also the priority of snd_mpu401_uart 
and the snd_seq* group of modules may suffer as well... But none of that 
matters if the MIDI read/write clock/timestamp is tied to the audio 
buffer, which is how any jack connected application would do things. 
Jack does allow a MIDI port to set which sample a MIDI event belongs to, 
but does that timing make it past jack? Do sequencers use this 
possibility, or do they just send all the MIDI stuff right now for each 
graph, knowing there are lots of 
delays in there anyway?
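
For reference, a bare sketch of what that per-sample offset looks like 
in a jack process callback (the port setup is elsewhere, and the frame 
offset 37 is made up):

    #include <jack/jack.h>
    #include <jack/midiport.h>

    jack_port_t *out_port; /* created elsewhere as JackPortIsOutput */

    int process(jack_nframes_t nframes, void *arg)
    {
        void *buf = jack_port_get_buffer(out_port, nframes);
        jack_midi_clear_buffer(buf);

        /* The second argument is the frame offset inside this
         * period, so an event can land on any sample, not just
         * frame 0. */
        unsigned char note_on[3] = { 0x90, 60, 100 }; /* ch 1, middle C */
        jack_midi_event_write(buf, 37, note_on, 3);
        return 0;
    }

Whether a given sequencer actually fills in that offset, or just dumps 
everything at frame 0, is the question.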

Maybe those fussy people who decided AES67 should support at least 1 ms 
latency are right.

--
Len Ovens
www.ovenwerks.net


