[Jens M Andreasen]
I should have included some kind of man page:
msg[0] = midi_status;
msg[1] = midi_data_1;
msg[2] = midi_data_2;
msg[3] = sample_offset; // counting from most recent rendering.
Personally I can ignore sample_offset, but to others it is a prerequisite.
... and I realize now that I should not totally ignore sample_offset. I
would probably try to change the internal status to fit with 64/32/16
samples, or whatever is the rendering fashion of the day.
And one more thing: the host should do the sorting and interleaving of
events from those devices it listens to. Not that it will break anything
if it doesn't, but if it does, everything will be much tighter and more
solid. That is to say, the client(s) need not bother about time, except
for being aware that time might not be 100% linear.
agree about the sorting of events (which btw is mandatory for the
host with run_synth() following <dssi.h>).
however, i insist that the midi_msg() idea hurts performance by
forcing the host to split audio blocks (cache coherence and more
indirect function calls). in effect, run_synth() demands less from the
plugin (no MIDI byte-stream parser with error-correction mechanisms
needed), and thus results in more readable event-parsing code and
leaves the decision about sub-cycle length to the plugin, which is a
good thing.
it also seems worth noting that forcing an event's audio frame offset
into 8 bits is a bit meagre. it's not uncommon to run at 2k+ frames per
audio cycle, and some interfaces and systems won't go below 512
frames/cycle in the first place. off-line processing may involve even
longer audio cycles.
tim