[LAD] "enhanced event port" LV2 extension proposal

Krzysztof Foltman wdev at foltman.com
Wed Nov 28 14:17:42 UTC 2007

Lars Luthman wrote:

> Using a number -> URI mapping host feature is a good idea. With one byte
> for the event type in each event header you get 256 different types per
> plugin instance which is probably more than enough. With two bytes you
> get 65536 different types per plugin instance which is certainly more
> than enough. It's not really an argument against having a completely
> generic event transport though.

256 types is more than enough, in my opinion. Especially if MIDI takes
just 1 type, not 128 :)

>> As I said, you didn't even provide any way to transfer the ownership of
>> the buffer to the plugin, so that it doesn't have to copy it.
> Actually he did. Just pass a pointer and a byte size in the buffer and
> type it with the URI http://this.buffer.is.yours.dude/ .

How does the host know whether the buffer has or has not been acquired
(owned) by the plugin? With my approach, a plugin can either ignore the
data completely, copy it into a safe place, or increase a reference
count so that the host doesn't free it until the plugin has finished
with it.

> I'd prefer to have the host define the URI -> number mappings itself so
> it doesn't have to remap them for every different plugin it has loaded,

That is indeed a potential problem. While remapping can be trivial to
implement, it's not efficient and should probably be avoided.

On the other hand, how often do we send exactly the *same* event buffer
to different plugins (think Set Parameter messages!)? Is that a typical
or untypical case? What are example scenarios where that problem might
actually occur?

But, if we put whole MIDI stuff into a single event type, I'm fine with
host-assigned numbers (which would be given to plugin on instantiation).
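A host-side URI -> number mapping could look roughly like this (a sketch
only; the names are mine, though LV2 later standardized the same idea in
its uri-map/urid extensions):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical host-global mapping from event type URIs to small
 * integers, assigned on first use and handed to plugins on
 * instantiation. One byte of type gives 256 slots. */
#define MAX_TYPES 256

typedef struct {
    const char *uris[MAX_TYPES];
    int count;
} UriMap;

/* Return the number for a URI, assigning a new one on first use;
 * -1 if we run out of type numbers. */
static int uri_to_id(UriMap *map, const char *uri) {
    for (int i = 0; i < map->count; i++)
        if (strcmp(map->uris[i], uri) == 0)
            return i;
    if (map->count >= MAX_TYPES)
        return -1;
    map->uris[map->count] = uri;
    return map->count++;
}
```

Because the map is owned by the host, every plugin instance sees the
same number for the same URI and no per-plugin remapping is needed.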

> etc) so the host knows which events it needs to be able to handle (and
> can refuse to load the plugin if it doesn't recognise an event type),

Actually, the correct behaviour for the host would be to ignore the fact
that the plugin handles some events the host doesn't. The fact that a
plugin supports - say - video doesn't mean the host must have anything
to do with that video.

It's a bit different if we're talking about the plugin's _output_ buffer
here, but I guess there can be a way to make a plugin not send unwanted
event types. Or to skip unrecognized event types.

> I'd also prefer to have a single event type for MIDI and have the status
> byte as part of the event data instead of having one event type for each
> MIDI message type (note on, note off, aftertouch, what have you).

We might. But then we lose generality and reserve a huge block of event
identifiers for an outdated crappy standard ;) Who uses polyphonic
aftertouch these days, anyway? :)

I thought of, at least, type ranges (like, 0x80-0x8F is note off
ch1..ch16). That might reduce RDF bloat, but I don't know what others
will think about it.

Anyway, single event type for MIDI is OK for me.

> Also, in your proposal a single event type always has to have the same
> size and there is no way to say that an event is smaller than 3 bytes
> without possibly using the longData thing.

Losing those 2 bytes on 1-byte events is really acceptable to me :) We
need padding anyway, don't we?

To clarify the issue: shortData means that the event data may be up to 3
bytes long, not that it must be 3 bytes. In case it's less than 3 bytes,
the remaining bytes are used for padding/alignment.

But, anyway, the shortData/mediumData thing in my proposal could have
been done in a much better way (see below!).

> I'd really prefer to have the
> event size explicitly in the event header. Something like this:
> struct LV2_EVENT_HEADER {
>   uint32_t timestamp; // timestamp
>   uint32_t size;      // event size
>   uint8_t event_type; // event type number
>   uint8_t data[7];    // event data
> };

A small modification: what about

  struct LV2_EVENT_HEADER {
    uint32_t timestamp;
    uint16_t size;
    uint16_t event_type;
    union {
      uint8_t data[8];
      float dataf[2];
      int32_t datai[2];
      IDataBuffer *ptr;
    };
  };

We don't need more than 65535 bytes for size, because copying blocks
THAT large in the processing thread (no matter whether the host or the
plugin does it) is a bad idea - just pass a pointer/object instead!
8 bytes of data is better than 7 bytes, because you can fit a 64-bit
pointer there (on 64-bit machines). Or an int32_t and a float (a "set
parameter" event).
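For illustration, here is how such a "set parameter" event could be
packed into the 16-byte header above (I've dropped the IDataBuffer
pointer member and named the union to keep the sketch self-contained;
the event_type value is hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the modified 16-byte event header proposed above;
 * not an actual LV2 definition. */
typedef struct {
    uint32_t timestamp;
    uint16_t size;
    uint16_t event_type;
    union {
        uint8_t data[8];
        float   dataf[2];
        int32_t datai[2];
    } d;
} EventHeader;

/* Pack a "set parameter" event: a parameter index and a float value,
 * together filling the 8-byte data area exactly. */
static EventHeader make_set_param(uint32_t time, int32_t param, float value) {
    EventHeader ev;
    memset(&ev, 0, sizeof ev);
    ev.timestamp  = time;
    ev.size       = 8;
    ev.event_type = 1;          /* hypothetical "set parameter" type */
    ev.d.datai[0] = param;      /* first 4 bytes: parameter index */
    ev.d.dataf[1] = value;      /* last 4 bytes: new value */
    return ev;
}
```

The whole event fits in a single 16-byte block, with no out-of-line
allocation.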

I'm *slightly* against the size field, because it is another value that
must be inspected in event processing loop, even with trivial events
like MIDI.

But it's not worth arguing, I'm fine with keeping size there. 12 bytes
to inspect+4 to skip for average MIDI event is already slightly better
than Dave's 16 bytes + 3 bytes "on the side". And memory management gets
easier. Everybody wins :)

> The port buffer could contain a pointer to an array of these, and if
> size > 7 the subsequent array element is used to store data, if it's
> larger then 7 + 16 the one after that is also used to store data etc.

So you've basically improved the "mediumData" thing and merged it with
"shortData" by introducing the size field. I think you are right.

The only other choice I've considered was to get the size information
from the specific event_type (i.e. if event_type == midi, then the
plugin can assume size == 0 extra blocks, etc.; if it's float_parameter,
then the plugin assumes size == 1, with the size for each event type
defined in RDF).

But, compared to that, keeping the size in the header is much better
from a debugging perspective etc.
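Walking such a buffer is then a matter of computing how many 16-byte
array elements each event occupies from its size field. A sketch,
assuming the 8-byte data area of the modified header above (names are
illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the 16-byte event header with an 8-byte inline data area;
 * events with size > 8 spill into the following array elements. */
typedef struct {
    uint32_t timestamp;
    uint16_t size;
    uint16_t event_type;
    uint8_t  data[8];
} EventHeader;

/* Number of 16-byte array elements occupied by one event:
 * 8 bytes fit in the header, each extra element holds 16 more. */
static unsigned event_blocks(const EventHeader *ev) {
    if (ev->size <= 8)
        return 1;
    return 1 + (ev->size - 8 + 15) / 16;
}

/* Count the events in a buffer of n 16-byte elements. */
static unsigned count_events(const EventHeader *buf, unsigned n) {
    unsigned count = 0, i = 0;
    while (i < n) {
        i += event_blocks(&buf[i]);
        count++;
    }
    return count;
}
```

A trivial MIDI event costs exactly one element; a 20-byte blob costs
two.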

> This means that each event will need at least 16 bytes instead of 8, but
> I don't think that's a huge loss. If we really wanted to we could make
> the data array 3 bytes instead - would a 12 byte alignment be OK on all
> platforms?

No, if we want to store pointers in single blocks on 64-bit platform.
And I think we want that, right? :)

> point (assuming that we want subsample precision) but I'd prefer 32.32
> to 16.16. 16.16 would effectively limit the audio buffer size to 65536
> samples since you can't address events in buffers larger than that.

I don't think it's a serious problem. Huge processing buffers are not
very useful in practice. Having to call a function 10 times a second
instead of 1 time a second rarely makes a difference. Again, it would
harm the most common use case (realtime, low-latency audio processing)
to slightly benefit a rare use case (non-realtime or high-latency
processing, like song playback), so not worth it in my opinion.
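To make the 16.16 limit concrete, here is what such a fixed-point
timestamp looks like (the helper names are mine, for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* 16.16 fixed-point timestamp: 16 bits of integer sample offset,
 * 16 bits of subsample fraction. The integer part tops out at 65535,
 * which is what caps the addressable buffer size. */
static uint32_t make_timestamp_16_16(uint32_t sample, uint32_t fraction) {
    return (sample << 16) | (fraction & 0xFFFF);
}

static uint32_t timestamp_sample(uint32_t ts)   { return ts >> 16; }
static uint32_t timestamp_fraction(uint32_t ts) { return ts & 0xFFFF; }
```

The whole thing still fits the 4-byte timestamp field of the proposed
header, which is the point of preferring it over 32.32.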

Thanks for all the comments and improvements,
