[LAD] "enhanced event port" LV2 extension proposal

Krzysztof Foltman wdev at foltman.com
Wed Nov 28 15:56:16 UTC 2007


Lars Luthman wrote:

> I guess my point was that I think this should be part of the semantics
> of the particular event type that needs it.

Well, making the common part common (for all "large" events) doesn't
hurt. And it can potentially make stuff easier for transparent bridging
over the network (when each "large" type implements an interface for
serialization/deserialization).

Again, the problem is perhaps that I just outlined the "large
events/interface pointer" idea and didn't present it fully.

> It might
> cause some duplication of code, but it keeps the basic event transport
> simpler.

I don't understand how. Maybe there's some misunderstanding about the
event kinds.

After your change, we basically have two event representations, "small"
(say, ep:shortData) and "large" (ep:longData).

- "small" events (<65536 bytes), where the content is stored directly in
the header (first 8 bytes of content) and following bytes, if necessary

- "large" events (arbitrary size), which are practically object
pointers; they're passed by reference (to IDataBuffer-derived interfaces)
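To make the two representations concrete, here's a minimal C sketch of what
such an event header could look like. All field names and widths are my own
assumptions for illustration (a 16-bit size field is what would bound "small"
events to under 65536 bytes); none of it is from the actual proposal.

```c
#include <stdint.h>

/* Illustrative layout only: names and field widths are assumptions,
 * not taken from the proposal. */
typedef struct {
    uint32_t timestamp;  /* 16-bit frame : 16-bit subsample fraction */
    uint16_t type;       /* event type id, negotiated with the host */
    uint16_t size;       /* payload size in bytes; 16 bits => < 65536 */
    union {
        uint8_t data[8]; /* "small": first 8 payload bytes, rest follow */
        void   *obj;     /* "large": pointer to an IDataBuffer-style object */
    } content;
} enhanced_event;
```

A "small" MIDI event fits entirely in `content.data`; a video frame would put
an object pointer in `content.obj` instead.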

The only interaction you'll have with "large" events will be:

- when you have to make specific use of them (you just cast the
pointer to some IYourInterface pointer and call functions through it),
as in the case of a video processing plugin

- when you're creating a network "bridge" plugin-host pair and have to
pass these events over the network (that's where you use the
hypothetical IDataBuffer interface's serialization functions).
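As a sketch of what that hypothetical IDataBuffer interface might look like in
C: everything below, from the names to the signatures, is an assumption for
illustration, including the trivial flat-memory implementation (a real
texture-backed object would download from video memory in its serialize
function instead).

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical interface; all names and signatures are assumptions. */
typedef struct IDataBuffer IDataBuffer;
struct IDataBuffer {
    /* Flatten the object into 'out' (e.g. read back an OpenGL texture);
     * returns the number of bytes needed, copying only when out != NULL
     * and capacity is sufficient. */
    size_t (*serialize)(IDataBuffer *self, uint8_t *out, size_t capacity);
    /* Reference counting, so bridges and hosts can share the object. */
    void (*addref)(IDataBuffer *self);
    void (*unref)(IDataBuffer *self);
};

/* A trivial implementation whose data happens to live in one flat buffer. */
typedef struct {
    IDataBuffer iface; /* interface first, so a plain cast works */
    int refs;
    size_t len;
    uint8_t bytes[64];
} FlatBuffer;

static size_t flat_serialize(IDataBuffer *self, uint8_t *out, size_t cap) {
    FlatBuffer *fb = (FlatBuffer *)self;
    if (out != NULL && cap >= fb->len)
        memcpy(out, fb->bytes, fb->len);
    return fb->len;
}
static void flat_addref(IDataBuffer *self) { ((FlatBuffer *)self)->refs++; }
static void flat_unref(IDataBuffer *self)  { ((FlatBuffer *)self)->refs--; }

static void flat_init(FlatBuffer *fb, const uint8_t *src, size_t len) {
    fb->iface.serialize = flat_serialize;
    fb->iface.addref = flat_addref;
    fb->iface.unref = flat_unref;
    fb->refs = 1;
    fb->len = len;
    memcpy(fb->bytes, src, len);
}
```

A network bridge would only ever talk to the IDataBuffer part, never to the
concrete type behind it.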

Serialization is necessary because "large" events can potentially refer
to some "foreign" objects, like handles to OpenGL textures in video
memory and what not :) You cannot assume that all of your "large" event
data will be conveniently placed in a single contiguous buffer in RAM,
because it might not be the most practical way of dealing with them.

If your data can be placed in a single contiguous buffer and there's not
much harm (or use) in copying it, "small" events are the way to go.

If you get the "large" event you can't use (because you don't handle the
specific event type), just skip the event (just as with "small" events).
It won't hurt and won't complicate the transport :) The same code skips
both "small" and "large" events of unrecognized type.

The only potential problem is what happens when the plugin returns an
object (a "large" event) to the host in an output event buffer and the
host ignores it - but then, we could specify that the host must
explicitly call a "decrease refcount" function on every "large" event
received from the plugin, known or unknown, to avoid memory leaks. This
is only a slight inconvenience for host authors, and besides, plugins
shouldn't return unknown (not agreed upon with the host) events anyway :)
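The rule could be stated as in this toy sketch, where every name is made up:
the host performs exactly one unref per "large" output event, whether it
understood the type or not.

```c
/* Toy refcounted object standing in for a "large" event payload;
 * all names here are hypothetical. */
typedef struct {
    int refs;
    int released; /* set once the last reference is dropped */
} large_obj;

static void large_obj_unref(large_obj *o) {
    if (--o->refs == 0)
        o->released = 1; /* a real object would free its resources here */
}

/* Host side: after running the plugin, drop exactly one reference per
 * "large" event found in the output buffer, known type or not. */
static void host_release_outputs(large_obj **events, int count) {
    for (int i = 0; i < count; ++i)
        large_obj_unref(events[i]);
}
```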

> You could have a MIDI source control two synth plugins to get a layered
> sound for example. In that case it would definitely be nice to not have
> to rewrite the event type in each event header for the different
> plugins.

Yes.

> lv2:optionalFeature and lv2:requiredFeature). For many things it may
> make sense to have them optional, but there could be plugins that e.g.
> don't do anything but filter or process MIDI events (arpeggiators,
> keyboard splitters) - loading such a plugin in a host that doesn't
> support MIDI events wouldn't make much sense.

Right. That's a good argument - filtering the plugin list so that
plugins that would be useless in a given context (because of required
event types) aren't displayed.

> don't rub that in my face 10 years from now). Not sure I like a union
> instead of just a raw byte array, but I can live with it. I still think
> the IDataBuffer thing should be implemented separately in the event
> types that need it though.

Well, it *is* done that way, isn't it? It is used only for "large"
events, so it is implemented separately in the event types that need it! :)

In other words, "large" event is the "small" event with additional
clarification that the content is the interface pointer.

We could even call those events "data" instead of "small", and "object
pointer" instead of "large". Again, a specific event type declares
itself either as "data" or as "object pointer". So, MIDI and Set
Parameter event types would be "data", while video, waveform and OpenGL
geometry event types would be "object pointers".

I also think events-as-objects have quite a lot of potential when it
comes to, say, host-plugin or inter-plugin interactions :)

> Right. I think it's a bit ugly to have a discrepancy between the max
> number of frames in a buffer and the max frame that you can sync an
> event to, but if there are good practical reasons for it I guess I can
> live with that too. It just feels a bit... odd.

I totally agree, it feels slightly ugly and odd, but I would keep it
like that. I see three remedies:

- expanding the structure (slow!)
- changing the bit ratio of the integer:fraction parts (which makes
access to the specific parts harder, and will cause complaints from
subsample purists ;) )
- or introducing a "65536 samples milestone" kind of event, similar to
the "clear" code in LZW compression, separating events from different
65536-sample "eras" :)

The third solution is relatively inexpensive - one extra event per
65536 samples - but I guess it could be added as a further extension,
once it becomes a practical (and not just style-related) problem.
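A sketch of that third remedy, under the assumption of a 16.16 fixed-point
timestamp; the milestone type id and every name below are invented for
illustration.

```c
#include <stdint.h>

/* Assumed 16.16 fixed-point timestamp: a 16-bit frame offset within the
 * current 65536-sample "era", plus a 16-bit subsample fraction. The
 * milestone type id and all names are invented for illustration. */
enum { EVT_MILESTONE = 0xFFFF };

typedef struct {
    uint16_t frame;    /* integer offset within the current era */
    uint16_t subframe; /* fractional part, in 1/65536ths of a sample */
    uint16_t type;
} tiny_event;

/* Returns the absolute frame of 'ev'. A milestone event carries no
 * payload and simply advances the era base by 65536 samples, so events
 * from different eras can share one buffer without widening the
 * timestamp field. */
static uint32_t absolute_frame(uint32_t *era_base, const tiny_event *ev) {
    if (ev->type == EVT_MILESTONE) {
        *era_base += 65536u;
        return *era_base;
    }
    return *era_base + ev->frame;
}
```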

If the plugin does not implement this extension, it cannot handle
buffers of more than 65536 samples - and that should be perfectly fine
in most cases. Hell, max buffer size in Buzz was 256 samples, pitiful by
today's standards, and it was still quite efficient.

Krzysztof



More information about the Linux-audio-dev mailing list