On Wed, 2007-11-28 at 14:17 +0000, Krzysztof Foltman wrote:
> Lars Luthman wrote:
>>> As I said, you didn't even provide any way to transfer the ownership
>>> of the buffer to the plugin, so that it doesn't have to copy it.
>> Actually he did. Just pass a pointer and a byte size in the buffer and
>> type it with the URI http://this.buffer.is.yours.dude/ .
> How does the host know if the buffer has or has not been acquired
> (owned) by the plugin? With my approach, a plugin can either ignore the
> data completely, copy it into a safe place, or increase a reference
> count so that the host doesn't free it until the plugin has finished
> with it.
I guess my point was that I think this should be part of the semantics
of the particular event type that needs it. An event type that
potentially uses huge buffers and would benefit from passing ownership
around could define its own method for doing that (passing function
pointers, plugin-writable flags etc. as part of the event data). It
might cause some duplication of code, but it keeps the basic event
transport simpler.
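
For concreteness, here's roughly what I mean: a buffer-passing event
type could carry its own ownership protocol in its payload, while the
generic transport never looks inside. All the names below (BufferEvent,
release, taken) are made up for illustration, not a proposal:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical payload for a buffer-passing event type. The ownership
   rules live entirely in this type's own specification. */
typedef struct {
    void    *data;               /* the (possibly huge) buffer */
    uint32_t size;               /* buffer size in bytes */
    void   (*release)(void *);   /* called by whoever ends up owning it */
    uint32_t taken;              /* plugin-writable: set to 1 to take
                                    ownership instead of copying */
} BufferEvent;

/* Plugin side: ignore, copy, or claim the buffer. */
static void plugin_handle_buffer(BufferEvent *ev, int want_ownership)
{
    if (want_ownership) {
        ev->taken = 1;           /* host must not free the buffer now */
    } else {
        void *copy = malloc(ev->size);
        if (copy) {
            memcpy(copy, ev->data, ev->size);
            /* ... use the copy, free it when done ... */
            free(copy);
        }
    }
}

/* Host side, after the plugin's run() callback returns. */
static void host_after_run(BufferEvent *ev)
{
    if (!ev->taken)
        ev->release(ev->data);   /* plugin ignored or copied the data */
}

A reference count instead of a flag would work just as well; the point
is that it's defined by that event type, not by the transport.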
>> I'd prefer to have the host define the URI -> number mappings itself
>> so it doesn't have to remap them for every different plugin it has
>> loaded,
> It surely is a potential problem. While remapping can be really
> trivial, it's not efficient and perhaps should be avoided.
> On the other hand, how often do we send exactly the *same* event buffer
> to different plugins (think Set Parameter messages!)? Is that a typical
> or atypical case? What are example scenarios where that problem might
> arise?
You could have a MIDI source control two synth plugins to get a layered
sound, for example. In that case it would definitely be nice not to have
to rewrite the event type in each event header for the different
plugins.
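
The host-side mapping could be as simple as this (made-up names, no
overflow handling, just to show the idea):

#include <stdint.h>
#include <string.h>

static const char *type_uris[65536];   /* URI for each numeric type ID */
static uint32_t    num_types = 0;

/* Return the host-global numeric ID for a type URI, registering it on
   first use. The same ID is then valid for every loaded plugin, so one
   event buffer can be sent to several plugins without rewriting the
   type field in each event header. */
static uint16_t host_map_uri(const char *uri)
{
    for (uint32_t i = 0; i < num_types; ++i)
        if (strcmp(type_uris[i], uri) == 0)
            return (uint16_t)i;
    type_uris[num_types] = uri;        /* caller keeps the string alive */
    return (uint16_t)num_types++;
}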
>> etc) so the host knows which events it needs to be able to handle (and
>> can refuse to load the plugin if it doesn't recognise an event type),
> Actually, a correct behaviour for the host would be to ignore the fact
> that the plugin handles some events that the host doesn't. The fact
> that a plugin supports, say, video doesn't mean that the host must have
> anything to do with that video.
If the event types are lv2:Features the plugin automatically gets to
decide whether each particular type is optional or required (see
lv2:optionalFeature and lv2:requiredFeature). For many things it may
make sense to have them optional, but there could be plugins that e.g.
don't do anything but filter or process MIDI events (arpeggiators,
keyboard splitters) - loading such a plugin in a host that doesn't
support MIDI events wouldn't make much sense.
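
At instantiation time that could look something like this on the plugin
side, assuming each event type is passed as an entry in the LV2_Feature
array (the MIDI event URI below is made up):

#include <string.h>
#include "lv2.h"   /* LV2_Feature, LV2_Handle, LV2_Descriptor */

#define MIDI_EVENT_URI "http://example.org/ext/midi-events"  /* made up */

static LV2_Handle instantiate(const LV2_Descriptor     *descriptor,
                              double                    sample_rate,
                              const char               *bundle_path,
                              const LV2_Feature *const *features)
{
    (void)descriptor; (void)sample_rate; (void)bundle_path;

    int have_midi = 0;
    for (int i = 0; features && features[i]; ++i)
        if (!strcmp(features[i]->URI, MIDI_EVENT_URI))
            have_midi = 1;

    if (!have_midi)
        return NULL;   /* required feature missing: refuse to run */

    /* ... allocate and return the actual instance here ... */
    return NULL;       /* placeholder for this sketch */
}

A host that has read lv2:requiredFeature from the .ttl wouldn't even get
this far, of course; it would simply refuse to load the plugin.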
>> I'd really prefer to have the event size explicitly in the event
>> header. Something like this:
>>
>> struct LV2_EVENT_HEADER {
>>     uint32_t timestamp;   // timestamp
>>     uint32_t size;        // event size
>>     uint8_t  event_type;  // event type number
>>     uint8_t  data[7];     // event data
>> };
> A small modification: what about
>
> struct LV2_EVENT_HEADER {
>     uint32_t timestamp;
>     uint16_t size;
>     uint16_t event_type;
>     union {
>         uint8_t      data[8];
>         float        dataf[2];
>         int32_t      datai[2];
>         IDataBuffer *ptr;
>     };
> };
>
> We don't need >65535 bytes for size, because copying THAT large blocks
> in the processing thread (no matter if the host or the plugin does it)
> is a bad idea; just pass a pointer/object! 8 bytes of data is better
> than 7 bytes, because you can fit a 64-bit pointer there (on 64-bit
> machines). Or an int32_t and a float (a "set parameter" event).
Fair enough. 16 bits for the size should be enough, and a 16-bit type
field gives room for more types than any host could ever need (OK, OK,
don't rub that in my face 10 years from now). I'm not sure I like a
union instead of just a raw byte array, but I can live with it. I still
think the IDataBuffer thing should be implemented separately in the
event types that need it, though.
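
Just to illustrate what the inline payload buys us for the "set
parameter" case (I've left the buffer-pointer member out of the union,
and the type number is made up):

#include <stdint.h>

struct LV2_EVENT_HEADER {
    uint32_t timestamp;
    uint16_t size;
    uint16_t event_type;
    union {                  /* anonymous union: C11 or a GCC extension */
        uint8_t  data[8];
        float    dataf[2];
        int32_t  datai[2];
    };
};

#define SET_PARAM_TYPE 1     /* made-up numeric type ID */

/* Pack an int32_t parameter index and a float value into the 8 inline
   payload bytes; no pointer chasing, no separate allocation. */
static void write_set_param(struct LV2_EVENT_HEADER *ev, uint32_t frame,
                            int32_t param, float value)
{
    ev->timestamp  = frame;
    ev->size       = 8;      /* payload fits entirely in the header */
    ev->event_type = SET_PARAM_TYPE;
    ev->datai[0]   = param;  /* bytes 0-3 */
    ev->dataf[1]   = value;  /* bytes 4-7 */
}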
>> point (assuming that we want subsample precision) but I'd prefer 32.32
>> to 16.16. 16.16 would effectively limit the audio buffer size to 65536
>> samples since you can't address events in buffers larger than that.
> I don't think it's a serious problem. Huge processing buffers are not
> very useful in practice. Having to call a function 10 times a second
> instead of once a second rarely makes a difference. Again, it would
> harm the most common use case (realtime, low-latency audio processing)
> to slightly benefit a rare use case (non-realtime or high-latency
> processing, like song playback), so it's not worth it in my opinion.
Right. I think it's a bit ugly to have a discrepancy between the max
number of frames in a buffer and the max frame that you can sync an
event to, but if there are good practical reasons for it I guess I can
live with that too. It just feels a bit... odd.
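
For reference, the 16.16 encoding we're talking about, which is where
the 65536-sample limit comes from:

#include <stdint.h>

/* Pack a frame offset and a subframe fraction in [0, 1) into a 16.16
   fixed-point timestamp: integer samples in the top 16 bits, 1/65536ths
   of a sample in the bottom 16. */
static uint32_t make_timestamp(uint16_t frame, double subframe)
{
    return ((uint32_t)frame << 16)
         | ((uint32_t)(subframe * 65536.0) & 0xFFFF);
}

static uint16_t timestamp_frame(uint32_t ts)
{
    return (uint16_t)(ts >> 16);
}

static double timestamp_subframe(uint32_t ts)
{
    return (ts & 0xFFFF) / 65536.0;
}

The uint16_t frame argument makes the limitation obvious: with 16.16
there is simply no way to point past sample 65535.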
--ll