On Sun, 2007-12-02 at 00:15 +0000, Krzysztof Foltman wrote:
> Dave Robillard wrote:
> > What is the point of having two separate event definitions when one
> > will do for both, though?
> Perhaps, if we go 64-bit (although I still think it's overkill!) it
> might make sense to make a timestamp a sort of a union, so some event
> types will use one int64_t member (say, 200 picosecond units) instead
> of integer+fractional? Just cosmetics.
> So we'd have something like:
>
> struct LV2_EVENT_HEADER {
>     union {
>         // for event streams that use one large number as timestamp,
>         // like OSC???
>         int64_t timestamp64;
>         struct {
>             int32_t timestamp;
>             int32_t timestamp_fract;
>         };
>     };
>     // don't see any other choice but int, opaque pointer won't fit on
>     // x86-64, plus, index is all we need, given a good URI mapping
>     // mechanism
>     uint32_t type;
>     uint32_t size;
> };
Guess we could. I don't really want to mess up the struct, but whatever.
The number of bytes is all that actually matters; some URI somewhere
will define what they are. A union can make things nicer, so why not...
not really relevant.
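
(Purely for illustration - say a frame-stamped stream fills the integer
+ fractional pair while an OSC-style stream writes the single 64-bit
member. 'frame', 'subframe' and 'osc_time' are made-up placeholders,
not anything defined anywhere:)

struct LV2_EVENT_HEADER ev;

/* a frame-stamped stream would fill the pair... */
ev.timestamp       = frame;     /* integer frame offset in this cycle */
ev.timestamp_fract = subframe;  /* fractional part, stream-defined */

/* ...while an OSC-style stream would write the single 64-bit member
 * instead (one or the other, never both - they overlap in the union) */
ev.timestamp64 = osc_time;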
> With 16-byte granularity for payload (ie. size is rounded up to
> nearest multiple of 16 to determine next header address). Or perhaps
> 8-byte, winning some memory at cost of messier code.
I still don't see where you're getting all this messy code stuff.
Adding 8 to a pointer isn't any more or less messy than adding 16 to a
pointer. Or 4, or 2, or 3, or 17, or whatever. pointer + number. There
will surely be a macro to round any value up to whatever that number
should be. Alignment is purely a performance issue.
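
Something like this would do for any alignment (made-up names, just a
sketch, not from any actual header):

#define ROUND_UP(size, align) (((size) + (align) - 1) / (align) * (align))

/* the next header then sits at
 *   offset + sizeof(struct LV2_EVENT_HEADER) + ROUND_UP(ev->size, align)
 * and whether align is 4, 8 or 16 changes nothing in the code */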
> > Why? The cons are obvious, what are the pros? A few bits?
> Using a small structure when a small structure is just fine.
Ignoring my 'extended' uses, halving the stamp size vs. Jack MIDI isn't
really fine. Dealing with that properly would be a PITA (a real one,
not an 'adding a different number' one ;) )
> When you have lots of events, this may be very important. In other
> situations, no clear advantages of 8-byte struct over 16-byte.
> > A byte here and a byte there in the header makes no difference to
> > any of this. We should try to keep it as small as possible, yes, but
> > it doesn't affect what the using code looks like at all.
> Well, let me try to rephrase it, because I sense a huge
> miscommunication here.
>
> When payload is always aligned to header size (8 bytes in my case),
> the loop can look just like this:
>
> for (size_t i = 0; i < count; i += (events[i].size + 7) >> 3) {
>     // use events[i] here, cast to event type struct if needed
> }
>
> or (events[i].size + 15) >> 4 with a 16-byte header.
Yeah, because everyone is going to write the loop with
(events[i].size + 7) >> 3 in it. Geeze. :P

A bitshift versus an addition is hardly significant.
> Short, simple, efficient.

, weird looking, insignificant anyway.
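
FWIW the byte-offset version without the power-of-two assumption is
just as short. A sketch, reusing the struct LV2_EVENT_HEADER posted
above and assuming size counts payload bytes only ('run_events' is a
made-up name):

static void run_events(uint8_t* buf, uint32_t buf_size)
{
    for (uint32_t off = 0; off < buf_size; ) {
        struct LV2_EVENT_HEADER* ev = (struct LV2_EVENT_HEADER*)(buf + off);
        /* use ev here, dispatch on ev->type; payload follows the header */
        off += sizeof(struct LV2_EVENT_HEADER)
             + ((ev->size + 7) / 8) * 8;   /* or 4, or 16 - same code */
    }
}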
> Well... good thing you don't have to use those stamps for the
> 'others'.
Again, I'm not proposing anyone use OSC style stamps in place of frame
stamps...
> So we end up with two-three event stream types, each using a different
> timing scheme. Not as bad as it sounds, certainly, because each of
> these types will be used for different things.
>
> What I proposed was to use a different event structure layout for
> those event streams. As I said above, the differences between those
> streams are so huge that translation between 8-byte and 16-byte
> headers will be the least of the problems :)
3 separate and incompatible extensions and event structs everyone has to
deal with so you can have a bit shift in a for loop and save a tiny
fraction of a nanosecond processing time per event...
> > somewhere is not quite a convincing enough argument for me to be
> > happy dealing with 5 different event structs (and all the
> > translation) instead of 1 ;)
> 2 structs, not 5. And the translation will have to be there anyway,
> unless you expect sample-based plugins to read OSC-style timestamps.
Frankly if anyone knows this would be a PITA, it's me, and it would be a
PITA. It would make the overall thing more complicated for no good
reason. Ringbuffering events around and changing the time base and such
would be extremely annoying (not to mention significantly more expensive
due to all the copying from struct type A to struct type B) versus just
using the same struct everywhere. Then you just change the time stamp
in place. Easy, fast.
In apps that do have to do things like this, the performance hit of
copying structs around is way, way more significant than a shift here
vs. an add there - in both space and time.
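
E.g. re-stamping a buffered run of events is then just a walk over the
one struct. Same sketch assumptions as above (made-up 'shift_events',
frame-stamped events only):

static void shift_events(uint8_t* buf, uint32_t buf_size,
                         int32_t frame_offset)
{
    for (uint32_t off = 0; off < buf_size; ) {
        struct LV2_EVENT_HEADER* ev = (struct LV2_EVENT_HEADER*)(buf + off);
        ev->timestamp += frame_offset;   /* change the stamp in place */
        off += sizeof(struct LV2_EVENT_HEADER) + ((ev->size + 7) / 8) * 8;
    }
}

With a different header layout per stream type you would be allocating
a second buffer and copying every event across into the other layout
instead.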
> > Bit... odd. Sure, saves 2 bytes, but at the cost of throwing out
> > that OSC stamp compatibility (which I guarantee will be actually
> > useful). Plus... well, 2 bytes. Recentish chips can keep a few
> > million of them in cache. :)
> By saving 2 bytes, it saves 8-16 bytes. Magic, isn't it? :)
Only because of your excessive padding and irrational desire to apply
the << operator as much as possible. Pad the data to 32 bits (or
whatever, OSC is 32 bits, maybe 64 is better), add the offset to the
data pointer to get the next event. A single addition is hardly slow.
> If you want to store OSC messages transparently, though, 32 bits
> would be nice.
Wouldn't it though? ;)
-DR-