Dave Robillard wrote:
> More or less agreed. The size thing isn't really an issue in
> practice, just reaching for equivalence with OSC/Jack for
> equivalence's sake. Dealing with a 24-bit number is pretty weird
> though. Worth it?
I prefer 16+16 to 24+8. I can see more uses for 16-bit types than for
24-bit sizes (basically, if you define a new URI for every parameter
exposed by a plugin, you'll find people insane enough to try that).
Perhaps you can see applications where 24+8 is more desirable. Decide
for yourself.
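For concreteness, the two packings could look like this in C. The struct and field names here are made up for illustration, not part of any agreed format:

```c
#include <stdint.h>

/* 16+16: a 16-bit type field and a 16-bit size field.
 * (All names here are hypothetical.) */
typedef struct {
    uint16_t type;  /* room for 65536 event types */
    uint16_t size;  /* payloads up to 64 KB */
} Header16_16;

/* 24+8: a 24-bit size packed with an 8-bit type in one 32-bit word. */
typedef struct {
    uint32_t size_and_type;  /* size in the upper 24 bits, type in the lower 8 */
} Header24_8;

/* The 24+8 variant needs shifting and masking to unpack: */
static inline uint32_t h24_size(const Header24_8 *h)
{
    return h->size_and_type >> 8;
}

static inline uint8_t h24_type(const Header24_8 *h)
{
    return (uint8_t)(h->size_and_type & 0xffu);
}
```

The unpacking cost is trivial either way; the difference is that the 16+16 fields can be read directly as struct members.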
> The 'free' payload is tempting since the majority of events (at
> least for now) are going to be 4 bytes of MIDI. OTOH, we probably
> want the data itself aligned (consider e.g. ramp events).
>
> We have 32-bit alignment for the data. Is 64-bit alignment worth
> shooting for?
Yes, I think so. 64-bit platforms are getting more and more popular.
BTW, does reading a double from an address misaligned by a dword
involve a performance penalty? If not, we can say that we use an
alignment of sizeof(void *) on a particular platform. Should be enough.
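A sketch of what "alignment of sizeof(void *)" could mean in code; the names are my own, not from any spec:

```c
#include <stddef.h>

/* Align event payloads to the platform's pointer size, as suggested
 * above. EVENT_ALIGN is 4 on 32-bit platforms and 8 on 64-bit ones. */
#define EVENT_ALIGN (sizeof(void *))

/* Round an offset up to the next multiple of EVENT_ALIGN.
 * (EVENT_ALIGN is a power of two on all relevant platforms,
 * so the mask trick is valid.) */
static inline size_t event_align_up(size_t offset)
{
    return (offset + EVENT_ALIGN - 1) & ~(EVENT_ALIGN - 1);
}
```

The same source then produces 4-byte padding on 32-bit builds and 8-byte padding on 64-bit builds, without any per-platform #ifdefs.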
> These structs don't actually 'exist', for the nth time ;) so the
> compiler doesn't take care of any padding/alignment etc. We're
> defining
Well, at some point we're going to end up with some structs anyway :)
And it would be (slightly) better if whatever binary representation we
agree on has a 1:1 C struct counterpart, if you know what I mean.
Even if finding the address of the next struct is going to require some
pointer magic hidden in macros.
I prefer event->whatever to EVENT_GET_WHATEVER(event), if
possible :)
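One way to get both: plain member access for the fields, with the pointer arithmetic confined to a single "next event" macro. Everything below (the names, the 32-bit padding rule) is a hypothetical sketch, not an agreed format:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical event header; the payload bytes follow it directly
 * in the buffer. */
typedef struct {
    uint32_t frames;  /* timestamp within the buffer */
    uint16_t type;
    uint16_t size;    /* payload size in bytes, padding not included */
} Event;

/* Pad payloads to 32-bit boundaries (the alignment mentioned above). */
static inline size_t event_pad4(size_t n)
{
    return (n + 3u) & ~(size_t)3u;
}

/* The only "pointer magic", hidden in a macro: stepping to the next
 * event. Ordinary field access stays as plain as ev->type or
 * ev->size. */
#define EVENT_NEXT(ev) \
    ((Event *)((char *)(ev) + sizeof(Event) + event_pad4((ev)->size)))
```

So iteration needs the macro, but everything else reads like a normal struct.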
In other words, we can define the buffer content in two ways:
1. By specifying the offsets of particular fields (including payload
fields for every event type) as numbers. We then need to ensure that
our platform-independent binary representation really is platform-
independent (i.e. all payload types that contain pointers reserve 64
bits for them; otherwise they become unusable in 64-bit environments
because the pointer doesn't fit, etc.).
2. By defining it as C structures, and calculating offsets based on how
the platform's compiler lays those structures out (i.e. if the header
is 12 bytes and a pointer follows, then on 32-bit architectures the
pointer gets offset 12 and the payload size is 4, but on 64-bit
architectures it gets offset 16 and the payload size is 12: 4 bytes of
padding plus 8 bytes of actual pointer).
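The 12-vs-16 offset difference can be checked with offsetof. The struct below is a made-up example matching the description (a 12-byte header followed by a pointer), and the offsets assume the common 32-bit and 64-bit ABIs:

```c
#include <stdint.h>
#include <stddef.h>

/* A 12-byte header followed by a pointer field. On typical 32-bit
 * ABIs the compiler places 'data' at offset 12; on typical 64-bit
 * ABIs the pointer must be 8-byte aligned, so 4 padding bytes are
 * inserted and 'data' lands at offset 16. */
typedef struct {
    uint32_t frames;
    uint32_t subframes;
    uint16_t type;
    uint16_t size;   /* header ends at byte 12 */
    void    *data;   /* offset 12 (32-bit) or 16 (64-bit) */
} PointerEvent;
```

offsetof(PointerEvent, data) gives the compiler's answer directly, which is exactly the point of solution 2: no hand-maintained offset tables.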
As you can see, I clearly prefer solution 2, because the main advantage
of solution 1 (platform independence of the binary format) is not 100%
ensured anyway, unless every single event type used is
platform-independent (and perhaps self-contained, i.e. no pointers
outside the buffer, no handles to local resources, etc.).
A platform-dependent binary layout of messages is not such an uncommon
thing. In fact, a platform-independent binary layout of structures is
usually found in network protocols, not APIs. And what we're defining
now is more closely related to an API than to a network protocol, isn't
it? ;)
Krzysztof