You'll need all these fields at once when sending control ramp events
to voices, so unions won't help. The struct is exactly 32 bytes, so
we're going to end up with 64-byte events if we add anything. No big
deal, maybe. (64-bit platforms will need an extra 4 bytes for 'next'
anyway.)
I think it is premature to worry about the size of events. Let's get what
data we need in there and optimize it after we're done arguing :)
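For concreteness, here's a rough sketch of the kind of ramp event being
discussed; the field names and layout are my own guesses, not the actual
struct, but they show where the bytes go and why one extra field can push a
power-of-two event pool from 32-byte to 64-byte slots.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only -- field names and layout are guesses, not the
 * actual event struct under discussion. */
typedef struct event {
    struct event *next;   /* queue/pool link: 4 bytes on 32-bit, 8 on 64-bit */
    uint32_t      when;   /* timestamp in frames */
    uint16_t      type;   /* event type (ramp, note, string, raw data, ...) */
    uint16_t      index;  /* target control index */
    int32_t       voice;  /* target voice id, or -1 for per-instrument */
    float         value;  /* ramp target value */
    uint32_t      frames; /* ramp duration in frames */
} event;

int main(void)
{
    /* If a fixed-size allocator rounds to powers of two, one field too
     * many pushes every event from a 32-byte slot to a 64-byte slot. */
    printf("sizeof(event) = %zu\n", sizeof(event));
    return 0;
}
```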
Well, you do expect a beat-synced effect to track the *beat*, right?
Now, if you have two timelines running at different tempi, which one
do you track?
Actually, no. I can't think of anything that would track the beat if it
wasn't started on a beat. Assuming 4/4 time, and a note played on an eighth
note between beats - what kind of effect would lock onto a true beat edge?
I'd really expect anything to sync to a beat-width and do a (delay,
whatever), but for that effect to be off by 1/2 beat because it was started
1/2 beat off. Can you give me an example?
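A sketch of the distinction, with made-up function names: a delay synced to
the beat *width* only needs the tempo, while locking to the true beat *edge*
would also need the transport position, so the effect knows how far into the
current beat it was started.

```c
#include <stdio.h>

/* Illustrative only: a delay synced to the beat *width* needs just the
 * tempo, regardless of where in the bar it was started. */
static double beat_width_delay_sec(double bpm, double beats)
{
    return 60.0 / bpm * beats;   /* e.g. 1 beat at 120 BPM = 0.5 s */
}

/* Locking to the true beat *edge* would additionally need the current
 * transport position, so the effect can wait out the fraction of a beat
 * remaining before the next edge. */
static double time_to_next_beat_sec(double bpm, double position_in_beats)
{
    double frac = position_in_beats - (long)position_in_beats;
    return (1.0 - frac) * 60.0 / bpm;
}

int main(void)
{
    printf("1-beat delay at 120 BPM: %.3f s\n",
           beat_width_delay_sec(120.0, 1.0));
    printf("time to next beat edge from 3.5 beats: %.3f s\n",
           time_to_next_beat_sec(120.0, 3.5));
    return 0;
}
```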
They need different event types, but that applies to string and raw
data controls as well. And this just makes them easier to handle.
I want to limit the number of special-case events. If we have to special-case
this stuff, then I'm going to argue against controls again :)
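For reference, a sketch (with made-up names) of what per-kind event types
might look like; this is the sort of list the quoted point is about, and the
sort of growth the reply wants to limit. String and raw-data controls get
their own types here because their payloads are pointers rather than inline
floats.

```c
/* Hypothetical event type list -- names are illustrative only. */
typedef enum {
    EV_CONTROL_SET,     /* set a scalar control immediately */
    EV_CONTROL_RAMP,    /* ramp a scalar control to a target over time */
    EV_CONTROL_STRING,  /* string control: payload is a pointer, not a float */
    EV_CONTROL_RAW,     /* raw data control: pointer + size */
    EV_VOICE_ON,        /* start a voice */
    EV_VOICE_OFF        /* stop a voice */
} event_type;
```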
Do you expect a synth to play anything but some default frequency,
without pitch input? :-)
Umm, more or less, yes. A simple sample player with no pitch control would
just play the sample. A white-noise generator would make noise but not have
any sense of pitch.