The way VST does it, however, that wouldn't be
needed, since
timestamps are relative to buffers: 0 == start of the current buffer.
Might look nice to plugins, but I foresee minor nightmares in
multithreaded hosts, hosts that want to split buffers, hosts that
support different buffer sizes in parts of the net, hosts that
support multiple sample rates in the system, communication over
the wire,... (Yet another reason why I think the VST event system
is a pretty bad design.)
Hmm... I can see why this is tempting; it avoids the wrapping
problem, among other things. Are you sure it's not better that way?
Wrapping is not a problem, so why avoid it? :-)
So time starts at some point decided by the host. Does the host pass the
current timestamp to process(), so plugins know what time it is? I assume
that if the host loops, or the user jumps back in song-position, time does
not jump with it; it just keeps on ticking?
I guess my only question is how do plugins know what time it is now?
Seriously, though, 32 bit is probably sensible, since
you'd really
rather not end up in a situation where you have to consider timestamp
wrap intervals when you decide what buffer size to use. (Use larger
buffers than 32768 frames in Audiality, and you're in trouble.)
Anything smaller than 32 bits doesn't save you any cycles, saves a WHOPPING
2 bytes of memory, and causes potential alignment issues that nix your 2 byte
savings. 32 bit is an obvious answer, I think.
That assumes you never have to
worry about wrapping *at all* - which is not true. Use
32 bit
timestamps internally in a sequencer, and you'll get a bug report
from the first person who happens to get more than 2 or 4 Gframes
between two events in the database.
So start the timer at 0xffff0000 and force anyone testing to deal with a
wrap early on.