[LAD] "enhanced event port" LV2 extension proposal

David Olofson david at olofson.net
Fri Nov 30 10:23:52 UTC 2007


On Friday 30 November 2007, Krzysztof Foltman wrote:
[...several points that I totally agree with...]
> If you use integers, perhaps the timestamps should be stored as
> delta values. 

That would seem to add complexity with little gain, though I haven't 
really thought hard about that...

It seems more straightforward to just use sample frame offsets when 
sending; you just grab the loop counter/sample index. However, in the 
specific case of my "instant dispatch" architecture, you'd need to 
look at the last event in the queue to calculate the delta - but then 
again, you need to touch that event anyway, to set the 'next' 
field... (Linked lists.) No showstopper issues either way, I think.
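
To make that concrete, here's a rough sketch of the send side - 
made-up struct layout and names, purely for illustration, not 
anything from the actual proposal:

	#include <stddef.h>
	#include <stdint.h>

	typedef struct Event
	{
		struct Event	*next;	/* Singly linked queue */
		uint32_t	frame;	/* Absolute sample frame offset */
		/* ...type, payload etc... */
	} Event;

	typedef struct Queue
	{
		Event	*head;
		Event	*tail;
	} Queue;

	static void queue_push(Queue *q, Event *ev)
	{
		ev->next = NULL;
		if(q->tail)
		{
			/*
			 * We touch the tail to set 'next' anyway,
			 * so reading its timestamp for a send side
			 * delta would be nearly free:
			 *	delta = ev->frame - q->tail->frame;
			 */
			q->tail->next = ev;
		}
		else
			q->head = ev;
		q->tail = ev;
	}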

When receiving, OTOH, deltas would be brilliant! You'd just process 
events until you get one with a non-zero delta - and then you process 
the number of sample frames indicated by that delta. (Obviously, the 
end-of-buffer stop condition must be dealt with somewhere. Adding a 
dummy "stop" event scheduled for right after the buffer would 
eliminate the per-audio-fragment check for "fragment_frames > 
remaining_buffer_frames".)
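
Something like this is what I'm picturing for the receive side - 
again just a sketch with hypothetical names (Plugin, handle_event(), 
render_audio()), assuming events arrive with 'delta' timestamps (the 
first one relative to the start of the buffer) and that the dummy 
stop event is queued at frame 'nframes':

	static void run(Plugin *p, const Event *ev, uint32_t nframes)
	{
		uint32_t pos = 0;
		uint32_t next = ev->delta;  /* Frames to first event */
		while(pos < nframes)
		{
			/* Dispatch all events that are due right now */
			while(!next)
			{
				handle_event(p, ev);
				ev = ev->next;
				next = ev->delta;
			}

			/*
			 * Render up to the next event. The dummy stop
			 * event right after the buffer guarantees that
			 * ev is never NULL, and that the last fragment
			 * ends exactly at 'nframes' - no overshoot
			 * check needed.
			 */
			render_audio(p, pos, next);
			pos += next;
			next = 0;	/* We've now reached that event */
		}
	}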


> Perhaps fractional parts could be just stored in events that demand
> fractional timing (ie. grain start event), removing that part from
> generic protocol.

That's another idea I might steal! ;-)
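
In struct terms, I'm picturing something like this - made-up names 
and field sizes, just to show where the fraction would live:

	#include <stdint.h>

	/* Generic event header; timestamps are plain sample frames */
	typedef struct EventHeader
	{
		uint32_t	frame;	/* Integer sample frame */
		uint16_t	type;
		uint16_t	size;	/* Payload size in bytes */
	} EventHeader;

	/*
	 * Only event types that actually demand sub-sample timing
	 * carry the fractional part, in the payload:
	 */
	typedef struct GrainStartBody
	{
		uint16_t	fraction;  /* In 1/65536ths of a frame */
		/* ...other grain parameters... */
	} GrainStartBody;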

I'm not sure, but it seems that you'd normally not want to drive a 
sub-sample timestamped input from an integer timestamped output, or 
vice versa. An output intended for generating grain timing would be 
concerned with emitting events at exactly the right times, whereas a 
normal control output would be value-oriented.

This may not seem to matter much at first, but it makes all the 
difference in the world if you consider event processors. With pure 
values, you might want to add extra events or even regenerate the 
signal completely, but this would break down when controlling 
something that relies on event timing. Might be worth considering 
even in non-modular synth environments, as you might want to edit 
these events in a sequencer. This is starting to sound like highly 
experimental stuff, though. :-)
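
To illustrate the point about event processors (hypothetical event 
types and helpers again, reusing the Queue/queue_push() sketch from 
above): a "thinning" processor could legally drop or regenerate 
value-oriented control events, but would have to pass timing 
critical events through untouched:

	static void process_events(Queue *out, const Event *ev)
	{
		for( ; ev; ev = ev->next)
			switch(ev->type)
			{
			  case EV_CONTROL:
				/* Value oriented: free to drop, merge
				   or regenerate these... */
				if(significant_change(ev))
					queue_push(out, copy_event(ev));
				break;
			  default:
				/* Timing critical (say, a grain start);
				   must pass through unmodified! */
				queue_push(out, copy_event(ev));
				break;
			}
	}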


> Perhaps we're still overlooking something.

I'd want to try actually implementing some different, sensible plugins 
using this before I really decide what makes sense and what doesn't. 
Granular synthesis is about the only application I can think of right 
now that *really* needs sub-sample accurate timing, so that's the 
scenario I'm considering, obviously - along with all the normal code 
that doesn't need or want to mess with anything below sample frames.


//David Olofson - Programmer, Composer, Open Source Advocate

.-------  http://olofson.net - Games, SDL examples  -------.
|        http://zeespace.net - 2.5D rendering engine       |
|       http://audiality.org - Music/audio engine          |
|     http://eel.olofson.net - Real time scripting         |
'--  http://www.reologica.se - Rheology instrumentation  --'
