On Mon, Mar 08, 2004 at 04:18:11 +0100, Tim Goetze wrote:
ok, after going over tap_limiter.c in detail i see the point of communicating
latency information to the host at runtime (and neither a dedicated descriptor
member, nor RDF, will ever succeed in trying to accommodate that behaviour).
using a dedicated CONTROL | OUTPUT port for this purpose is indeed
a very sensible option.
consequently, all we need to do is document the "latency" port in
ladspa.h, i think.
Yes, but we need to think carefully about the name; "_latency" or
".latency" might be a better choice, with a note that hosts should ignore
ports prefixed with "_" or "." that they do not understand.
When we thrashed it out, Paul and I did consider this, but I can't remember
why we went with plain "latency" in the end. Paul?
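
For illustration, a minimal sketch of what such a declaration might look
like in a plugin. The audio ports, indices and identifier names are invented
for the example, and whether the control port ends up named "latency",
"_latency" or ".latency" is exactly the question above:

#include <ladspa.h>

/* Illustrative port layout: one audio in, one audio out, plus the
 * reported-latency control output. */
enum {
    PORT_INPUT   = 0,
    PORT_OUTPUT  = 1,
    PORT_LATENCY = 2,   /* latency in samples, CONTROL | OUTPUT */
    PORT_COUNT
};

static const LADSPA_PortDescriptor port_descriptors[PORT_COUNT] = {
    LADSPA_PORT_AUDIO   | LADSPA_PORT_INPUT,
    LADSPA_PORT_AUDIO   | LADSPA_PORT_OUTPUT,
    LADSPA_PORT_CONTROL | LADSPA_PORT_OUTPUT,
};

static const char * const port_names[PORT_COUNT] = {
    "Input",
    "Output",
    "latency",   /* or "_latency" / ".latency" if a reserved prefix is adopted */
};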
a much cleaner way to do this would be a 'get_latency()' method,
required to return the same figure throughout the plugin lifecycle.
we can still do that and not require the 'latency' port hack.
I don't think that's cleaner - it requires that at least the activate()
method (preferably run()) has been called, but with no way to enforce that
requirement, whereas writing the value to a port physically requires that
run() or activate() has been called. Also it's inherently synchronous with
the run() cycle, being set during each call.
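
To make the comparison concrete, here is a rough sketch of the port
approach. The struct, field and function names are invented, and the loop
is just a pass-through placeholder for real DSP; the point is that the
value is written inside run() itself, so it cannot be read before
activate()/run() and is always in step with the current cycle:

#include <ladspa.h>

/* Invented instance struct for a delaying plugin such as a limiter. */
typedef struct {
    LADSPA_Data *input;
    LADSPA_Data *output;
    LADSPA_Data *latency;        /* connected to the CONTROL | OUTPUT "latency" port */
    unsigned long delay_samples; /* however many frames the DSP actually delays by */
} Limiter;

static void run_limiter(LADSPA_Handle instance, unsigned long sample_count)
{
    Limiter *l = (Limiter *)instance;
    unsigned long i;

    /* Report the latency every cycle: the host can only see the value
     * once run() (or activate()) has been called, and it is refreshed
     * in step with the audio it applies to. */
    if (l->latency)
        *l->latency = (LADSPA_Data)l->delay_samples;

    /* Pass-through placeholder; a real limiter would run its delay
     * line / gain computation here. */
    for (i = 0; i < sample_count; i++)
        l->output[i] = l->input[i];
}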
- Steve