Dave Robillard wrote:
> Why not make it satisfy most everyone by being extensible?
It *is* extensible. Note that commands 0x00-0x6F and 0x73-0x7F are
unused, so further extensions are free to define them (perhaps we need a
scheme for binding extension URIs to command numbers, to make it more
LV2-ey). And the 0x72 command can be used for pretty much any data larger
than 8+2 octets.
While it's "efficient first" and not "generic first", so to speak, it
should be fine for the intended uses.
> The idea of a generic event port is not a bad one,
I think it's not just "not a bad one". The other possibility (multiple
event ports) is less efficient, and speed is crucial here. It's also
more complex from a plugin author's perspective. So I had little
choice.
> idea at all (no matter what, someone is going to want to put something
> in there you didn't think of.
Please don't jump to conclusions, and take more time to read and analyse
the proposal.
Of course, it is possible to add new event types with arbitrary length
data, and the limitation of 8 octets per extended block is not that bad,
because you can always fit an interface pointer (32-bit or 64-bit) there.
Just look how binary data extension is implemented.
Notice that I just took the approach of optimizing for most common case
(MIDI events), and tried to maximize functionality while keeping block
size small and constant (to avoid pointer arithmetic that was
complicating Lars' proposal a bit).
> Trying to pre-define things from the top down like this is un-lv2-ey).
Well, sometimes you need to find the right tradeoff between being
efficient (memory- and speed-wise) and generic. I think I've found an
acceptable tradeoff (definitely favouring speed, but not losing
generality and not very memory-hungry).
However, I had to make some assumptions about how it will be used
(mostly implemented by inexperienced people, mostly used for MIDI and
float parameters, seldom used for advanced stuff). Oh well, I'm
repeating myself here :)
I think those are correct assumptions, but you seem to have a different
angle for looking at those things. Well, it took me years (and
failed/inadequate designs) to grow out of the "everything should be as
generic as possible" approach, so I understand why you're doing that,
but I still prefer the priority-based optimization approach that I've used.
I still think my proposal could be improved, and I don't like some
decisions that I made (basically, I made them because the alternatives
looked even more nasty), but stripping off optimizations is not the way
to go, IMO.
> Something more appropriate (IMO) might be like:
> struct LV2_EVENT
> {
> ev_stamp_t time; ///< (ignoring the timestamp type issue)
> ev_type_t type; ///< (again ignoring type issue)
> size_t buf_size; ///< size of buf in bytes
> char* buf; ///< raw event data
> }
You're suggesting a "classic" textbook chunked data approach, which
works, no doubt. However, it has some problems with it, which might not
be considered very major, but seem to make my approach slightly more
favourable:
- too much data to be accessed in the most common use case (in 32-bit
environment, 16 bytes of header plus event data possibly in distant
memory); we don't need to save every byte of RAM, but when you need to
read and write twice as much RAM as you could, then maybe it's worth
rethinking it
- separation of event header and event data in the most common case; it
would be better not to cause cache thrashing too much
- it encourages memory fragmentation (experienced people will allocate
event data for all events in the same buffer; I wonder about inexperienced
ones - one malloc per event? :) )
- it doesn't deal with large data properly (because the plugin cannot
start "owning" the raw event data instead of copying it from the buffer
provided); imagine copying a video buffer in the process() function of a
plugin!
I'm not saying that approach is Really Bad - just that it's kind of a
pre-optimization version of my proposal (I made MIDI data very
efficient, float parameter data slightly less efficient, float parameter
data with deltas even less efficient, and binary data are pretty
inefficient :) ).
The fact that an event has to be handled is annoying enough on its own :) -
I have to end the inner loop, store state information somewhere etc. - I
don't want some additional, unnecessary memory accesses which may throw
sample data and buffers out of the cache.
> (Obviously just a quick generic knock-off to get the idea across). In
> networkey terms, this is separating transport from contents, which is
> pretty firmly established as a Good Idea.
In a network context, yes. However, _optimizing_ for the uncommon case is
not a preferable approach to me.
The arbitrary binary data command (0x72) mentioned in my proposal can
give you practically everything you need, and can be used in a
network-transparent way, as long as data in the "binary data" chunks are
self-contained (don't refer to other buffers).
However, my proposal lacks any mechanism to be used for serializing
arguments of future commands defined by extensions.
Still, that problem was solved many times in history, by deriving extra
interfaces for the new commands from an interface that provides
reference counting and marshalling. IPersistStream type stuff, for the
victims of Microsoft APIs.
It is a bit complex (or at least not as trivial as plain MIDI), but it
would be only used in hosts and complex plugins that use extension
events, so I guess it'd be fine.
> I very strongly feel that if 'more than MIDI' events are going to be
> mixed with MIDI events in the same port (s/MIDI/whatever), then the
> event transport mechanism needs to be 100% event type agnostic.
On the other hand, "100% generic" means "almost 100% unoptimized". By
throwing away extra information, you often throw away the chances for
optimization, so to speak.
Instead of thinking in terms of MIDI vs non-MIDI, try thinking of "my"
event types as "short" (8 octets), "medium" (16 octets) and "large"
(arbitrary-sized blobs). The fact that all short events are MIDI events
is, I think, less important. It's also not set in stone.
> It's the same approach LV2 takes with ports, and it works beautifully there.
On the other hand, it deals with a trivial problem, and solves it in a
complex way. That's not an engineer's dream :)
Regards,
Krzysztof
> However, it (MIDI) also has its own age. And limitations. In particular,
> the amount of per-note control is pitiful.
> .... In the meantime, maybe the MIDI guys will decide for us :D
They did, "Key Based Instrument Controllers" provide for per-note
controllers.
Granted MIDI remains stupid. MIDI is 15 different ways of saying "set this
control".
All of MIDI can be reduced to two messages: set-control (addressed by
Channel, Voice and Controller ID) .. and SYSEX - for data-dumps.
Jeff McClintock
Message: 13
Date: Fri, 30 Nov 2007 00:30:58 +0000
From: Krzysztof Foltman <wdev(a)foltman.com>
Subject: Re: [LAD] "enhanced event port" LV2 extension proposal
To: Dave Robillard <drobilla(a)connect.carleton.ca>, LAD
<linux-audio-dev(a)lists.linuxaudio.org>
Message-ID: <474F59C2.1050805(a)foltman.com>
Content-Type: text/plain; charset=ISO-8859-2; format=flowed
Dave Robillard wrote:
> I /really/ don't like screwing around with MIDI. Just make the events
> pure, raw MIDI. Jack MIDI events are 'just n bytes of MIDI', Alsa has
> functions to get at 'just n bytes of MIDI', and... well, it's just MIDI.
>
However, it also has its own age. And limitations. In particular, the
amount of per-note control is pitiful.
I can always use hacks to get around the limitations, or introduce
per-note control via a separate "set note parameter" event type. But
hacks are... hacky, and an extended extension of an extension for every
single feature is a bit inelegant too.
Anyway - so far, I have no code that would make use of this, so we might
keep it as plain MIDI. And then we have the next 5 years to decide the
details of the feature. In the meantime, maybe the MIDI guys will decide
for us :D
Krzysztof
Hi
I'm trying to make a guitar tuner ladspa plugin.
I already wrote a first draft, but listplugins can't find my plugin in
the library :
$ listplugins
...
/usr/lib/ladspa/TunerUnit.so: <---
/usr/lib/ladspa/tap_doubler.so:
...
It could be because I've chosen a uniqueID already in use.
Is there a list or a way to register a uniqueID?
Here is part of my _init() function:
g_psTUDescriptor->UniqueID   = 4053;
g_psTUDescriptor->Label      = strdup("tner");
g_psTUDescriptor->Properties = LADSPA_PROPERTY_HARD_RT_CAPABLE;
g_psTUDescriptor->Name       = strdup("Universal Digital Tuner Unit");
g_psTUDescriptor->Maker      = strdup("Rémi Thébault");
g_psTUDescriptor->Copyright  = strdup("None");
By the way, for the moment I use FFTW for the frequency identification.
Is there a way to identify a frequency with quite high precision without
computing a big spectrum analysis?
For 0.7 Hz precision with an FFT, I need 65536 samples, which will cause a
lot of CPU load every 1.5 seconds.
It is obvious that so much CPU load is not acceptable for a small plugin
in a realtime application.
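For reference, the precision figure follows directly from the FFT bin spacing. A minimal sketch (the 44.1 kHz sample rate is an assumption; `fft_bin_hz` and `fft_window_seconds` are illustrative helpers, not part of any LADSPA or FFTW API):

```c
/* Frequency resolution of a plain FFT is one bin: delta_f = fs / N. */
static double fft_bin_hz(double sample_rate, unsigned fft_size)
{
    return sample_rate / (double)fft_size;
}

/* Seconds of audio needed to fill one FFT window of N samples. */
static double fft_window_seconds(double sample_rate, unsigned fft_size)
{
    return (double)fft_size / sample_rate;
}
```

At 44100 Hz, 65536 samples give roughly 0.67 Hz per bin and a window of roughly 1.49 s, matching the numbers above. Autocorrelation or parabolic interpolation of the FFT peak position can reach sub-bin precision from far fewer samples.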
Rémi
On Friday 30 November 2007, Dave Robillard wrote:
> > That's why I'm using a Port as the smallest "connection unit",
> > much like LADSPA ports, so there is no need for an event type
> > field of any kind at all, let alone a URI.
>
> Ports /are/ the smallest "connection unit". But ports can /contain/
> events, and if we want multiple kinds of events in a single port,
> then the events themselves need a type field.
Sounds like there's a fundamental difference there, then. I'm using a
model where a port is nothing more than something that deals with a
value of some sort. There are no channels, voices, different events
or anything "inside" a port - just a value. An output port
can "operate" that value of a compatible input port.
Of course, that "value" could be anything, but I'm not explicitly
supporting that on the API level. If plugins want to use MIDI
messages, they're on their own when it comes to mapping of channels,
CCs etc. That stuff is really beyond the scope of my project, as one
wouldn't be able to configure and control such things normally.
> > The data in the events *could* be MIDI or whatever (the host
> > doesn't even have to understand any of it), but normally, in the
> > case of Audiality 2, it'll be modular synth style ramped control
> > events. That is, one port controls exactly one value - just like
> > in LADSPA, only using timestamped events with ramping info instead
> > of one value per buffer.
>
> The host might not have to (though in practise it usually does), but
> other plugins certainly do. You can't process events if you don't
> even know what they are.
Yes, obviously. I don't quite see what you think I'm trying to say
here. :-)
> > Extensibility is a non-issue on this level.
>
> OK, the event extension doesn't define your ramped control events,
> so you're not allowed to use them, ever, period.
>
> ... looks like extensibility is an issue at this level, eh? ;)
Right, but that's mostly about Audiality 2 anyway. There, if I for
some reason started with control events without ramping, I'd add
another "control events v2" port type. Whether that type happens to be a
superset of the first one doesn't really matter, as they're still not
compatible.
Where it makes sense, one can provide converters to/from other types,
but to the host (the low level machinery directly dealing with plugin
graphs, that is), those are just ordinary plugins with only one input
port and one output port.
> > What you do if you want
> > more stuff is just grab another URI for a new event based
> > protocol, and you get to start over with a fresh event struct to
> > use in whatever way you like. (In fact, as it is, the host doesn't
> > even have to know you'll be using events. It just provides a LIFO
> > pool of events for any plugins that might need it.)
>
> Sounds like you're thinking too hard.
Nah. I'm just in the middle of another project, and the Audiality 2
code isn't in a state where I could post that without just adding to
the confusion. And, I think we might have a terminology impedance
mismatch. :-)
> "Events" here are just a bunch of bytes in a flat buffer.
Mine are implemented as linked lists of small memory blocks, for
various reasons. (I've had a working implementation for years, so
I'll stick with that for now. Not saying it's the best or most
efficient way of doing it, but I have yet to figure out how to bend
flat buffers around my event routing model - or the other way
around.)
I did "hardwire" fixed point timestamps as those are closely related
to the whole deal with sample frames, buffers etc - but the data area
is indeed just a bunch of raw bytes.
> There is definitely no protocol here. Please, please don't
> say "protocol". That way lies painful digressed conversations,
> trust me.
I'm open to alternative terminology. :-)
What I'm talking about is just the name of "whatever goes on between
connected ports." I don't want the term to be too specific, as it
also covers LADSPA style audio buffers, shared buffers (which can
contain function pointers) and whatever else plugins might use to
communicate.
//David Olofson - Programmer, Composer, Open Source Advocate
.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'
Hi,
A fresh new release from the FreeBoB project is available. It is only a
maintenance release. It fixes a few bugs which were reported. If you
don't have problems with your current version of libfreebob, there
is no reason to update to this one.
Available through the SourceForge release system at:
http://sourceforge.net/project/showfiles.php?group_id=117802
Have fun recording!
The FreeBoB Team
PS: Expect FFADO soon...
Hi there,
This may be a very simple question, I am not sure.
I am hacking on a buzz-like tracker, aldrin. It is written in
Python, but the heavy lifting is done by libzzub, which uses
portmidi for midi implementation.
At the moment, midi in devices can be selected; midi out is a
bit buggy, but I think that should not be too difficult to sort
out. The main problem which I would like help with is that the
midi inputs and outputs from aldrin do not show up with
aconnect -lo or -li at all.
Is it possible to get portmidi to work with alsa correctly in
this way, or are midi connections always handled within the
program, connecting to alsa devices, but invisible to aconnect?
The source for both projects is at:
http://trac.zeitherrschaft.org/aldrin/browser
http://trac.zeitherrschaft.org/zzub/browser
All the best,
James
I am looking for someone with device driver experience to implement an
audio-over-ethernet protocol on Linux. This is a short-term paid
position; the license and basic architecture of the driver are not yet
decided. If you're interested, get in touch with me.
thanks,
--p
Hello all,
A few weeks ago there was a discussion about proposed enhancements to
Lars Luthman's LV2 MIDI port extension (lv2-midiport.h). I've read the
suggestions that you gave, and tried to hack together a new version,
that would enhance the initial standard that Lars designed, by:
- addressing alignment-related issues
- using fixed-point integers for time values
- adding optional sub-channel addressing (for per-note control changes
and similar features)
- optionally allowing sending floating point automation events in the
same stream as MIDI events (because mixing several event streams at the
same time is inefficient)
- optionally allowing sending arbitrary data (similar to SysEx, but
fully binary safe and without having to pay to register at MMA :) )
- trying to keep it simple, efficient and easy to implement
Finding the right compromise is often an impossible task, so it probably
won't satisfy everyone. However, I think its advantages outweigh the
drawbacks and limitations that I can currently see. The
basic MIDI-type events can be passed with minimal effort, however, the
more advanced features require more effort if you want to make use of
them (though they may be safely ignored).
Here we go - treat it as pseudocode sketch meant to get more feedback
from you, not as the compilable C code (that's why there is no license
or copyrights attached, in case anybody's wondering). In fact, there's
no actual *code* (as in: anything that translates into CPU instructions)
here, just data structures and declarations. Also, because it is not
meant to be actually used at this stage, I'm not assigning it any URI.
// data structures for proposed "enhanced event port" extension of the LV2
// audio plugin standard
// loosely based on Lars Luthman's lv2-midiport.h; most ideas come from
// people on the linux-audio-developers mailing list (David Olofson,
// Lars Luthman, Stefan Westerfeld, Nedko Arnaudov and others)
// base event structure
// size: 8 octets
// if command < 0x80, one more 8-octet structure follows; the type of that
// structure depends on the value of command
// a plugin that does not understand the given command can safely skip
// the next 8 octets
struct LV2_EVENT_HEADER
{
    uint32_t timestamp;  ///< fixed-point timestamp with 16 integer and
                         ///< 16 fractional bits; use of the fractional
                         ///< part is optional
    uint8_t command;     ///< command byte: >= 0x80 for MIDI commands,
                         ///< < 0x80 for 16-octet extended commands
    uint8_t arg1;        ///< for MIDI commands that need it; 0 otherwise
    uint8_t arg2;        ///< for MIDI commands that need it; 0 otherwise
    uint8_t subchannel;  ///< optional, allows control changes for specific
                         ///< notes; the writer must set it to 0 if unused
};
// buffer of events, analogous to the LV2_MIDI structure in Lars Luthman's
// extension
struct LV2_EVENT_BUFFER
{
    LV2_EVENT_HEADER *first; ///< pointer to the first event; the next
                             ///< ones (if any) follow it directly
    uint32_t event_count;    ///< number of events in the buffer
    uint32_t block_count;    ///< buffer size (current or max, depending on
                             ///< whether it's an input or output buffer),
                             ///< in 8-octet blocks
};
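The constant 8-octet block size makes the reader loop trivial. Here is a sketch of how a plugin might walk such a buffer, skipping extended commands it does not understand (`process_events` and its structure definitions are illustrative, not part of the proposal itself):

```c
#include <stdint.h>
#include <stdio.h>

/* Structures as sketched in the proposal (8 octets per block). */
struct LV2_EVENT_HEADER {
    uint32_t timestamp;       /* 16.16 fixed point */
    uint8_t  command;         /* >= 0x80: MIDI; < 0x80: one extra block follows */
    uint8_t  arg1, arg2;
    uint8_t  subchannel;
};

struct LV2_EVENT_BUFFER {
    struct LV2_EVENT_HEADER *first;
    uint32_t event_count;
    uint32_t block_count;
};

/* Walk the buffer; handle plain MIDI, skip unknown extended commands.
   Returns the number of MIDI events seen. */
static unsigned process_events(const struct LV2_EVENT_BUFFER *buf)
{
    const struct LV2_EVENT_HEADER *ev = buf->first;
    unsigned midi_seen = 0;
    for (uint32_t i = 0; i < buf->event_count; ++i) {
        if (ev->command >= 0x80) {
            /* plain MIDI: status byte in command, data in arg1/arg2 */
            printf("MIDI %02x %02x %02x at frame %u\n",
                   ev->command, ev->arg1, ev->arg2, ev->timestamp >> 16);
            ++midi_seen;
            ev += 1;             /* one 8-octet block consumed */
        } else {
            ev += 2;             /* safely skip the 8-octet extension block */
        }
    }
    return midi_seen;
}
```

Note how an unrecognized command costs the reader nothing but a fixed-size pointer bump, with no per-event length field to parse.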
// the features below are optional and may be ignored by skipping the
// 8-octet extension block

// optional command 0x70: set parameter
struct LV2_EVENT_EXT_FLOAT
{
    uint32_t lv2_ctl_port;
    float new_value;
};
// feel _encouraged_ to ignore everything below this line ;)
// advanced optional command 0x71: set parameter delta
// specifies that until the next event the parameter is supposed to
// increase linearly by new_value_delta each sample
// this is for super-precise envelopes and other insane paranoid stuff :)
struct LV2_EVENT_EXT_FLOAT_DELTA
{
    uint32_t lv2_ctl_port;
    float new_value_delta;
};
struct LV2_BINARY_BUFFER;

struct LV2_BINARY_BUFFER_FUNCTIONS
{
    // increase reference count and return data pointer
    const char *(*ref_inc)(struct LV2_BINARY_BUFFER *self);
    // decrease reference count
    void (*ref_dec)(struct LV2_BINARY_BUFFER *self);
    // get event type URI or MIME type
    const char *(*get_type)(struct LV2_BINARY_BUFFER *self);
    // get string encoding (if applicable)
    const char *(*get_encoding)(struct LV2_BINARY_BUFFER *self);
};

struct LV2_BINARY_BUFFER
{
    // pointer analogous to a virtual method table pointer
    struct LV2_BINARY_BUFFER_FUNCTIONS *funcs;
    // length of data in octets
    uint32_t length;
};
// advanced optional command 0x72: data block
// in order to get at the data, ref_inc must be called before the
// process function ends (and ref_dec later to release the data);
// otherwise the host will free or reuse it to avoid leaks
struct LV2_EVENT_EXT_BINARY
{
    union {
        uint64_t pad; // padding so that the structure is 64 bits long
        struct LV2_BINARY_BUFFER *bb;
    };
};
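To illustrate the ownership rule for command 0x72, here is a sketch of a plugin-side handler together with a toy host-side buffer implementation (all `toy_*` names and `handle_data_block` are hypothetical; only the two structures come from the proposal):

```c
#include <stdint.h>

struct LV2_BINARY_BUFFER;

struct LV2_BINARY_BUFFER_FUNCTIONS {
    const char *(*ref_inc)(struct LV2_BINARY_BUFFER *self);
    void        (*ref_dec)(struct LV2_BINARY_BUFFER *self);
    const char *(*get_type)(struct LV2_BINARY_BUFFER *self);
    const char *(*get_encoding)(struct LV2_BINARY_BUFFER *self);
};

struct LV2_BINARY_BUFFER {
    struct LV2_BINARY_BUFFER_FUNCTIONS *funcs;
    uint32_t length;
};

/* --- toy host-side buffer, purely for illustration --- */
static int toy_refcount = 0;
static const char toy_data[] = "hello";

static const char *toy_ref_inc(struct LV2_BINARY_BUFFER *self)
{ (void)self; ++toy_refcount; return toy_data; }
static void toy_ref_dec(struct LV2_BINARY_BUFFER *self)
{ (void)self; --toy_refcount; }
static const char *toy_get_type(struct LV2_BINARY_BUFFER *self)
{ (void)self; return "text/plain"; }
static const char *toy_get_encoding(struct LV2_BINARY_BUFFER *self)
{ (void)self; return "UTF-8"; }

static struct LV2_BINARY_BUFFER_FUNCTIONS toy_funcs = {
    toy_ref_inc, toy_ref_dec, toy_get_type, toy_get_encoding
};

/* Plugin-side handler for a data block: take a reference before
   process() returns, use the data, then release it. Returns the
   number of octets seen. */
static uint32_t handle_data_block(struct LV2_BINARY_BUFFER *bb)
{
    const char *data = bb->funcs->ref_inc(bb); /* plugin now co-owns the data */
    uint32_t n = bb->length;
    (void)data; /* ... use data[0 .. n-1] here, no copying needed ... */
    bb->funcs->ref_dec(bb); /* done: host may free or reuse the block */
    return n;
}
```

A plugin that wants to keep the data past process() would simply defer the `ref_dec` call, which is exactly the "start owning instead of copying" behaviour the flat-buffer approach cannot offer.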
---
Have fun :)
Krzysztof
Hello again,
I've coded a module to use the fast_lookahead_limiter plugin. When I
call the LADSPA plugin's run function, it segfaults. gdb reveals:
(gdb) step
runFastLookaheadLimiter (instance=0x80de1d8, sample_count=1) at
fast_lookahead_limiter_1913.xml:79
79 fast_lookahead_limiter_1913.xml: No such file or directory.
in fast_lookahead_limiter_1913.xml
Current language: auto; currently c
---
I can't find any information about this. I don't understand why it
needs to access the xml file. I don't know what I am meant to do to make
sure it finds it, or even where it's meant to be.
There was no such problem implementing the GLAME Butterworth filters.
How am I meant to tell?
Cheers,
James.