On Friday 30 November 2007, Dave Robillard wrote:
> > That's why I'm using a Port as the smallest "connection unit",
> > much like LADSPA ports, so there is no need for an event type
> > field of any kind at all, let alone a URI.
>
> Ports /are/ the smallest "connection unit". But ports can /contain/
> events, and if we want multiple kinds of events in a single port,
> then the events themselves need a type field.
Sounds like there's a fundamental difference there, then. I'm using a
model where a port is nothing more than something that deals with a
value of some sort. There are no channels, voices, different events
or anything "inside" a port - just a value. An output port
can "operate" that value of a compatible input port.
Of course, that "value" could be anything, but I'm not explicitly
supporting that on the API level. If plugins want to use MIDI
messages, they're on their own when it comes to mapping of channels,
CCs etc. That stuff is really beyond the scope of my project, as one
wouldn't be able to configure and control such things normally.
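To make that concrete, here is a minimal sketch of the model
(hypothetical names, nothing like actual Audiality 2 code):

typedef struct PORT
{
    float *value;   /* the one value this port deals with */
} PORT;

/* Connecting just points both ports at the same storage; the output
   port then "operates on" the input port's value every cycle. */
void connect(PORT *output, PORT *input)
{
    output->value = input->value;
}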
> > The data in the events *could* be MIDI or whatever (the host
> > doesn't even have to understand any of it), but normally, in the
> > case of Audiality 2, it'll be modular synth style ramped control
> > events. That is, one port controls exactly one value - just like
> > in LADSPA, only using timestamped events with ramping info instead
> > of one value per buffer.
>
> The host might not have to (though in practice it usually does), but
> other plugins certainly do. You can't process events if you don't
> even know what they are.
Yes, obviously. I don't quite see what you think I'm trying to say
here. :-)
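(To be concrete about the ramped control events I mean above - a
sketch with made-up names, not actual Audiality 2 code:

typedef struct RAMP_EVENT
{
    unsigned timestamp;  /* start time, in sample frames */
    float target;        /* value to ramp towards */
    unsigned duration;   /* ramp length in frames; 0 means jump */
} RAMP_EVENT;

One port carries a stream of these for exactly one control value.)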
> > Extensibility is a non-issue on this level.
>
> OK, the event extension doesn't define your ramped control events,
> so you're not allowed to use them, ever, period.
>
> ... looks like extensibility is an issue at this level, eh? ;)
Right, but that's mostly about Audiality 2 anyway. There, if I for
some reason started with control events without ramping, I'd add
another "control events v2" port type. If that type happens to be a
superset of the first one doesn't really matter, as they're still not
compatible.
Where it makes sense, one can provide converters to/from other types,
but to the host (the low level machinery directly dealing with plugin
graphs, that is), those are just ordinary plugins with only one input
port and one output port.
> > What you do if you want
> > more stuff is just grab another URI for a new event based
> > protocol, and you get to start over with a fresh event struct to
> > use in whatever way you like. (In fact, as it is, the host doesn't
> > even have to know you'll be using events. It just provides a LIFO
> > pool of events for any plugins that might need it.)
>
> Sounds like you're thinking too hard.
Nah. I'm just in the middle of another project, and the Audiality 2
code isn't in a state where I could post that without just adding to
the confusion. And, I think we might have a terminology impedance
mismatch. :-)
> "Events" here are just a bunch of bytes in a flat buffer.
Mine are implemented as linked lists of small memory blocks, for
various reasons. (I've had a working implementation for years, so
I'll stick with that for now. Not saying it's the best or most
efficient way of doing it, but I have yet to figure out how to bend
flat buffers around my event routing model - or the other way
around.)
I did "hardwire" fixed point timestamps as those are closely related
to the whole deal with sample frames, buffers etc - but the data area
is indeed just a bunch of raw bytes.
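Roughly along these lines (a simplified sketch with made-up names,
not the actual implementation):

typedef struct EVENT
{
    struct EVENT *next;      /* singly linked, sorted by timestamp */
    unsigned timestamp;      /* 16:16 fixed point sample frames */
    unsigned char data[16];  /* raw bytes; meaning depends on port type */
} EVENT;

Events are grabbed from the host's LIFO pool and passed from port to
port by relinking, rather than by copying into flat buffers.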
> There is definitely no protocol here. Please, please don't
> say "protocol". That way lies painful digressed conversations,
> trust me.
I'm open to alternative terminology. :-)
What I'm talking about is just the name of "whatever goes on between
connected ports." I don't want the term to be too specific, as it
also covers LADSPA style audio buffers, shared buffers (which can
contain function pointers) and whatever else plugins might use to
communicate.
//David Olofson - Programmer, Composer, Open Source Advocate
.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'
Hi,
A fresh new release from the FreeBoB project is available. It is only a
maintenance release. It fixes a few bugs which were reported. If you
don't have problems with your current version of libfreebob, there
is no reason to update to this one.
Available through the SourceForge release system at:
http://sourceforge.net/project/showfiles.php?group_id=117802
Have fun recording!
The FreeBoB Team
PS: Expect FFADO soon...
Hi there,
This may be a very simple question; I am not sure.
I am hacking on a buzz-like tracker, aldrin. It is written in
Python, but the heavy lifting is done by libzzub, which uses
portmidi for its MIDI implementation.
At the moment, MIDI in devices can be selected; MIDI out is a
bit buggy, but I think that should not be too difficult to sort
out. The main problem I would like help with is that the MIDI
inputs and outputs from aldrin do not show up with
aconnect -lo or -li at all.
Is it possible to get portmidi to work with alsa correctly in
this way, or are midi connections always handled within the
program, connecting to alsa devices, but invisible to aconnect?
The source for both projects is at:
http://trac.zeitherrschaft.org/aldrin/browser
http://trac.zeitherrschaft.org/zzub/browser
All the best,
James
I am looking for someone with device driver experience to implement an
audio-over-ethernet protocol on Linux. This is a short-term paid
position; the license and basic architecture of the driver are not yet
decided. If you're interested, get in touch with me.
thanks,
--p
Hello all,
A few weeks ago there was a discussion about proposed enhancements to
Lars Luthman's LV2 MIDI port extension (lv2-midiport.h). I've read the
suggestions that you gave, and tried to hack together a new version
that enhances the initial standard that Lars designed by:
- addressing alignment-related issues
- using fixed-point integers for time values
- adding optional sub-channel addressing (for per-note control changes
and similar features)
- optionally allowing sending floating point automation events in the
same stream as MIDI events (because mixing several event streams at the
same time is inefficient)
- optionally allowing sending arbitrary data (similar to SysEx, but
fully binary safe and without having to pay to register at MMA :) )
- trying to keep it simple, efficient and easy to implement
Finding the right compromise is often an impossible task, so it
probably won't satisfy everyone. However, I think its advantages
outweigh the drawbacks and limitations that I can currently see. The
basic MIDI-type events can be passed with minimal effort; the more
advanced features require more effort if you want to make use of
them (though they may be safely ignored).
Here we go - treat it as a pseudocode sketch meant to get more feedback
from you, not as compilable C code (that's why there is no license
or copyrights attached, in case anybody's wondering). In fact, there's
no actual *code* (as in: anything that translates into CPU instructions)
here, just data structures and declarations. Also, because it is not
meant to be actually used at this stage, I'm not assigning it any URI.
// data structures for proposed "enhanced event port" extension of the
// LV2 audio plugin standard
// loosely based on Lars Luthman's lv2-midiport.h; most ideas come from
// people on the linux-audio-developers mailing list (David Olofson,
// Lars Luthman, Stefan Westerfeld, Nedko Arnaudov and others)
// base event structure
// size: 8 octets
// if command < 0x80, one more 8-octet structure follows; the type of
// that structure is dependent on value of command
// the plugin that does not understand the given command can safely
// skip the next 8 bytes
struct LV2_EVENT_HEADER
{
    uint32_t timestamp;  ///< fixed-point timestamp with 16 integer and
                         ///< 16 fractional bits; use of the fractional
                         ///< part is optional
    uint8_t command;     ///< command byte: >= 0x80 for MIDI commands,
                         ///< < 0x80 for 16-octet extended commands
    uint8_t arg1;        ///< for MIDI commands that need it; should be
                         ///< 0 otherwise
    uint8_t arg2;        ///< for MIDI commands that need it; should be
                         ///< 0 otherwise
    uint8_t subchannel;  ///< optional, allows control changes for
                         ///< specific notes; writer must set to 0 if
                         ///< not used
};
// buffer of events, analogous to the LV2_MIDI structure in Lars
// Luthman's extension
struct LV2_EVENT_BUFFER
{
    struct LV2_EVENT_HEADER *first;  ///< pointer to the first event;
                                     ///< the next ones (if any) follow
                                     ///< it directly
    uint32_t event_count;  ///< number of events in the buffer
    uint32_t block_count;  ///< buffer size (current or max, depending
                           ///< on whether it's an input or output
                           ///< buffer) in 8-octet blocks
};
// the features below are optional and may be ignored by skipping the
// 8-octet extension block
// optional command 0x70: set parameter
struct LV2_EVENT_EXT_FLOAT
{
uint32_t lv2_ctl_port;
float new_value;
};
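To illustrate the intended decoding (same pseudocode caveat as above;
walk_events and the parameter-applying part are made up):

// Sketch: walking an event buffer, handling what we understand and
// skipping the 8-octet extension block of unknown extended commands.
void walk_events(struct LV2_EVENT_BUFFER *buf)
{
    struct LV2_EVENT_HEADER *ev = buf->first;
    uint32_t i;
    for (i = 0; i < buf->event_count; i++)
    {
        if (ev->command >= 0x80)
            ev += 1;  // plain MIDI event: command, arg1, arg2
        else if (ev->command == 0x70)
        {
            struct LV2_EVENT_EXT_FLOAT *ext =
                (struct LV2_EVENT_EXT_FLOAT *)(ev + 1);
            // ...apply ext->new_value to port ext->lv2_ctl_port...
            ev += 2;  // header plus one 8-octet extension block
        }
        else
            ev += 2;  // unknown extended command: skip it safely
    }
}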
// feel _encouraged_ to ignore everything below this line ;)
// advanced optional command 0x71: set parameter delta
// specifies that until the next event the parameter
// is supposed to increase linearly by new_value_delta each sample
// this is for super-precise envelopes and other insane paranoid stuff :)
struct LV2_EVENT_EXT_FLOAT_DELTA
{
uint32_t lv2_ctl_port;
float new_value_delta;
};
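In the plugin's run loop the delta form boils down to something like
this (a sketch; using the parameter as a gain is just an example):

// Sketch: applying command 0x71 between this event and the next one.
static void apply_ramp(float *out, const float *in,
                       uint32_t start, uint32_t end,
                       float *value, float delta)
{
    uint32_t s;
    for (s = start; s < end; s++)
    {
        *value += delta;          // linear per-sample increase
        out[s] = in[s] * *value;  // e.g. the parameter is a gain
    }
}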
struct LV2_BINARY_BUFFER;
struct LV2_BINARY_BUFFER_FUNCTIONS
{
// increase reference count and return data pointer
const char *(*ref_inc)(struct LV2_BINARY_BUFFER *self);
// decrease reference count
void (*ref_dec)(struct LV2_BINARY_BUFFER *self);
// get event type URI or MIME type
const char *(*get_type)(struct LV2_BINARY_BUFFER *self);
// get string encoding (if applicable)
const char *(*get_encoding)(struct LV2_BINARY_BUFFER *self);
};
struct LV2_BINARY_BUFFER
{
    // pointer analogous to a virtual method table pointer
    struct LV2_BINARY_BUFFER_FUNCTIONS *funcs;
    // length of data in octets
    uint32_t length;
};
// advanced optional command 0x72: data block
// to get at the data, ref_inc must be called before the process
// function ends (use ref_dec later to release the data); otherwise
// the host will free or reuse the buffer to avoid leaks
struct LV2_EVENT_EXT_BINARY
{
union {
uint64_t pad; // padding so that the structure is 64 bits long
        struct LV2_BINARY_BUFFER *bb;
};
};
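Usage from a plugin would look something like this (a sketch;
handle_data_block is made up):

// Sketch: handling command 0x72 inside the process callback.
void handle_data_block(struct LV2_EVENT_EXT_BINARY *ext)
{
    struct LV2_BINARY_BUFFER *bb = ext->bb;
    // take a reference before the process function returns,
    // or the host is free to reclaim the buffer
    const char *data = bb->funcs->ref_inc(bb);
    // ...check bb->funcs->get_type(bb), use data[0..bb->length-1]...
    bb->funcs->ref_dec(bb);  // release; keep the reference instead if
                             // the data must outlive this callback
}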
---
Have fun :)
Krzysztof
Hello again,
I've coded a module to use the fast_lookahead_limiter plugin. Upon
using the LADSPA plugin's run function, it segfaults. gdb reveals:
(gdb) step
runFastLookaheadLimiter (instance=0x80de1d8, sample_count=1) at
fast_lookahead_limiter_1913.xml:79
79 fast_lookahead_limiter_1913.xml: No such file or directory.
in fast_lookahead_limiter_1913.xml
Current language: auto; currently c
---
I can't find any information about this. I don't understand why it
needs to access the xml file. I don't know what I am meant to do to
make sure it finds it, or even where it's meant to be.
There was no such problem implementing the GLAME Butterworth filters.
How am I meant to tell?
Cheers,
James.
Hi,
I'm attempting to use dlsym to dynamically load a LADSPA plugin library
in C++. I've found various web pages describing how to use dlsym in
C++, but they do not seem entirely relevant to using LADSPA (i.e.
LADSPA plugins are mainly written in C) (??).
I've also looked at some Ardour source, but somehow it seems to use, in
a C++ file, the C method, which should not work (!?). I also looked
within the Hydrogen source, but that seems to use Qt(?) to do the
loading - and delving into the Qt source code does not appeal right
now ;-)
Anybody know where I could find some simple C++ source examples for
loading LADSPA plugins?
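(For reference, the usual pattern is plain dlopen()/dlsym(); the
sketch below compiles as both C and C++ - the explicit cast on the
dlsym() return value is the part C++ insists on. The plugin path is
just an example.)

#include <dlfcn.h>
#include <ladspa.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("/usr/lib/ladspa/amp.so", RTLD_NOW);
    if (!handle)
    {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    /* every LADSPA library exports this single entry point */
    LADSPA_Descriptor_Function desc_fn =
        (LADSPA_Descriptor_Function)dlsym(handle, "ladspa_descriptor");
    if (desc_fn)
    {
        const LADSPA_Descriptor *d;
        unsigned long i;
        for (i = 0; (d = desc_fn(i)) != NULL; i++)
            printf("%lu: %s\n", d->UniqueID, d->Label);
    }
    dlclose(handle);
    return 0;
}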
Cheers,
James
Christian Schoenebeck cuse at users.sourceforge.net:
>Am Donnerstag, 15. November 2007 18:51:30 schrieb Kjetil S. Matheussen:
>> *The msys terminal is a lot nicer than xterm
>> *msys/mingw uses fewer resources than cygwin
>> *mingw executables requires fewer dlls and are easier to
>> distribute
>> *mingw executables can be linked with non-cygwin dll's.
>>
>> disadvantages:
>> *msys/mingw is messy to set up. Check out the FAQ first.
>> *msys/mingw is very messy to upgrade.
>> (manually unpacking .tar files into the file tree)
>> *msys doesn't provide the same amount of unix tools.
>> *mingw executables can not be linked with cygwin dll's.
>
>That's not a real either/or question. Mingw is part of Cygwin as well:
>
> http://cygwin.com/packages/
>
>So you can easily setup and update both with just a few clicks in the
>cygwin
>install utility.
Sure, but then you have to use the -mno-cygwin option to gcc
when compiling, which sometimes can be tricky. So my comparison
above is still valid, and you don't need cygwin to run mingw.
The msys terminal is very nice, and I don't see any reason
to fire up the whole cygwin[1] beast to compile a native
windows app now and then, unless you need something in
cygwin which is not provided by msys of course.
I also think it's messy to mix the mingw and cygwin
environments, since they are not compatible. It's easy to
accidentally link with libraries of the other type (i.e.
compiling with mingw and linking with cygwin libraries, or
compiling with cygwin and linking with mingw libraries), and then
strange things can happen. It's better to keep the two
environments in separate installations if you can.
At least that's my experience.
[1] X running inside windows is required to make working
with cygwin somewhat comfortable
>One thing I wonder is whether mingw has any disadvantages regarding
>compiler
>optimizations?
No, that would be very strange.
> You know the term "minimalistic" in its name raises such
>doubts.
I wouldn't worry about that. Gcc is gcc.