I did some thinking last night, and I have an interesting problem, as
well as a nice solution - or so I think. Please tell me what you
think.
Consider the following situation:
* You have a plugin "A", which has 1:4:7 (Bay:Channel:Control)
(which is an Event Output) connected to some Event Input Port
named "P".
* Now, you want to connect output 1:2:9 (Bay:Channel:Control)
to that same Event Input Port "P".
So, what's the problem?
Well, as I've mentioned before, having separate Event Input ports for
Channels is probably an advantage in most cases, since it avoids
queue splitting overhead and reduces event size. (No need for a
"channel" field.)
Regardless of the above, any reasonably complex synth will most
probably have several "inner loops" working through the same number
of sample frames.
Both of these internal plugin designs - per-Channel event ports, and
multiple inner loops - have the same consequence: you're not running
*the whole plugin* one sample at a time, through the whole buffer.
Instead, you're iterating through the buffer several times.
Now, the *problem* is that whenever you send an event from inside one
of these event and/or audio processing loops mentioned above, you
risk sending events out of order, whenever two loops send to the same
port! (Note that you can't know that without comparing a ton of
pointers every time a connection is made. The host just tells you to
connect some output, and gives you an Event Port pointer and a target
Control Index to send to.)
In theory, the problem is very easy to solve: Have the host throw in
"shadow event ports", and then have it sort/merge the queues from
those into a single, ordered queue that is passed to the actual
target port.
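The sort/merge itself is cheap - here's a minimal sketch, assuming a
hypothetical XAP_event with a timestamp and a "next" link (these are
assumptions, not actual XAP definitions):

/* Hypothetical event type; the fields are assumptions. */
typedef struct XAP_event
{
    unsigned timestamp;         /* sample frame within the block */
    struct XAP_event *next;
} XAP_event;

/*
 * Merge two timestamp-ordered shadow queues into one ordered queue.
 * Classic sorted-list merge; stable, so 'a' wins ties.
 */
XAP_event *merge_queues(XAP_event *a, XAP_event *b)
{
    XAP_event head, *tail = &head;
    while(a && b)
    {
        if(a->timestamp <= b->timestamp)
        {
            tail->next = a;
            a = a->next;
        }
        else
        {
            tail->next = b;
            b = b->next;
        }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return head.next;
}

With more than two shadow queues, you'd just merge pairwise, or keep
a small heap of queue heads.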
However, how on earth could the host know which outputs of a plugin
can safely be connected to the same physical port, and which ones
*cannot*?
Easy: Output Context IDs. :-)
Whenever the host wants to connect an output, it asks the plugin for
a context ID for that (bay, channel, output) address - see
get_context_id() below - and gets an int back. The actual values
returned are irrelevant; they're only there so the host can compare
them.
How to use (plugin1 and plugin2 being the two plugins that have
outputs to be connected to the same physical event port):
typedef struct XAP_cnx_descriptor
{
    XAP_plugin *plugin;
    int bay;
    int channel;
    int output;
} XAP_cnx_descriptor;

/*
 * When you're about to make a connection to an input event port
 * that already has connections, use this to figure out whether
 * or not you need to do shadow + sort/merge.
 */
int must_shadow(XAP_cnx_descriptor *from1, XAP_cnx_descriptor *from2)
{
    int ctxid1, ctxid2;
    if(from1->plugin != from2->plugin)
        return 1;   /* Yes, *definitely*! */
    if(!from1->plugin->get_context_id)
        return 0;   /* No; this plugin has only
                     * one sending context. */
    ctxid1 = from1->plugin->get_context_id(from1);
    ctxid2 = from2->plugin->get_context_id(from2);
    return (ctxid1 != ctxid2);  /* Only if ctx IDs differ. */
}
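Host-side usage could then look something like this (connect_direct()
and connect_via_shadow() are made-up host internals, and
XAP_event_port is just a placeholder type):

/* Hypothetical host internals - not part of XAP: */
typedef struct XAP_event_port XAP_event_port;
void connect_direct(XAP_cnx_descriptor *from, XAP_event_port *to);
void connect_via_shadow(XAP_cnx_descriptor *from, XAP_event_port *to);

/*
 * Connect 'new_cnx' to 'target', which already receives events
 * from 'old_cnx'. Shadow + sort/merge only when needed.
 */
void host_connect(XAP_cnx_descriptor *new_cnx,
                  XAP_cnx_descriptor *old_cnx,
                  XAP_event_port *target)
{
    if(must_shadow(new_cnx, old_cnx))
        connect_via_shadow(new_cnx, target);
    else
        connect_direct(new_cnx, target);
}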
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
Kjetil and i have been boring the VST crew to death. so we took it
here :)
>> because when running a real-time low-latency audio system, the cost of
>> context switches is comparatively large. if you've got 1500usecs to
>> process a chunk of audio data, and you spend 150usecs of it doing
>> context switches (and the cost may be a lot greater if different tasks
>> stomp over a lot of the cache), you've just reduced your effective
>> processor power by 10%.
>>
>I don't believe you. I just did a simple context-switching/sockets
>test after I sent the last mail. And for doing 2*1024*1024
>synchronized context switches between two programs, my old 750MHz
>Duron uses 2.78 seconds. That should be about 1.3usecs per switch
>or something. By
you didn't touch much of the cache, did you?
it doesn't matter how fast the actual switch is if each task wipes out
the L1 and L2 cache, forcing a complete refill of the cache, reload of
the TLB, etc. etc. the cost of a context switch is not just a register
store+restore. the cost of it depends on what has happened since the
last context switch.
try your "simple context switch test" with a setup in which each task
writes to about 256kB of memory.
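something like this, say (a rough sketch; each iteration is two
synchronized switches plus two 256kB cache wipes - time the whole run
with time(1), then remove the memset() calls and compare):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ITERATIONS 100000
#define WORKSET (256 * 1024)

int main(void)
{
    int ab[2], ba[2];   /* parent->child, child->parent pipes */
    char token = 'x';
    char *work = malloc(WORKSET);
    int i;
    if(!work || pipe(ab) < 0 || pipe(ba) < 0)
        return 1;
    if(fork() == 0)
    {
        char *cwork = malloc(WORKSET);
        for(i = 0; i < ITERATIONS; ++i)
        {
            read(ab[0], &token, 1);
            memset(cwork, i, WORKSET);  /* trash the cache */
            write(ba[1], &token, 1);
        }
        return 0;
    }
    for(i = 0; i < ITERATIONS; ++i)
    {
        memset(work, i, WORKSET);       /* trash the cache */
        write(ab[1], &token, 1);
        read(ba[0], &token, 1);
    }
    return 0;
}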
we measured this extensively on LAD a year or two ago. both myself and
abramo and some others did lots of tests. we plotted lots of
curves. the results were acceptable but not encouraging. yes, faster
processors will decrease the time it takes to save and load the
registers. but just as with much DSP code these days, other issues
often dominate raw CPU speed: the slowdowns caused by the TLB being
invalidated as a result of switching address spaces, and by the cache
being invalidated for the same reason, are dramatic.
>I'm not talking about jack tasks, I'm talking about doing a simple plug-in
>task inside a standalone program, the way the vst server works.
i don't understand how the vst server works. perhaps you can explain
it.
--p
(Oops. Replied to the direct reply, rather than via the list. Please,
don't CC me - I'm on the list! :-)
---------- Forwarded Message ----------
Subject: Re: [linux-audio-dev] XAP: Pitch control
Date: Wed, 11 Dec 2002 18:05:57 +0100
From: David Olofson <david(a)olofson.net>
To: Nathaniel Virgo <nathaniel.virgo(a)ntlworld.com>
On Wednesday 11 December 2002 17.50, Nathaniel Virgo wrote:
> On Wednesday 11 December 2002 3:41 pm, David Olofson wrote:
> > On Wed, Dec 11, 2002 at 12:40:18 +0000, Nathaniel Virgo wrote:
> > > I can't really say I can think of a better way though.
> > > Personally I'd leave scales out of the API and let the host
> > > deal with it, sticking to 1.0/octave throughout, but I can see
> > > the advantages of this as well.
> >
> > Problem with letting the host worry about it is that the host
> > would normally not understand anything of this whatsoever, since
> > the normal case would be that a sequencer *plugin* controls the
> > synths. It would be a hack.
>
> Oh. Well, when I said host I meant sequencer.
I see. Well, either way, I still prefer thinking of scale converters
as something I can just plug in, rather than waiting for my favourite
sequencer to support the kind of scales I want. One multichannel
event processor plugin more or less in the net won't be a disaster -
and again, you *can* use 1.0/octave in the sequencer as well, as an
alternative.
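(The core of such a converter can be tiny. A hypothetical sketch -
not XAP API - assuming 1.0/octave output and a per-scale table of
cent offsets:)

/* Hypothetical scale type - not a real XAP definition. */
typedef struct
{
    int notes_per_octave;
    const double *cents;    /* one offset per scale degree */
} scale_t;

/* Map a note index to 1.0/octave pitch. */
double note_to_pitch(const scale_t *s, int note)
{
    int octave = note / s->notes_per_octave;
    int degree = note % s->notes_per_octave;
    if(degree < 0)  /* fix up C's truncating division */
    {
        degree += s->notes_per_octave;
        --octave;
    }
    return (double)octave + s->cents[degree] / 1200.0;
}

With cents = {0, 100, ..., 1100} you get plain 12tET; swap in another
table for just intonation or whatever else you like.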
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
I'm trying to oversample to smooth the display on my software audio
scope, using band-limited (sinc) interpolation. I have a quick
question: which of the following implementations is liable to take
more processing time?
1) Padding my data, and then convolving the original (non-zero) samples
with a sinc function (from a LUT). I think it's only feasible to have a
LUT for a sinc of unity amplitude.
or
2) Doing an FFT on my original sample set. Padding, and multiplying by a
rectangular window, then doing an iFFT to return to the time domain. The
FFT library I would use would be FFTW.
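For concreteness, here's roughly what I mean by (2), sketched against
FFTW's real-transform interface (this is the FFTW3 flavour of the
API; the Nyquist-bin handling is glossed over, and all names are
illustrative):

/*
 * Frequency-domain zero padding: interpolate n samples up to
 * m = factor * n samples. Output is amplitude-preserving.
 */
#include <string.h>
#include <fftw3.h>

void fft_oversample(const double *in, int n, double *out, int factor)
{
    int m = n * factor, k;
    fftw_complex *spec = fftw_malloc(sizeof(fftw_complex) * (m / 2 + 1));
    double *tmp = fftw_malloc(sizeof(double) * n);
    fftw_plan fwd = fftw_plan_dft_r2c_1d(n, tmp, spec, FFTW_ESTIMATE);
    fftw_plan inv = fftw_plan_dft_c2r_1d(m, spec, out, FFTW_ESTIMATE);

    memcpy(tmp, in, sizeof(double) * n);
    fftw_execute(fwd);

    /* "Rectangular window": keep bins 0..n/2, zero the new ones. */
    for(k = n / 2 + 1; k < m / 2 + 1; ++k)
        spec[k][0] = spec[k][1] = 0.0;

    fftw_execute(inv);      /* c2r output is unnormalized... */
    for(k = 0; k < m; ++k)
        out[k] /= n;        /* ...so rescale by 1/n */

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(inv);
    fftw_free(spec);
    fftw_free(tmp);
}

That's O(m log m) per block, versus roughly O(m * taps) for the
time-domain sinc convolution, so which is cheaper presumably comes
down to how long a sinc kernel I'd need.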
Cheers
Henry
> Steve Harris <S.W.Harris(a)ecs.soton.ac.uk> writes:
>
> Yeah, please do - that would be damn useful. For rapid prototyping
> if nothing else.
FYI, making sfront produce code suitable for .so's is at the
top of the list of things to do these days, because AudioUnits
support awaits it. But, that's the "sfront enhancements" list
of things to do, which is kind of subordinate to the "get MWPP
to Last Call in the IETF" list of things to do ... so it may
take a while.
Basically, many Logic users would like to use SAOL as a scripting
language for their own plugins ... thus, AudioUnits support.
This could actually be a catalyst for SAOL becoming more popular
generally, if it works out ...
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
Paul,
I think that if the dmesg or /var/log/messages output said there were
buffers, then it was getting loaded automatically. Most likely this is
because it is in modules.conf.
However, when I ran insmod snd-hdsp by hand, I got some error messages
that were a bit different from the init_module message that shows up at
boot time. These messages (from memory) had something to do with memory
buffers or memory addresses or some other memory-related names. I don't
remember right now, but will check it out later and report back.
Mostly Fernando and I wanted to know what configurations successful users
were using. Since I'm on 7.3, I have this funky C compiler, and just wanted
to know if others were successful. I'm still considering a RH8.0 upgrade
simply to get rid of some of these issues with Ardour and Rosegarden. (And
probably create new issues...who knows?)
thanks,
Mark
-----Original Message-----
From: Paul Davis [mailto:paul@linuxaudiosystems.com]
Sent: Tuesday, December 10, 2002 11:36 AM
To: Mark Knecht
Cc: D R Holsbeck; linux-audio-dev(a)music.columbia.edu; PlanetCCRMA;
Ardour; Ardour-User-List; alsa-user(a)lists.sourceforge.net
Subject: Re: [ardour-dev] Re: [linux-audio-dev] HDSP 9652 Users -
Request for info
> Hi. Actually, I tried that last night, but I didn't load the
>snd-hammerfall-mem. It complained about some sort of missing references.
>I'll give this a try later today. Thanks.
in the message you sent recently, snd-hammerfall-mem worked just fine,
and reported allocating buffers for the card.
if snd-hammerfall-mem doesn't load, then neither snd-hdsp nor
snd-rme9652 (not relevant here) can do so.
--p
hi guys !
as most of you will know, the linux-audio-user list is members-only.
many of you post helpful and perfectly on-topic replies there, which is
great, but unfortunately from a non-subscribed address, which is not :(
i used to hand-approve them, but there are just too many of those
messages now to do that, so i'm getting anal about the policy and reject
everything regardless of sender or topic.
please, everyone, do not cease to follow lau, and please do share your
expertise, but also please do it from a subscribed account. it really
hurts me to stash messages from bill s. or paul d. into the bit bucket,
but i just can't keep up. :(
unfortunately, there is no easy and reliable way to tell mailman to
accept all members of another list as posters, and i do not have the
necessary privileges on the list server to roll my own
cross-authentication.
check out the possibility described below if you need to post from
several mail accounts.
best,
jörn
forwarded from lau:
Jörn Nettingsmeier wrote:
>
> hi everyone !
>
> lately, the number of posting attempts from non-members has risen to a
> point where i'm just rejecting them without reading each one and checking
> if it's on-topic or has already been sent to the list from another
> account.
>
> our current list policy is members-only posting, for the following
> reasons:
> * i believe people who want to profit from a community should take part in
> it
> * spam prevention. so far, it has saved everyone from about .5k spam msgs.
>
> a number of list subscribers seem to post from different addresses, so
> some of their mails are being rejected. if this applies to you, there is
> always the possibility of subscribing multiple addresses and disabling mail
> delivery for all but one via the web interface.
>
> best,
>
> jo"rn
--
Jörn Nettingsmeier
Kurfürstenstr 49, 45138 Essen, Germany
http://spunk.dnsalias.org (my server)
http://www.linuxdj.com/audio/lad/ (Linux Audio Developers)
Hi everybody. I've been reading this list for a week. Thought I'd
"pitch" in here, because I'm also writing a softstudio; it's pretty
far along already, and the first public release is scheduled for
Q1/2003.
First, I don't understand why you want to design a "synth API". If you
want to play a note, why not instantiate a DSP network that does the job,
connect it to the main network (where system audio outs reside), run it
for a while and then destroy it? That is what events are in my system -
timed modifications to the DSP network.
On the issue of pitch: if I understood why you want a synth API, I
would prefer 1.0/octave because it carries fewer cultural
connotations. In my system (it doesn't have a name yet, but let's
call it MONKEY so I won't have to use the refrain "my system") you
just give the frequency in Hz; there is absolutely no concept of
pitch. However, if you want, you can
define functions like C x = exp((x - 9/12) * log(2)) * middleA, where
middleA is another function that takes no parameters. Then you can give
pitch as "C 4" (i.e. C in octave 4), for instance. The expression is
evaluated and when the event (= modification to DSP network) is
instantiated it becomes an input to it, constant if it is constant,
linearly interpolated at a specified rate otherwise. I should explain more
about MONKEY for this to make much sense but maybe later.
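In C terms the example would be something like this (a sketch; note
that 9/12 must be a real division, and - my assumption - middleA has
to return the bottom A, 27.5 Hz, for "C 4" to come out at the usual
~261.6 Hz):

#include <math.h>
#include <stdio.h>

/* Assumption: the reference A is A0 = 27.5 Hz; with 440 Hz the
 * formula below would come out four octaves too high. */
static double middleA(void) { return 27.5; }

/* C x = exp((x - 9/12) * log(2)) * middleA
 * (9.0/12.0 rather than 9/12, which would truncate to 0 in C) */
static double C(double x)
{
    return exp((x - 9.0 / 12.0) * log(2.0)) * middleA();
}

int main(void)
{
    printf("C 4 = %.2f Hz\n", C(4));    /* ~261.63 Hz */
    return 0;
}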
Anyway, the question I'm most interested in is: why a synth API?
--
Sami Perttu "Flower chase the sunshine"
Sami.Perttu(a)hiit.fi http://www.cs.helsinki.fi/u/perttu