Didn't we come up with some good ammo in case anyone decided to sue?
--
Patrick Shirkey - Boost Hardware Ltd.
For the discerning hardware connoisseur
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
========================================
Being on stage with the band in front of crowds shouting, "Get off! No!
We want normal music!", I think that was more like acting than anything
I've ever done.
Goldie, 8 Nov, 2002
The Scotsman
Hi,
I'm currently looking at JACK (http://jackit.sourceforge.net/) for a small
project I'd like to work on some time soon. It sounds like a promising concept.
It's interesting for me because I don't have to write my own audio loop. My
questions are:
-Is it in a state where I can actually use it? Or are there so many things
still to be done that you wouldn't advise me to build something on top of it?
-Is there any competing "product" at the moment? What are the chances that JACK
will be the standard in the future? (Try to remain as objective as you can,
please.)
Thanks for your help.
-Oliver
Andrew Morton wrote:
>At http://www.zip.com.au/~akpm/linux/2.4.20-low-latency.patch.gz
>
>Very much in sustaining mode. It includes a fix for a livelock
>problem in fsync() from Stephen Tweedie.
Hi,
I won't have the possibility to test this patch for the next 2-3
weeks, but I'd be interested in whether it is able to cure the latency
problems of Red Hat 8.0.
I think Red Hat 8.0 is a nice desktop distro, so it would be good
if we could achieve low latencies on it too.
While discussing RH8 on the #lad channel on IRC, Jussi L. told me
that ext3 causes latency spikes during writes because of journal
commits etc., but according to him there seem to be other latency
sources too (probably libc, he said).
E.g. he tried a LL kernel on RH7.3 with ReiserFS and it worked fine,
while RH8.0 with ReiserFS did cause latency peaks.
So my question is: does this patch fix the latency problems on Red Hat 8.0?
cheers,
Benno
--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us design and develop it.
What's going on with headers, docs, names and stuff?
I've ripped the event system and the FX API (the one with the state()
callback) from Audiality, and I'm shaping it up into my own XAP
proposal. There are headers for plugins and hosts, as well as the
beginnings of a host SDK lib. It's mostly the event system I'm
dealing with so far.
The modified event struct:
typedef struct XAP_event
{
    struct XAP_event *next;
    XAP_timestamp when;     /* When to process */
    XAP_ui32 action;        /* What to do */
    XAP_ui32 target;        /* Target Cookie */
    XAP_f32 value;          /* (Begin) Value */
    XAP_f32 value2;         /* End Value */
    XAP_ui32 count;         /* Duration */
    XAP_ui32 id;            /* VVID */
} XAP_event;
The "global" event pool has now moved into the host struct, and each
event queue knows which host it belongs to. (So you don't have to
pass *both* queue and host pointers to the macros. For host side
code, that means you can't accidentally send events belonging to one
host to ports belonging to another.)
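To make that concrete, here's a rough sketch of the shape I mean -
all the names below are guesses for illustration, not the actual XAP
headers:

/* Hypothetical sketch - my guess at the shape, not the actual XAP
 * headers. The queue carries its host pointer, so macros only need
 * the queue. */
typedef struct XAP_host
{
    XAP_event *pool;    /* Shared free list of events */
} XAP_host;

typedef struct XAP_queue
{
    XAP_host *host;     /* The host this queue belongs to */
    XAP_event *first;   /* Pending events, sorted by 'when' */
} XAP_queue;

/* Allocate an event from the pool of the queue's own host; since
 * the host is implied, events can't cross host boundaries by
 * accident. (Pool refill on empty is omitted here.) */
#define XAP_EVENT_ALLOC(q, ev)          \
    do {                                \
        (ev) = (q)->host->pool;         \
        (q)->host->pool = (ev)->next;   \
    } while(0)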
Oh, well. Time for some sleep...
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
Just an observation about an alternative path on softsynths: a LADSPA plugin
or network can be used easily enough as a softsynth using control-voltage
(CV) approaches (a few already exist). It's just a matter of agreeing on the
conventions - implementation is trivial.
I've been meaning to finish writing PNet for a while (I've mentioned it a
few times) - essentially an environment where LADSPA plugins are strung
together to form a "patch" and are wired up to "standard" CV controls for
pitch, velocity, MIDI CC etc. These CV components and outputs can be
provided by the host as "fake" plugins providing the CV signals based on
MIDI input (or by using a non-LADSPA convention). This is trivial to
implement and provides an extremely flexible way to build plugin-based
softsynths from LADSPA components - or to wire existing self-contained
LADSPA soft synths (e.g. the "analogue" synth by David Bartold in the CMT
library, see http://www.ladspa.org/cmt/plugins.html) up to MIDI streams.
All a question of time - if anyone wants to do the rest of the
implementation then please let me know. The code required to do the above
also provides a nice way to store patches of plugins for standard processing
chains. Patches would probably be stored as XML representations of
pure-LADSPA networks. BTW, is anyone doing this already? If so, 50% of the
code is already done. ;-) I'm thinking in terms of defining a synth using
two patches - one to define the per-note network required (e.g.
CV->osc->filter->OUT) and another for any per-instrument post processing
(e.g. IN->chorus->reverb->OUT).
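As a rough illustration of the "fake plugin" idea, here's a minimal
sketch of rendering a CV pitch signal from a MIDI note. The names and
the 1.0-per-octave convention are my assumptions, not part of the
LADSPA spec or any agreed standard:

#include <ladspa.h>

/* Hypothetical sketch of a host-side "fake plugin" that renders a
 * CV pitch signal from the last received MIDI note. */
static void run_midi2cv(LADSPA_Data *cv_pitch_out,
                        unsigned long sample_count,
                        int midi_note)
{
    /* Map MIDI note 69 (A4) to 0.0, 12 notes per octave. */
    LADSPA_Data cv = (midi_note - 69) / 12.0f;
    unsigned long i;
    for(i = 0; i < sample_count; i++)
        cv_pitch_out[i] = cv;   /* Constant CV over the block */
}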
--Richard
I was in this long thread about pitch control on the VST list, and I
think I learned a few things. (For a change! ;-D)
There are times when continuous, linear pitch (what I have in
Audiality) is perfectly fine - and in those cases, it's by far the
simplest possible way to control the pitch of a synth. You get note
pitch, pitch bend, continuous pitch control over the whole range,
whatever scales you like and all that, using *only a single
pitch->frequency conversion* somewhere in your synth code.
I will bet almost anything that there simply cannot be an easier way
of dealing with this.
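For example, with 1.0 per octave, that single conversion can be as
small as this (the 440 Hz reference is just my pick for illustration):

#include <math.h>

/* Hypothetical sketch: the single pitch->frequency conversion,
 * assuming linear pitch with 1.0 per octave and pitch 0.0 mapped
 * to A4 = 440 Hz. */
static inline float pitch2freq(float linear_pitch)
{
    return 440.0f * powf(2.0f, linear_pitch);
}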
*However*, in some cases, you may not be all that interested in the
actual pitch, but rather just want to deal with the notes in whatever
scale the user wants to deal with. One example would be a simple,
basic arpeggiator. Sure, you *could* do that with linear pitch, but
then the plugin would have to either assume that you want 12tET (or
whatever), or you need a way to tell it what scale you want to use
for the output. (Note that the kind of arpeggiator I'm thinking about
here may be expected to generate a full, modulated chord from a
single note, so it can't just look at a full input chord and pick the
exact pitches from that.)
In that case, you'd much rather have input more similar to integer
MIDI pitch, and *possibly* pitch bend to go with that. This could
indeed be expressed as "linear pitch" as well (float; 1.0 per
octave), but with one very important difference: it would actually be
1.0 per *note* - where what counts as a "note" is not strictly
defined or known to the plugin. The plugin just assumes that the user
knows what 0-4-7 means, if he/she enters that for "arpeggio offsets".
The plugin also assumes that the user will put a suitable note_pitch
to linear_pitch converter in between the output and the synth, or
that the synth understands note_pitch events.
Note that linear_pitch = note_pitch * (1.0/12.0) for 12tET, so these
"converters" (or note_pitch support) can be very trivial to implement.
If you want "weird" scales, it gets slightly more complicated, but
the *major* point here is that no synth plugin is required to do this
- and still, every synth plugin can use any scale!
(How many VSTi plugins actually support non-12tET scales? ;-)
Hmm... As to having a synth support *both* note_pitch and
linear_pitch controls, I suppose that would effectively just be a
dual interface to a single internal control value. Send something to
linear_pitch, and it goes directly into the internal pitch variable.
Send it as note_pitch, and it gets multiplied by (1.0/12.0) or is
passed through an interpolated "weird scale" table first. Makes sense?
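In code, that dual interface might look something like this minimal
sketch (the names and the simple 12tET-only handling are my
assumptions):

/* Hypothetical sketch: two pitch controls feeding one internal
 * value. Only 12tET is handled here; a real synth might instead
 * interpolate in a user-supplied scale table. */
typedef struct
{
    float pitch;    /* Internal linear pitch, 1.0 per octave */
} voice_t;

static void set_linear_pitch(voice_t *v, float lp)
{
    v->pitch = lp;                      /* Already linear; direct */
}

static void set_note_pitch(voice_t *v, float np)
{
    v->pitch = np * (1.0f / 12.0f);     /* 12tET note -> octaves */
}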
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
I did some thinking last night, and I have an interesting problem, as
well as a nice solution - or so I think. Please tell me what you
think.
Consider the following situation:
* You have a plugin "A", which has 1:4:7 (Bay:Channel:Control)
(which is an Event Output) connected to some Event Input Port
named "P".
* Now, you want to connect output 1:2:9 (Bay:Channel:Control)
to that same Event Input Port "P".
So, what's the problem?
Well, as I've mentioned before, having separate Event Input ports for
Channels is probably an advantage in most cases, since it avoids
queue splitting overhead, and reduces event size. (No need for a
"channel" field.)
Regardless of the above, any reasonably complex synth will most
probably have several "inner loops" working through the same number
of sample frames.
Both of these internal plugin designs have the same problem: you're
not running *the whole plugin* one sample at a time, through the
whole buffer. Instead, you're iterating through the buffer several times.
Now, the *problem* is that whenever you send an event from inside one
of these event and/or audio processing loops, you risk sending events
out of order whenever two loops send to the same port! (Note that you
can't know that without comparing a ton of pointers every time a
connection is made. The host just tells you to connect some output,
and gives you an Event Port pointer and a target Control Index to
send to.)
In theory, the problem is very easy to solve: Have the host throw in
"shadow event ports", and then have it sort/merge the queues from
those into a single, ordered queue that is passed to the actual
target port.
However, how on earth could the host know which outputs of a plugin
can safely be connected to the same physical port, and which ones
*cannot*?
Easy: Output Context IDs. :-)
Whenever the host wants to connect an output, it asks
plugin->get_context_id(bay, channel, output), and gets an int. The
actual values returned are irrelevant; they're only there so the host
can compare them.
How to use (from1 and from2 describing the two plugin outputs to be
connected to the same physical event port):
typedef struct XAP_cnx_descriptor
{
    XAP_plugin *plugin;
    int bay;
    int channel;
    int output;
} XAP_cnx_descriptor;

/*
 * When you're about to make a connection to an input event port
 * that already has connections, use this to figure out whether
 * or not you need to do shadow + sort/merge.
 */
int must_shadow(XAP_cnx_descriptor *from1, XAP_cnx_descriptor *from2)
{
    int ctxid1, ctxid2;
    if(from1->plugin != from2->plugin)
        return 1;   /* Yes, *definitely* --> */
    if(!from1->plugin->get_context_id)
        return 0;   /* No, this plugin has only
                     * one sending context. -->
                     */
    ctxid1 = from1->plugin->get_context_id(from1);
    ctxid2 = from2->plugin->get_context_id(from2);
    return (ctxid1 != ctxid2);  /* Only if ctx IDs differ. */
}
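Host side, I'd imagine the check being applied against all existing
connections on the port - something like this sketch (the connection
bookkeeping is assumed, not specified by the proposal):

/* Hypothetical sketch of host-side use. */
int need_shadow_port(XAP_cnx_descriptor *existing, int n_existing,
                     XAP_cnx_descriptor *new_cnx)
{
    int i;
    for(i = 0; i < n_existing; i++)
        if(must_shadow(&existing[i], new_cnx))
            return 1;   /* Shadow port + sort/merge needed */
    return 0;           /* Safe to send straight to the port */
}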
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
Kjetil and i have been boring the VST crew to death, so we took it
here :)
>> because when running a real-time low-latency audio system, the cost of
>> context switches is comparatively large. if you've got 1500usecs to
>> process a chunk of audio data, and you spend 150usecs of it doing
>> context switches (and the cost may be a lot greater if different tasks
>> stomp over a lot of the cache), you've just reduced your effective
>> processor power by 10%.
>>
>I don't believe you. I just did a simple context-switching/sockets
>test after I sent the last mail. And for doing 2*1024*1024
>synchronized context switches between two programs, my old 750MHz
>Duron uses 2.78 seconds. That should be about 1.3usecs per switch or
>something. By
you didn't touch much of the cache, did you?
it doesn't matter how fast the actual switch is if each task wipes out
the L1 and L2 cache, forcing a complete refill of the cache, reload of
the TLB, etc. etc. the cost of a context switch is not just a register
store+restore. the cost of it depends on what has happened since the
last context switch.
try your "simple context switch test" with a setup in which each task
writes to about 256kB of memory.
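something like this sketch of such a test (my code for illustration,
not the benchmark we actually used back then):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>

#define BUFSIZE  (256 * 1024)   /* 256kB working set per task */
#define ROUNDS   (1024 * 1024)  /* 1M round trips = 2M switches */

int main(void)
{
    int a2b[2], b2a[2];
    char token = 0;
    char *buf = malloc(BUFSIZE);
    long i;
    struct timeval t0, t1;
    double us;

    pipe(a2b);
    pipe(b2a);
    if(fork() == 0)
    {
        /* Child: echo the token back, thrashing in between */
        for(i = 0; i < ROUNDS; i++)
        {
            read(a2b[0], &token, 1);
            memset(buf, 42, BUFSIZE);   /* Wipe the caches */
            write(b2a[1], &token, 1);
        }
        return 0;
    }
    gettimeofday(&t0, NULL);
    for(i = 0; i < ROUNDS; i++)
    {
        memset(buf, 42, BUFSIZE);       /* Wipe the caches */
        write(a2b[1], &token, 1);
        read(b2a[0], &token, 1);
    }
    gettimeofday(&t1, NULL);
    us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("%.2f usecs per round trip (two switches each)\n",
           us / ROUNDS);
    return 0;
}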
we measured this extensively on LAD a year or two ago. both myself and
abramo and some others did lots of tests. we plotted lots of
curves. the results were acceptable but not encouraging. yes, faster
processors will decrease the time it takes to save and load the
registers. but just as for much DSP code these days, other issues
often dominate over raw CPU speed; the slowdowns caused by the TLB
being invalidated as a result of switching address spaces, and by the
cache invalidation (for the same reason), are dramatic.
>I'm not talking about jack tasks, I'm talking about doing a simple plug-in
>task inside a standalone program, the way the vst server works.
i don't understand how the vst server works. perhaps you can explain
it.
--p