>>>
Please specify whether you mean per-fragment (per-buffer; there are
usually 2) latency or total latency.
<<<
Sorry, yes, I mean per-buffer latency. WDM lets buffer sizes get down
to 1.5 msec.
>>>
the Tascam guys who produce Gigastudio would have dropped their own
low-latency driver model (GSIF) and adopted WDM/ASIO, which would ease
the pain of integrating Gigastudio with other audio apps.
The other theory for why they still use GSIF (they have even developed
it further by releasing GSIF2) is that switching would be too
time-consuming (perhaps they would need to rewrite a large section of
code to adapt Gigastudio to WDM/ASIO).
<<<
When Gigastudio first came out they still needed to support Win95 and
Win98, where WDM either didn't exist or didn't work very well.
Combine this with the fact that Giga does a lot of processing in kernel
mode and it's understandable why they needed to invent something like
GSIF.
They still do work in the kernel, which would also rule out ASIO. ASIO
is a user-mode API, not a kernel mode driver model.
I'm surprised they don't do WDM today, but I really doubt it's because
of some inefficiency in WDM. They can keep all their code in the kernel
and still do KS.
>>>
at any cost. I think one of the best tradeoffs is simply writing VST
plugins (or using other plugin APIs), because hosts are usually
optimized for low-latency audio I/O and can use whatever audio drivers
they wish; reinventing the wheel is simply a waste of time for the
audio application developer.
Not to mention that using a plugin API you get perfect integration in a
virtual studio.
<<<
Plugin APIs are different from driver models. You can run VST plugins
in ASIO host apps, WDM host apps, or even MME host apps. If you were
developing an audio host app you would need to choose both which driver
model to support and which kinds of plugins to support.
>>>
I don't see why WDM should not become the only driver model
standard in the windows pro audio world.
<<<
Totally agree.
>>>
As I said, I'm not an expert in low-level Windows drivers, but I guess
WDM is still not perfect, so audio app producers prefer to go their own
way to squeeze the maximum out of the hardware (for virtual instruments
especially, low latency is very important so that the instruments can
be played live).
<<<
In the last few years every pro sound card that's come out has had both
WDM drivers and ASIO drivers. Steinberg applications only support ASIO,
so if the h/w vendors want to support both Steinberg and others, they
need to write both kinds of drivers. It's about being compatible with
more applications.
>>>
I am preparing some slides about Linux audio, and while comparing Linux
with Windows, I have been wondering how the ASIO drivers manage to
obtain low latency on MS Windows, an operating system that does not seem
capable of low latency in any other way. So what tricks did Steinberg
come up with to get around that? I'd like to be able to say why the
Linux
approach is better/cleaner.
<<<
ASIO provides an interface directly to the kernel mode driver, typically
through a private device I/O control between the user mode ASIO DLL and
the kernel mode driver. By talking right to the driver, they bypass the
system component known as KMIXER, which adds 30 msec of latency to all
audio in Windows.
But ASIO isn't the only way around KMIXER. With the advent of Windows
Driver Model (WDM) Kernel Streaming (KS), the Windows O/S is indeed
capable of very low latency. WDM KS has a standardized device I/O
control set that's part of the Windows audio stack. KS makes it
possible to stream audio at sub 5-msec latency -- approaching 1 msec
latency -- using a direct interface to the "miniport" driver in the
Windows driver stack.
I wrote a white paper about all this a few years back:
http://www.cakewalk.com/DevXchange/audio_i.asp
There's a diagram of the audio stack in Windows which might be helpful.
Note that we never needed to create custom IOCTLs for Windows.
Microsoft followed up after this meeting by disclosing the Windows
kernel IO controls for everyone to use, known as DirectKS:
http://www.microsoft.com/whdc/device/audio/DirectKS.mspx
As far as I know the ASIO4All driver is built using DirectKS.
----------
Ron Kuper
VP / Engineering
Cakewalk
http://www.cakewalk.com
On Tue, Jul 06, 2004 at 12:33:30PM +0100, Rui Nuno Capela wrote:
> martin rumori wrote:
> >> Is anybody out there using an ATI Radeon card with DRI and DRM ?
> >
> > jackd with realtime-lsm from the cmdline is fine, with qjackctl it
> > fails. qjackctl with realtime but no-mlock works fine.
> >
>
> With no-mlock, LD_ASSUME_KERNEL, or whatever other switch, I've been
> trying to reproduce those nasty lockups some of you are suffering.
>
> Using qt-3.3 on kde-3.2, if that matters.
>
> I think we're narrowing it down to drm, realtime-mode mlock and the
> 2.6 kernel, aren't we?
appears like that. as soon as i switch to the vesa driver (direct
rendering off) there is no problem. no problem, too, on my other
machine with nvidia binary driver.
on my machine, it's qt-3.2.3(-mt) from debian unstable.
bests,
rm -ri
Hello,
I am preparing some slides about Linux audio, and while comparing Linux
with Windows, I have been wondering how the ASIO drivers manage to
obtain low latency on MS Windows, an operating system that does not seem
capable of low latency in any other way. So what tricks did Steinberg
come up with to get around that? I'd like to be able to say why the Linux
approach is better/cleaner.
Maarten
Hi all,
I am using the latest 2.6.7 kernel (I also tried 2.6.5), but with the
hdsp I cannot select anything lower than 1024x2 buffer settings in jackd
without getting massive xruns.
My asoundrc is fine, and modprobe.conf is fine too.
The hdsp runs fine with the aforementioned settings, but anything lower
is simply horrible.
The kernel is patched with all kinds of mm patches, but I am not
currently using the -r option or the realtime module (a bit scared of
freezing my machine :-). Is this the best one can do in user-space?
Any help is greatly appreciated!
Best wishes,
Ico
I thought it might be worth mentioning 2 days of intense LAD-friendly
talks at the Libre Software Meeting on July 7th and 8th. The program
will include:
Dave Phillips on "The Scene"
Yann Orlarey on Faust
Takashi Iwai on ALSA
Steve Harris on RDF & Audio
Julien Villain on Gestural/Listening training
Paul Davis on Ardour (talk + 2hr workshop)
Julien Ottavi on APODIO
Damien Cirotteau on AGNULA
If you're in the Southwest corner of France and want to come and fill
the rooms just a little more, not to mention sample the "young
upstart" red and white wines of the region (quite a political battle
in the region, apparently), check with lsm2004.abul.org. A glass of
decent wine will be considered partial downpayment on your copy of the
Ardour manual :)
--p
Hi everyone,
It's been a while, although this time there's not much. Just minor fixes,
nothing very outstanding. However here it is, a new public release for
QjackCtl, the little Qt (cutie:) application to control the JACK sound
server daemon, specific for the Linux Audio Desktop infrastructure.
Check it out from the usual place:
http://qjackctl.sourceforge.net
From the changelog:
- Patchbay socket dialog client and plug list option items are now
properly escaped as regular expressions.
- JACK callbacks are now internally mapped to QCustomEvent's instead
of using the traditional pipe notifications.
- The system tray popup menu is now featured as a context menu on the
main application window too.
- The reset status option is now included in the system tray popup menu.
- Server stop command button now enabled during client startup interval;
this makes it possible to stop the server just in case the client
can't be activated for any reason.
- Top level sub-windows are now always raised and given active focus
when shown.
Hope you enjoy.
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org
Hi all,
People who play around with floating point code (especially on x86)
quickly learn about the evils of comparing one floating point value
with another for equality.
There are other related evils with floating point, one of which I was
bitten by just recently, and I thought I'd share it with y'all. If it
saves just one person from spending 20-odd hours chasing an elusive
bug like I did, I will have achieved something.
The evil I speak of is the difference between 32 and 64 bit floating
point representations (types float and double) and the x86 CPU's
internal 80 bit representation.
The most common trap is something like:
if (x == y)
    do_something ();
where x and y are, say, double floating point numbers. Also let's just
say that the value of x is already in the CPU's FPU register as a result
of a previous operation and the other, y, is not. What happens is that
the result of the previous operation can have a full 80 bits (part
mantissa, part exponent and a sign bit) of precision, while y, loaded
from memory, does not have this extra precision. The comparison
therefore fails, even though when printed out (or when compiler
optimisation is switched off) the two values appear equal. This is why
the above if statement is better written as:
if (fabs (x - y) < 1e-20)
    do_something ();
The reason I am writing this email is that I was recently bitten by a
similar problem. I was keeping a running index into a table, and keeping
the integer part separate from the fractional part which was kept in a
double floating point:
double fractional = 0.0, increment = 0.1;
int integer = 0;

for (;;)
{
    /* Bunch of other code. */
    fractional += increment;
    integer += lrint (floor (fractional));
    fractional -= floor (fractional);
}
The above code can produce very odd results for certain values of
increment. The problem in this case manifested itself in the
integer/fractional losing counts when compiled with gcc-3.4 while
the same code had worked perfectly with previous versions of the
compiler. The problem seems to be caused by the fact that the other
code in the loop was pushing at least some of the relevant values
out of the FPU stack into double floating point variables and that
when they were reloaded they had lost precision.
The fix in this case was this:
for (;;)
{
    /* Bunch of other code. */
    double rem;

    fractional += increment;
    rem = fmod (fractional, 1.0);   /* floating point modulus */
    integer += lrint (round (fractional - rem));
    fractional = rem;
}
which is far more robust.
HTH,
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo nospam(a)mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
"The X-files is too optimistic. The truth is not out there."
-- Anthony Ord
>
> From: Takashi Iwai <tiwai(a)suse.de>
> Date: 2004/07/02 Fri PM 01:15:46 GMT
> To: <ico(a)fuse.net>
> CC: <alsa-devel(a)lists.sourceforge.net>,
> <linux-audio-dev(a)music.columbia.edu>
> Subject: Re: [Alsa-devel] Re: [linux-audio-dev] snd-hdsp oddities
>
> At Fri, 2 Jul 2004 12:56:06 +0000,
> <ico(a)fuse.net> wrote:
> >
> > > Shouting "DON'T USE 2.6" isn't a good solution. Though, we need to
> > > inform to "set LD_ASSUME_KERNEL as a workaround"...
> >
> > Pardon my ignorance but how does one do this? As a part of the
> > config before compiling kernel or?
>
> No, just set the environment variable like
> export LD_ASSUME_KERNEL=2.4.19
> (better globally) and start jack. That's all.
>
> In this way, glibc chooses LinuxThreads instead of NPTL.
>
> > Also, any ideas on the odd behavior of the hdspmixer? (see my other post)
>
> Not checked yet... Did hdspmixer work on any versions correctly on
> your system?
>
Well, not really, since this notebook is only a couple of months old, so
I never had the chance to test anything pre-1.0.2. But anything 1.0.2
and greater behaves the same.
Ico