convoLV2 is an LV2 plugin to convolve audio signals without additional
latency.
https://github.com/x42/convoLV2
https://github.com/x42/convoLV2/tarball/v0.2
convoLV2 is in an early stage of development and is not yet suitable for
users. However, it serves as a working example of an LV2 plugin with
block length restrictions, in this case:
* Maximum block length must be passed as an instantiate option
* Block length must always be a power of 2 (a required feature)
See the links in the README and on the github page for details about
these features. The first should be trivial to implement in any host,
and doing so is highly recommended. The second may be difficult or impossible,
though it is trivial in hosts that run plugins directly on the Jack
cycle[1] or process files with fixed parameters.
This plugin is intended to provide latency-free synchronous convolution,
which inherently requires these restrictions. It does not, and will not
ever, do latent audio buffering in the plugin itself (though a generic
wrapper to do so is a good idea...)
This release is known to work in Jalv 1.2.0. If you're feeling
adventurous and remove the power of 2 feature requirement from the data,
it will also work in Ardour3. Sometimes. Maybe.
convoLV2 is jointly developed by Robin Gareus (who wrote the entire
plugin before LV2 could properly support it) and David Robillard (who
invented/implemented the missing LV2 pieces).
We hope that convoLV2 will eventually be as fully-featured as IR.lv2
without resorting to kludges that violate the LV2 specification.
This announcement is exclusive to developer mailing lists. Feel free to
reply with any thoughts, and report back with any host implementation
progress so the README can be updated.
Happy Hacking,
-dr
[1] Assuming the Jack block length is a power of 2 anyway, which is not
actually guaranteed, but is true in any case sane enough to care about.
Hello all,
My ex-colleagues at Alcatel are screaming for help. They want to run
an app (as root, debatable but that's another story) using SCHED_FIFO
threads on an openSuSE 11.4 system.
Using the 'default' kernel (which has CONFIG_PREEMPT not set), this
works. Using the 'desktop' kernel (CONFIG_PREEMPT=y) they get an
EPERM when trying to start a RT thread, even as root.
As I haven't used SuSE for ages, does anyone have an idea of what is
happening here?
TIA,
--
FA
A world of exhaustive, reliable metadata would be a utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
You (Fons) obviously know your stuff when it comes to implementing
convolution, and I'm very grateful you gave us libzita-convolver, but I
have to disagree with this statement:
On Sat, 2012-10-20 at 12:00 +0000, Fons Adriaensen wrote:
> Thirdly, for the use case of reverbs, the whole latency issue is
> just irrelevant. A reverb IR should not contain the direct sound
> (since this will be mixed in separately), and in fact the first
> 10 ms or so should be silence anyway, they don't contribute to
> the effect and just cause coloration. Which means you can adopt
> a scheme that permits arbitrary block sizes by allowing one (Jack)
> period of latency. Just remove as many samples from the start of
> the IR to compensate.
When you use a reverb plugin for ambience or things like the Bricasti M7
Bass XXL impulse, the coloration is 90% of the effect you are using
the plugin for.
There is a whole range of useful coloration to be had by changing the
pre-delay within 0-20 ms. Often it will sound best with 0 ms, because that
is how the maker of the impulse listened to it.
An ambience impulse is an important part of practically every mix I do.
That is why I am so happy with libzita-convolver and its offspring :)
That is also why I would *love* latency compensation on buses in Ardour.
All the best,
Bart.
Hi Harry,
Thanks for your suggestions and sample.
I can mostly understand the sample, and will start to learn JACK from now on.
Please give me more guidance if I encounter trouble with JACK.
Thanks in advance!
Best regards
Spring
------------------ Original message ------------------
From: "Harry van Haaren"<harryhaaren(a)gmail.com>;
Date: Friday, 19 October 2012, 5:50 PM
To: "新月如钩"<46620410(a)qq.com>;
Cc: "linux-audio-dev"<linux-audio-dev(a)lists.linuxaudio.org>;
Subject: Re: Re: [LAD] About some interface of aplay
On , 新月如钩 <46620410(a)qq.com> wrote:
> I want to learn some Linux audio related technologies, and am going to start from ALSA.
It is easier to start with JACK.
If you understand the code in this file, then you understand the basics behind writing a JACK audio program.
https://github.com/jackaudio/example-clients/blob/master/simple_client.c
It is much harder to learn to code for ALSA, as there are many steps involved before you can play sound.
Make life easy for yourself and let JACK do that hard work: stand on the shoulders of giants. -Harry
Hi SxDx,
Thanks for your feedback; it seems I downloaded an outdated version.
I think these interfaces may have been abandoned.
Sorry to trouble you! Thanks again for your feedback!
Best regards
Spring
------------------ Original message ------------------
From: "SxDx"<sed(a)free.fr>;
Date: Friday, 19 October 2012, 9:38 PM
To: "新月如钩"<46620410(a)qq.com>;
Cc: "linux-audio-dev"<linux-audio-dev(a)lists.linuxaudio.org>;
Subject: Re: [LAD] About some interface of aplay
> From: "新月如钩" <46620410(a)qq.com>
> I analyzed the source code of aplay and am very confused by the following
> interfaces:
> snd_pcm_playback_info()
> snd_pcm_playback_format()
> snd_pcm_playback_params()
>
> aplay belongs to alsa-utils, which is also an application.
> It should use the interfaces of alsa-lib, but why can't I find
> these interfaces in alsa-lib?
> Could you tell me where I can find these interfaces?
What version of alsa-utils do you have?
These functions are not in alsa-utils-1.0.24.2
that I have here nor in the git repository.
Hi All,
Say hello to everyone!
I'm a new member learning how to write an ALSA application.
I analyzed the source code of aplay and am very confused by the following interfaces:
snd_pcm_playback_info()
snd_pcm_playback_format()
snd_pcm_playback_params()
aplay belongs to alsa-utils, which is also an application.
It should use the interfaces of alsa-lib, but why can't I find these interfaces in alsa-lib?
Could you tell me where I can find these interfaces?
Looking forward to your reply!
Best regards
Spring
Hi Harry,
Thanks for your reply!
I want to learn some Linux audio related technologies,
and am going to start from ALSA.
But I find it difficult to do that, and hope to get more help.
Best regards
Spring
------------------ Original message ------------------
From: "Harry van Haaren"<harryhaaren(a)gmail.com>;
Date: Friday, 19 October 2012, 5:15 PM
To: "新月如钩"<46620410(a)qq.com>;
Cc: "linux-audio-dev"<linux-audio-dev(a)lists.linuxaudio.org>;
Subject: Re: [LAD] About some interface of aplay
On Fri, Oct 19, 2012 at 9:31 AM, 新月如钩 <46620410(a)qq.com> wrote:
Hi All,
Say hello to everyone!
Hi Spring!
I'm a new member learning how to write an ALSA application.
Is there a reason that you want to learn ALSA specifically? Or do you want to start audio programming in general? I might advise writing a JACK client if it's audio coding you want to get into :)
I analyzed the source code of aplay and am very confused by the following interfaces:
snd_pcm_playback_info()
snd_pcm_playback_format()
snd_pcm_playback_params()
Can't help there I'm afraid: I'm a JACK coder.. :) -Harry
sorry for >< please >> <<
We are happy to announce the next edition of the Linux Audio Conference (LAC), May 9-12, 2013 @ IEM, the Institute of Electronic Music and Acoustics, in Graz, Austria.
The Linux Audio Conference is an international conference that brings together musicians, sound artists, software developers and researchers, working with Linux as an open, stable, professional platform for audio and media research and music production. LAC includes paper sessions, workshops, and a diverse program of electronic music.
*Call for Papers, Workshops, Music and Installations*
We invite submissions of papers addressing all areas of audio processing and media creation based on Linux. Papers can focus on technical, artistic and scientific issues and should target developers or users. In our call for music, we are looking for works that have been produced or composed entirely/mostly using Linux.
The online submission of papers, workshops, music and installations is now open at
http://lac.iem.at/
The deadline for all submissions is February 4th, 2013 (23:59 HAST).
You are invited to register for participation on our conference website. There you will find up-to-date instructions, as well as important information about dates, travel, lodging, and so on.
This year's conference is hosted by IEM, Graz, in cooperation with local artists and FLOSS enthusiasts.
The Institute of Electronic Music and Acoustics (IEM) at the University of Music and Performing Arts Graz is considered Austria's leading institution in computer music, acoustics and audio engineering and has gained international reputation for its research on spatial audio and its artistic production and research.
IEM has been embracing Linux audio as a production and research environment since the mid-1990s, and has contributed to FLOSS/Linux projects, amongst others by providing drivers for multichannel audio interfaces and hosting the Pure Data community portal and mailing lists.
http://iem.at/
We look forward to seeing you in Graz in May!
Sincerely,
The LAC 2013 Organizing Team
------- Forwarded message -------
From: "Uwaysi Bin Kareem" <uwaysi.bin.kareem(a)paradoxuncreated.com>
To: "Adrian Knoth" <adi(a)drcomp.erfurt.thur.de>
Cc:
Subject: Re: [LAD] [LAU] Linux Audio 2012: Is Linux Audio moving forward?
Date: Thu, 18 Oct 2012 10:06:45 +0200
On Wed, 17 Oct 2012 17:58:33 +0200, Adrian Knoth
<adi(a)drcomp.erfurt.thur.de> wrote:
> On Wed, Oct 17, 2012 at 05:07:20PM +0200, Uwaysi Bin Kareem wrote:
>
>> http://paradoxuncreated.com/Blog/wordpress/?p=2268
>
> The site mentions:
>
> --- quote ---
>> sudo schedtool -p 98 -n -20 -F `pgrep X`
> --- end quote ---
>
> Setting the X-server to FIFO/98 is just plain wrong, at least on an
> audio mailing list.
>
> And then:
>
> --- quotes ---
>> To go with this I also recommend, using the Ubuntu 2d desktop, as it has
>> low-jitter. Also the chromium-browser has low-jitter (better youtube).
> --- end quotes ---
>
> I have no idea what you're trying to prove here, but I'm pretty sure you
> have a general misunderstanding of jitter, thread wake-up latencies and
> proper scheduling priorities.
There seems to be a lot of misunderstanding about scheduling policies and
jitter out there. However, if you want your desktop to slow down simply by
moving another window, then leave it at normal. Jitter for audio seems
unaffected by this. The standard kernel seems to do an almost stable 0.33 ms
on my HDA soundchip. A few clicks, and that is how it is with realtime X
as well. So why not do it, even if audio is your main focus. X is
single-threaded, so it needs to have data ready for its windows or games,
or else it becomes a bottleneck. Do whatever you want with this, but don't
say it is wrong, or some kind of misunderstanding. I would not run a
desktop any other way.
Peace Be With You.