Patrick Shirkey wrote:
>...
> Wanda da Bizzarre Bunny Rocks!!!! Very heavy dynamics they are
> achieving. How were the ears after that set? Or maybe a better question
> is how many heads exploded during that set?
None. Stefan Janssen was at the mixer, which meant that we could walk around safely without earplugs.
Great set indeed!
Cheers,
Marc
I have an idea in mind for an application that would involve a core
audio callback responsible for playing several sounds at the same
time, each being streamed in by some as-yet-undetermined means.
Before I get too far into it, I have a few questions about the best
method for ensuring that the audio callback is not interrupted for
lengthy disk access, etc. Obviously I am not planning on doing the
main disk I/O in the callback, but I am thinking about the best means
for the callback to communicate with the rest of the application.
I might also like to support having some of these streams come
from external processes, opened through popen() for example.
So, the idea for an RT audio callback is that it should not wait on
data, (whether it comes from a file or process), but continue
processing the other streams if audio data is not immediately
available. There are a few ways to do this in Linux:
1) Have a secondary thread responsible for passing data to the audio
callback through a wait-free ring buffer.
2) Read from a pipe, FIFO, or socket from another process (e.g.
popen), using select() or poll() to check when there is actually data
to read.
3) Read from a file, using select()?
4) The async I/O API.
5) Interprocess shared memory, presumably using a semaphore of some
kind. I guess this is similar to (1) but for inter-process
communication.
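For option 2, the usual trick is a zero-timeout select() before each read(), so the callback never waits on a starved stream. A minimal sketch of the shape (Python here for brevity; a real callback would do this in C):

```python
import os
import select

def drain_ready(fd, maxbytes=4096):
    """Read whatever is ready on fd without blocking; return b'' if nothing.

    select() with a zero timeout tells us whether a read() would block,
    so the audio thread can skip a starved stream and keep processing
    the others.
    """
    ready, _, _ = select.select([fd], [], [], 0)
    if not ready:
        return b''          # no data yet -- do not wait, move on
    return os.read(fd, maxbytes)

# Demo with an ordinary pipe standing in for a popen()'d process:
r, w = os.pipe()
print(drain_ready(r))       # nothing written yet -> b''
os.write(w, b'audio bytes')
print(drain_ready(r))       # -> b'audio bytes'
```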
The question is, which one of these methods is the most "real-time
friendly"? Under what conditions, if any, can I be sure a read() will
not block? Is there any advantage to threads vs. processes? Using
async I/O I suppose I could avoid either one. Are there any general
guidelines somewhere for dealing with I/O in audio applications?
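For what it's worth, option 1 usually boils down to a single-producer/single-consumer ring buffer where each side only ever advances its own index; that is the scheme JACK's ringbuffer uses in C. The Python below is only a sketch of the index discipline, not RT-safe code:

```python
class RingBuffer:
    """Single-producer/single-consumer wait-free ring buffer.

    One thread only ever advances write_idx, the other only read_idx,
    so no lock is needed.  Python's GIL makes this demo trivially safe;
    the index discipline is the real point.
    """
    def __init__(self, size):
        self.buf = [0.0] * size
        self.size = size
        self.read_idx = 0
        self.write_idx = 0

    def write_space(self):
        return self.size - 1 - ((self.write_idx - self.read_idx) % self.size)

    def read_space(self):
        return (self.write_idx - self.read_idx) % self.size

    def push(self, samples):
        """Producer side: write as many samples as fit, return the count."""
        n = min(len(samples), self.write_space())
        for i in range(n):
            self.buf[(self.write_idx + i) % self.size] = samples[i]
        self.write_idx = (self.write_idx + n) % self.size
        return n

    def pop(self, n):
        """Consumer (audio callback) side: never blocks; returns whatever
        is available, possibly fewer than n samples."""
        n = min(n, self.read_space())
        out = [self.buf[(self.read_idx + i) % self.size] for i in range(n)]
        self.read_idx = (self.read_idx + n) % self.size
        return out

rb = RingBuffer(8)
rb.push([0.1, 0.2, 0.3])
print(rb.pop(5))   # only 3 available -> [0.1, 0.2, 0.3]
print(rb.pop(5))   # starved stream -> [] (callback just outputs silence)
```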
thanks in advance,
Steve
Hi,
I have another question for the RME experts (hi Florian!).
The system I need to build should have 8 AES inputs and 24 AES outputs.
For that, I see two basic approaches: 1) using two RME AES-32 (*) cards
in word-clock sync, and 2) using an RME Hammerfall with 24 ADAT
inputs/outputs and several 8-channel ADAT<->AES converters such as the
Aphex 144 (+), which cost about €400 per 8-channel unit.
Option 1 seems simpler and cheaper to me, but I don't know whether it is
feasible and reliable: has anyone had good experience using two
clock-synced RME cards with JACK?
Are there other options apart from 1 and 2?
Thanks a lot!
P
* http://www.rme-audio.de/en_products_hdsp_aes32.php
+ http://www.aphex.com/144.htm
hi *!
the lac2010 presentation recordings are now available at
http://www.linuxproaudio.org/lac2010/ - kudos to faberman for
very-close-to-realtime post-production!
let me take the opportunity to thank all stream team people (many of
them members of the linux video community, who put in many hours of
volunteer work to cover our beloved little annual meeting)!
check out their works and websites, join their projects, send them beer!
christian thäter, germany (http://lumiera.org/)
- cam operator, vision mixing
florian faber, germany
- post production, archive, software support, club mate
frank neumann, germany
- cam operator, emcee
herman robak, norway (http://developer.skolelinux.no/~herman/)
- director of photography, cams, vision mixing, hardware
marc-olivier barre, france (http://marcochapeau.org/)
- relay operator
yours truly, germany (http://stackingdwarves.net)
- vision mixing, technical supervisor, relay operator
raffaella traniello, italy (http://www.g-raffa.eu/,
http://vimeo.com/raffatraniello)
- cam operator
robin gareus, france (http://gareus.org/)
- hardware, setup, technical support
thijs koerselman, the netherlands (http://www.vauxlab.com)
- cam operator
wouter verwijlen, the netherlands (http://www.wouterverwijlen.nl)
- cam operator
and of course major kudos to marc groenewegen and everyone at hku for a
great conference!
best,
jörn
Hello everyone,
This is my first post on this list, so please excuse me if I'm not
following list etiquette.
I don't think I can be called a developer, although I can get things
done with Python scripts. I'm much more of an end user, a musician
using linux live - LinuxSampler being my 'bread and butter' tool.
I have a specific application in mind, and I don't think there's
anything out there that does it:
I want to 'sample' a softsynth (e.g. Yoshimi) / modelled software
instrument. In other words, I want something that automates the
following process:
1. Sequentially play 88 × N notes, where N is the number of velocity layers
2. Play them through a particular preset on the softsynth/modelled
instrument (basically just send MIDI output to the softsynth/modelled
piano, through ALSA/JACK)
3. Record each sound (corresponding to each note/velocity) as a .wav
file (at a specified sample rate, etc)
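To give an idea of the shape such a script could take, here is a sketch of the note/velocity enumeration for steps 1-3. The MIDI-sending and recording tools mentioned in the docstring (mido, jack_capture, ecasound) are my assumptions, not tested recommendations:

```python
def sample_jobs(n_layers, first_note=21, n_notes=88):
    """Enumerate every (midi_note, velocity, wav_name) to capture.

    first_note=21 is A0, the lowest key of an 88-key piano.  Velocities
    are spread evenly over 1..127, one per layer.  Actually sending the
    notes (e.g. with mido over ALSA/JACK MIDI) and recording each take
    (e.g. by driving jack_capture or ecasound) is left out -- those
    tool choices are assumptions, not requirements.
    """
    jobs = []
    for layer in range(1, n_layers + 1):
        velocity = round(layer * 127 / n_layers)
        for note in range(first_note, first_note + n_notes):
            jobs.append((note, velocity, f"note{note:03d}_vel{velocity:03d}.wav"))
    return jobs

jobs = sample_jobs(n_layers=4)
print(len(jobs))        # 88 * 4 = 352
print(jobs[0])          # (21, 32, 'note021_vel032.wav')
```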
The sample files can then be converted into a .gig instrument sample
using Gigedit. The advantage of this process would be that one can now
use (more or less) the same sounds with MUCH less CPU usage. The
disadvantage, of course, is the loss of tweakability, but I can live
with that, especially in live contexts.
I searched for an application that does this, and the closest I got
was this feature in Renoise:
http://tutorials.renoise.com/wiki/Render_or_Freeze_Plugin_Instruments_to_Sa…
But I don't use Renoise live. Which brings me to my question(s):
1. Am I right about there not being any application that does this? (I
hope I'm not!)
2. If yes, can someone point me in the direction of how I might go
about writing a script that does this?
2a. What Python modules would be useful? I just came across PyAudio
and PySndObj, and I'm checking them out.
I'm sure there are *quite* a few people who would find such a script
very useful, and I intend to make it available ASAP. My programming
skills are limited, though, so any help in this regard would be
appreciated.
Thanks for your time and consideration!
Cheers,
Guru
Greetings,
I'm updating the Audio Plugins page at linux-sound.org and need some
updated URLs. Does anyone know how to reach these
packages/sites/developers:
LCP Perl - David Riley
Lemux - ?
NJL Plugins - Nick Lamb
Soundtank - Jacob Robbins
WASP - Artemiy Pavlov
Also, please advise if you know of any other LV2 or LADSPA plugins that
should be added to the list at http://linux-sound.org/plugins.html.
Links to native Linux VST plugins will be added soon.
TIA,
dp
On Sat, May 08, 2010 at 03:23:11PM +0400, Andrew Gaydenko wrote:
> On Saturday, May 08, 2010 15:12:06 Julien Claassen wrote:
> > Hi Andrew!
> > If you want control over the dithering best use Fons' resample for both
> > bitdepth and samplerate conversion. If not, I think sndfile-convert should
> > be fine.
>
> As for sample rate conversion - do I understand correctly that Fons
> intends resampling for real-time processing? If so, will it be comparable
> in quality with SRC's offline "Best Sinc" processing? I mean, real-time
> operation can force some compromises in quality (with all my respect for,
> and admiration of, the many tools by Fons which I use widely!).
Zita-resampler uses the same 'sinc' algorithm as SRC, and the resample
application configures it for the best quality, which corresponds to
a 192-tap FIR. The differences are:
* zita-resampler precomputes all filter coefficients and does not
have to interpolate them while processing. This makes it faster,
in particular for multichannel since (at least the last time I
looked) SRC repeats this interpolation for each channel.
* The SRC filter goes for full attenuation at FS/2 (FS is the lower
of the two sample rates) while zita-resampler has -60dB at that
point. This is a deliberate choice. Zita-resampler will reach
full attenuation of aliases at all frequencies where it matters,
at least when used at the normal sample rates (>= 44100). It
should not be used to resample to e.g. 32 kHz or lower.
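For the curious, the 'precompute everything' point can be illustrated with a toy windowed-sinc table. This is only a sketch of the general technique, not zita-resampler's actual code:

```python
import math

def sinc_table(taps, cutoff):
    """Precompute a windowed-sinc low-pass FIR (Hann window here).

    cutoff is a fraction of the sample rate (0.5 = Nyquist).  The point
    is to compute all coefficients once, up front, instead of
    interpolating between coefficients per sample and per channel
    while processing.
    """
    coeffs = []
    centre = (taps - 1) / 2
    for i in range(taps):
        x = i - centre
        s = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        w = 0.5 - 0.5 * math.cos(2 * math.pi * i / (taps - 1))  # Hann window
        coeffs.append(s * w)
    return coeffs

h = sinc_table(taps=192, cutoff=0.45)
print(len(h))                       # 192
print(abs(h[0] - h[-1]) < 1e-12)    # True: symmetric, i.e. linear phase
```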
Listening tests with 'expert' users have shown that none of them
have been able to hear the difference between the original, SRC,
and zita-resampler even at a *lower* quality setting.
Ciao,
--
FA
O tu, che porte, correndo si ?
E guerra e morte !
I wrote:
>> I tried a quick mod in MusE to do what the author of the caps ladspa suite
>> did to handle de-normals. He said "A -80dB signal at the Nyquist frequency
>> or lower". No luck.
>> But yeah, obviously at some signal level and type, it should stop.
>> So I'll keep trying. Noise sounds like the best way. -100dB white to start?
>> OK...
>>
>> Ugh. A new MusE options panel: Advanced de-normalization options, he he...
>>
Paul wrote:
>You might want to check ardour. It has 3 denormal protection options (2 that
>are h/w based, setting processor flags, and 1 that is software based, adding
>a very very very very tiny constant value ("DC Bias") to every signal. See
>libs/pbd/fpu.cc to find the h/w stuff.
I raised MusE's de-normal DC bias to a very high 1.0e-2.
Same problem.
So I slapped some printf traces on various float data streams in MusE
to see what was really happening.
Here are the first four float values of the buffers involved, into and out of
the plugin:
Into the plugin: 1.000000e-02 1.000000e-02 1.000000e-02 1.000000e-02
Out of the plugin: -1.398807e-19 8.058466e-19 1.999431e-18 3.264671e-18
What comes out is a meandering series of floats which eventually
wanders into de-normal territory.
This data eventually makes its way to Jack for audio output.
I might conclude, then, that the plugin AC-couples its input to remove
any DC bias.
So DC input bias won't help, and likely wouldn't help either if applied
to the wandering output. Seems only noise or other AC signal cures it.
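To illustrate why the bias disappears: below is a sketch of a typical one-pole DC blocker (my guess at what the plugin's AC coupling looks like, not its actual code) fed a constant bias like MusE's. The tail of the output decays geometrically toward zero, straight through float32 denormal range:

```python
# One-pole high-pass (a typical DC blocker): y[n] = x[n] - x[n-1] + R*y[n-1]
R = 0.995
FLT_MIN = 1.18e-38   # smallest normal single-precision float

def dc_blocker(signal):
    y, x1, y1 = [], 0.0, 0.0
    for x in signal:
        out = x - x1 + R * y1
        x1, y1 = x, out
        y.append(out)
    return y

bias = [1.0e-2] * 20000          # constant DC bias, like MusE's protection
out = dc_blocker(bias)
print(out[0])                    # first sample passes through: 0.01
print(abs(out[-1]) < FLT_MIN)    # True: decayed into float32 denormal range
```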
I gather this also means we must be careful what we feed into Jack.
Haven't tested Ardour yet, but what do you think about this, Paul?
I suppose Ardour's processor flags option *must* cure it?
Still testing... Thanks. Tim.
Thanks for the quick answer Florian!
> Pau!
>
> > is the RME HDSPe AES supported by ALSA? And would it be as reliable as
> > the Hammerfall 9652?
>
> Remy Bruno added support for the hdspm driver in 2006. I have never
> tested it, but I know of a few people that use it.
>
> Why didn't you ask me yesterday?
Yes, I could have asked you at LAC - it's just that the requirement for
interfacing with AES popped up only today :-)
So I will buy this card and see how it works.
P
Hey guys!
I was wondering about the following.
On Windows we have lots and lots of plugins, synthesizers and effect
racks. On Linux the selection is much less varied.
However, am I correct in understanding that the variety of Windows
synths and plugins mostly means that people take a few core modules and
just rearrange them behind different GUIs?
Am I correct in understanding that there are only a few major algorithms
for things like filters, delays, reverbs and choruses?
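As a small illustration of the "few core modules" idea: delays, choruses, flangers and comb-filter reverbs all sit on the same delay-line primitive. A sketch (not any particular plugin's code):

```python
class DelayLine:
    """The one primitive behind delays, choruses, flangers, and
    comb-filter reverbs: a circular buffer read some samples behind
    the write position."""
    def __init__(self, max_delay):
        self.buf = [0.0] * max_delay
        self.pos = 0

    def process(self, x, delay, feedback=0.0):
        read = (self.pos - delay) % len(self.buf)
        y = self.buf[read]
        self.buf[self.pos] = x + feedback * y   # feedback turns it into a comb
        self.pos = (self.pos + 1) % len(self.buf)
        return y

# A plain 3-sample delay: an impulse comes out 3 samples later.
# Modulate `delay` with an LFO and it becomes a chorus/flanger;
# add feedback and it becomes a comb filter (the reverb building block).
dl = DelayLine(max_delay=8)
out = [dl.process(x, delay=3) for x in [1.0, 0, 0, 0, 0, 0]]
print(out)   # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```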
Louigi Verona.