Hello everyone,
This is my first post on this list, so please excuse me if I'm not
following list etiquette.
I don't think I can be called a developer, although I can get things
done with Python scripts. I'm much more of an end user, a musician
using Linux live - LinuxSampler being my 'bread and butter' tool.
I have a specific application in mind, and I don't think there's
anything out there that does it:
I want to 'sample' a softsynth (e.g. Yoshimi) / modelled software
instrument. In other words, I want something that automates the
following process:
1. Sequentially play 88 x N notes, where N is the number of velocity layers
2. Play them through a particular preset on the softsynth/modelled
instrument (basically just send MIDI output to the softsynth/modelled
piano, through ALSA/JACK)
3. Record each sound (corresponding to each note/velocity) as a .wav
file (at a specified sample rate, etc)
The sample files can then be converted into a .gig instrument sample
using Gigedit. The advantage of this process would be that one can now
use (more or less) the same sounds with MUCH less CPU usage. The
disadvantage, of course, is the loss of tweakability, but I can live
with that, especially in live contexts.
I searched for an application that does this, and the closest I got to
was this feature in Renoise:
http://tutorials.renoise.com/wiki/Render_or_Freeze_Plugin_Instruments_to_Sa…
But I don't use Renoise live. Which brings me to my question(s):
1. Am I right about there not being any application that does this? (I
hope I'm not!)
2. If yes, can someone point me in the direction of how I might go
about writing a script that does this?
2a. What Python modules would be useful? I just came across PyAudio
and PySndObj, and am checking them out.
I'm sure there are *quite* a few people who would find such a script
very useful, and I intend to make it available ASAP. My programming
skills are limited, though, so any help in this regard would be
appreciated.
Thanks for your time and consideration!
Cheers,
Guru
Greetings,
I'm updating the Audio Plugins page at linux-sound.org and need some
updated URLs. Does anyone know how to reach these
packages/sites/developers:
LCP Perl - David Riley
Lemux - ?
NJL Plugins - Nick Lamb
Soundtank - Jacob Robbins
WASP - Artemiy Pavlov
Also, please advise if you know of any other LV2 or LADSPA plugins that
should be added to the list at http://linux-sound.org/plugins.html.
Links to native Linux VST plugins will be added soon.
TIA,
dp
On Sat, May 08, 2010 at 03:23:11PM +0400, Andrew Gaydenko wrote:
> On Saturday, May 08, 2010 15:12:06 Julien Claassen wrote:
> > Hi Andrew!
> > If you want control over the dithering, it's best to use Fons' resample for both
> > bit depth and sample rate conversion. If not, I think sndfile-convert should
> > be fine.
>
> As for sample rate changing - do I understand correctly that Fons' intention is to
> use resampling in real-time processing? If so, will it be comparable in
> quality to SRC's offline "Best Sinc" processing? I mean, real-time operation can
> force some compromises in quality (with all my respect for, and admiration of,
> the many of Fons' tools which I use widely!).
Zita-resampler uses the same 'sinc' algorithm as SRC, and the resample
application configures it for the best quality, which corresponds to
a 192-tap FIR. The differences are:
* zita-resampler precomputes all filter coefficients and does not
have to interpolate them while processing. This makes it faster,
in particular for multichannel since (at least the last time I
looked) SRC repeats this interpolation for each channel.
* The SRC filter goes for full attenuation at FS/2 (FS is the lower
of the two sample rates) while zita-resampler has -60dB at that
point. This is a deliberate choice. Zita-resampler will reach
full attenuation of aliases at all frequencies where it matters,
at least when used at the normal sample rates (>= 44100). It
should not be used to resample to e.g. 32 kHz or lower.
Listening tests with 'expert' users have shown that none of them
have been able to hear the difference between the original, SRC,
and zita-resampler even at a *lower* quality setting.
Ciao,
--
FA
O tu, che porte, correndo si ?
E guerra e morte !
I wrote:
>> I tried a quick mod in MusE to do what the author of the CAPS LADSPA suite
>> did to handle de-normals. He said "A -80dB signal at the Nyquist frequency
>> or lower". No luck.
>> But yeah, obviously at some signal level and type, it should stop.
>> So I'll keep trying. Noise sounds like the best way. -100dB white to start?
>> OK...
>>
>> Ugh. A new MusE options panel: Advanced de-normalization options, he he...
>>
Paul wrote:
>You might want to check ardour. It has 3 denormal protection options (2 that
>are h/w based, setting processor flags, and 1 that is software based, adding
>a very very very very tiny constant value ("DC Bias") to every signal). See
>libs/pbd/fpu.cc to find the h/w stuff.
I raised MusE's de-normal DC bias to a very high 1.0e-2.
Same problem.
So I slapped some printf traces on various float data streams in MusE
to see what was really happening.
Here are the first four float values of the buffers involved, into and out of
the plugin:
Into the plugin: 1.000000e-02 1.000000e-02 1.000000e-02 1.000000e-02
Out of the plugin: -1.398807e-19 8.058466e-19 1.999431e-18 3.264671e-18
What comes out is a meandering series of floats which eventually
wanders into de-normal territory.
This data eventually makes its way to Jack for audio output.
I might conclude, then, that the plugin AC-couples its input to remove
any DC bias.
So DC input bias won't help, and likely wouldn't help either if applied
to the wandering output. Seems only noise or other AC signal cures it.
I gather this also means we must be careful what we feed into Jack.
Haven't tested Ardour yet, but what do you think about this, Paul?
I suppose Ardour's processor flags option *must* cure it?
Still testing... Thanks. Tim.
Thanks for the quick answer Florian!
> Pau!
>
> > is the RME HDSPe AES supported by ALSA? And would it be as reliable
> > as the Hammerfall 9652?
>
> Remy Bruno added support for the hdspm driver in 2006. I have never
> tested it, but I know of a few people that use it.
>
> Why didn't you ask me yesterday?
Yes, I could have asked you at LAC - only the requirement for
interfacing with AES popped up just today :-)
So will buy this card and see how it works.
P
Hey guys!
I was wondering about the following.
On Windows we have lots and lots of plugins, synthesizers and effect
racks. On Linux the selection is much less varied.
However, am I correct in understanding that the variety of the Windows
synths and plugins merely means that people take several core modules and
just rearrange them into different GUIs?
Am I correct in understanding that there are only several major algorithms
for things like filters, delays, reverbs and choruses?
Louigi Verona.
Hi,
I'm trying to add a threaded timer to kluppe's looperdata.c
looperdata_calc_sample_stereo function so that I can add a delayed
restart to the loop process.
Can anyone tell me why the "while" statement in the following code locks
up the audio stream for the loop it is being run on? I end up with a
buzz throughout the delay period instead of a nice quiet delay period.
#include <stdlib.h>
#include <signal.h>
#include <stdio.h>

/* This flag controls termination of the main loop. */
volatile sig_atomic_t isdelay_countdown = 1;

/* The signal handler just clears the flag and re-enables itself. */
void catch_alarm (int sig)
{
    isdelay_countdown = 0;
    signal (sig, catch_alarm);
}

    vol = data->vol;
    if (data->playbackdelay > 0) {
        /* Establish a handler for SIGALRM signals. */
        signal (SIGALRM, catch_alarm);
        isdelay_countdown = 1;
        /* Call alarm to count down the length of playbackdelay */
        alarm ((int)data->playbackdelay);
        /* Check the flag once in a while to see when to quit. */
        while (isdelay_countdown) {
            looperdata_set_vol (data, 0);
            data->isplaying = 0;
        }
    }
    /* return to start of loop */
    looperdata_set_vol (data, vol);
    data->isplaying = 1;
    data->playindex += data->loopstart - data->loopend;
--
Patrick Shirkey
Boost Hardware Ltd