Hi!
I guess this could be important for some of you:
Debian is currently dropping QT3 and KDE3 from unstable. This in turn
means that apps depending on those libs either need to be ported to
QT4/KDE4 or will also be removed.
As always, what applies to Debian sooner or later also applies to
Ubuntu and derivatives.
Of course, all software remains installable from squeeze (the current
release) for the next few years (let me guess: 5?).
I know that fmit and creox use QT3, probably some other audio software,
too.
The creox author has no time for porting, so if somebody feels bored or
is looking for a little project, you might want to consider this.
Cheers
--
mail: adi(a)thur.de http://adi.thur.de PGP/GPG: key via keyserver
Quoting *Fons Adriaensen <fons(a)linuxaudio.org> [1]*:
> > How do I calculate power magnitudes for all bands from the
> > 1024, 2048, or 4096 FFT return values, and finally the values
> > for the 10-20 hopping bars of a visualizer, like in Winamp,
> > XMMS, QMMP?
>
> If you want such a display the FFT is not a good way to do it.
> It's possible but not simple if you want a correct result.
What is a good or the best way to calculate the values for the 10-20
hopping bars seen in GUI audio players?
> > 43303.2715 + 796.7285 = 44100, or 44100 - 43303.2715 = 796.7285
> >
> > Why is frequency 796.7285 mirrored as frequency 43303.2715,
> > and the magnitude for both frequencies divided by 2?
>
> Because you are using a complex FFT, and the imaginary part
> of your signal is zero. That means that the spectrum must be
> symmetric.
>
> > Is there a way to directly calculate the full magnitude, without
> > frequency mirroring, in the band 0 Hz ... Fsampl/2 only?
>
> Use an FFT operating on real data instead of complex.
Can you give me pointers to such functions?
What about the various radix algorithms?
Thanks in advance
Alf
---------- End of forwarded message ----------
Links:
------
[1] mailto:fons@linuxaudio.org
Hi experts.
I am a physics student, and I want to write a report about Fourier
transforms, in particular the 1D real-input FFT case.
I hope this is the best place to ask, with the best experts.
I want to demonstrate how various window functions change the measured
spectrum, and how much CPU time various FFT algorithms take.
So far I understand [I hope so] how windowing works.
Are self-contained, copy-paste C example functions or macros available
for the various FFT algorithms:
FFT, QFT, Goertzel, radix-2, radix-4, split-radix, mixed-radix ... ?
Which variables in an FFT function must/should be declared static,
register ... for best performance?
What typically goes into an FFT function? A pointer to an already
windowed array of samples?
What does the FFT return?
What exact physical dimensions does the FFT return, if the input
function's dimensions were voltage as a function of time, U = U(t)?
How do I calculate power magnitudes for all bands from the 1024, 2048,
or 4096 FFT return values, and finally the values for the 10-20
hopping bars of a visualizer, like in Winamp, XMMS, QMMP ... ?
If I know my exact FFT window size [for example 4096 samples] and
window type, and it will stay constant forever, is it possible to
calculate the window (and sine/cosine) values once and store them all
in a constant array, so that at runtime they need not be calculated
and I can just take the values from the array?
I have googled about the FFT and found the program below.
I have remixed it a little: I generate a pure sine at frequency =
796.7285 Hz, and in the output file I get something like this:
[  71]   764.4287 Hz: Re= 0.0000000011142182 Im= 0.0000002368905824 M= 0.0000002368932028
[  72]   775.1953 Hz: Re= 0.0000000011147694 Im= 0.0000003578625618 M= 0.0000003578642981
[  73]   785.9619 Hz: Re= 0.0000000011164234 Im= 0.0000007207092628 M= 0.0000007207101275
[  74]   796.7285 Hz: Re=-0.0000022785007614 Im=-0.5000000048748561 M= 0.5000000048800476
[  75]   807.4951 Hz: Re= 0.0000000011098065 Im=-0.0000007304711756 M= 0.0000007304720186
[  76]   818.2617 Hz: Re= 0.0000000011114605 Im=-0.0000003676273975 M= 0.0000003676290776
[  77]   829.0283 Hz: Re= 0.0000000011120118 Im=-0.0000002466579503 M= 0.0000002466604569
...
[4019] 43270.9717 Hz: Re= 0.0000000011120118 Im= 0.0000002466579503 M= 0.0000002466604569
[4020] 43281.7383 Hz: Re= 0.0000000011114605 Im= 0.0000003676273975 M= 0.0000003676290776
[4021] 43292.5049 Hz: Re= 0.0000000011098065 Im= 0.0000007304711756 M= 0.0000007304720186
[4022] 43303.2715 Hz: Re=-0.0000022785015510 Im= 0.5000000048748419 M= 0.5000000048800334
[4023] 43314.0381 Hz: Re= 0.0000000011164234 Im=-0.0000007207092628 M= 0.0000007207101275
[4024] 43324.8047 Hz: Re= 0.0000000011147694 Im=-0.0000003578625618 M= 0.0000003578642981
[4025] 43335.5713 Hz: Re= 0.0000000011142182 Im=-0.0000002368905824 M= 0.0000002368932028
Where
43303.2715 + 796.7285 = 44100, or 44100 - 43303.2715 = 796.7285.
Why is frequency 796.7285 mirrored as frequency 43303.2715, and the
magnitude for both frequencies divided by 2?
Is there a way to directly calculate the full magnitude, without
frequency mirroring, in the band 0 Hz ... Fsampl/2 only, and not in
the full band 0 Hz ... Fsampl?
In this case, how do I calculate the corresponding frequency of each
band that the FFT returns, if the sample frequency is 32K, 44K, or 48K?
Please do not point me to FFTW [3] and similar libs; I must
write/combine the code myself. Unless FFTW has the best short,
self-contained 1D functions for copy-paste :)
Every pointer and example is welcome.
Thanks in advance to all.
Alfs Kurmis.
====
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <math.h>

#define BUFFER 4096 /* 2048 */

#define M_PI_LD 3.1415926535897932384626433832795029L /* pi */

short FFT(short int dir, long m, double *x, double *y);

double fstep;

int main (int argc, char *argv[])
{
    int i;
    double f = 0.0;
    double amplitude = 0.;
    double samplerate = 44100., frekwenz = 796.7285;
    double xarg = 0., argplus = 0., pti;
    double real[BUFFER], img[BUFFER];

    /* Generate a pure sine at `frekwenz` Hz. */
    argplus = (frekwenz * 2.0 * M_PI) / samplerate;
    for (i = 0; i < BUFFER; i++) {
        pti = sin(xarg);
        xarg = xarg + argplus;
        if (xarg > (4.0 * M_PI))
            xarg = xarg - (4.0 * M_PI);
        real[i] = pti;
    }
    printf("F= %7.2f Hz \n\n", (samplerate * argplus) / (2 * M_PI));

    memset(img, 0, sizeof(img)); /* Fill all the imaginary parts with zeros */

    fstep = (double) samplerate / (double) BUFFER;
    printf("Frequency step : %10.6f\n", fstep);

    FFT(1, 12, real, img); /* Fast Fourier Transform with 2^12 bins */

    /* Write the Fourier-transformed data to stdout */
    i = 0;
    while (i < BUFFER) {
        amplitude = sqrt((real[i] * real[i]) + (img[i] * img[i]));
        printf("[%4d] %8.4f Hz: Re=%22.16f Im=%22.16f M=%22.16f\n",
               i, f, real[i], img[i], amplitude);
        i++;
        f += fstep;
    }
    return 0;
} /* main */
/*
   This computes an in-place complex-to-complex FFT.
   x and y are the real and imaginary arrays of 2^m points.
   dir =  1 gives the forward transform
   dir = -1 gives the reverse transform
*/
short FFT(short int dir, long m, double *x, double *y)
{
    long n, i, i1, j, k, i2, l, l1, l2;
    double c1, c2, tx, ty, t1, t2, u1, u2, z;

    /* Calculate the number of points n = 2^m */
    n = 1;
    for (i = 0; i < m; i++)
        n *= 2;
    printf("FFT -->> (n) 2 ^ %ld = %ld\n", m, n);

    /* Do the bit reversal */
    i2 = n >> 1;
    j = 0;
    for (i = 0; i < n - 1; i++) {
        if (i < j) {
            tx = x[i]; ty = y[i];
            x[i] = x[j]; y[i] = y[j];
            x[j] = tx; y[j] = ty;
        }
        k = i2;
        while (k <= j) {
            j -= k;
            k >>= 1;
        }
        j += k;
    }

    /* Compute the FFT */
    c1 = -1.0;
    c2 = 0.0;
    l2 = 1;
    for (l = 0; l < m; l++) {
        l1 = l2;
        l2 <<= 1;
        u1 = 1.0;
        u2 = 0.0;
        for (j = 0; j < l1; j++) {
            for (i = j; i < n; i += l2) {
                i1 = i + l1;
                t1 = u1 * x[i1] - u2 * y[i1];
                t2 = u1 * y[i1] + u2 * x[i1];
                x[i1] = x[i] - t1;
                y[i1] = y[i] - t2;
                x[i] += t1;
                y[i] += t2;
            }
            z  = u1 * c1 - u2 * c2;
            u2 = u1 * c2 + u2 * c1;
            u1 = z;
        }
        c2 = sqrt((1.0 - c1) / 2.0);
        if (dir == 1)
            c2 = -c2;
        c1 = sqrt((1.0 + c1) / 2.0);
    }

    /* Scaling for the forward transform */
    if (dir == 1) {
        for (i = 0; i < n; i++) {
            x[i] /= n;
            y[i] /= n;
        }
    }
    return 1;
}
----
Hi experts.
I have started my small project: an mp3 database for radio.
http://martini.pudele.com/radio/mp3_database/mp3_database.html
How do I normalize by peak [not RMS] and trim the silences at the
beginning and end of WAV files?
Silences somewhere in the middle of a file I want to leave untouched.
In the first step I want to detect the MAX sample in the whole WAV
file. For example, if the MAX sample is 10 000, then the amplifier
coefficient will be 32 000 / 10 000 = 3.2.
In the second step I want to trim the silence at the beginning, below
-80 dB [or 2 bits of noise].
For this, each sample in the same file is multiplied by the amplifier
coefficient, and we check whether the result is over -80 dB or not.
If not, the first N samples are not written to the trimmed file;
instead, the first sample that is over -80 dB [in any channel], and
all further samples, are written to the new file.
From then on we just track which sample [in any channel] was last over
-80 dB. After the write is complete, we can truncate after the last
sample that was over -80 dB, and write the header.
So far I have found that SOX wants to reverse the file to trim the end
silence, and produces a tmp file for each step.
Is there a C API, program, script, or some other way to do this
without any temporary files?
I have written a script for normalizing, but what is the best way to
normalize?
What about automatic normalizing and frame trimming for mp3 and ogg?
Thanks in advance
Alf
====
#!/bin/bash
for i in *.wav; do
    val=${i%.wav}
    echo "** Check peak for $i **"
    ampl=$(sox "$i" -t wav /dev/null stat -v 2>&1)
    waveaus=${i%.wav}.wave
    wert1="1.1"
    wert2=$ampl
    wahr=$(echo "$wert1 > $wert2" | bc)
    if [ "$wahr" = 1 ]; then
        echo " $wert1 > $wert2 , do nothing"
    else
        echo " $wert1 <= $wert2 , do process"
        echo "** Amplifying volume by -=$ampl=- to fake a normalize $val.wav -- $waveaus"
        ampl2=$(echo "$ampl*0.9995" | bc -l)
        echo "ampl2 = $ampl2"
        sox -v "$ampl2" "$i" -t wav "$waveaus"
    fi
    echo ""
done
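(One note on the temporary-file problem: sox can chain several effects in a single invocation, so the usual reverse/trim/reverse trick needs no intermediate files. The threshold and duration values here are illustrative, not tuned:)

```shell
# Trim silence below -80 dB from the start, then (via reverse) from
# the end, all in one sox invocation -- no temporary files involved.
sox in.wav out.wav silence 1 0.01 -80d reverse silence 1 0.01 -80d reverse
```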
----
From: Stefano D'Angelo <zanga.mail(a)gmail.com>
Date: 2011/2/27
Subject: Re: [LAD] RDF libraries, was Re: [ANN] IR: LV2 Convolution Reverb
To: Giuseppe Zompatori <siliconjoe(a)gmail.com>
Cc: linux-audio-dev(a)lists.linuxaudio.org
2011/2/27 Stefano D'Angelo <zanga.mail(a)gmail.com>:
>
>
> Ciao Giuseppe,
>
Ciao Stefano,
Taking this email to a new thread.
>Well... they seem to have a lot of stuff there. :-)
>
>However, I wonder how they do it... I think they are probably using
>some black box modeling, since multiple nonlinearities+feedback in a
>single system is very hard to model.
>
They are very silent on this sadly, don't know what they are doing.
>
>The kind of stuff I'm trying to do is accurately model a class A amp
>with a single triode using white box techniques... to give you an idea
>of what it sounds like see this:
>http://www.youtube.com/watch?v=cdNtmaIdLdo - it is part of my MSc
>thesis presentation (100.000 lire guitar, dated and slow laptop, cheap
>speaker and cheap camera... only the sound card is good).
>
>I guess you speak Italian (at least your name suggests that), so enjoy
>my weird southern accent. :-P
>
Very interesting. I tried compiling your thesis with permafrost to try
this out (obtaining the source from the pdf has been hell, BTW), but
it bails with an "m_pi" undeclared input/output function...
Anyway, are you limited to the simulation of a half triode with white
box techniques? I think you should model at least both halves of a
triode if you're after accuracy, a single triode amplifier won't even
work in real life (I build tube amps, I know) ;)
Also class A amplifiers aren't very popular amongst guitar players
(mainly because of their clipping behavior). You also want a
multi-stage preamp with different filtering/biasing points between
stages.
You might think I am crazy, but that's what you'll discover yourself
by studying the schematics of popular guitar amps.
Here's a simple (early Fender-like) amp topology (tube n. 1, tube
n. 2, tubes n. 3 and 4):

1st triode -> tone stack -> post-tone-stack recovery triode ->
P.I. (phase inverter) triodes -> (at least 2) pentodes ->
O.T. (output transformer) -> speakers

with negative feedback taken from the output back to a presence pot.
This is the simplest PP (push-pull) class A/B amp I could come up with
(it sounds pretty darn good in real life). It has got a tone stack, 4
tubes (2 triodes and two pentodes) and an OT/speakers. Do you think
this is feasible computation-wise with permafrost?
>Well, they say guitarix has improved, yet the last time I was all but
>satisfied with it. You may want to take a look at invada plugins, if
>you haven't already.
Invada has a simple generic tube drive function AFAIK, I still prefer
the CAPS* amp over it as it's at least based on a real amp.
>Stammi bene,
>
>Stefano
Anche tu!
-Giuseppe
Hey guys,
I'm working on a multithreaded version of my pet project, and I've now
managed to deadlock one thread, which in turn makes the GUI thread
wait for a mutex lock, and then finally segfaults the whole program :-)
So I'm looking for pointers on how best to find a deadlock's cause
using gdb.
Other advice / good articles on the topic etc welcome!
Thanks for reading, -Harry
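(For reference, the usual first step with gdb is to attach to the hung process and dump every thread's stack; two threads each blocked in `pthread_mutex_lock`, waiting on locks the other holds, is the classic deadlock signature. The program name below is a placeholder:)

```shell
# Attach to the running (hung) process by PID.
gdb -p "$(pidof myprog)"

# Then, at the gdb prompt:
#   (gdb) info threads          # list all threads and where they sit
#   (gdb) thread apply all bt   # backtrace of every thread at once
```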
hi all,
I just came across this:
http://www.cs.unc.edu/~anderson/litmus-rt/index.html. From the site:
The LITMUSRT project is a soft real-time extension of the Linux kernel with
> focus on multiprocessor real-time scheduling and synchronization. The Linux
> kernel is modified to support the sporadic task model and modular scheduler
> plugins. Both partitioned and global scheduling is supported.
>
It seems their latest patch is against 2.6.36. I realize this is not a plug
and play alternative to Ingo's work by any means, but I was wondering if
anyone more knowledgeable has some insight into what exactly the Litmus
kernel may offer...
-michael
> From: Sean Bolton <musound(a)jps.net>
> if the GUI is in another process, its really absurdly hard for the
> host to add its own controls to the window. not impossible, but a
> level of hard that doesn't actually buy the user (or developer)
> anything. this means that the kinds of generic "every plugin" controls
> like "bypass" or "preset" that ardour adds to LADSPA and AU windows
> would vanish. not a huge cost, but a real one.
Not very hard, but think more in terms of functionality, not implementation.
e.g. my plugin API can provide the GUI a string "Commands for Parameter 23:"
- "MIDI Learn, Unlearn, Edit...".
The user can right-click any knob on the GUI and get a menu of functions the
*HOST* provides for that parameter... With ZERO dependence on the type of
GUI. I really like that the feature works with ANY GUI toolkit. The host is
free to provide as few or as many options as it needs.
That's one example of thinking in a platform-independent way without
being absurdly hard to implement; in fact, it's really easy.
Jeff McClintock
Hi,
Just a quick question, where does validation of parameters belong?
Invalid params might crash the engine, so some validation should go there.
But if the job of validation is down to the UI, the engine can be more
efficient.
But if multiple UIs are possible, the validation effort is duplicated.
How should I approach this?
Cheers,
James.
--
_
: http://jwm-art.net/
-audio/image/text/code/
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Subject: Re: [LAD] Portable user interfaces for LV2 plugins.
> VST3 allows the GUI to run in a different process?
" The design of VST 3 suggests a complete separation of processor and edit
controller by implementing two components. Splitting up an effect into these
two parts requires some extra efforts for an implementation of course.
But this separation enables the host to run each component in a different
context. It can even run them on different computers. Another benefit is
that parameter changes can be separated when it comes to automation. While
for processing these changes need to be transmitted in a sample accurate
way, the GUI part can be updated with a much lower frequency and it can be
shifted by the amount that results from any delay compensation or other
processing offset."
> > The host needs to see every parameter tweak. It needs to be between the
> GUI
> > and the DSP to arbitrate clashes between conflicting control surfaces.
> It's
> > the only way to do automation and state recall right.
>
> well, almost. as i mentioned, AU doesn't really route parameter
> changes via the host, it just makes sure that the host can find out
> about them. the nicest part of the AU system is the highly
> configurable listener system, which can be used to set up things like
> "i need to hear about parameter changes but i don't want to be told
> more than once every 100msec" and more. It's pretty cool.
Yeah. It's important to realise that at any instant 3 entities hold a
parameter's value:
-The audio processor part of the plugin.
-The GUI Part.
-The Host.
A parameter change can come from several sources:
- The GUI.
- The Host's automation playback.
- A MIDI controller.
- Sometimes the Audio processor (e.g. VU Meter).
If several of these are happening at once, some central entity needs to give
one priority. For example if a parameter/knob is moving due to automation
and you click that control - the automation needs to relinquish control
until you release the mouse. The host is the best place for this logic.
Think of the host as holding the parameter, the GUI and Audio processor as
'listeners'. Or the host's copy of the parameter as the 'model' and the GUI
and audio processor as 'views' (Model-View-Controller pattern).
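(The model/listener split described above can be sketched in C; every type and function name here is hypothetical, just to show the shape: the host owns the value, while the GUI and audio processor register as listeners:)

```c
/* The host owns the parameter value (the "model"); the GUI and the
   audio processor register callbacks as listeners (the "views"). */
#define MAX_LISTENERS 4

typedef void (*param_cb)(double value, void *ctx);

typedef struct {
    double   value;
    int      n;
    param_cb cb[MAX_LISTENERS];
    void    *ctx[MAX_LISTENERS];
} host_param;

static void param_listen(host_param *p, param_cb cb, void *ctx)
{
    p->cb[p->n]  = cb;
    p->ctx[p->n] = ctx;
    p->n++;
}

/* All changes -- GUI, automation playback, MIDI -- funnel through the
   host, which is exactly where arbitration logic (e.g. a mouse drag
   overriding automation) would live. */
static void param_set(host_param *p, double v)
{
    p->value = v;
    for (int i = 0; i < p->n; i++)
        p->cb[i](v, p->ctx[i]);
}

/* Example listener: a GUI stub that records the last value it saw. */
static void gui_listener(double v, void *ctx)
{
    *(double *) ctx = v;
}
```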
Best Regards!,
Jeff