Oh, well, since GUIs are this week's flavor in the list, I might as well
throw FLAM's alpha in the mix.
I am preparing FLAM's (Front-ends for Linux Audio Modules) first
release. Among other things, FLAM intends to allow programmers and
non-programmers alike to create their own (external) GUIs for audio
plugins. At the moment only Rosegarden is supported as a host and LADSPA as
the plugin type, but this is hopefully just a first step.
Project page:
http://vagar.org/code/projects/flam
Tutorial with a few screenshots:
http://vagar.org/asciidoc/flam/primer/primer.html
Source code repository:
git clone http://vagar.org/git/flam
I'd welcome any feedback, here or in the FLAM forums (registration
required):
http://vagar.org/code/projects/flam/boards
Thanks!
Luis
do any of the jack client examples show playing a file from disk?
if so, which?
if not, any links to simple code that does this?
c++ or c?
thanks in advance for any pointers.
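In case it helps, here is a minimal sketch (not taken from the JACK example
clients, and the client/port names, the mono-file assumption and the
matching-sample-rate assumption are all mine) of the usual approach: open the
file with libsndfile and hand its frames to a JACK output port from the
process callback. A real player would move the disk reads out of the process
callback (e.g. via jack_ringbuffer_t), since disk I/O there is not
realtime-safe.

  /* minimal sketch: play a mono file through one JACK output port,
   * assuming the file's sample rate already matches JACK's */
  #include <jack/jack.h>
  #include <sndfile.h>
  #include <string.h>

  static jack_port_t *out_port;
  static SNDFILE     *sndfile;

  static int process(jack_nframes_t nframes, void *arg)
  {
      float *out = (float *) jack_port_get_buffer(out_port, nframes);
      /* NOT realtime-safe: reading from disk here is only for illustration */
      sf_count_t got = sf_readf_float(sndfile, out, nframes);
      if (got < (sf_count_t) nframes)
          memset(out + got, 0, (nframes - got) * sizeof(float));
      return 0;
  }

  int main(int argc, char **argv)
  {
      SF_INFO info;
      memset(&info, 0, sizeof(info));
      if (argc < 2) return 1;
      sndfile = sf_open(argv[1], SFM_READ, &info);   /* expects a mono file */
      if (!sndfile) return 1;

      jack_client_t *client = jack_client_open("fileplay", JackNullOption, NULL);
      if (!client) return 1;

      out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                    JackPortIsOutput, 0);
      jack_set_process_callback(client, process, NULL);
      jack_activate(client);

      /* ... wait (sleep/poll) until playback is done, then ... */
      jack_client_close(client);
      sf_close(sndfile);
      return 0;
  }

Build roughly with: g++ play_file.c -o play_file -ljack -lsndfile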
drew
--
http://freemusicpush.blogspot.com/
Hi all,
This meditation isn't about gray hair, sagging flesh
or receding libido. :-)
In debugging Nama's new audio editing functions,
I'm noticing that I have more and more code dealing
with time. I am concerned to see a multiplicity of
method names like adjusted_region_start_time
or unadjusted_mark_time.
I'm wondering if perhaps I can centralize
or at least systematize this functionality.
Nama deals with several kinds of time:
Ecasound time. Positions in seconds or samples from
the perspective of the Ecasound audio engine
WAV time. Displacements in audio files.
Track / Region time. Positions in a track or region
Mark time. Nama currently has only one type of mark:
marks anchored to an absolute project timeline.
I think it also needs marks for positions in a track's WAV files,
which the user may trim or offset.
Edit / offset-run time. This is what started the entire
issue. To record a fix for a note at time T in a WAV file W,
I use Ecasound's select object to offset all the WAV files
in a project to start at time T. The fix, W', then gets
placed at T using playat.
MIDI time. There are plenty of references on this,
and it's a subject of its own. Nama needs at least to
know enough to work across the various systems for bridging
between ALSA, JACK and MIDI.
So do I need some Big Abstraction(tm), or shall I just
continue to work incrementally?
How do you think about time?
I don't expect a simple answer or an answer at all,
but it can help to formulate the question.
Regards,
Joel
--
Joel Roth
hi all,
for some time, i have been working on nova-simd, a cross-platform SIMD
library written in c++. it abstracts the platform-specific APIs and provides a
`vec' class, which maps to a SIMD floating-point vector on the target machines
and which can be used to formulate more complex algorithms. for some commonly
used vector functionality, helper functions are provided, which are built using
the vec class.
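to give a rough idea of what such a wrapper looks like (illustration only --
not nova-simd's real interface; the names and the SSE-only backend are mine):

  // toy vec4 wrapping one SSE register: the class maps onto a hardware
  // vector and algorithms are written against it
  #include <xmmintrin.h>

  struct vec4
  {
      __m128 data;

      vec4() : data(_mm_setzero_ps()) {}
      explicit vec4(float f) : data(_mm_set1_ps(f)) {}
      explicit vec4(__m128 d) : data(d) {}

      static vec4 load(const float *p) { return vec4(_mm_loadu_ps(p)); }
      void store(float *p) const       { _mm_storeu_ps(p, data); }

      friend vec4 operator+(vec4 a, vec4 b) { return vec4(_mm_add_ps(a.data, b.data)); }
      friend vec4 operator*(vec4 a, vec4 b) { return vec4(_mm_mul_ps(a.data, b.data)); }
  };

  // a helper in the same spirit: out[i] = in[i] * gain, n a multiple of 4
  inline void apply_gain(float *out, const float *in, float gain, unsigned n)
  {
      vec4 g(gain);
      for (unsigned i = 0; i != n; i += 4)
          (vec4::load(in + i) * g).store(out + i);
  }

swapping the backend then means providing the same small class on top of avx,
altivec or neon intrinsics, while the higher-level code stays unchanged.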
features:
- same interface for different SIMD architectures
- supported backends: sse-family, avx, altivec, arm/neon
- some backends include vectorized implementations of libm functions
- extensively used in supercollider
- header-only c++ (no runtime dependencies, composable)
caveats:
- little documentation and few examples
- no release, no tarball, just a git repository [1] and a web interface [2]
- header-only c++ (no c support)
maybe it is useful for other people as well...
cheers, tim
[1] git://tim.klingt.org/nova-simd.git
[2] http://tim.klingt.org/git?p=nova-simd.git;a=summary
--
tim(a)klingt.org
http://tim.klingt.org
Art is either a complaint or do something else
John Cage quoting Jasper Johns
> From: Fons Adriaensen <fons(a)linuxaudio.org>
> On Fri, Feb 18, 2011 at 07:36:44AM +1300, Jeff McClintock wrote:
>
> > With a RMS VU Meter you measure a 1KHz tone as a reference.
>
> A contradiction... A VU does not measure RMS, whatever does measure
> RMS is not a VU.
Isn't a VU Meter a standard root-mean-square function followed by a 300ms
integration to give it some 'weight'? ...calibrated against a 1kHz tone?
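Strictly, no: the VU standard specifies an average-responding detector
(rectified average with roughly 300 ms ballistics), not an RMS one; the two
only agree on a sine because the meter is calibrated on one. A rough sketch of
the difference, using a simple one-pole smoother as a stand-in for the real
mechanical ballistics (that simplification, and the names, are mine):

  // sketch only: one-pole smoothing stands in for real VU ballistics
  // (the standard specifies 99% of reading within 300 ms for a mechanical
  // movement, which is not the same thing as an RC response)
  #include <cmath>

  struct detectors
  {
      double avg_state;   // running average of |x|  -> VU-style reading
      double ms_state;    // running average of x*x  -> RMS detector
      double coeff;

      detectors(double sample_rate, double tau = 0.3)
          : avg_state(0.0), ms_state(0.0),
            coeff(std::exp(-1.0 / (tau * sample_rate))) {}

      void process(float x)
      {
          avg_state = coeff * avg_state + (1.0 - coeff) * std::fabs(x);
          ms_state  = coeff * ms_state  + (1.0 - coeff) * (double)x * x;
      }

      double vu_like() const { return avg_state; }            // average-responding
      double rms() const     { return std::sqrt(ms_state); }  // true RMS
  };

For a 1 kHz sine of peak A, the first settles near 0.637*A and the second near
0.707*A; calibration hides that gap on a test tone but not on arbitrary
programme material.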
Hi Experts.
I want to normalize my sound stream by loudness (energy / pressure /
intensity), not by peaks.
How do I do it?
Is there a JACK plugin available for that?
What is it that we hear as "loudness"?
RMS, or the average, or something else?
Are there examples available anywhere of how to calculate RMS?
Is it done simply by:
 int i, n;  double sums, rms;
 double sample[10];          /* the block of samples to measure */
 sums = 0.0;  n = 10;  rms = 0;
 for (i = 0; i < n; i++)
 {   sums = sums + (sample[i] * sample[i]);  }  /* square each sample, not the loop index */
 rms = sqrt(sums / n);
 printf("rms = %12.12f\n\n", rms);
Is such a simple algorithm good enough for frequencies > 10 kHz?
How do I calculate RMS with high precision for frequencies > 10 kHz?
With interpolation, or something like that?
What is the reference (0 dB) RMS, for example for a 16-bit PCM signal of
1024 samples?
  sqrt( 1024 * 0x7FFF^2 / 1024 )  ==  sqrt( 0x7FFF^2 )  ==  0x7FFF
Is it simply 0x7FFF (32767 dec)?
How do I calculate RMS for a stereo signal?
Like this, or some other way?
for (i = 0; i < n; i++)
 {  ....
    sums += sampleL[i]/2 + sampleR[i]/2;
 }
When operating in floating point, which should be a bit
faster: / 2 or * 0.5?
How do I get the RMS value in dB?
   20 x log10(Measured/Reference0_dB) 
or 10 x log10(Measured/Reference0_dB)  ??
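A sketch pulling those pieces together, with the conventions from the
questions above: samples are scaled by 0x7FFF so that full scale is 1.0 and
serves as the 0 dB reference, both channels of a stereo block contribute their
squares, and since RMS is an amplitude the dB conversion uses 20*log10
(10*log10 is for power ratios). The function and variable names are only for
illustration; note also that perceived loudness is not plain RMS (hearing is
frequency-dependent, which is what weighting curves and loudness standards
deal with).

  /* illustration: RMS of an interleaved stereo block of 16-bit samples,
     in dB relative to full scale (0x7FFF taken as the 0 dB reference) */
  #include <math.h>
  #include <stdint.h>
  #include <stdio.h>

  double rms_dbfs_stereo(const int16_t *interleaved, int frames)
  {
      double sum = 0.0;
      int i;
      for (i = 0; i < frames; i++) {
          double l = interleaved[2*i]     / 32767.0;   /* left, scaled to +/-1.0 */
          double r = interleaved[2*i + 1] / 32767.0;   /* right */
          sum += l*l + r*r;                            /* sum the squares of both channels */
      }
      double rms = sqrt(sum / (2.0 * frames));         /* 2*frames samples in total */
      return 20.0 * log10(rms + 1e-30);                /* tiny offset avoids log10(0) */
  }

  int main(void)
  {
      /* a full-scale square wave should come out at (very nearly) 0 dB */
      int16_t block[8] = { 32767, 32767, -32767, -32767,
                           32767, 32767, -32767, -32767 };
      printf("%f dB\n", rms_dbfs_stereo(block, 4));
      return 0;
  }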
I'm just a physics student and I'm new to DSP,
so please don't be angry about simple or stupid questions.
Any examples and hints are welcome.
Many thanks in advance.
Alf
----
Paul Davis:
>
> On Thu, Feb 17, 2011 at 4:53 PM, Robin Gareus <robin(a)gareus.org> wrote:
>
>>> Aside of that, what about locks? I've many times been told that one mustn't do
>>> anything that could block in a realtime thread. What are the consequences of
>>> that? Could a malicious app freeze the system by blocking in a realtime thread?
>
> it poses no risks to anything except itself if it does that. blocking
> in an RT thread matters to the thread, not to anything else.
>
> to demote RT threads that are doing too much you'd need a user-space
> watchdog like das_watchdog
>
Actually, das_watchdog is not very useful anymore, since the kernel
developers implemented a scheme to prevent a process from taking over
the machine. This built-in scheme is also a watchdog, but much
more fine-grained than das_watchdog. And it is also (more often
than not) useless, so one has to press the reboot button anyway.
The sad thing is that this built-in watchdog in newer kernels fools
das_watchdog into thinking that the system is operational.
(@#$@#%%!#$$!!!)
I should look into it though, it might not be impossible to
tune das_watchdog to work again.
Hi folks,
inspired by a plan of a German online magazine called amazona.de,
I came up with the idea that a virtual analogue open-source softsynth
running natively on Linux would be really nice. (A nice filter bank too,
but that's another thing.)
Amazona planned a complete synth based on user polls (only in German, sorry):
http://www.amazona.de/index.php?page=26&file=2&article_id=3191
which has now been realized as a VST (only in German, too):
http://www.amazona.de/index.php?page=26&file=2&article_id=3202
I know that ZynAddSubFX/Yoshimi has a really strong sound engine, and I
asked myself whether it would be possible to take this engine or the
DSSI API and build a polyphonic softsynth with a nice UI like the new
Calf plugins or Guitarix, a bit like the Loomer Aspect, with some
discoDSP, a bit from the Tyrell or the Roland Gaia SH-01, with MIDI
learn, ......
The problem is that my programming skills are not good enough to code
this kind of software by myself.
Are there any LADs willing to join in, take up, or realize this idea?
If there is interest, I could translate the ideas from amazona.de and we
could all share our visions for a new kind of controllable virtual
analogue softsynth.
kind regards, saschas