Documenting (code) is always a good idea.
It's hard enough to find and motivate people who have good coding skills, so
the last thing you need is for those people to get frustrated while trying
to find out where/how they should start.
Transparent, error-free documentation, a website and an active mailing list are
key here!
I don't have coding skills, but if you need someone to help with the
documentation as such, I'm willing to help.
Grtz
Thijs
On 28 Nov 2011 11:19, "Sebastian Moors" <mauser(a)smoors.de> wrote:
On 28.11.2011 03:46, Iain Duncan wrote:
>
> I also think it's a much needed idea. I'd be happy to do some
contributing too, but like Harry,...
Hi,
A great idea! I'm sure you will get valuable feedback from this list
if you publish your tutorial; imho this ensures good quality.
Don't spend too long discussing questions like the correct toolkit. Those can
lead away too easily from the core problem. Take the one you're familiar with!
- Sebastian
Following recent emails on list about documentation and learning "Linux
Audio Programming", I've been thinking about an effort to help people get
started with Linux Audio Coding.
I think some "beginner" coding documentation on Linux Audio would be a
great asset to the community, and I'm willing to contribute to such an
effort. As Robin Gareus mentioned in another thread, a "FLOSS" manual is
probably the best way to go for a community documentation effort.
I've been doing some "tutorial" style programming articles on my blog
harryhaaren.blogspot.com and I'd have no problem sharing the examples there
(some basic GTK stuff, some small JACK apps, combining the GUI / JACK
stuff). The problem is that I was learning as I went along, and there are
some *fundamental* issues with some of the tutorials (especially with
regard to thread-safe code & more advanced programming concepts).
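To give a rough idea of the kind of thread-safety issue I mean: getting data
from the GUI thread into the JACK process callback without locking. A minimal
sketch of the usual lock-free ringbuffer approach (made-up message struct and
names, not code lifted from the blog posts) looks something like this:

#include <jack/jack.h>
#include <jack/ringbuffer.h>

/* Hypothetical message the GUI thread sends to the audio thread. */
typedef struct { int param; float value; } GuiMsg;

static jack_ringbuffer_t *rb;  /* rb = jack_ringbuffer_create(1024 * sizeof(GuiMsg)); */

/* GUI thread: just write into the ring, never block the audio thread. */
static void gui_send(int param, float value)
{
    GuiMsg m = { param, value };
    if (jack_ringbuffer_write_space(rb) >= sizeof(m))
        jack_ringbuffer_write(rb, (const char*)&m, sizeof(m));
}

/* JACK process callback: drain pending messages, no locks, no malloc. */
static int process(jack_nframes_t nframes, void *arg)
{
    GuiMsg m;
    while (jack_ringbuffer_read_space(rb) >= sizeof(m)) {
        jack_ringbuffer_read(rb, (char*)&m, sizeof(m));
        /* apply m.param / m.value to the DSP state here */
    }
    /* ... render nframes of audio ... */
    return 0;
}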
Personally I'd be even more enthusiastic about such an effort if some of
the veteran Linux audio guys were to get on board and ensure the content is
of a high quality. (As I'm a self-taught programmer, there are some big
gaps in my knowledge, and I'd not like to provide bad sample code or share
bad concepts.)
Of course some issues will arise in choosing how to document Linux Audio,
and some typical "flame" topics like GUI toolkits, libraries etc. will come
up. I have no idea how we can best avoid that, except by following the "if
you think it should be taught that way, write the tutorial" approach... the
downside of this is that if one tutorial uses toolkit <X> and the next
toolkit <Y>, the average beginning coder is going to get lost in
implementation details, and that defeats the purpose of documentation :D
-Harry
Hi,
we'd like to ask those who are interested in the new Guitarix LADSPA
plugins to run them on their systems and give us some feedback. They
do fine on our own machines and seem to be fit for wider testing.
The plugins "Guitarix Amp" and "Guitarix Stereo Fx" wrap the entire sound
engine of Guitarix. You can load a preset that you defined
with the Guitarix program, and even define some parameters for DAW
automation.
It's explained in our wiki:
https://sourceforge.net/apps/mediawiki/guitarix/index.php?title=How_to_use_…
Our SVN:
svn co http://guitarix.svn.sourceforge.net/svnroot/guitarix/trunk guitarix
After checkout, build with
./waf configure && ./waf && sudo ./waf install
for an installation to /usr/local.
Our tracker:
http://sourceforge.net/tracker/?group_id=236234
Your feedback will be welcome here or there or anywhere..
Or in our forum:
http://sourceforge.net/apps/phpbb/guitarix/
so please, don't be shy and tell us your test results or
if you think the concept is usable / unusable :-)
ciao
Andreas
I got curious, so I bashed out a quick program to benchmark pipes vs
POSIX message queues. It just pumps a bunch of messages through the
pipe/queue in a tight loop. The results were interesting:
$ ./ipc 4096 1000000
Sending a 4096 byte message 1000000 times.
Pipe recv time: 6.881104
Pipe send time: 6.880998
Queue send time: 1.938512
Queue recv time: 1.938581
Whoah. Which made me wonder what happens with realtime priority
(SCHED_FIFO priority 90):
$ ./ipc 4096 1000000
Sending a 4096 byte message 1000000 times.
Pipe send time: 5.195232
Pipe recv time: 5.195475
Queue send time: 5.224862
Queue recv time: 5.224987
Pipes get a bit faster, and POSIX message queues get dramatically
slower. Interesting.
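(For the curious: the realtime runs just bump the processes to SCHED_FIFO
before the loops start. Not the literal code from ipc.c, but the shape of it:

#include <sched.h>
#include <stdio.h>

/* Put the calling process under SCHED_FIFO at priority 90
   (needs root or a suitable rtprio limit in limits.conf). */
static void go_realtime(void)
{
    struct sched_param sp = { .sched_priority = 90 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");
}
)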
I am opening the queues as blocking here, and both sender and receiver
are at the same priority, and aggressively pumping the queue as fast as
they can, so there is a lot of competition and this is not an especially
good model of any reality we care about, but it's interesting
nonetheless.
The first result really has me thinking how much Jack would benefit from
using message queues instead of pipes and sockets. It looks like
there's definitely potential here... I might try to write a more
scientific benchmark that better emulates the case Jack would care about
and measures wakeup latency, unless somebody beats me to it. That test
could have the shm + wakeup pattern Jack actually uses and benchmark it
vs. actually firing buffer payload over message queues...
But I should be doing more pragmatic things, so here's this for now :)
Program is here: http://drobilla.net/files/ipc.c
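For anyone who doesn't want to open the file, the queue half boils down to
roughly this (a from-memory sketch with made-up names, not a verbatim
excerpt; the pipe half is the same tight loop over write()/read() on a
pipe() instead):

#include <mqueue.h>     /* POSIX message queues: link with -lrt */
#include <sys/wait.h>
#include <unistd.h>
#include <fcntl.h>
#include <time.h>
#include <stdio.h>
#include <string.h>

enum { MSG_SIZE = 4096, N_MSGS = 1000000 };

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    char buf[MSG_SIZE];
    memset(buf, 0, sizeof(buf));

    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = MSG_SIZE };
    mq_unlink("/ipc_bench");
    mqd_t q = mq_open("/ipc_bench", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (fork() == 0) {                       /* child: receiver */
        double t0 = now();
        for (int i = 0; i < N_MSGS; ++i)
            mq_receive(q, buf, sizeof(buf), NULL);
        printf("Queue recv time: %f\n", now() - t0);
        return 0;
    }

    double t0 = now();                       /* parent: sender */
    for (int i = 0; i < N_MSGS; ++i)
        mq_send(q, buf, MSG_SIZE, 0);
    printf("Queue send time: %f\n", now() - t0);

    wait(NULL);
    mq_close(q);
    mq_unlink("/ipc_bench");
    return 0;
}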
Cheers,
-dr
> In other words, you'd expect such a system to behave as if you
> had two faders in series.
>
> Now if the DSP code only sees the sum of the two values (as it
> should, having a VCA group is just a user interface issue),
Ah, you just contradicted yourself. If you expect the system to behave like
two sliders, then the model must represent two sliders. The user widgets
should not embody decision-making; that implies intelligence in the GUI,
which is not strictly model-view-controller.
Imagine your device supplemented with a 'dumb' pair of MIDI controllers
complementing the two GUI sliders: they could not correctly implement the
complex interaction between the two gain settings, so you need to go
back and move that 'intelligence' into the model.
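To sketch what I mean by keeping both faders in the model (hypothetical
names; each fader maps its own minimum to 'off' before the values are
summed, which gives the two-faders-in-series behaviour Fons describes):

#include <math.h>

#define FADER_MIN_DB  -100.0f          /* lowest finite widget value */

typedef struct {
    float channel_db;                  /* per-channel fader, in dB   */
    float group_db;                    /* 'VCA' group fader, in dB   */
} ChannelModel;

/* Map one fader value, treating the bottom of its travel as -inf. */
static float fader_db(float value_db)
{
    return (value_db <= FADER_MIN_DB) ? -INFINITY : value_db;
}

/* What the DSP code gets to see: the sum of the two mapped values. */
static float effective_gain_db(const ChannelModel *m)
{
    return fader_db(m->channel_db) + fader_db(m->group_db);
}

/* e.g. channel at -50 dB, group at -60 dB  ->  -110 dB (not off);
        either fader at -100 dB             ->  -inf dB (off).     */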
Best Regards,
Jeff
> Date: Thu, 24 Nov 2011 20:45:09 +0000
> From: Fons Adriaensen <fons(a)linuxaudio.org>
> Subject: Re: [LAD] sliders/fans
> To: linux-audio-dev(a)lists.linuxaudio.org
>
>
> On Thu, Nov 24, 2011 at 02:21:25PM -0500, David Robillard wrote:
>
> > Agreed. Everything here is about the *view*. How that maps to
> actual
> > parameter values is an underlying model issue.
>
> Not always. Consider the case of 'VCA' groups for faders. That
> is: you have a slider that controls the gain of a group of
> channels (without those being mixed). The effective channel
> gain (in dB) is the sum of the per channel fader value and
> the one from the group fader. The model sees only this sum.
>
> Now a fader has to go down to zero gain (-inf dB). So you would
> map the lowest possible (finite) value of the widget to something
> that the model (the DSP code) would translate to 'off'.
>
> The question is then: is this exception handled by the widget and
> the DSP code, or by the DSP code only ?
>
> Suppose the minimum value of the widget would correspond to say
> -100 dB if not handled specially. If you just have a single fader
> per channel, you could arrange for the model or the DSP code to
> translate that to 'off'. That is no longer the case if you have
> 'VCA' faders.
>
> There are two things you'd expect from such a system:
>
> * If either the channel or the group fader is at minimum, then
> the channel must be off (zero gain).
>
> * If the channel fader is at -50 dB, and the group at -60 dB
> you don't want zero gain, but -110 dB. Because either fader is
> still in a position where you'd expect that moving it makes a
> difference.
>
> In other words, you'd expect such a system to behave as if you
> had two faders in series.
>
> Now if the DSP code only sees the sum of the two values (as it
> should, having a VCA group is just a user interface issue), then
> that implies that the mapping of the minimum fader position (e.g.
> -100 dB) to something that would be interpreted as 'off' by the
> DSP code (e.g. -9999999 dB) _must be done by each individual
> fader_.
>
>
> Ciao,
>
>
> --
> FA
>
> Before us lies a wide valley, the sun shines - a glittering ray.
>
>
>
Here is the current version of the LV2 state extension, which defines
the model for plugin state, and a mechanism for saving and restoring it:
http://lv2plug.in/ns/ext/state
It's time to tag this one as stable, unless anyone can see any issues
(i.e. speak now, or forever hold your peace). If anyone has the time to
give it a quick read-through, feedback would be appreciated. I have
done a lot of work on the documentation lately, so hopefully everything
is clear.
This is currently implemented in Ardour 3 SVN and QTractor SVN, and a
patch for LinuxSampler SVN is available here:
http://drobilla.net/files/linuxsampler_lv2_state_0_4.diff
Thanks,
-dr
Just wondering if I understand this correctly. I'm making a loop-based app
for step sequencing. When I previously did this in Csound, I clocked it off
a phasor, so the timing was sample accurate (but that brought all its own
issues, to be sure). I'm wondering whether I should do the same thing in a
jack app, or use the jack transport clock, or some hybrid.
My question: am I correct in understanding that if I use the jack transport
position to rewind in time, I'll get:
A) any other clients with running audio looping back too (may or may not be
desirable)
B) jitter based on the amount of time left between when the loop should
end and the end of the frame buffer in which the loop length runs out?
Has anyone solved B? Could it be done by some complex tempo cheating trick?
Does anyone have any methods they've used for tight timing of looping in a
jack app?
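The rough shape I have in mind for working around B is to split each
process() period at the loop point myself, something like this (hypothetical
names, untested):

#include <jack/jack.h>

/* Hypothetical globals; in reality these live in the app's state struct. */
static jack_port_t   *out_port;
static jack_nframes_t play_pos   = 0;
static jack_nframes_t loop_start = 0;
static jack_nframes_t loop_end   = 48000;   /* loop length in frames */

/* Stand-in for whatever actually fills the output buffer. */
static void render(float *buf, jack_nframes_t offset,
                   jack_nframes_t pos, jack_nframes_t n)
{
    for (jack_nframes_t i = 0; i < n; ++i)
        buf[offset + i] = 0.0f;             /* silence for the sketch */
}

static int process(jack_nframes_t nframes, void *arg)
{
    float *buf = jack_port_get_buffer(out_port, nframes);
    jack_nframes_t done = 0;
    while (done < nframes) {
        jack_nframes_t left  = loop_end - play_pos;
        jack_nframes_t chunk = left < nframes - done ? left : nframes - done;
        render(buf, done, play_pos, chunk);
        play_pos += chunk;
        done     += chunk;
        if (play_pos >= loop_end)
            play_pos = loop_start;          /* wrap mid-period: no jitter */
    }
    return 0;
}

But maybe I'm missing how that interacts with transport relocation.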
Pointers at code appreciated of course. =)
thanks!
Iain
On , Iain Duncan <iainduncanlists(a)gmail.com> wrote:
> Thanks! Did you just write it?
Yup. As in literally just there. And I was reading your post in the new
RAUL thread as you were typing that :D
All the best, -Harry