2008/9/29 Darren Landrum <darren.landrum(a)sbcglobal.net>:
Sorry for starting this entire argument. I'm just
tired of getting
nowhere with all of the same tools that everyone else seems to have no
problem with. I have a very bad habit of putting myself under a great
deal of pressure to exceed everyone's expectations of me.
Look, I know that everything I'm asking for exists on the Linux
platform. The problem is, it doesn't all exist in one place, or under a
single language. I'm convinced at this point that starting over from
scratch with a solid design is preferable to trying to use several
disparate tools and somehow glue them all together.
I've already played around with code here and there to try out some
different approaches to this problem, but nothing that I've bothered
keeping around. Starting tonight, I'm going to draft a very detailed
design, create a code hosting account somewhere (probably Google Code),
and get started. I will keep the list apprised of any progress.
It's been pointed out to me that many people on the list seem to think
that I'm trying to get someone else to code this for me. That is not and
never was my intention, and I apologize for any miscommunication on my
part for that. I am a very slow and not very good coder, though, and it
might take a little while to see any progress.
First things first, though. A solid design.
Linux-audio-dev mailing list
I don't know if it is relevant to this discussion (at least in an
"acceptable" amount of time) but I just wanted you to know about my
attempt: NASPRO (http://naspro.atheme.org). I hope people here don't
take this message as spamming, because it simply is not.
The ideas here are:
* to make different existing and not-yet-existing sound processing
technologies interoperate, both general-purpose sound processing stuff
(for example plugins a la LADSPA, LV2, etc.) and special-purpose stuff
(for example check this: ), in both compiled and interpreted forms.
* be technology neutral (support for each technology implemented in
* define distinct "layers", each dealing with a specific aspect of the
whole problem (one for sound processing, one for GUIs, one for session
handling, etc.), so that a "DSP coder" can work on just the DSP part
and have all the rest automagically implemented and working (for
example, you write a LADSPA plugin or write an NDF file and you get an
automatically generated GUI without writing one more line of code);
* have "back bridges" when possible, so that an application with
support for one NASPRO-supported technology gets support for all the
other technologies without writing a single line of code.
* build dead-easy-to-use tools on top of that to make it easy for
non-demanding applications to support DSP stuff.
* build tools on top of that to do data routing among each "sound
processing component" (in other words, chain-like and/or graph-like
processing) - plus, since we have those back bridges, you could also
use, for example, CLAM networks (as soon as CLAM is supported) as
an alternative to these tools and get the same degree of technology
support (the same goes for GStreamer, Pd, etc.).
* be cross-platform (apart from Mac/Windows, alternative
desktop-oriented OSes like Haiku or Syllable are getting stronger
these days and could become viable for sound processing in the near
or distant future).
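To make the "back bridges" and chain-like routing ideas above a bit
more concrete, here is a minimal sketch in plain Python. Every class
and function name in it is hypothetical and heavily simplified - this
is not actual NASPRO, LADSPA, or LV2 API - but it shows the shape of
the idea: plugins written against two different calling conventions
are wrapped behind one common interface, and a host can then chain
them without knowing which technology each one came from.

```python
class GainPlugin:
    """Stands in for a plugin written against one API (LADSPA-style)."""
    def run(self, samples, gain):
        return [s * gain for s in samples]

class ClipPlugin:
    """Stands in for a plugin from another API with a different call style."""
    def process_block(self, block):
        return [max(-1.0, min(1.0, s)) for s in block]

class Processor:
    """The common interface every bridge adapts its plugin to."""
    def process(self, samples):
        raise NotImplementedError

class GainBridge(Processor):
    """Bridge: adapts GainPlugin's run() to the common interface."""
    def __init__(self, plugin, gain):
        self.plugin, self.gain = plugin, gain
    def process(self, samples):
        return self.plugin.run(samples, self.gain)

class ClipBridge(Processor):
    """Bridge: adapts ClipPlugin's process_block() to the same interface."""
    def __init__(self, plugin):
        self.plugin = plugin
    def process(self, samples):
        return self.plugin.process_block(samples)

def run_chain(chain, samples):
    """Chain-like routing: feed each processor's output into the next."""
    for proc in chain:
        samples = proc.process(samples)
    return samples

# Gain of 3, then hard clipping: 0.5 * 3.0 = 1.5 gets clipped to 1.0.
chain = [GainBridge(GainPlugin(), gain=3.0), ClipBridge(ClipPlugin())]
result = run_chain(chain, [0.1, 0.2, 0.5])
```

The point of the bridges is that `run_chain` (the "tools on top")
only ever sees `Processor`; adding support for a new plugin
technology means writing one new bridge, not touching the host.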
The result will hopefully also make it easier to develop new
technologies without breaking interoperability.
Now, since I am the only one working on this, it will probably take
an insane amount of time, and getting each of these abstraction
layers right is astonishingly difficult (does anyone remember
GMPI?) - at the moment I'm fighting with core-level stuff
and I will be doing that at least for another year or two.
If you can wait, I will probably give a talk about NASPRO by the end
of October and will put together some slides describing its inner
workings (a lot of people complained that I wasn't clear enough
on the website)...
Maybe this helps :-\