[LAD] FLTK vs GTKmm

Steve Harris steve at plugin.org.uk
Tue Aug 11 13:10:23 UTC 2009


On 11 Aug 2009, at 12:54, Jens M Andreasen wrote:

> Continuing this increasingly inaccurately christened thread ..
>
> On Tue, 2009-08-11 at 11:26 +0100, Steve Harris wrote:
>
>> It's not ideal, but assembling all the jack buffers into one big one
>> is not going to be that much load on the CPU.
>>
>
> OK .. Adrian Knoth showed some interest and says he knows his way
> around in jackd, as does a colleague involved with CUDA. If, after
> evaluation, the idea does not appear to be worth the effort, then
> we'll just drop it then.
>
>> Another tack would be to provide a library that could execute LV2
>> plugins (with CUDA versions of the code in the bundle). It would
>> presumably need a daemon or such, but would be more generally useful
>> than a hacked jackd.
>>
>
> That would have to be a collection of generally useful plugins, at
> least 32 channels wide, to be worth it. A mega-plugin, so to say.
> This ain't no lawn-mower you can turn around on a platter. Doing
> little things here and there /only/ would be very difficult in
> general.

Ah, I see. I'm not really that familiar with how it works.

Is it not possible to stitch multiple plugins into a single processing
unit, horizontally? i.e. can you compose the CUDA objects?

Going really high-tech, with access to the source you could compile the
mega-plugin on demand. That might be a bit adventurous, but it would be
a clear win over the kind of thing the closed-source people can do.

> The thing to notice is that, although several kernels can be launched
> one after the other, at any given time there is only one running,
> which will have to be aware of which codepath to take depending on
> what processor it is running on. Otherwise you'll end up with 640
> identical channel-strips rather than something like a synth
> collection, 64 fully equipped channel-strips plus a few reverb units,
> as well as an autocomposer based on neural networks (well OK, that
> last one might be a bit over the top... :)
>
> As long as you only have a single 8-core multiprocessor, the GPU can
> be utilized fully by a single somewhat demanding project. It is when
> there are dozens of cores to feed that the need for cooperation
> arises, and this is, as far as I can see, where things are heading,
> also over in Intel/Larrabee land.
>
>> The whole point of
>> jack is that it lets you combine bits of software in different ways,
>> this would take that away to some extent.
>>
>
> To some extent, yes. But provided that the host isn't spending much
> time on moving buffers around, you'll still have all of your (much
> more flexible) CPU left to use for additional processing. Having this
> much processing power available in a way also eliminates /the need/
> for carefully selecting which channels should be processed by which
> plugin. Just do everything for all, possibly with all settings turned
> down to zero or bypassed on some, even most, channels.
>
> Some effects will still be optional - like say time stretching, which
> might not be a natural choice for the basic bread-and-butter setup, no
> matter how much icing you put on it. Or maybe it is? Some people will
> most likely disagree with me here ..

Oh, I see, you plan on injecting this processing into the inputs/
outputs in front of the ALSA device? Kind of makes sense. At the very
least you could use a very sophisticated dithering algorithm for
little cost :)

> To put things in some economical perspective, I am talking about
> upgrading this tiny desktop-machine to having bandwidth and processing
> power twice that of a current top-of-the-line Intel Nehalem for less
> than $200, maybe around Christmas. Inexpensive (!Apple) laptops with
> GPU's like that are hitting the channel as we speak.

Sure, but it doesn't sound like it's as useful as a GP CPU.

- Steve



More information about the Linux-audio-dev mailing list