[renamed thread to reflect subject]
On Sun, 2009-11-08 at 11:01 +0100, Adrian Knoth wrote:
> In the meantime, I had a first glance at the new Fermi chips. They
> support independent kernels, so this would leverage the whole design
> principle of audio@CUDA.
On Fermi, independent kernels will run at cluster level. That is to say,
each kernel/application will - for the sake of efficiency - still have
to fill four multiprocessors, each running no less than six or eight
warps of 32-element-wide vectors. That is roughly equivalent to a
current 9500GT or GT220.
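A back-of-the-envelope sketch of what "filling a cluster" means in threads, using the figures quoted above (four multiprocessors, six to eight resident warps each, 32 threads per warp - the per-cluster numbers are assumptions taken from this mail, not vendor specifications):

```python
WARP_SIZE = 32          # threads per warp (CUDA architectural constant)
SMS_PER_CLUSTER = 4     # multiprocessors per cluster (assumed from the text)
MIN_WARPS_PER_SM = 6    # lower bound quoted above
MAX_WARPS_PER_SM = 8    # upper bound quoted above

def threads_to_fill_cluster(warps_per_sm):
    """Threads an audio kernel must keep in flight to fill one cluster."""
    return SMS_PER_CLUSTER * warps_per_sm * WARP_SIZE

print(threads_to_fill_cluster(MIN_WARPS_PER_SM))  # 768
print(threads_to_fill_cluster(MAX_WARPS_PER_SM))  # 1024
```

So an audio kernel would need on the order of 768-1024 samples (or channels) of independent work per cluster just to keep the hardware busy - which is why the challenge looks much like today's.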
It is likely that in a production environment, audio will be assigned
only one or two such clusters; one might go to desktop video and the
rest to 3D rendering and psychedelica.
So the audio challenge will not be much different from what you have
today. Integration with other applications drawing on expertise from
fields other than audio - say, an audio/video mix with pigs flying
through the air - will become a lot more practical, since a single PCIe
slot can then be dedicated to more than one purpose. In principle this
can already be achieved today using multiple boards, but of course not
as economically as it will be in the future.
OTOH, everything will be better once that mythical tomorrow finally
arrives ..
> The cards are expected for Q1/2010, rumours are that it will take a
> little longer. However, the cards will have ECC almost everywhere
> (RAM, caches, you name it), so the Fermi cards will be better suited
> for computation ...
You have ECC today? I haven't, but does that mean my Linux box is no
good for computation? [No, of course not!]
> ... Fermi is also aiming for on-chip C++ exceptions, higher-level
> language support (Python), 64-bit pointers and so on. Much closer to
> CPU programming.
The C++ developments look useful, but ... Python close to CPU
programming? To me it looks like you will still have to write the GPU
code in plain CUDA:
http://documen.tician.de/pycuda/
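To illustrate that point with a hedged sketch (not taken from the PyCUDA docs): in the PyCUDA pattern, Python drives the host side, but the device code is still plain CUDA C handed over as a source string. The kernel below is a made-up per-sample gain example; a real program would feed the string to pycuda.compiler.SourceModule and needs an NVIDIA GPU to run.

```python
# The GPU part is still CUDA C, embedded as a Python string:
kernel_source = """
__global__ void scale(float *buf, float gain, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        buf[i] *= gain;   /* per-sample gain, one thread per sample */
}
"""

# The Python layer only does metaprogramming around that string;
# the actual kernel is not written in Python.
print("__global__" in kernel_source)  # True
```

So "Python support" here means a more convenient host-side wrapper, not writing kernels in Python.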
/jma