On 02/01/2013 11:32 PM, Charles Z Henry wrote:

On Fri, Feb 1, 2013 at 4:12 PM, Fons Adriaensen <fons@linuxaudio.org> wrote:
On Fri, Feb 01, 2013 at 08:07:46PM +0000, Kelly Hirai wrote:

> FPGAs seem a natural way to express data-flow languages like pd, chuck,
> csound and ecasound in silicon. Regarding the stretch, the idea that one
> could code in C or C++ might streamline refactoring code, but I'm still
> trying to wrap my head around designing graph topology for code that is
> tied to the program counter register. Nor do I see the right peripherals
> for sound. Perhaps the G.711 codec support is a software implementation
> and could be rewritten. Need stats on the 8 BNC to DVI adapter audio port.

There are many ways to use an FPGA. I've got a friend who's a real
wizard in this game, and his approach is quite unusual but very
effective.
In most cases, after having analysed the problem at hand, he'll design
one or more ad-hoc processors in vhdl. They are always very minimal,
maybe having 5 to 20 carefully chosen instructions, usually all of them
conditional (ARM style), and encoded if necessary in very wide instruction
words so there's no microcode, and these processors are incredibly fast.
It takes him a few hours to define such a processor, and a few hours more
to create an assembler for it in Python. Then he starts coding the
required algorithms using these processors.
If necessary, he'll revise the processor design until it's perfectly
matched to the problem. In all cases I've watched, this results in
something that most other designers couldn't even dream of in terms
of speed and efficiency - not only of the result, but also of the
design process and hence the economics.
All of this is of course very 'politically incorrect' - he just throws
away the whole canon of 'high level tools' or rather replaces it with
his own vision of it - with results that I haven't seen matched ever.
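
As a minimal sketch of what such a Python assembler might look like, assume a
hypothetical machine with a 4-bit condition field, a 4-bit opcode and an 8-bit
operand; all mnemonics, field widths and encodings below are invented for
illustration, not taken from any real design:

# Toy assembler for a hypothetical minimal FPGA soft processor.
# Everything here (mnemonics, field widths, encoding) is invented for
# illustration; a real one would match the VHDL instruction decoder.

OPCODES = {"nop": 0x0, "ld": 0x1, "st": 0x2, "add": 0x3,
           "mul": 0x4, "mac": 0x5, "jmp": 0x6, "out": 0x7}

# ARM-style condition field: every instruction can be made conditional.
CONDS = {"al": 0x0, "eq": 0x1, "ne": 0x2, "lt": 0x3, "ge": 0x4}

def assemble_line(line):
    """Encode one 'cond op arg' line into a 16-bit word:
    [15:12] condition, [11:8] opcode, [7:0] immediate/register field."""
    cond, op, arg = line.split()
    return (CONDS[cond] << 12) | (OPCODES[op] << 8) | (int(arg, 0) & 0xFF)

def assemble(source):
    """Return a list of instruction words, skipping blanks and ';' comments."""
    words = []
    for raw in source.splitlines():
        line = raw.split(";")[0].strip()
        if line:
            words.append(assemble_line(line))
    return words

if __name__ == "__main__":
    program = """
    al  ld  0x10      ; load coefficient
    al  mac 0x20      ; multiply-accumulate with sample
    ne  jmp 0x02      ; conditional branch, ARM style
    al  out 0x00      ; write result
    """
    for addr, word in enumerate(assemble(program)):
        print(f"{addr:02x}: {word:04x}")  # memory init data for the FPGA ROM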

Gotta say, you guys impress me.  I think embedded programming is pretty tough.  I bombed my FPGA class last spring--I gave up too soon for that class, but haven't given up altogether.  There's a lot of value for rt-audio there.

One topic of research where I'm at (ITTC/KU) concerns compilation from Haskell (a functional language) to Verilog or VHDL for synthesis on FPGAs, not going through the usual chain of defining a processor but actually building the specific functions in fabric (you get better utilization that way, as I understood it). Maybe someday Faust (the functional audio language) will have a similar compiler target too.

There is an older publication on Faust and FPGA; I haven't read it though:
http://faust.grame.fr/index.php/documentation/papers/31-dafx2006-art

I have implemented a runtime-configurable audio SoC on an FPGA as a semester project in my graduate program. The data processing runs completely without a processor, but all the audio filters were hand-crafted in Verilog. That's certainly not the way to go for more complex filters, but it makes a nice little guitar FX box to play around with :-D
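
To give an idea of the kind of filter I mean without pasting Verilog, here is a rough bit-accurate model in Python of a fixed-point biquad, the sort of reference one would write before hand-translating it to RTL. The Biquad class, the Q2.14/16-bit word formats and the coefficients are arbitrary choices for this sketch, not what I actually used:

# Rough fixed-point biquad model (direct form I), the kind of filter one
# might hand-translate into Verilog. Q2.14 coefficients and 16-bit signed
# samples are arbitrary choices for this sketch.

FRAC = 14  # coefficient fractional bits

def to_fix(x, frac=FRAC):
    """Quantize a float coefficient to a signed fixed-point integer."""
    return int(round(x * (1 << frac)))

class Biquad:
    def __init__(self, b, a):
        # b = (b0, b1, b2), a = (a1, a2); a0 is assumed to be 1.0
        self.b = [to_fix(c) for c in b]
        self.a = [to_fix(c) for c in a]
        self.x1 = self.x2 = self.y1 = self.y2 = 0  # delay registers

    def step(self, x):
        """Process one 16-bit sample, mimicking the datapath width a
        Verilog implementation would use."""
        acc = (self.b[0] * x + self.b[1] * self.x1 + self.b[2] * self.x2
               - self.a[0] * self.y1 - self.a[1] * self.y2)
        y = acc >> FRAC                      # drop the fractional bits
        y = max(-32768, min(32767, y))       # saturate to 16 bits
        self.x2, self.x1 = self.x1, x
        self.y2, self.y1 = self.y1, y
        return y

# Example: a gentle low-pass; coefficients picked arbitrarily.
bq = Biquad(b=(0.2, 0.4, 0.2), a=(-0.3, 0.1))
print([bq.step(s) for s in (0, 1000, 2000, 1000, 0, -1000)])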

As the Zynq is an FPGA/ARM combo, maybe it's possible to write a Jack client that runs under Linux on the ARM cores and offloads some of the more complex FIR filters to the FPGA fabric. This kind of filter acceleration is an active research topic that a few colleagues of mine are working on.
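
As a rough sketch of that split: the Jack client's per-period processing would hand each buffer either to a software FIR on the ARM cores or to the fabric, with the FPGA path reduced to a purely hypothetical /dev/fpga_fir stub below. The SoftwareFir/FpgaFir classes, the device name, the sample format and the whole driver interface are assumptions for illustration, not a real API:

# Sketch of the CPU/FPGA split for a block-based FIR filter.
# The /dev/fpga_fir interface below is purely hypothetical; a real design
# would go through whatever driver the fabric-side FIR core exposes.

import numpy as np

class SoftwareFir:
    """Reference FIR on the ARM cores, keeping filter state across periods."""
    def __init__(self, taps):
        self.taps = np.asarray(taps, dtype=np.float32)
        self.history = np.zeros(len(self.taps) - 1, dtype=np.float32)

    def process(self, block):
        padded = np.concatenate([self.history, block])
        self.history = padded[-(len(self.taps) - 1):]
        # 'valid' convolution yields exactly len(block) output samples
        return np.convolve(padded, self.taps, mode="valid")

class FpgaFir:
    """Hypothetical offload path: write one period of int16 samples to the
    fabric-side FIR core and read the filtered period back."""
    def __init__(self, dev="/dev/fpga_fir"):
        self.dev = open(dev, "r+b", buffering=0)

    def process(self, block):
        self.dev.write(block.astype(np.int16).tobytes())
        raw = self.dev.read(len(block) * 2)
        return np.frombuffer(raw, dtype=np.int16).astype(np.float32)

def process_period(in_buf, fir):
    """What the Jack client's process callback would do each period."""
    return fir.process(in_buf)

if __name__ == "__main__":
    fir = SoftwareFir(taps=np.ones(64) / 64)   # 64-tap moving average
    # fir = FpgaFir()  # would use the fabric instead, same call shape
    period = np.random.randn(256).astype(np.float32)
    out = process_period(period, fir)
    print(out.shape)  # one output sample per input sample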

Jeremia