On Wed, Nov 2, 2011 at 10:25 AM, Paul Davis <paul@linuxaudiosystems.com> wrote:
On Wed, Nov 2, 2011 at 1:09 PM, Iain Duncan <iainduncanlists@gmail.com> wrote:
> - realtime low-latency engine in C++ using per-sample callback audio
> (either RTAudio, Jack, or PortAudio)

this conflicts with this:

> Basically I want to be able to do the GUI and data-transforming code in
> Python whenever possible, and allow plugins that work on the data to be
> written in Python

the GUI is one thing; transforming data *in general* with python isn't
going to fit into a low latency engine.

now, of course, if you mean "performing edits to high level data
structures", which you might, then there isn't really a problem
(though you'll likely want to get into RCU to manage things). but if
you are talking about DSP processing with python plugins, i *doubt*
that it will work reliably.

Thanks Paul, I think I was unclear. If I understand you correctly, then I do mean transforming high-level data. I'm building a CV-style step sequencer for live looping, so I'm talking about using Python to apply transformative routines to material that is in the sequences but not yet in the audio output chain: things like "when I hit this MIDI key, run this routine over the sequence data." These Python transformations are meant to run at lower priority, i.e. if the engine needs the processor to spit out the next sample while a transformation is part way through, that's fine, the transformation gets interrupted. I wasn't intending to use Python to apply DSP to signals going out in real time; that would likely be done with either the STK or embedded Csound instances.
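
To make that concrete, the pattern I'm picturing on the C++ side is roughly the sketch below: the low-priority thread (the one calling into Python) copies the current sequence, transforms the copy, and publishes it in a single swap, while the audio callback only ever reads whichever snapshot is current. All the names here (Sequence, apply_transform, process_block) are placeholders of mine, not anything from RTAudio/Jack/PortAudio, and the atomic shared_ptr operations aren't guaranteed lock-free, so this is only meant to illustrate the copy-then-swap idea, not to be the real thing:

#include <atomic>
#include <functional>
#include <memory>
#include <vector>

// Placeholder sequence data: the step events the engine plays back.
struct Sequence {
    std::vector<int> steps;
};

// Shared snapshot. The low-priority (Python-calling) thread replaces it;
// the audio callback only ever reads it.
static std::shared_ptr<const Sequence> g_current_seq = std::make_shared<Sequence>();

// Low-priority thread: copy the current sequence, run a transformation on
// the copy (this is where an embedded Python routine would be invoked),
// then publish the result in one atomic swap. The old snapshot stays valid
// for any reader still holding it.
void apply_transform(const std::function<void(Sequence&)>& transform)
{
    std::shared_ptr<const Sequence> snapshot = std::atomic_load(&g_current_seq);
    auto modified = std::make_shared<Sequence>(*snapshot);   // work on a copy
    transform(*modified);
    std::atomic_store(&g_current_seq,
                      std::shared_ptr<const Sequence>(modified));
}

// Audio callback: grab whichever snapshot is current and render from it.
// Note: atomic_load on a shared_ptr is not guaranteed lock-free, so a real
// engine would want a properly lock-free publication scheme here.
void process_block(float* out, int nframes)
{
    std::shared_ptr<const Sequence> seq = std::atomic_load(&g_current_seq);
    for (int i = 0; i < nframes; ++i)
        out[i] = 0.0f;   // ...render audio driven by seq->steps...
}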

Does that sound more feasible? BTW, excuse my ignorance, but what is RCU? 

I found some blog posts on Ross Bencina's site about making sure communication between the high-priority engine and lower-priority processes works correctly, but I'm hoping to find more concrete examples of this and to figure out how to do it between Python and C++.
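
As far as I can tell, the mechanism those posts describe is a single-producer/single-consumer lock-free ring buffer; I believe JACK even ships one (jack_ringbuffer). Below is a toy version of the idea, with a made-up message struct and class names (EngineMessage, SpscQueue), just so I can ask whether this is the right shape: the Python/GUI thread would push onto it from a normal thread, and the audio callback drains it without ever touching Python:

#include <atomic>
#include <cstddef>

// Toy message the low-priority (Python/GUI) thread sends to the engine.
// The real message set would be whatever the sequencer needs.
struct EngineMessage {
    int type;          // e.g. "sequence N was replaced"
    int sequence_id;
};

// Single-producer / single-consumer lock-free ring buffer: one writer
// thread, one reader thread, capacity must be a power of two.
template <typename T, size_t N>
class SpscQueue {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
public:
    // Called by the producer (non-realtime thread). Returns false if full.
    bool push(const T& item) {
        const size_t w = write_.load(std::memory_order_relaxed);
        const size_t r = read_.load(std::memory_order_acquire);
        if (w - r == N)
            return false;               // full
        buf_[w & (N - 1)] = item;
        write_.store(w + 1, std::memory_order_release);
        return true;
    }

    // Called by the consumer (audio callback). Returns false if empty.
    // No locks, no allocation, so it should be safe in the callback.
    bool pop(T& out) {
        const size_t r = read_.load(std::memory_order_relaxed);
        const size_t w = write_.load(std::memory_order_acquire);
        if (r == w)
            return false;               // empty
        out = buf_[r & (N - 1)];
        read_.store(r + 1, std::memory_order_release);
        return true;
    }

private:
    T buf_[N];
    std::atomic<size_t> write_{0};
    std::atomic<size_t> read_{0};
};

// In the audio callback the engine would drain pending messages first:
//   EngineMessage msg;
//   while (queue.pop(msg)) { /* apply msg to playback state */ }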

thanks for your help,
Iain