On Wednesday, 27 April 2005 06:06, Dave Robillard wrote:
> On Tue, 2005-04-26 at 18:41 +0200, Ralf Beck wrote:
> FWIW, I can already do this with Om (www.nongnu.org/om-synth), but that
> only covers synthesis/processing (modular style), not sequencing.
Guess it's time to have a look at Om then :-)
> I don't think sequencing and audio processing should be all mish-mashed
> together, unless you have a good reason to do so...
I just thought about storing the MIDI data together with the audio on the
nodes to keep traffic on the network low. The host would then only send basic
start/stop/continue/locate commands; synchronisation is sample-locked through
the audio anyway.
> I'm very much into the idea of networked audio processing, perhaps we
> can leverage each other's work?
I would be glad if the work could be distributed across several shoulders.
Maybe there is less work to be done than I thought :-)
> That is, if I can concretely figure out what you actually want to do -
> do you have a link? Anything other than an abstract list of goals?
> That's not much of an 'announcement' :)
Would you have read the message if it were not an announcement? ;-)
If you look at the current sequencer market, you see that all products are
designed to run everything on one computer, i.e. sequencing, running
softsynths, effects and recording the audio.
The problem with this is that sooner or later you run into performance
problems, be it a disk that is too slow, not enough processing power, or a
lot of noise from a higher-clocked machine...
Solutions to this problem like FX-Teleport do not work well: due to
networking overhead they pretty soon eat up more CPU power on the host than
they gain by offloading tasks to other computers. I have to admit I don't
know how well the node concept of Logic 7 works.
Now what I want is a noiseless PC (the host) that runs the GUIs, controls
the processing of the CPU-intensive tasks on the nodes, and routes the
audio/MIDI data from external interfaces to/from the nodes.
Two major additions are the possibility to have Windows XP nodes running
VST/VSTi plugins, especially the effects of my UAD-1 card, and to use the
whole cluster as one big mixer, i.e. realtime audio in -> node processing ->
audio out at very low latencies.
The reason why I would prefer the MIDI data (the recorded audio data is
there anyway) to be stored on the nodes is that sending realtime audio
streams together with extensive automation data over the network would flood
the net while the sequencer is running. On the other hand, sending a request
to the nodes for the MIDI data to be edited (and actually displayed on the
screen) while the sequencer is stopped is extremely cheap.
The network flooding is especially a problem with Windows nodes. On Linux it
is easy to bypass the normal network layer, though doing so is hardware
dependent.
Hopefully this makes things a bit clearer.
Ralf