Well, it can be fun to poke around with a little bit of C code, but in
the case of GS this probably won't get you very far unless you really
get serious and write your own realtime synthesis engine. Which isn't easy.
But I *would* rather write my own realtime synthesis engine. If anyone
who has done so could tell me the basic requirements... I'm not looking
for an easy solution. I just want to look at the architecture of
synthesis systems from the inside. For instance, ChucK. Instead of
learning how to "use" ChucK, I'm more interested in writing my own
language, as it were. The advantages are many.
So why not use SuperCollider? SC is well-suited for this kind of
application because of the ease with which you can allocate massive
numbers of voices dynamically. All the realtime audio and control
machinery you need is already there, and you can still program
everything the way you want it, rather than using some ready-made GS
software.
Ah, well... I have tried SC, but somehow the aesthetics did not inspire
me at the time. I increasingly prefer ChucK because its syntax is
similar to C/C++. And I would never even dream of using ready-made GS
software. That kills the whole point, doesn't it? Oh, and six months
ago I wanted to do all this in Python, which was then the only language
I knew. Just my luck that sound support in Python is/was in a
deplorable state.
Anyway, languages are not the issue anymore. I'm teaching a C class,
and I want the students to have fun making music, while I learn some
Linux audio mantras on the sly.
------- -.-
1/f ))) --.
------- ...
http://www.algomantra.com