On Tuesday 10 December 2002 09.38, Sami P Perttu wrote:
> Hi everybody. I've been reading this list for a week. Thought I'd
> "pitch" in here because I'm also writing a softstudio; it's pretty
> far along already and the first public release is scheduled for
> Q1/2003.
Sounds interesting! :-)
> First, I don't understand why you want to design a "synth API". If
> you want to play a note, why not instantiate a DSP network that
> does the job, connect it to the main network (where system audio
> outs reside), run it for a while and then destroy it? That is what
> events are in my system - timed modifications to the DSP network.
99% of the synths people use these days are hardcoded, highly
optimized monoliths that are easy to use and relatively easy to host.
We'd like to support that kind of stuff on Linux as well, preferably
with an API that works equally well for effects, mixers and even
basic modular synthesis.
Besides, real time instantiation is something that most of us want to
avoid at nearly any cost. It is a *very* complex thing to get right
(i.e. RT safe) in any but the simplest designs.
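
To illustrate the usual way around it (just a sketch; the VOICE
struct, voice_alloc() and the pool layout are made up for the
example): you allocate everything up front, so "instantiating" a
voice in the RT thread is nothing but grabbing a free slot - no
malloc(), no locks:

#include <stddef.h>

#define MAX_VOICES 64

typedef struct VOICE {
	int active;	/* 0 => free, 1 => in use */
	double phase;	/* oscillator state */
	double freq;	/* frequency in Hz */
} VOICE;

static VOICE voice_pool[MAX_VOICES];	/* allocated at init time */

/* RT safe "instantiation": bounded time, no memory management. */
static VOICE *voice_alloc(double freq)
{
	int i;
	for (i = 0; i < MAX_VOICES; ++i)
		if (!voice_pool[i].active) {
			voice_pool[i].active = 1;
			voice_pool[i].phase = 0.0;
			voice_pool[i].freq = freq;
			return &voice_pool[i];
		}
	return NULL;	/* pool exhausted; voice stealing goes here */
}

Building and tearing down whole DSP networks per note means doing
real memory management in there instead, which is where the trouble
starts.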
> On the issue of pitch: if I understood why you want a synth API I
> would prefer 1.0/octave because it carries less cultural
> connotations.
Agreed. (Yes! The reasons why I want(ed) something else are purely
technical - but I think I have accepted another, equivalent solution
now.)
> In my system (it doesn't have a name yet but let's call it MONKEY
> so I won't have to use the refrain "my system") you just give the
> frequency in Hz; there is absolutely no concept of pitch.
When working with musical synthesis, I find linear pitch (like
1.0/octave) much easier to deal with.
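
The conversion to Hz is trivial anyway (a sketch; pitch2freq and
ref_freq are names I just made up - ref_freq is simply whatever
pitch 0.0 is tuned to):

#include <math.h>

/* 1.0/octave linear pitch -> frequency in Hz.
 * With ref_freq = 440.0 (A4 at pitch 0.0), pitch 1.0 is A5,
 * pitch -1.0 is A3, and one 12tET semitone is exactly 1.0/12. */
double pitch2freq(double pitch, double ref_freq)
{
	return ref_freq * pow(2.0, pitch);
}

So the engine can happily run on Hz internally while users think in
octaves and semitones.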
> However, if you want, you can define functions like
> C x = exp((x - 9/12) * log(2)) * middleA, where middleA is another
> function that takes no parameters. Then you can give pitch as "C 4"
> (i.e. C in octave 4), for instance. The expression is evaluated and
> when the event (= modification to DSP network) is instantiated it
> becomes an input to it, constant if it is constant, linearly
> interpolated at a specified rate otherwise. I should explain more
> about MONKEY for this to make much sense but maybe later.
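
(If I read that right, the function would look something like this
in C - assuming middleA() returns 440 Hz, which makes octave 0 the
middle octave:

#include <math.h>

double middleA(void) { return 440.0; }

/* "C x": frequency of the note C in octave x, counted from the
 * octave of middle A. C is 9 semitones (9/12 octave) below A, so
 * C(0) = 440 * 2^(-9/12) ~= 261.63 Hz - middle C. */
double C(double x)
{
	return exp((x - 9.0 / 12.0) * log(2.0)) * middleA();
}

Correct me if I got the reference octave wrong.)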
This sounds interesting and very flexible - but what's the cost? How
many voices of "real" sounds can you play at once on your average PC?
(Say, a 2 GHz P4 or something.) Is it possible to start a sound with
sample accurate timing? How many voices would this average PC cope
with starting at the exact same time?
> Anyway, the question I'm most interested in is: why a synth API?
(See above.)
You could think of our API as:
 * A way of running anything from a basic mono->mono
   effect through complex "programs" inside any host
   application that supports the API.
 * An interface that inspires synth and effect
   programmers to write *reusable components*, instead
   of even more of these stand-alone applications that
   cannot cooperate in any useful way.
 * A performance hack that gives users sample accurate
   timing (see the sketch below) and lots of processing
   power on affordable off-the-shelf hardware.
 * An attempt to copy and improve upon designs that
   have proven tremendously successful in the music
   industry.
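
To make the "sample accurate timing" point a bit more concrete,
here's roughly what it looks like from the plugin's side. (A sketch
only - this is *not* the actual API; XEVENT, plugin_process() and
the helpers are made up for illustration.) Events carry timestamps
in frames, and the plugin splits its DSP loop at each event:

/* Events are assumed sorted by timestamp, counted in frames
 * from the start of the current buffer. */
typedef struct XEVENT {
	unsigned when;		/* timestamp in frames */
	int type;		/* note on/off, control change, ... */
	float value;
	struct XEVENT *next;
} XEVENT;

/* Hypothetical plugin internals: */
void run_dsp(void *plugin, float *out, unsigned frames);
void handle_event(void *plugin, const XEVENT *ev);

void plugin_process(void *plugin, float *out, unsigned frames,
		const XEVENT *ev)
{
	unsigned pos = 0;
	while (pos < frames) {
		/* Run the DSP loop up to the next event,
		 * or to the end of the buffer. */
		unsigned end = frames;
		if (ev && ev->when < end)
			end = ev->when;
		if (end > pos)
			run_dsp(plugin, out + pos, end - pos);
		pos = end;
		/* Apply all events that fall on this frame. */
		while (ev && ev->when <= pos) {
			handle_event(plugin, ev);
			ev = ev->next;
		}
	}
}

No timers, no MIDI-style "as soon as you get it" jitter; everything
lands on the exact frame it was scheduled for, regardless of buffer
size.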
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture       |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---