Guys,
This answer appeared just after I decided to ask the very same question.
Is it true that there is no _common_ "instrument" or "synth" plugin
API on Linux?
Is it true that there is no equivalent mechanism for out-of-process instruments?
I see that there are some kinds of possible plugin APIs:
-- MusE's LADSPA extensions
-- mustajuuri plugin
-- maybe there's some more (MAIA? OX?)
-- I remember Juan Linietsky working on binding a sequencer to softsynths,
but I don't remember hearing anything about the results
So can anyone _please_ answer:
What is the right way to use multiple (e.g. thirty)
softsynths together simultaneously with one host?
I mean working completely inside my computer,
with just one (or even no) MIDI keyboard as input,
so all the synthesis, mixing, and processing goes on inside,
and one audio channel is sent out to the sound card.
thanks,
nikodimka
=======8<==== Tommi Ilmonen wrote: ===8<=================
Hi.
Sorry to come in very late. The Mustajuuri plugin interface includes all
the bits you need. In fact I already have two synthesizer engines under
the hood.
With Mustajuuri you can write the synth as a plugin and the host is only
responsible for delivering the control messages to it.
Alternatively you could write a new voice type for the Mustajuuri synth,
which can lead to smaller overhead ... or not, depending on what you are
after.
http://www.tml.hut.fi/~tilmonen/mustajuuri/
On 3 Jul 2002, nick wrote:
Hi all
I've been scratching my head for a while now, planning out how I'm going
to write amSynthe (aka amSynth2).
Ideally I don't want to be touching low-level stuff again, and it makes
sense to write it as a plugin for some host. Obviously in the Win/Mac
world there's VST/DXi/whatever - but that doesn't really concern me as I
don't use 'em ;) I just want to make my music on my OS of choice..
Now somebody please put me straight here - as far as I can see, there's
LADSPA and JACK (and MusE's own plugins?). Now, I'm under the
impression that these only deal with the audio data - only half of what I
need for a synth. Or can LADSPA deal with MIDI?
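The LADSPA limitation described here shows up in the shape of the interface itself: a plugin's ports only ever carry float buffers (audio streams and scalar control values), so there is nowhere to deliver MIDI events. The sketch below imitates that shape with a trivial gain plugin; it is simplified and self-contained, not the real ladspa.h types:

```c
/* Sketch of a LADSPA-style plugin interface.  The host connects float
 * buffers to ports and calls run() once per block.  Note there is no
 * port type for MIDI/event data - which is why LADSPA alone covers
 * only "half of what you need for a synth". */
typedef struct {
    const float *ctrl_gain;  /* control port: single float value */
    const float *audio_in;   /* audio input port: one block of samples */
    float       *audio_out;  /* audio output port */
} GainPlugin;

/* Process one block of `frames` samples. */
static void run_gain(GainPlugin *p, unsigned long frames)
{
    for (unsigned long i = 0; i < frames; i++)
        p->audio_out[i] = p->audio_in[i] * *p->ctrl_gain;
}
```

Everything the plugin can know about the outside world arrives through those float ports, so note-on/note-off information would have to be smuggled in through some extension (as MusE's LADSPA extensions apparently do).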
So how should I go about it?
Is it acceptable to (for example) read the MIDI events from the ALSA
sequencer in the audio callback? My gut instinct is no, no, no!
Even if that's feasible with the ALSA sequencer, it still has problems -
say the host wanted to "render" the `song' to an audio file - using the
sequencer, surely it would have to be done in real time?
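One common way around both problems (a sketch of the general technique, not amSynth's actual design) is for the host to timestamp events in sample frames and hand them to the synth along with each audio block, instead of the synth pulling live events from the ALSA sequencer inside the callback. The same render loop then works live or faster than real time; the event struct and "synth" below are hypothetical:

```c
#include <stddef.h>

/* Hypothetical event: a MIDI note stamped in sample frames, not
 * wall-clock time.  Because the clock is the sample counter, an
 * offline render can run as fast as the CPU allows. */
typedef struct {
    unsigned long frame;  /* sample-frame timestamp */
    int note;             /* MIDI note number */
} MidiEvent;

/* Render `total` frames, applying each event when the sample clock
 * reaches its timestamp.  The "synthesis" here just records the
 * current note per frame; returns how many events were applied. */
static int render(const MidiEvent *ev, size_t n_ev,
                  unsigned long total, int *notes_out)
{
    size_t next = 0;
    int current = -1;  /* -1 = no note yet */
    int applied = 0;

    for (unsigned long f = 0; f < total; f++) {
        while (next < n_ev && ev[next].frame == f) {
            current = ev[next].note;  /* event is sample-accurate */
            applied++;
            next++;
        }
        notes_out[f] = current;  /* stand-in for synthesizing a sample */
    }
    return applied;
}
```

Live playback just means the host translates incoming sequencer events into these timestamped blocks as they arrive, so the audio callback itself never blocks on the sequencer.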
I just want to get on, write amSynthe and then everyone can enjoy it,
but this hurdle is bigger than it seems.
Thanks,
Nick
Tommi Ilmonen Researcher
Linux/IRIX
audio: Mustajuuri
3D audio/animation: DIVA