> -----Original Message-----
> From: nick [mailto:nixx@nixx.org.uk]
>
> > I think gcc is in general not the best choice when you want to have
> > highly optimized code. I had no problems with C++ so far. You should
> > avoid using pointers whenever possible and use references instead.
> > RTSynth is written in C++ and it performs quite well, I think...
> >
> > - Stefan
>
> erm, sorry, but why not use pointers?
It's dangerous... null pointers, memory leaks etc. The tendency is not to
use pointers unless absolutely necessary.
As for the context above, I don't think it has anything to do with
performance (it should be the same either way).
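For example (a toy sketch of mine, nothing to do with any of the synths
mentioned above):

    #include <cstddef>
    #include <vector>

    // Reference parameter: cannot be null, nothing to check.
    void scaleBuffer(std::vector<float>& buf, float gain)
    {
        for (std::size_t i = 0; i < buf.size(); ++i)
            buf[i] *= gain;
    }

    // Pointer parameter: any caller might pass 0, so you must check -
    // an extra branch, and one that is easy to forget.
    void scaleBufferPtr(std::vector<float>* buf, float gain)
    {
        if (!buf)
            return;
        for (std::size_t i = 0; i < buf->size(); ++i)
            (*buf)[i] *= gain;
    }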
erik
Oh yeah, I forgot!
And there's another question I _really_ want to know the answer to:
Until we have such an instrument plugin API, what is the
right way to implement the system
(30 softsynths working together)
with what we have?
I mean a bunch of software synths, /dev/midi -> /dev/dsp.
Can I use these together right now?
Is there a right way to control them all via
a single sequencer and to get their output
into one place?
nikodimka
--- nick wrote:
> Hi
>
> IMO running each synth in its own thread with many synths going is
> definitely _not_ the way forward. The host should definitely be the only
> process, much like how VST, DXi, Pro Tools et al. work.
>
> No, there is no real "instrument" or "synth" plugin API. But since my
> original post I have been brewing something up. It's quite VST-like in
> some ways, but I've been wanting to make it more elegant before
> announcing it. It does, however, work, and is totally C++ based ATM. You
> just inherit the "Instrument" class and voila. (OK, so it got renamed
> along the way)
>
> Although in light of Tommi's post (Mustajuuri) I have to reconsider
> working on my API. My only problem with Mustajuuri is its dependence on
> Qt (if I'm not mistaken), sorry.
>
> If people would like to see my work-in-progress, I could definitely use
> some feedback ;-)
>
> This discussion is open!
>
> -Nick
>
> On Thu, 2002-10-17 at 20:53, nikodimka wrote:
> >
> > Guys,
> >
> > This answer appeared just after I decided to ask the very same question.
> >
> > Is it true that there is no _common_ "instrument" or "synth" plugin API on Linux?
> >
> > Is it true that there is no equivalent mechanism for out-of-process instruments?
> >
> > I see that there are some possible plugin APIs:
> > -- MusE's LADSPA extensions
> > -- the Mustajuuri plugin API
> > -- maybe some more (MAIA? OX?)
> > -- I remember Juan Linietsky working on binding a sequencer to softsynths,
> >    but I don't remember hearing anything about the results
> >
> > So can anyone _please_ answer:
> >
> > What is the right way to use multiple (e.g. thirty)
> > softsynths simultaneously with one host?
> > I mean working completely inside my computer,
> > with just one (or even no) MIDI keyboard as input,
> > so all the synthesis, mixing and processing goes on inside,
> > and one audio channel is sent out to a sound card.
> >
> >
> > thanks,
> > nikodimka
> >
> >
> > =======8<==== Tommi Ilmonen wrote: ===8<=================
> >
> > Hi.
> >
> > Sorry to come in very late. The Mustajuuri plugin interface includes all
> > the bits you need. In fact I already have two synthesizer engines under
> > the hood.
> >
> > With Mustajuuri you can write the synth as a plugin and the host is only
> > responsible for delivering the control messages to it.
> >
> > Alternatively you could write a new voice type for the Mustajuuri synth,
> > which can lead to smaller overhead ... or not, depending on what you are
> > after.
> >
> > http://www.tml.hut.fi/~tilmonen/mustajuuri/
> >
> > On 3 Jul 2002, nick wrote:
> >
> > > Hi all
> > >
> > > I've been scratching my head for a while now, planning out how I'm
> > > going to write amSynthe (aka amSynth2).
> > >
> > > Ideally I don't want to be touching low-level stuff again, and it makes
> > > sense to write it as a plugin for some host. Obviously in the Win/Mac
> > > world there's VST/DXi/whatever - but that doesn't really concern me as I
> > > don't use 'em ;) I just want to make my music on my OS of choice..
> > >
> > > Now somebody please put me straight here - as far as I can see, there's
> > > LADSPA and JACK (and MusE's own plugins?). Now, I'm under the
> > > impression that these only deal with the audio data - only half of what
> > > I need for a synth. Or can LADSPA deal with MIDI?
> > >
> > > So how should I go about it?
> > > Is it acceptable to (for example) read the MIDI events from the ALSA
> > > sequencer in the audio callback? My gut instinct is no, no, no!
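> > > (To make it concrete: I mean something like this sketch - names from
> > > the ALSA sequencer API, no error handling, the handle opened elsewhere
> > > with SND_SEQ_NONBLOCK:)
> > >
> > >     #include <alsa/asoundlib.h>
> > >
> > >     extern snd_seq_t* seq;   // non-blocking sequencer handle
> > >
> > >     void audioCallback(float* out, int frames)
> > >     {
> > >         // Drain any pending MIDI events without blocking.
> > >         snd_seq_event_t* ev;
> > >         while (snd_seq_event_input(seq, &ev) >= 0) {
> > >             if (ev->type == SND_SEQ_EVENT_NOTEON)
> > >                 ;   // trigger a voice for ev->data.note.note
> > >         }
> > >         // ...then render `frames' samples into `out'...
> > >     }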
> > >
> > > Even if that's feasible with the ALSA sequencer, it still has problems -
> > > say the host wanted to "render" the `song' to an audio file - using the
> > > sequencer, surely it would have to be done in real time?
> > >
> > > I just want to get on, write amSynthe and then everyone can enjoy it,
> > > but this hurdle is bigger than it seems.
> > >
> > > Thanks,
> > > Nick
> > >
> > >
> >
> > Tommi Ilmonen Researcher
> > >=> http://www.hut.fi/u/tilmonen/
> > Linux/IRIX audio: Mustajuuri
> > >=> http://www.tml.hut.fi/~tilmonen/mustajuuri/
> > 3D audio/animation: DIVA
> > >=> http://www.tml.hut.fi/Research/DIVA/
> >
>From: Steve Harris <S.W.Harris(a)ecs.soton.ac.uk>
>
>Hmmm... My experiments with C++, DSP code and gcc (recent 2.96) did not
>turn out very well. For some reason the optimiser totally chokes on C++
>code. I only tried one routine, and I'm no C++ expert, so it's possible I
>screwed something up, but it did not look encouraging. I will revisit this
>and also try gcc 3, which has much better C++ support IIRC.
>
>- Steve
I think gcc is in general not the best choice when you want to have highly
optimized code. I had no problems with C++ so far. You should avoid using
pointers whenever possible and use references instead. RTSynth is written
in C++ and it performs quite well, I think...
- Stefan
OK, here I go ranting about the same thing again... Can't help it :-),
tired of fighting the issue. :-(
Here's a simple proposal that I have been thinking of (even though my
computing skills are not so good when it comes to system stuff), and
please tell me whether this is a good idea:
There should be just a simple sound daemon running 24/7, constantly
reading from the /dev/dsp inputs and writing into the outputs, with a
small circular buffer that keeps recycling itself (e.g. 64 bytes, to
allow for low-latency work if needed). Then, when an app that does not
care what is behind the /dev/dsp resource addresses /dev/dsp, it gets
rerouted to the appropriate buffers provided by the sound daemon. This
way, an unlimited number of apps could read from and write into the same
buffer (writing being a bit trickier, obviously), with everything being
down-mixed in software. If the app works with larger buffers, it simply
reads from the ring buffer for longer, and likewise writes into it as
needed (the daemon would adjust to the app's needs by reading its OSS or
ALSA request for the audio buffer size).
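In rough C++ terms I picture the daemon's mixing step something like this
(a sketch only - sizes, sample format and names are all made up):

    #include <cstddef>
    #include <vector>

    const int RING = 64;                  // the tiny circular buffer above

    struct Client { short buf[RING]; };   // one slot per attached app

    // Down-mix every client's buffer into what goes to the real /dev/dsp.
    void mixOnce(const std::vector<Client>& clients, short* out)
    {
        for (int i = 0; i < RING; ++i) {
            long acc = 0;
            for (std::size_t c = 0; c < clients.size(); ++c)
                acc += clients[c].buf[i];      // software down-mix
            if (acc >  32767) acc =  32767;    // clip to the 16-bit range
            if (acc < -32768) acc = -32768;
            out[i] = (short)acc;
        }
    }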
Now, someone please tell me why this is not doable. A sound daemon must,
at least in my mind, be compatible with all software, and the only way it
can be that is by making itself transparent. Therefore there would be no
need for JACK-ifying or artsd-ing an app. It would simply work (a concept
that we definitely need more of in the Linux world).
I am sure that with the above description I have covered, in a nutshell,
both JACK and artsd to a certain extent, but the fact remains that both
solutions require applications to be aware of them if any serious work is
to be done. As such, there is only a VERY limited pool of applications
that can be used in combination with either of these.
Any comments and thoughts would be appreciated!
Sincerely,
Ico
I work with a small community radio station, and since we're continually
strapped for cash we implement our studio-transmitter link by streaming
audio over a network. We use a variety of players and formats (mainly XMMS
and RealPlayer), but all of them share a common problem. Despite our
process monitoring software, the stream occasionally goes silent. We would
like to find a way to detect when the output of the playback computer's
soundcard is silent - or at least quiet enough to count as such.
I was hoping that someone could give some quick hints or point me to a
tutorial on writing a plugin that could live between our playback app and
the sound device and monitor that stream of data. I think this would be
similar to a funky visualization plugin, but so far those have all been
specific to particular playback applications. I've played with LADSPA
(which is awesome, BTW) and I've mucked about in the internals of KDE's
aRts, but I haven't found a clean way of inserting this into the code.
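Conceptually, all I need sitting between the player and the device is
something as dumb as this (a sketch; the threshold value is a pure guess,
to be tuned by experiment):

    #include <cmath>

    // Flag a block of 16-bit samples as "silent" if its RMS level
    // falls below a threshold.
    bool isSilent(const short* buf, int n, double threshold = 100.0)
    {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += (double)buf[i] * (double)buf[i];
        return std::sqrt(sum / n) < threshold;
    }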
thanks!
-Dave
Hi,
Pure evil... now you can load all your "legacy MP3 files" into Sweep and
scrub around with them. I also made a few demo recordings, hooray :)
Sweep 0.5.8 Development Release
-------------------------------
Sweep is a sound wave editor, and it is now also generally useful as a
flexible recording and playback tool. Inside lives a pesky little virtual
stylus called Scrubby who enjoys mixing around in your files.
This development release is available as a source tarball at:
http://prdownloads.sourceforge.net/sweep/sweep-0.5.8.tar.gz?download
MP3 import is now supported (via libmad). Minor bugs have been fixed in
the rendering of the record position and in playback mixing.
There is a new page of audio demos made with Sweep. These demonstrate the
sounds of Scrubby, a tool which allows vinyl-like manipulation of digital
audio:
http://www.metadecks.org/software/sweep/demos.html
Screenshots:
http://www.metadecks.org/software/sweep/screenshots/
Sweep is designed to be intuitive and to give you full control. It includes
almost everything you would expect in a sound editor, and then some:
* precise, vinyl-like scrubbing
* looped, reverse, and pitch-controlled playback
* playback mixing of unlimited independent tracks
* looped and reverse recording
* internationalisation
* multichannel and 32 bit floating point PCM file support
* support for Ogg Vorbis and MP3 compressed audio files
* LADSPA 1.1 effects support
* multiple views, discontinuous selections
* easy keybindings, mouse wheel zooming
* unlimited undo/redo with fully revertible edit history
* multithreaded background processing
* shaded peak/mean waveform rendering, multiple colour schemes
Sweep is Free Software, available under the GNU General Public License.
More information is available at:
http://www.metadecks.org/software/sweep/
Thanks to Pixar Animation Studios and CSIRO Australia for supporting the
development of this project.
enjoy :)
Conrad.
--- Paul Davis wrote:
> >IMO running each synth in its own thread with many synths going is
> >definitely _not_ the way forward. The host should definitely be the only
> >process, much like how VST, DXi, Pro Tools et al. work.
>
> I think you need to scan back a year or 18 months in the archives to
> where we measured this. The context switch under Linux can be
> extremely quick - on the order of 20-50 usecs on a PII-450 - and is not
> necessarily a problem. Switching between contexts is massively more
> expensive under Windows and MacOS (at least pre-X), and hence the
> multi-process design is not and cannot be an option for them at this time.
But couldn't something have changed in 18 months? Sigh...
>
> >No, there is no real "instrument" or "synth" plugin API. But since my
> >original post I have been brewing something up. It's quite VST-like in
> >some ways, but I've been wanting to make it more elegant before
> >announcing it. It does, however, work, and is totally C++ based ATM. You
> >just inherit the "Instrument" class and voila. (OK, so it got renamed
> >along the way)
>
> Thus guaranteeing that no instruments can be written in other
> languages. For all the mistakes the GTK+ crew made, their decision to
> use C as the base language so as to allow other languages to
> provide "wrappers" was a far-sighted and wise choice. OTOH, I will
> concede that the real-time nature of most synthesis would tend to rule
> out most of the languages of interest.
Yes. I was asking about a C API, mostly.
>
> >Although in light of Tommi's post (Mustajuuri) I have to reconsider
> >working on my API. My only problem with Mustajuuri is its dependence on
> >Qt (if I'm not mistaken), sorry.
> >
> >If people would like to see my work-in-progress, I could definitely use
> >some feedback ;-)
But anyway, I would really love to look at what you have.
Is it only a specification, or do you have a reference implementation?
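Just to be sure we are talking about the same thing, here is roughly what
I imagine such a C++ API to look like (purely my guess - all the names
are made up, not nick's actual code):

    // Hypothetical plugin-side base class, guessed from the description.
    class Instrument
    {
    public:
        virtual ~Instrument() {}

        // The host delivers raw MIDI bytes before each audio block.
        virtual void handleMidi(const unsigned char* msg, int len) = 0;

        // The host pulls one block of audio; runs in the audio thread.
        virtual void process(float* left, float* right, int frames) = 0;
    };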
> >
> >
> >This discussion is open!
>
> The discussion is several years old :)
But couldn't something have changed in several years? Sigh...
Still no API.
Wasn't the great idea behind LADSPA that
a "simple API _now_ is better than a several-years-old discussion"?
>
> You managed to touch upon the central problem in your penultimate
> sentence, apparently without realizing the depth of the problem.
>
> If a synth comes with a GUI, then the issue of toolkit compatibility
> rears its ugly and essentially insoluble head once again. You can't
> put GTK-based code into a Qt application, or vice versa. This also
> fails with any combination of toolkits, whether they are FLTK, XForms,
> Motif etc. etc.
>
> If the synth doesn't come with a GUI, but runs in the same process as
> the host, then every synth has to have some kind of inter-process
> control protocol to enable a GUI to control it.
So what? Isn't it just two or three more API functions?
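I mean something like this (hypothetical names, C-style so that any
language or GUI toolkit could wrap it):

    /* Invented control entry points - only to show how small the
       surface could be. */
    extern "C" {
        typedef void* synth_handle;

        /* Enumerate the controllable parameters. */
        int synth_param_count(synth_handle s);
        int synth_param_name(synth_handle s, int id, char* buf, int len);

        /* Get/set a parameter from any GUI, in or out of process. */
        int synth_get_param(synth_handle s, int id, float* value);
        int synth_set_param(synth_handle s, int id, float value);
    }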
Damn! I never wrote any audio application ever,
and I don't want to create the ninth or tenth would-be winner
instrument/synth API for Linux.
So I see no reason for me to say "okay, I can do it".
So I ask all of you guys once again:
what should we (I) use?
We already have some possibilities:
-- MusE's LADSPA extensions,
-- nick can finish his work,
-- MAIA,
-- Mustajuuri's API,
....
Why don't we use what we have?
>
> These are deep problems that arise from the lack of a single toolkit
> on Linux (and Unix in general).
>
> This is why JACK is designed the way it is, and why it
> (theoretically) allows for both in-process and out-of-process
> "plugins". This allows programmers to choose which model they want to
> use. I predict that any API that forces the programmer to use a
> particular toolkit will fail. JACK's problem in this arena is that it's
> designed for sharing audio data, and does not provide any method for
> sharing MIDI or some other protocol to control synthesis parameters.
>
> Besides, if SC for Linux is in the offing, who needs any other
> synthesizers anyway? :))
Which SC are you talking about?
>
> --p
>
> Paul Davis <pbd(a)linuxaudiosystems.com> writes
>
> Switching between contexts is massively more
> expensive under Windows and MacOS (at least pre-X),
As a data point, I ran two different sa.c files (the audio
engines sfront produces) set up as softsynths using different
patches under OS X (using CoreAudio + CoreMIDI, not the
AudioUnits API), and it worked -- two patches doubling together,
both looking at the same MIDI stream from my keyboard, both
creating different audio outputs into CoreAudio that were
mixed together by the HAL. So, for N=2 at least, OS X seems
to handle N low-latency softsynth apps in different processes
OK ...
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
Hi all,
I've been running 2.5 for about a week now without any trouble. I did try
2.5.39, but it didn't want to stay up for more than 10 minutes. .41 and .42
have been solid (and that's with all the usual whizbang USB, bttv, etc.
stuff as well).
Overall, imho, 2.5 seemed less responsive than the 2.4.19 kernel that I'm
coming from. The mouse cursor sticks on the screen under load, and I had to
increase the buffer size in Sweep because it was dropping out like nobody's
business. But then I gave 2.5.42-mm3 a whirl, and it seems to be
considerably more responsive, and Sweep drops out much less.
I've done some latency tests as well:
http://pkl.net/~node/lad/latency-tests/
The machine is an Athlon 1.4GHz, 512MB RAM, XFS fs, SBLive sound card,
Matrox G550 graphics card.
There's quite(!) a discrepancy between Joern's X11 performance and mine.
This makes me wonder two things: which card are you using, Joern? And what
(if anything) can be done to make the X11 performance less ludicrously bad?
The 2.4.19 benchmarks in there are probably bad due to the fact that it's
XFS + low-latency, but apparently there are ways to do that properly, which
I didn't do. Other than that, fairly dry benchmarks; they just show slightly
better performance with each patch.
Bob