--- Paul Davis wrote:
> >IMO running each synth in its own thread with many synths going is
> >definitely _not_ the way forward. The host should definitely be the only
> >process, much how VST, DXi, pro tools et. al. work.
>
> i think you need to scan back a year or 18 months in the archives to
> where we measured this. the context switch under linux can be
> extremely quick - on the order of 20-50 usecs on a PII-450, and is not
> necessarily a problem. switching between contexts is massively more
> expensive under windows and macos (at least pre-X), and hence the
> multi-process design is not and cannot be an option for them at this time.
But could something change in 18 months? Sigh...
>
> >No, there is no real "instrument" or "synth" plugin API. but since my
> >original post I have been brewing something up. It's quite VST-like in
> >some ways, but I've been wanting to make it more elegant before
> >announcing it. It does, however, work, and is totally C++ based ATM. You
> >just inherit the "Instrument" class and voila. (ok, so it got renamed
> >along the way)
>
> thus guaranteeing that no instruments can be written in other
> languages. for all the mistakes the GTK+ crew made, their design to
> use C as the base language so as to allow for other languages to
> provide "wrappers" was a far-sighted and wise choice. OTOH, i will
> concede that the real-time nature of most synthesis would tend to rule
> out most of the languages of interest.
Yes. I was asking mostly about a C API.
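Just to make that concrete, here is a rough sketch of what a pure-C
instrument API could look like -- basically a LADSPA-style descriptor with
note-event entry points added. Every name below is made up for illustration;
this is not LADSPA, MAIA, Mustajuuri or anyone's actual proposal:

    /* Hypothetical C-level instrument API -- invented names, sketch only. */

    typedef struct SynthDescriptor {
        const char *label;      /* unique identifier for the instrument */
        const char *name;       /* human-readable name */

        /* lifecycle */
        void *(*instantiate)(unsigned long sample_rate);
        void  (*cleanup)(void *handle);

        /* events, delivered by the host with a frame offset into the block */
        void (*note_on)(void *handle, unsigned long frame, int note, int velocity);
        void (*note_off)(void *handle, unsigned long frame, int note);
        void (*control)(void *handle, unsigned long frame, int param, float value);

        /* audio: render one block of output */
        void (*run)(void *handle, float *out, unsigned long nframes);
    } SynthDescriptor;

    /* A plugin shared object would export one well-known symbol, e.g.: */
    const SynthDescriptor *synth_descriptor(unsigned long index);

A C++ "Instrument" base class could then just be a thin wrapper that fills in
such a descriptor, so both worlds would be served.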
>
> >Although in light of Tommi's post (Mustajuuri) I have to reconsider
> >working on my API. My only problem with Mustajuuri is its dependence on
> >Qt (if I'm not mistaken), sorry.
> >
> >If people would like to see my work-in-progress, I could definitely use some
> >feedback ;-)
But anyway, I would really love to look at what you have.
Is this only a specification, or do you have a reference implementation?
> >
> >
> >This discussion is open!
>
> the discussion is several years old :)
But could something change in several years? Sigh...
Still no API.
Wasn't it the great idea behind LADSPA that
"a simple API _now_ is better than a several-years-old discussion"?
>
> you managed to touch upon the central problem in your penultimate
> sentence, apparently without realizing the depth of the problem.
>
> if a synth comes with a GUI, then the issue of toolkit compatibility
> rears its ugly and essentially insoluble head once again. you can't
> put GTK based code into a Qt application, or vice versa. this also
> fails with any combination of toolkits, whether they are fltk, xforms,
> motif etc. etc.
>
> if the synth doesn't come with a GUI, but runs in the same process as
> the host, then every synth has to have some kind of inter-process
> control protocol to enable a GUI to control it.
So what? Isn't it just two or three more API functions?
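Something like this, say (all names invented, just to show the scale of it):

    /* Sketch: the "two or three more functions" a GUI-less synth could
     * expose so that any out-of-process GUI can drive it over whatever
     * transport the host likes.  Invented names, not a real API. */

    int         synth_num_params(void *handle);
    const char *synth_param_name(void *handle, int index);
    float       synth_get_param(void *handle, int index);
    void        synth_set_param(void *handle, int index, float value);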
Damn! I have never written any audio application,
and I don't want to create the ninth or tenth would-be winning
instrument/synth API for Linux.
So I see no reason for me to say "okay, I can do it".
So I ask all of you guys once again:
what should we (I) use?
We already have some possibilities:
-- the MusE LADSPA extensions,
-- nick can finish his work,
-- MAIA,
-- Mustajuuri's API,
....
Why don't we use what we have?
>
> these are deep problems that arise from the lack of a single toolkit
> on linux (and unix in general).
>
> this is why JACK is designed in the way that it is, and why it
> (theoretically) allows for both in-process and out-of-process
> "plugins". this allows programmers to choose which model they want to
> use. i predict that any API that forces the programmer to use a
> particular toolkit will fail. JACK's problem in this arena is that its
> designed for sharing audio data, and does not provide any method for
> sharing MIDI or some other protocol to control synthesis parameters.
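For anyone following along, the audio half of an out-of-process client really
is small under JACK. This is only a rough, untested sketch based on the JACK
headers -- and note that nothing in it carries MIDI or parameters, which is
exactly the gap being described:

    /* Minimal out-of-process JACK client: the synth is its own program and
     * just registers an audio output port.  Rough sketch; error handling
     * and the actual synthesis are omitted. */
    #include <jack/jack.h>
    #include <string.h>
    #include <unistd.h>

    static jack_port_t *out_port;

    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
        memset(out, 0, sizeof(*out) * nframes);  /* render silence for now */
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_new("my-synth");
        if (client == NULL)
            return 1;

        jack_set_process_callback(client, process, NULL);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_activate(client);

        for (;;)
            sleep(1);  /* all the audio work happens in the process callback */
    }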
>
> besides, if SC for linux is in the offing, who needs any other
> synthesizers anyway? :))
Which SC are you talking about?
>
> --p
>
> Paul Davis <pbd(a)linuxaudiosystems.com> writes
>
> switching between contexts is massively more
> expensive under windows and macos (at least pre-X),
As a data point, I ran two different sa.c files (the audio
engines sfront produces) set up as softsynths using different
patches under OS X (using CoreAudio + CoreMIDI, not the
AudioUnits API), and it worked -- two patches doubling together,
both looking at the same MIDI stream from my keyboard, both
creating different audio outputs into CoreAudio that were
mixed together by the HAL. So, for N=2 at least, OS X seems
to handle N low-latency softsynth apps in different processes
OK ...
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
Hi all,
I've been running 2.5 for about a week now without any troubles. I did try
2.5.39, but it didn't want to stay up for more than 10 minutes. 41 and 42
have been solid (and that's with all the usual whizbang usb, bttv, etc, stuff
as well.)
Overall, imho, 2.5 seemed less responsive than the 2.4.19 kernel that I'm
coming from. The mouse cursor sticks on the screen under load, and I had to
increase the buffer size in sweep because it was dropping out like nobody's
business. But then I gave 2.5.42-mm3 a whirl, and it seems to be
considerably more responsive, and sweep drops out much less.
I've done some latency tests as well:
http://pkl.net/~node/lad/latency-tests/
The machine is an athlon 1.4GHz, 512MB ram, xfs fs, sblive sound card,
matrox g550 graphics card.
There's quite(!) a discrepancy between Joern's X11 performance and mine. This
makes me wonder two things: what card are you using, Joern? And what (if
anything) can be done to make the X11 performance less ludicrously bad?
The 2.4.19 benchmarks in there are probably bad because it's xfs+lowlatency,
but apparently there are ways to do that properly which I didn't follow.
Other than that, fairly dry benchmarks, just showing slightly
better performance with each patch.
Bob
Hi.
Sorry to come in very late. The Mustajuuri plugin interface includes all
the bits you need. In fact I already have two synthesizer engines under
the hood.
With Mustajuuri you can write the synth as a plugin and the host is only
responsible for delivering the control messages to it.
Alternatively you could write a new voice type for the Mustajuuri synth,
which can lead to smaller overhead ... or not, depending on what you are
after.
http://www.tml.hut.fi/~tilmonen/mustajuuri/
On 3 Jul 2002, nick wrote:
> Hi all
>
> I've been scratching my head for a while now, planning out how I'm going
> to write amSynthe (aka amSynth2)
>
> Ideally I don't want to be touching low-level stuff again, and it makes
> sense to write it as a plugin for some host. Obviously in the Win/Mac
> world there's VST/DXi/whatever - but that doesn't really concern me as I
> don't use them ;) I just want to make my music on my OS of choice..
>
> Now somebody please put me straight here - as far as I can see, there's
> LADSPA and JACK. (and MuSE's own plugins?). Now, I'm under the
> impression that these only deal with the audio data - only half what I
> need for a synth. Or can LADSPA deal with MIDI?
>
> So how should I go about it?
> Is it acceptable to (for example) read the MIDI events from the ALSA
> sequencer in the audio callback? My gut instinct is no, no, no!
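(As an aside: you can drain the sequencer without blocking, so the question is
really whether you want that work inside the audio callback at all. A rough,
untested sketch of the non-blocking read, just to show the shape of it:

    /* Rough sketch: drain pending ALSA sequencer events without blocking,
     * e.g. once per audio block.  Error handling omitted; the client and
     * port names are made up. */
    #include <alsa/asoundlib.h>

    static snd_seq_t *seq;

    void midi_open(void)
    {
        snd_seq_open(&seq, "default", SND_SEQ_OPEN_INPUT, SND_SEQ_NONBLOCK);
        snd_seq_set_client_name(seq, "amsynth-sketch");
        snd_seq_create_simple_port(seq, "midi in",
                                   SND_SEQ_PORT_CAP_WRITE |
                                   SND_SEQ_PORT_CAP_SUBS_WRITE,
                                   SND_SEQ_PORT_TYPE_MIDI_GENERIC);
    }

    void midi_poll(void)
    {
        snd_seq_event_t *ev;

        /* snd_seq_event_input() returns -EAGAIN when nothing is queued,
         * so this loop never blocks */
        while (snd_seq_event_input(seq, &ev) >= 0) {
            switch (ev->type) {
            case SND_SEQ_EVENT_NOTEON:
                /* note_on(ev->data.note.note, ev->data.note.velocity); */
                break;
            case SND_SEQ_EVENT_NOTEOFF:
                /* note_off(ev->data.note.note); */
                break;
            }
        }
    }

Whether this belongs in the audio callback or in a separate MIDI thread the
synth reads from is exactly the design decision at stake here.)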
>
> Even if that's feasible with the ALSA sequencer, it still has problems -
> say the host wanted to "render" the `song' to an audio file - using the
> sequencer, surely it would have to be done in real time?
>
> I just want to get on, write amSynthe and then everyone can enjoy it,
> but this hurdle is bigger than it seems.
>
> Thanks,
> Nick
>
>
Tommi Ilmonen Researcher
>=> http://www.hut.fi/u/tilmonen/
Linux/IRIX audio: Mustajuuri
>=> http://www.tml.hut.fi/~tilmonen/mustajuuri/
3D audio/animation: DIVA
>=> http://www.tml.hut.fi/Research/DIVA/
After troubleshooting configuration problems with several users, I believe I
have fixed the problem with the configure script that prevented some people from
building freqtweak. Anyone who had problems should definitely try the new
release.
Along with that, you get a 4096 freq bin mode, and a few tiny bugfixes.
http://freqtweak.sourceforge.net/
I added a short example of how to pipe alsaplayer through freqtweak and out your
speakers without messing with JACK patchbays (see Usage Tips section).
jlc
Hi Guys,
I thought I'd mention a project that's been going on in that bad, bad Windows
world to provide free WDM drivers for all emu10k-based soundcards. It seems
they've done quite a good job of it. I've heard reports that it is working in
Win2k with Cubase at 4 ms latency with a standard SB Live soundcard (sound
quality not accounted for).
This is not really Linux-related, and there does not seem to be any source for
the driver anyway. It would, however, be interesting to know how they've
crafted the driver to provide such good performance.
More related to Linux is the bundled DSP compiler. It seems there are a number
of DSP effect algorithms bundled in the package that are written for the
emu10k processor. I don't know if the source to the effects is included, but
they have a message board where one of the topics is purely about
using/programming the DSP.
The DSP algorithms, even the DSP binaries, should be very interesting to test
with the emu10k in Linux; they _should_ be usable "off the shelf".
Oh, right! The site:
http://kxproject.spb.ru/
Regards
Robert
Changes:
- Fixed a bug when playing recorded tracks (VERY IMPORTANT!!!)
- Fixed a bug when trying to export .ecs
- Added quotes when the filename contains spaces
Download it from:
http://www.sourceforge.net/projects/tkeca
Regards,
Luis Pablo
> -----Original Message-----
> From: Ivica Bukvic [mailto:ico@fuse.net]
...
> introduces a problem of porting apps into its API, and that
> again poses
> the same problem of excluding a lot of older audio apps that
...
this might be solved by user-space device drivers - they do not want the mixer
in the kernel... once user-space device drivers are available it will be
possible to implement what you propose, I guess. not sure what the latest news
is as far as user-space device drivers go...
the other option is that one of the APIs will become roughly as common as OSS
is today (so that virtually no applications will go straight to the device
drivers). JACK?
erik
Announcing the initial release of FreqTweak (v0.4)
http://freqtweak.sourceforge.net
FreqTweak is a tool for FFT-based realtime audio spectral manipulation
and display. It provides several algorithms for processing audio data
in the frequency domain and a highly interactive GUI to manipulate the
associated filters for each. It also provides high-resolution spectral
displays in the form of scrolling-raster spectrograms and energy vs.
frequency plots displaying both pre- and post-processed spectra.
It currently relies on JACK for low latency audio interconnection and
delivery. Thus, it is only supported on Linux.
FreqTweak is an extremely addictive audio toy; I have to pry myself
away from playing with it so I can work on it! I hope it has value
for serious audio work too (sound design, etc). The spectrum analysis
is pretty useful in its own right.
FreqTweak supports manipulating the spectral filters at several
frequency resolutions (64, 128, 256, 512, 1024, or 2048 bands) depending
on your needs and resources. Overlap and windowing are also
selectable.
The GUI filter graph manipulators (and analysis plots) have selectable
frequency scale types: 1x and 2x linear, and two log scales to help
with modulating the musical frequencies. Filters can be linked across
multiple channels. The plots are resizable and zoomable (y-axis) to
allow precise editing of filter values.
The current processing filters are described below in the order audio
is processed in the chain. Any or all of the filters can be
bypassed. The state of all filters can be stored or loaded as presets.
Spectral Analysis -- Multicolor scrolling-raster spectrogram,
or energy vs. freq line or bar plots... one shows
pre-processed, another shows post-processed.
EQ -- Your basic multi-band frequency attenuation. But you get
an unhealthy number of bands...
Pitch Scaling -- This is an interesting application of
Sprenger's pitch scaling algorithm (used in Steve Harris'
LADSPA plugin). If you keep all the bins at the same scale, it
is equivalent to Steve's plugin, but when you start applying
different scales per frequency bin, things quickly get weird.
Gate -- This is a double filter where a given frequency band is
allowed to pass through (unaltered) if the power on that band
is between two dB thresholds... otherwise its gain is clamped
to 0.
Delay -- This lets you delay the audio on a per frequency-bin
basis yielding some pretty wild effects (or subtle, if you are
careful). A feedback filter controls the feedback of the delay
per bin (be careful with this one). This is basically what
Native Instruments' Spektral Delay accomplishes. Granted, I
don't have all the automated filter modulations (yet ;). See
their website for audio examples of what is possible with this
cool effect. (Rough sketches of the gate and delay ideas follow.)
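For the curious, the per-bin logic behind the gate and the delay boils down to
something like the following. These are simplified, untested sketches of the
ideas only, not the actual FreqTweak source:

    #include <math.h>

    /* Spectral gate: pass a bin through unaltered only if its power lies
     * between two dB thresholds, otherwise clamp its gain to 0. */
    void spectral_gate(float *re, float *im, int nbins,
                       float lower_db, float upper_db)
    {
        int i;
        for (i = 0; i < nbins; i++) {
            float power = re[i] * re[i] + im[i] * im[i];
            float db = 10.0f * log10f(power + 1e-20f);
            if (db < lower_db || db > upper_db) {
                re[i] = 0.0f;
                im[i] = 0.0f;
            }
        }
    }

    /* Per-bin spectral delay with feedback: one circular buffer of past
     * (complex) values per bin, indexed in FFT frames.  Assumes a delay of
     * at least one frame; MAX_DELAY_FRAMES is an arbitrary made-up limit. */
    #define MAX_DELAY_FRAMES 256

    typedef struct {
        float re[MAX_DELAY_FRAMES];
        float im[MAX_DELAY_FRAMES];
        int   pos;
        int   delay;      /* per-bin delay in frames, >= 1 */
        float feedback;   /* 0..1, per-bin feedback amount */
    } BinDelay;

    void bin_delay_process(BinDelay *d, float *re, float *im)
    {
        int read = (d->pos - d->delay + MAX_DELAY_FRAMES) % MAX_DELAY_FRAMES;
        float out_re = d->re[read];
        float out_im = d->im[read];

        /* write the incoming bin plus feedback of what is coming out */
        d->re[d->pos] = *re + d->feedback * out_re;
        d->im[d->pos] = *im + d->feedback * out_im;
        d->pos = (d->pos + 1) % MAX_DELAY_FRAMES;

        *re = out_re;
        *im = out_im;
    }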
Have fun... report bugs...
Jesse Chappell <jesse(a)essej.net>