I work with a small community radio station, and since we're continually
strapped for cash we implement our studio-transmitter link by streaming
audio over a network. We use a variety of players and formats (mainly xmms
and realplayer), but all of them share a common problem. Despite our
process monitoring software, the stream occasionally goes silent. We would
like to find a way to detect when the output of the playback computer's
soundcard is silent, or at least quiet enough to count as such.
I was hoping that someone could give some quick hints or point me to a
tutorial on writing a plugin that could live between our playback app and
the sound device and monitor that stream of data. I think this would be
similar to a funky visualization plugin, but so far those have all been
specific to particular playback applications. I've played with LADSPA
(which is awesome, BTW) and I've mucked about in the internals of KDE's
aRts, but I haven't found a clean way of inserting this into the code.
thanks!
-Dave
Hi,
pure evil ... now you can load all your "legacy mp3 files" into sweep and
scrub around with them. I also made a few demo recordings, hooray :)
Sweep 0.5.8 Development Release
-------------------------------
Sweep is a sound wave editor, and it is now also generally useful as a
flexible recording and playback tool. Inside lives a pesky little virtual
stylus called Scrubby who enjoys mixing around in your files.
This development release is available as a source tarball at:
http://prdownloads.sourceforge.net/sweep/sweep-0.5.8.tar.gz?download
MP3 import is now supported (via libmad). Minor bugs have been fixed in
rendering of record position and playback mixing.
There is a new page of audio demos made with Sweep. These demonstrate the
sounds of Scrubby, a tool which allows vinyl-like manipulation of digital
audio:
http://www.metadecks.org/software/sweep/demos.html
Screenshots:
http://www.metadecks.org/software/sweep/screenshots/
Sweep is designed to be intuitive and to give you full control. It includes
almost everything you would expect in a sound editor, and then some:
* precise, vinyl-like scrubbing
* looped, reverse, and pitch-controlled playback
* playback mixing of unlimited independent tracks
* looped and reverse recording
* internationalisation
* multichannel and 32 bit floating point PCM file support
* support for Ogg Vorbis and MP3 compressed audio files
* LADSPA 1.1 effects support
* multiple views, discontinuous selections
* easy keybindings, mouse wheel zooming
* unlimited undo/redo with fully revertible edit history
* multithreaded background processing
* shaded peak/mean waveform rendering, multiple colour schemes
Sweep is Free Software, available under the GNU General Public License.
More information is available at:
http://www.metadecks.org/software/sweep/
Thanks to Pixar Animation Studios and CSIRO Australia for supporting the
development of this project.
enjoy :)
Conrad.
--- Paul Davis wrote:
> >IMO running each synth in its own thread with many synths going is
> >definitely _not_ the way forward. The host should definitely be the only
> >process, much how VST, DXi, pro tools et. al. work.
>
> i think you need to scan back a year or 18 months in the archives to
> where we measured this. the context switch under linux can be
> extremely quick - on the order of 20-50 usecs on a PII-450, and is not
> necessarily a problem. switching between contexts is massively more
> expensive under windows and macos (at least pre-X), and hence the
> multi-process design is not and cannot be an option for them at this time.
But could something have changed through 18 months? sigh...
>
> >No, there is no real "instrument" or "synth" plugin API. but since my
> >original post I have been brewing something up. its quite vst-like in
> >some ways, but ive been wanting to make it more elegant before
> >announcing it. It does, however, work, and is totally C++ based ATM. You
> >just inherit the "Instrument" class and voila. (ok, so it got renamed
> >along the way)
>
> thus guaranteeing that no instruments can be written in other
> languages. for all the mistakes the GTK+ crew made, their design to
> use C as the base language so as to allow for other languages to
> provide "wrappers" was a far-sighted and wise choice. OTOH, i will
> concede that the real-time nature of most synthesis would tend to rule
> out most of the languages of interest.
Yes. I was asking about the C API mostly.
>
> >Although in light of Tommi's post (mastajuuri) i have to reconsider
> >working on my API. My only problem with mastajuuri is its dependence on
> >Qt (if im not mistaken), sorry.
> >
> >If people would like to see my work-in-progress, i could definitely use some
> >feedback ;-)
But anyways, I would really love to look at what you have.
Is this only a specification, or do you have a reference implementation?
> >
> >
> >This discussion is open!
>
> the discussion is several years old :)
But could something have changed in several years? sigh...
Still no API.
Wasn't the great idea behind LADSPA that a
"simple API _now_ is better than a several-years-old discussion"?
>
> you managed to touch upon the central problem in your penultimate
> sentence, apparently without realizing the depth of the problem.
>
> if a synth comes with a GUI, then the issue of toolkit compatibility
> rears its ugly and essentially insoluble head once again. you can't
> put GTK based code into a Qt application, or vice versa. this also
> fails with any combination of toolkits, whether they are fltk, xforms,
> motif etc. etc.
>
> if the synth doesn't come with a GUI, but runs in the same process as
> the host, then every synth has to have some kind of inter-process
> control protocol to enable a GUI to control it.
So what? Isn't that just two or three more API functions?
Damn! I have never written any audio application,
and I don't want to create the ninth or tenth possible winner
instrument/synth API for Linux.
So I see no reason for me to say "okay, I can do it".
So I ask all of you guys once again:
what should we (I) use?
We already have some possibilities
-- MusE LADSPA extensions,
-- nick can finish his work,
-- MAIA
-- Mustajuuri's API
....
Why don't we use what we have?
>
> these are deep problems that arise from the lack of a single toolkit
> on linux (and unix in general).
>
> this is why JACK is designed in the way that it is, and why it
> (theoretically) allows for both in-process and out-of-process
> "plugins". this allows programmers to choose which model they want to
> use. i predict that any API that forces the programmer to use a
> particular toolkit will fail. JACK's problem in this arena is that its
> designed for sharing audio data, and does not provide any method for
> sharing MIDI or some other protocol to control synthesis parameters.
>
> besides, if SC for linux is in the offing, who needs any other
> synthesizers anyway? :))
Which SC are you talking about?
>
> --p
>
> Paul Davis <pbd(a)linuxaudiosystems.com> writes
>
> switching between contexts is massively more
> expensive under windows and macos (at least pre-X),
As a data point, I ran two different sa.c files (the audio
engines sfront produces) set up as softsynths using different
patches under OS X (using CoreAudio + CoreMIDI, not the
AudioUnits API), and it worked -- two patches doubling together,
both looking at the same MIDI stream from my keyboard, both
creating different audio outputs into CoreAudio that were
mixed together by the HAL. So, for N=2 at least, OS X seems
to handle N low-latency softsynth apps in different processes
OK ...
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
Hi all,
I've been running 2.5 for about a week now without any troubles. I did try
2.5.39, but it didn't want to stay up for more than 10 minutes. 41 and 42
have been solid (and that's with all the usual whizbang usb, bttv, etc, stuff
as well.)
Overall, imho, 2.5 seemed less responsive than the 2.4.19 kernel that I'm
coming from. The mouse cursor sticks on the screen under load, and I had to
increase the buffer size in sweep because it was dropping out like nobody's
business. But then I gave 2.5.42-mm3 a whirl, and it seems to be
considerably more responsive, and sweep drops out much less.
I've done some latency tests as well:
http://pkl.net/~node/lad/latency-tests/
The machine is an athlon 1.4GHz, 512MB ram, xfs fs, sblive sound card,
matrox g550 graphics card.
There's quite(!) a discrepancy between Joern's and my X11 performance. This
makes me wonder two things: what card are you using, Joern? And what (if
anything) can be done to make the X11 performance less ludicrously bad?
The 2.4.19 benchmarks in there are probably bad due to the fact that it's
xfs+lowlatency, but apparently there are ways to do that properly which I
didn't do. Other than that, fairly dry benchmarks; just showing slightly
better performance with each patch.
Bob
Hi.
Sorry to come in very late. The Mustajuuri plugin interface includes all
the bits you need. In fact I already have two synthesizer engines under
the hood.
With Mustajuuri you can write the synth as a plugin and the host is only
responsible for delivering the control messages to it.
Alternatively you could write a new voice type for the Mustajuuri synth,
which can lead to smaller overhead ... or not, depending on what you are
after.
http://www.tml.hut.fi/~tilmonen/mustajuuri/
On 3 Jul 2002, nick wrote:
> Hi all
>
> I've been scratching my head for a while now, planning out how im going
> to write amSynthe (aka amSynth2)
>
> Ideally i don't want to be touching low-level stuff again, and it makes
> sense to write it as a plugin for some host. Obviously in the Win/Mac
> world there's VST/DXi/whatever - but that doesn't really concern me as I
> don't use 'em ;) I just want to make my music on my OS of choice..
>
> Now somebody please put me straight here - as far as I can see, there's
> LADSPA and JACK. (and MuSE's own plugins?). Now, I'm under the
> impression that these only deal with the audio data - only half what I
> need for a synth. Or can LADSPA deal with MIDI?
>
> So how should I go about it?
> Is it acceptable to (for example) read the midi events from the ALSA
> sequencer in the audio callback? My gut instinct is no, no, no!
>
> Even if that's feasible with the alsa sequencer, it still has problems -
> say the host wanted to "render" the `song' to an audio file - using the
> sequencer surely it would have to be done in real time?
>
> I just want to get on, write amSynthe and then everyone can enjoy it,
> but this hurdle is bigger than it seems.
>
> Thanks,
> Nick
>
>
>
Tommi Ilmonen Researcher
>=> http://www.hut.fi/u/tilmonen/
Linux/IRIX audio: Mustajuuri
>=> http://www.tml.hut.fi/~tilmonen/mustajuuri/
3D audio/animation: DIVA
>=> http://www.tml.hut.fi/Research/DIVA/
After troubleshooting configuration problems with several users, I believe I
have fixed the problem with the configure script that prevented some people from
building freqtweak. Anyone who had problems should definitely try the new
release.
Along with that, you get a 4096 freq bin mode, and a few tiny bugfixes.
http://freqtweak.sourceforge.net/
I added a short example of how to pipe alsaplayer through freqtweak and out your
speakers without messing with JACK patchbays (see Usage Tips section).
jlc
Hi Guys,
I thought I'd mention this project that's been going on in that bad bad windows
world to provide free WDM drivers for all emu10k-based soundcards. It seems
they've done quite a good job of it. I've heard reports that it is working in
w2k with Cubase at 4ms latency with a standard sb-live soundcard (sound quality
not accounted for).
Though this is not really Linux related, there does not seem to be any source
available for the driver in any case. It would however be interesting to know how
they've crafted the driver to provide such good performance.
More related to Linux is the bundled DSP compiler. There are a number
of DSP effect algorithms bundled in the package that are made for the emu10k
processor. I don't know if the source to the effects is in it, but they have a
message board where one of the topics seems to be purely about using/programming the DSP.
The DSP algorithms, even the DSP binaries, should be very interesting to test
with the emu10k in Linux; they _should_ be possible to use "off the shelf".
Oh, right! The site:
http://kxproject.spb.ru/
Regards
Robert
Changes:
- Fixed bug when playing recorded tracks (VERY IMPORTANT!!!)
- Fixed bug when trying to export .ecs
- Added quotes when filename contains spaces
Download it from:
http://www.sourceforge.net/projects/tkeca
Regards,
Luis Pablo