[linux-audio-dev] Re: [MusE] More ideas :) [Track freeze] - Alsa Sequencer
robert.jonsson at dataductus.se
Tue Jun 10 09:13:01 UTC 2003
On Tuesday, 10 June 2003 at 13.21, Frank van de Pol wrote:
> On Tue, Jun 10, 2003 at 08:30:39AM +0200, Robert Jonsson wrote:
> > Hi,
> > > In fact the bounce feature in MusE is "realtime". It means that you
> > > have to wait for the real duration of the track to be rendered.
> > > In a non-"realtime" mode the track is rendered as fast as the computer can.
> > AFAICT the realtimeness of the bounce feature is due to design
> > constraints. Okay, bouncing wavetracks should be possible in
> > non-realtime, but not when using softsynths.
> > This is because all softsynths use the alsa-sequencer as their input
> > interface, and if I'm not missing anything, this interface is strictly
> > realtime-based. (Perhaps it can be tweaked by timestamping every note
> > and sending them in batches? It seems very hard, though.)
> You are right, with the current alsa-sequencer the softsynths are driven by
> realtime events. Though an application can enqueue the events to the
> priority queues with a delivery timestamp, the scheduling is handled
> internally by the alsa sequencer. This causes some problems, especially for
> sample-accurate synchronisation with JACK or LADSPA synth plugins (XAP?),
> but also for network transparency and for support of MIDI interfaces that
> accept timing hints (Steinberg LTB or Emagic AMT ... if specs of the
> protocol were available :-( ).
> During the LAD meeting at Karlsruhe we discussed this and sketched an
> alsa-sequencer roadmap that focuses on the transition of the alsa-sequencer
> from kernel to userspace and on better integration with softsynths / JACK.
> A few things from this are very much related to your track bouncing /
> off-line rendering thing:
> - Provide facility to delegate scheduling to the client. The implementation
> would be to deliver the events directly (without queuing) with the
> timestamp attached to the registered client port. This would allow the
> client to get the events before the deadline (time at which the event
> should be played) and use that additional time to put the events at the
> right sample position.
> Note that for the softsynth to take advantage of this, the application
> should enqueue the events (a bit) ahead of time and pass the timestamp.
> Some of the current applications (including MusE) use the alsa-sequencer
> only as an event router and drive it in real time.
> Since the softsynth/plugin has no notion of the actual time (only the
> media time and sample position), rendering at arbitrary speed should be
> possible: bounce faster than realtime, or even slower than realtime for
> those complex patches.
> - JACK is real-time, and bound to the sample rate of the soundcard. Since
> the audio sample rate can also be used as a clock master for the alsa
> sequencer this would be a good option to ensure synchronisation. The
> transport of JACK and alsa sequencer can be tied together (either one of
> the two acting as master, a run-time configurable option) to provide
> uniform transport and media time amongst the applications that hook into
> the JACK and/or alsa sequencer framework.
> For the offline rendering no nice scheme has been worked out yet; I guess
> it would be something along the lines of the application that owns the
> sequencer queue having full control over the transport, moving the media
> time at the speed the frames are actually rendered, and the app(s)
> generating the events keeping at least one sample frame ahead of time.
Okay, I didn't know that this had been on the table. How far has this work
progressed? Was it just the Karlsruhe meeting, or has more thinking occurred
since?
(FYI, I'm CC:ing LAD, it might be a more appropriate place for this
discussion.)