On Sat, 2003-06-14 at 04:45, Frank van de Pol wrote:
On Sat, Jun 14, 2003 at 02:12:57AM +1000, Allan Klinbail wrote:
This is all quite interesting..
Thanks for your reply Allan, I've put some of my controversial opinions in
it, please don't feel offended!
No offence taken at all.. this was a very good and informative read...
From working with hardware synths I'm used to listening to everything in
real time.... over and over and over again.. no big deal... often this is
where new ideas are formed and also problems in the mix discovered..
Personally I'm just excited that we have real-time soft synths.
Me too, unlike some other people I have a 'traditional' setup with mostly
hardware synths, effects and mixing desks. Some portions are done by the soft
synths, but I do see that more and more of the fine hardware is being
virtualised in the software (especially at the expense of the user
interface / user interaction)..
Agreed, and even some of the good hardware synths are becoming software
dependent. e.g. with my AN-200 not all of the parameters can be accessed
through the hardware interface.. I have to boot into that other OS to
run the software to create patches with all the flexibility that the
device offers.
To my knowledge MIDI in itself is a real-time spec except for sysex...
which is used more often for management than for real-time control..
(although it can be). However the notes below do indicate a dedication
to real-time.... although some concerns do arise
"Note that for the softsynth to get advantage of this the application
> should enqueue the events (a bit) ahead of
time"
At the outset it would seem that someone who is using an external MIDI
controller device (fader box etc.) or even an internal MIDI slider
app may suffer, as we humans work in real time.. our real-time messages
would then be queuing after the above events... (am I making sense? let
me try another phrase) essentially a real-time tweak of a knob (say
routed to a filter, envelope or LFO) may not register with the system,
as the sequencer app only receives these messages in real time and not
"slightly before"..... This would make recording performances extremely
difficult (and in techno, electronica et al. synth tweaking is the
performance in itself)... Imagine trying to pre-empt yourself by a
matter of milliseconds.. Playing on traditional instruments it's often
hard enough to even do it in time. I'm not convinced trying to do "better
than real-time" is actually such a musical concept. I would be happy to
see a bounce feature that works in better than real-time.. but not if it
is going to sacrifice the performance of MusE or any other app in
real-time.
Perhaps I failed to make my point clear, or maybe I forgot to tell the
whole context...
Some of my points of view:
- you want the latency (time between a user action, eg. tweaking a real-time
control or pressing a key, and the resulting sound) as low as possible. But
don't get trapped in all the marketing stories people want to tell
My experience is that even without low latency it is possible to play
an instrument, though it is more difficult and takes more practice to get the
timing right. A latency of >20 ms makes a piano feel kind of sluggish; a
latency <10 ms gives you the feeling of instant sound. The church organ is
a nice example of an instrument that is difficult to play because of
extremely high latency, but good musicians do manage...
Yep I had a go at that.. and I found it really frustrating.. Then when I
changed school the new one had a Hammond B2 (about 1.5 times the size of
the legendary B-3) but they wouldn't let me play it because I wasn't a
church organ player... (But I was attending the school of Deep Purple
and Black Sabbath at the time.. so god did I want to play on that)
A nice example of constant delay I keep on using is the propagation
velocity of sound waves. At 20 deg C the speed of sound is approximately
340 m/s, which means that moving the speakers 1 metre further away will
cost you about 3 ms more latency...
"Recording engineers" who claim that they can't make a decent mix if the
latency is above 10ms might be right with that statement, but I'll be glad
to put a bet for a crate of beer on it that they can't do a decent mix
with no latency either ;-)
- Though the human brain can compensate for latency (constant delay), it
can't compensate for jitter (stochastic, variable delay). If there is a lot
of jitter, it just won't sound right; the groove just isn't there.
Every device has latency: every keyboard has latency (time between hitting
the key and the event or sound being generated), every MIDI synth has latency
(time between a NOTE ON event coming in and the sound being generated). The
amount of latency depends on the brand/model of the device (eg. my AKAI S3K
sampler has lower latency than my Kawai K1 synth etc.).
This is true
This brings up a rather interesting 3rd point:
- In the end, all that counts is that the parts of the composition arrive at
the listener's ears at the right time. This implies that you would need to
know about the latencies in your system, and compensate for them:
If an event is known ahead of time (sequencer playback, but also
pre-recorded audio), use that knowledge to ease the system and gain better
timing accuracy. This way you can start the longish things before the fast
things, with the goal that they are completed at the same time.
Now you do know your latencies, but an event comes in to be played 'right
now' (eg. a live keyboard player). You have the option to either try to
play it best effort, with a certain (hopefully small) latency, or delay
everything to mask the individual instrument's latency.
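
Just to make that compensation concrete, a little back-of-the-envelope sketch
in C (the devices and latency figures here are made up for illustration): for
events known ahead of time you send each one early by its device's latency;
for live input the best you can do is a constant delay equal to the slowest
device.

/* Sketch of latency compensation; device names and figures are
   purely illustrative. */
#include <stdio.h>

typedef struct { const char *name; double latency_ms; } device_t;

static const device_t devices[] = {
    { "softsynth, 2 x 256-frame buffers @ 48 kHz", 10.7 },
    { "hardware sampler over MIDI",                 4.0 },
    { "hardware synth over MIDI",                   8.0 },
};

int main(void)
{
    const double deadline_ms = 1000.0;  /* when the notes should be heard */
    double worst = 0.0;

    for (size_t i = 0; i < sizeof devices / sizeof devices[0]; i++) {
        /* known-ahead event: send it early by the device's latency */
        printf("%-45s send at %6.1f ms\n",
               devices[i].name, deadline_ms - devices[i].latency_ms);
        if (devices[i].latency_ms > worst)
            worst = devices[i].latency_ms;
    }
    /* live event: can't send it early, so mask the differences by
       delaying everything by the worst-case latency */
    printf("live input: constant delay of %.1f ms masks the rest\n", worst);
    return 0;
}
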
In the end the only thing that matters is that it sounds OK and that when
the recording is reproduced it sounds the same... (already not a trivial
question)
When comparing MIDI with a block-based digital audio system (eg. Jack) we
see an interesting difference: MIDI is event driven, while the audio portion
is in fact sampled, with a fixed number of frames per second (the number of
frames per second depends on the sample rate and block size).
Due to the block-based processing this inevitably introduces some latency,
but what's worse, when viewed from a real-time perspective, this latency also
has a jitter of up to 1 frame size (which might be quite large!).
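
To put some rough numbers on that one-frame jitter (illustrative block
sizes, assuming a 48 kHz card):

/* worst-case jitter when events snap to block boundaries (illustrative) */
double block_ms(unsigned frames, double rate) { return 1000.0 * frames / rate; }

/* block_ms(64,   48000.0) -> ~1.3 ms
 * block_ms(256,  48000.0) -> ~5.3 ms
 * block_ms(1024, 48000.0) -> ~21.3 ms, clearly audible on fast material */
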
When a softsynth running such a frame-based audio engine is driven by
real-time events like MIDI controls, it has basically 2 options:
1. play the event as soon as possible, ie. in the next frame, but this
introduces a large amount of jitter.
2. somehow determine the timestamp within the frame at which the event needs
to be played, and delay it till that time arrives (see the sketch below).
For live performance one would typically opt for the first option, unless
the sound system has fairly high latency (which translates into jitter for
the event system!!!!). For the other cases the 2nd option would be preferred,
since the data is either known ahead of time, or one can introduce a small
constant delay.
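
A minimal sketch of what option 2 looks like inside a block-based synth (the
function names are made up, this is only the shape of the idea): render up to
the event's sample offset within the block, apply the event, then render the
remainder.

#include <stdint.h>
#include <stddef.h>

/* render_samples() and apply_event() stand in for the synth's own code. */
extern void render_samples(float *out, size_t nframes);
extern void apply_event(const void *midi_event);

/* The event carries a timestamp; turn it into a sample offset within the
   current block so it takes effect at the right sample instead of at the
   next block boundary. */
void process_block(float *out, uint64_t block_start, uint32_t block_size,
                   const void *event, uint64_t event_frame)
{
    uint32_t offset = 0;

    if (event_frame > block_start)
        offset = (uint32_t)(event_frame - block_start);
    if (offset > block_size)        /* event belongs to a later block */
        offset = block_size;

    render_samples(out, offset);                        /* audio before the event */
    if (offset < block_size)
        apply_event(event);                             /* knob/LFO/note change */
    render_samples(out + offset, block_size - offset);  /* audio after it */
}
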
When sending the data from one app to another, this might even make the
whole timing worse, unless the last one in the chain still has access to the
original timing data, and has the ability to compensate.
The moral of my story is that timing hints won't save your ass, but might
help you if you either know the event ahead of time (eg. a recorded midi
sequence), or can introduce a constant delay. A sequencer system does not
have a crystal ball to predict the future. Being aware of the artifacts of
mixing real-time and frame-based systems might help us build the right
tools for great musical compositions...
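
For the curious, the two ways of handing an event to the alsa sequencer look
roughly like this in alsa-lib (just a sketch; the seq handle, source port and
queue are assumed to be set up already). 'Direct' is the pure router /
real-time style most apps use today, 'scheduled' is the timing-hint style.

#include <alsa/asoundlib.h>

/* 'seq', 'my_port' and 'queue' are assumed to be created elsewhere. */
void send_note(snd_seq_t *seq, int my_port, int queue,
               int scheduled, const snd_seq_real_time_t *when)
{
    snd_seq_event_t ev;

    snd_seq_ev_clear(&ev);
    snd_seq_ev_set_source(&ev, my_port);
    snd_seq_ev_set_subs(&ev);                /* deliver to all subscribers */
    snd_seq_ev_set_noteon(&ev, 0, 60, 100);  /* channel 0, middle C */

    if (scheduled)
        /* timing hint: enqueue with a delivery timestamp so the receiver
           (or the sequencer) can place it at the right moment */
        snd_seq_ev_schedule_real(&ev, queue, 0, when);
    else
        /* real-time / router style: fire it off immediately, no timestamp */
        snd_seq_ev_set_direct(&ev);

    snd_seq_event_output(seq, &ev);
    snd_seq_drain_output(seq);
}
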
ps: I'm still a very lousy keyboard player, and am heading for some serious
beer now. Cheers!
mmmmm beeer (no, beer never really helped my keyboard playing either..
but it improved how I perceived I played)
>
> "Since
> > > the audio sample rate can also be used as a clock master for the alsa
> > > sequencer this would be a good option to ensure synchronisation."
>
> So long as sample-rate is converted into minutes and seconds etc.. to
> allow for adjusting BPM.. but I'm sure this consideration has been taken
> on-board
>
> >
>
>
> On Tue, 2003-06-10 at 23:51, Robert Jonsson wrote:
> > On Tuesday 10 June 2003 13.21, Frank van de Pol wrote:
> > > On Tue, Jun 10, 2003 at 08:30:39AM +0200, Robert Jonsson wrote:
> > > > Hi,
> > > >
> > > > > In fact the bounce feature in MusE is "realtime". It means that you
> > > > > have to wait the real duration of the track to be rendered.
> > > > > In a non "realtime" mode the track is rendered as fast as computer can.
> > > >
> > > > AFAICT the realtimeness of the bounce feature is like that because of
> > > > design constraints. Okay, bouncing wavetracks should be possible in
> > > > non-realtime, but not when using softsynths.
> > > >
> > > > This is because all softsynths use alsa-sequencer as the input interface.
> > > > And if I'm not missing anything, this interface is strictly realtime
> > > > based. (perhaps it can be tweaked by timestamping every note and sending
> > > > them in batches? it seems very hard though.)
> > >
> > > You are right, with the current alsa-sequencer the softsynths are driven by
> > > realtime events. Though an application can enqueue the events to the
> > > priority queues with delivery timestamp, the scheduling is handled
> > > internally by the alsa sequencer. This causes some problems (especially for
> > > sample accurate synchronisation with JACK or LADSPA synth plugins (XAP?)),
> > > but also for network transparency and support for MIDI interfaces which
> > > accept timing hints (Steinberg LTB or Emagic AMT ... if specs of the
> > > protocol were available :-( ).
> > >
> > > During the LAD meeting at Karlsruhe we discussed this and sketched an
> > > alsa-sequencer roadmap that focusses on transition of the alsa-sequencer
> > > from kernel to userspace and better integration with softsynths / JACK.
> > > A few things from this are very much related to your track bouncing /
> > > off-line rendering thing:
> > >
> > > - Provide facility to delegate scheduling to the client. The implementation
> > > would be to deliver the events directly (without queuing) with the
> > > timestamp attached to the registered client port. This would allow the
> > > client to get the events before the deadline (time at which the event
> > > should be played) and use that additional time to put the events at the
> > > right sample position.
> > >
> > > Note that for the softsynth to get advantage of this the application
> > > should enqueue the events (a bit) ahead of time and pass the timestamp.
> > > Some of the current applications (including MusE) use the alsa-sequencer
> > > only as event router and drive it real-time.
> > >
> > > Since the softsynth/plugin has no notion of the actual time (only the
> > > media time and sample position), rendering at arbitrary speed should be
> > > possible: bounce faster than realtime or even slower than realtime for
> > > those complex patches.
> > >
> > > - JACK is real-time, and bound to the sample rate of the soundcard. Since
> > > the audio sample rate can also be used as a clock master for the alsa
> > > sequencer this would be a good option to ensure synchronisation. The
> > > transport of JACK and alsa sequencer can be tied together (either one of
> > > the two acting as master, a run-time configurable option) to provide
> > > uniform transport and media time amongst the applications that hook into
> > > the JACK and/or alsa sequencer framework.
> > >
> > > For the offline rendering no nice scheme has been worked out yet; I guess
> > > it would be something along the lines where the application that owns the
> > > sequencer queue has full control on the transport, moving media time at the
> > > speed the frames are actually rendered, and the app(s) generating the
> > > events keeping at least one sample frame ahead of time.
> > >
> > > Frank.
> >
> > Okay, I didn't know that this had been up on the table, how far has this work
> > progressed, was it just the Karlsruhe meeting or has more thinking occurred?
> > (fyi I'm CC:ing LAD, it might be a more appropriate place for this
> > discussion..).
> >
> > Regards,
> > Robert
--
Son of Zev <sonofzev(a)labyrinth.net.au>