Hello all developers. Sorry in advance for my probably clumsy English.
All stand-alone instruments, processors and other modules controlled through MIDI, as you know, currently have a serious disadvantage compared to audio plugins: the user must remember all the MIDI parameter numbers and, sometimes, values (e.g. Aeolus stop switching) in order to control them via MIDI.
It would be nice if all MIDI-controlled software could send out MIDI parameter changes whenever the user changes those parameters through the software's native GUI. For example, when the user toggles several stops in Aeolus, it would send the appropriate MIDI messages, so that when they are sent back into the instrument, they toggle the same stops.
Users could then use standalone software as easily as audio plugins: Aeolus, Yoshimi/ZynAddSubFX (not sure that all parameters are available via the GUI), Phasex, jack-rack, fst and vsthost (dssi-vst), rakarrack, and many others.
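To make the idea concrete, here is a minimal sketch of such a "GUI change → MIDI echo" mechanism. All names (MidiEcho, the CC assignment) are invented for illustration; a real implementation would write the bytes to an ALSA sequencer or JACK MIDI port instead of a list.

```python
# Hypothetical sketch: echo GUI parameter changes as MIDI Control Change
# messages, so that feeding the same messages back restores the state.

class MidiEcho:
    """Maps named GUI parameters to MIDI CC numbers and emits the raw
    3-byte Control Change message whenever a parameter changes."""

    def __init__(self, channel=0):
        self.channel = channel
        self.cc_map = {}   # parameter name -> CC number
        self.sent = []     # messages "sent" (a real app writes to a MIDI port)

    def register(self, name, cc_number):
        self.cc_map[name] = cc_number

    def set_parameter(self, name, value):
        # 0xB0 | channel is the Control Change status byte
        status = 0xB0 | (self.channel & 0x0F)
        msg = bytes([status, self.cc_map[name], value & 0x7F])
        self.sent.append(msg)
        return msg

echo = MidiEcho(channel=0)
echo.register("stop_principal_8", 20)   # invented CC assignment
msg = echo.set_parameter("stop_principal_8", 127)
print(msg.hex())  # prints "b0147f"
```

Because the emitted messages are ordinary Control Changes, recording them in a sequencer and playing them back would restore the same GUI state, which is exactly the plugin-like behaviour described above.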
Kokkiniza! ;)
The transport issue in Qtractor has an impact on the jitter issue.
So the advice to use amidiplay is something I'll follow soon.
Hi all :), hi Robin :), hi Devin :)
Robin, on 64 Studio 3.3 alpha the group has read and write access
to /dev/hpet too. Btw, '[...] | sudo tee [...]' isn't appropriate for
3.3 alpha, since the root account is enabled there. Anyway, for this
recording test I kept the value 64 for hpet/max-user-freq, but I'll test
higher values soon.
Devin and some others want to know whether the drum module sounds before
FluidSynth DSSI does.
JACK2 doesn't start with -p4096, so I started JACK2 with -Rch -dalsa
-dhw:0 -r44100 -p2048 -n2 to increase the unwanted effect and get a
clear result.
Without recording, it's clearly audible that the external drum module
always plays before FluidSynth DSSI, which is plausible given the audio
latency (a stupid idea to use such a high latency ;), so an audio test
with lower latency is needed to see whether jitter might change which
instrument plays first.
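For reference, the nominal JACK buffering latency behind these settings can be computed directly. This is a back-of-the-envelope sketch only: real round-trip latency also includes converter and driver delays that the formula ignores.

```python
def jack_latency_ms(frames, rate, periods=2):
    """Nominal output latency of JACK's ALSA backend in milliseconds:
    periods * frames / rate (converter/driver delays not included)."""
    return periods * frames / rate * 1000.0

# The two settings used in the tests:
print(round(jack_latency_ms(2048, 44100), 1))  # -p2048 -n2 @ 44.1 kHz -> 92.9
print(round(jack_latency_ms(512, 96000), 1))   # -p512  -n2 @ 96 kHz  -> 10.7
```

So the "high latency" setting buffers roughly 93 ms of audio, which is why the hardware drum module, whose sound doesn't pass through JACK's buffers, is heard so clearly ahead of the soft synth.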
Instead of a rhythm, I recorded four-to-the-floor at 120 BPM to 'see'
the jitter.
I recorded FluidSynth DSSI through the sound card too: left channel the
drum module, right channel FluidSynth DSSI. Judging by Qtractor's
waveform display, there is jitter for both!
Ok, next recording with -Rch -dalsa -dhw:0 -r96000 -p512 -n2.
Without recording, it's already audible that the drum module plays
first every time ...
and it's visible too. Again there's jitter for both. The audio
recording of the drum module is always before the MIDI event. The
recording of FluidSynth DSSI is sometimes before and sometimes after
the MIDI event. There's no offset on the audio track.
I kept -Rch -dalsa -dhw:0 -r96000 -p512 -n2 and recorded FluidSynth
DSSI alone, internally on Linux, without using the sound card.
The audio recordings are before the MIDI events and there's jitter. I
had never noticed jitter internally on Linux before.
I need to repeat the test ASAP, but using 64 Studio 3.0 beta and
perhaps an older version of Qtractor.
When FluidSynth DSSI played by MIDI and the recording made internally
on Linux run in unison, there is no audible jitter. But once playback
has started, sometimes MIDI and the audio recording are perfectly
synced and sometimes there's delay: real delay between the recording
and MIDI, not just an early-reflection-like effect (still without
audible jitter; the jitter is only visible in the waveforms).
$ qtractor -v
Qt: 4.5.2
Qtractor: 0.4.6
More maybe tomorrow.
Cheers!
Ralf
I have three applications that want to use the sound card: two audio stream players and a VoIP phone.
I want to set up Linux so that when a call comes in on the phone, the
OS disconnects the audio players and gives exclusive access to the VoIP
phone, and then reconnects the audio players to the sound card when the
call is done.
How can this be done?
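The desired behaviour amounts to a small priority policy. The sketch below models just that policy logic with invented names (AudioPolicy, the client strings); on a real desktop this role is played by the sound server, e.g. PulseAudio's corking of streams, not by hand-rolled code like this.

```python
class AudioPolicy:
    """Toy arbiter: when the priority client (the VoIP phone) starts,
    all other active clients are corked; when it stops, they resume."""

    def __init__(self, priority_client):
        self.priority_client = priority_client
        self.active = set()      # clients currently using the card
        self.suspended = set()   # clients corked during a call

    def start(self, client):
        if client == self.priority_client:
            # cork every other player for the duration of the call
            self.suspended |= self.active
            self.active = {client}
        elif self.priority_client in self.active:
            self.suspended.add(client)   # wait until the call ends
        else:
            self.active.add(client)

    def stop(self, client):
        self.active.discard(client)
        self.suspended.discard(client)
        if client == self.priority_client:
            # call over: resume the corked players
            self.active |= self.suspended
            self.suspended = set()

policy = AudioPolicy(priority_client="voip")
policy.start("player1")
policy.start("player2")
policy.start("voip")
print(sorted(policy.active))   # ['voip']
policy.stop("voip")
print(sorted(policy.active))   # ['player1', 'player2']
```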
I guess nearly every mail sent to LAD came back with a notice that the
message couldn't be delivered within the specified time limit to
thaytan at noraisin dot net, while no mail was sent to somebody who
obviously has something serious to do with GStreamer.
Perhaps this is unimportant, but since I don't know, I guess it's
better to report this 'issue' or 'non-issue'.
- Ralf
No FireWire here. I once had a MOTU, but I guess there isn't a Linux
driver, and the guy who lent me the MOTU + Mac was Dirk Brauner, who
isn't a friend anymore. I guess the MOTU was audio-only. The people
who are still my friends don't have equipment much different from mine:
always Envy24-based PCI; one friend just has more I/Os on his
Envy24-based PCI card.
> Make sure that the MIDI device is being triggered before the soft
> synth before you post to LAD. If it ends up being the case, then go
> ahead and post it on LAD.
You're right, I was stupid to spread so much speculation.
And yes, with your knowledge you should join LAD.
- Ralf
On Wed, 2010-07-14 at 14:12 -0700, Devin Anderson wrote:
> On Wed, Jul 14, 2010 at 12:43 PM, Ralf Mardorf
> <ralf.mardorf(a)alice-dsl.net> wrote:
> > On Wed, 2010-07-14 at 12:30 -0700, Devin Anderson wrote:
> >> On Wed, Jul 14, 2010 at 10:29 AM, Ralf Mardorf
> >> <ralf.mardorf(a)alice-dsl.net> wrote:
> >>
> >> > Hi :)
> >> >
> >> > delayed by a thunder-storm I could do another test.
> >> > --snip--
> >>
> >> So, what you're saying is that your MIDI device and software synth
> >> sync up less and less as you raise the period size.
> >
> > Yes :).
> >
> >> I had presupposed
> >> before that your MIDI device was triggering *after* your software
> >> synth, but it occurs to me that it might be the other way around. Do
> >> you hear the audio from your software synth first, or from your MIDI
> >> device?
> >
> > I can't say it today, now I do some office work. I had the impression
> > that it might vary. Sometimes the virtual drum sampler and sometimes the
> > standalone drum sampler was played earlier, I need to check this ASAP.
> > For older tests with my USB MIDI device it was exactly that way, that
> > jitter had positive and negative delay. At least the recorded waveforms
> > of external MIDI equipment (when I used USB MIDI, now I'm using PCI
> > MIDI), were recorded by Qtractor, before theoretically the MIDI event
> > was sent ;). Note! Qtractor had no latency compensation; all recorded
> > audio of external MIDI instruments should have had (positive) delay,
> > but it had negative delay.
>
> If it ends up being the case that your MIDI device is being triggered
> before your software synth, then I'm guessing that the issue here is
> not MIDI jitter. I'm guessing the issue is that the latency that's
> imposed by JACK on incoming and outgoing audio is not imposed on
> incoming and outgoing ALSA MIDI. So, while the audio coming out of
> the software synth is delayed by a certain amount of frames imposed by
> JACK, the audio coming out of your MIDI device is only delayed by the
> latency of the ALSA drivers, the latency of the MIDI ports, and the
> latency of your MIDI device.
>
> This would certainly explain why the problem gets worse as you raise
> the period size, and could explain why you had positive and negative
> delay in your older USB MIDI tests, as the reported MIDI jitter in
> your tests was *far* worse in your older tests than it is now.
>
> At the moment, I happen to be doing some work in JACK 2 that could
> potentially solve this issue by enabling MIDI to sync more closely
> with audio, so I'm very curious to know if my suspicions are correct.
> Please keep me updated. :)
Should I build JACK dummy packages for 64 Studio and fetch JACK2 daily
via svn co http://subversion.jackaudio.org/jack/jack2/trunk/jackmp ?
I wonder if this should be cross-posted to LAD?
On LAD and the 64 Studio list there are people with a lot of knowledge,
and your reply might hit the nail on the head.
- Ralf
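Devin's hypothesis quoted above can be put into numbers with a toy model. All figures here are invented for illustration: assume the soft synth's audio is delayed by the full JACK buffer (periods * frames / rate) while the hardware device only sees a small fixed MIDI transmission delay.

```python
def offset_ms(frames, rate, periods=2, midi_path_ms=1.0):
    """How much earlier the hardware MIDI device sounds than the soft
    synth, under the (hypothetical) assumption that the synth's audio
    is delayed by the whole JACK buffer while the MIDI device is only
    delayed by the MIDI transmission path."""
    soft_synth_ms = periods * frames / rate * 1000.0
    return soft_synth_ms - midi_path_ms

for p in (64, 512, 2048):
    print(p, round(offset_ms(p, 96000), 2))
# 64   -> 0.33 ms
# 512  -> 9.67 ms
# 2048 -> 41.67 ms
```

The offset grows linearly with the period size, which matches the observation in the thread that the two instruments drift further apart as -p is raised.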
On 07/14/2010 06:31 PM, Nedko Arnaudov wrote:
> Robin Gareus <robin-+VlDMftONaMdnm+yROfE0A(a)public.gmane.org> writes:
>
>> I was hinting that the audible midi-jitter could be a result of
>> midi-messages getting 'quantized' to jack-periods.
>>
>> A JACK-MIDI app which does not honor 'jack_midi_event_t->time' but
>> simply processes all queued midi-events on each jack_process_callback()
>> will result in the symptoms you describe (snare & kick on the same
>> beat). One example of such an app is "a2j".
>
> What version?
release-4 - the latest on svn://svn.gna.org/svn/a2jmidid
> I can clearly see code that handles this in the current
> version. It is in jack.c, a2j_alsa_output_thread(), lines 385-411
I've just seen that there's git://repo.or.cz/a2jmidid.git and jumped
from release-4 to release-6.
ciao,
robin
--
Robin Gareus mail: robin(a)gareus.org
site: http://gareus.org/ chat: xmpp:rgareus@ik.nu
blog: http://rg42.org/ lab : http://citu.fr/
Public Key at http://pgp.mit.edu/
Fingerprint : 7107 840B 4DC9 C948 076D 6359 7955 24F1 4F95 2B42
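The behaviour Robin describes, a client that ignores `jack_midi_event_t->time` and fires every queued event at the start of the period, can be simulated to show the resulting jitter bound. This is a sketch in plain Python (times in frames), not actual JACK client code.

```python
def naive_playback(event_times, period):
    """Quantize each event to the start of the period it falls in,
    which is effectively what an app ignoring
    jack_midi_event_t->time does."""
    return [(t // period) * period for t in event_times]

def max_jitter(event_times, period):
    """Worst-case timing error introduced by that quantization."""
    played = naive_playback(event_times, period)
    return max(t - p for t, p in zip(event_times, played))

events = [100, 700, 1500, 2047]      # intended frame times within ~2 periods
print(naive_playback(events, 1024))  # [0, 0, 1024, 1024]
print(max_jitter(events, 1024))      # 1023
```

In other words, the quantization error can approach a full period: two events meant for different instants inside one period (snare and kick, in Robin's example) land on exactly the same frame.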
Hi :)
Delayed by a thunderstorm, I could do another test.
64 Studio 3.3 alpha (= Ubuntu Karmic) amd64
LXDE, poff dsl-provider, cpufreq-selector -g performance, chgrp
audio /dev/hpet, sysctl -w dev.hpet.max-user-freq=64, modprobe
snd-hrtimer
Qtractor + HR timer playing FluidSynth DSSI drums and a standalone MIDI
drum module in unison.
$ jackd -Rch -dalsa -dhw:0 -r96000 -p16 -n2
FluidSynth DSSI and the hardware MIDI drum module are playing in unison.
$ jackd -Rch -dalsa -dhw:0 -r96000 -p512 -n2
Borderline, not out of timing, but not fine and already critical.
$ jackd -Rch -dalsa -dhw:0 -r96000 -p256 -n2
Borderline, not out of timing, but not fine.
$ jackd -Rch -dalsa -dhw:0 -r96000 -p128 -n2
Borderline, not out of timing, but not fine.
$ jackd -Rch -dalsa -dhw:0 -r96000 -p64 -n2
Borderline, not out of timing, but not fine.
$ jackd -Rch -dalsa -dhw:0 -r96000 -p32 -n2
Here it might become usable.
$ jackd -Rch -dalsa -dhw:0 -r96000 -p16 -n2
Yes, for this test the problem at -p32 and -p16 is that, because of
phasing, the kick randomly becomes very loud. It seems to depend not on
the sounds but on jitter; anyway, I can't say this for sure, since some
sounds can't be played in unison even if there were no jitter.
Values >= -p64 result in ... I need to hear it again ...
$ jackd -Rch -dalsa -dhw:0 -r96000 -p64 -n2
... 'in unison' becomes a little bit of an early-reflection-like
effect, but with a very, very short delay, still more of a fluctuating
phasing.
$ jackd -Rch -dalsa -dhw:0 -r96000 -p1024 -n2
Completely out of sync, bad timing. Unusable.
I guess even with -p16 one should have all drums and the bass playing
at the same time, even while recording each instrument one after
another. This could be possible even at -p512, but MIDI is completely
out of timing at -p1024, note, at 96 kHz. Short-attack percussive
sounds shouldn't be played in unison.
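If MIDI really is being quantized to JACK periods, the worst-case timing error for each tested setting follows directly from the period size. This is a simple upper-bound sketch; the actual jitter measured above also has other sources.

```python
def period_jitter_ms(frames, rate=96000):
    """Worst-case quantization jitter if MIDI events snap to period
    boundaries: one period, expressed in milliseconds."""
    return frames / rate * 1000.0

for p in (16, 32, 64, 128, 256, 512, 1024):
    print(f"-p{p}: {period_jitter_ms(p):.2f} ms")
```

At -p16 the bound is about 0.17 ms (inaudible, only phasing remains), while at -p1024 it is about 10.7 ms, which is well into clearly audible territory and consistent with the "completely out of sync" verdict for that setting.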
For some uses PCI MIDI seems to be okay on my machine, but it's not
comparable to an Atari ST with SMPTE sync to a 4-track cassette
recorder, or even to a C64 with click sync to a 4-track cassette
recorder. PCI MIDI still has too much jitter for timing straight enough
to record one instrument after the other in perfect sync.
I guess as long as I can keep -p256 (and I won't be able to do much
without increasing this value), hw MIDI could be usable for very simple
music, when the whole MIDI backing plays at the same time but the
instruments are recorded one after the other.
- Ralf
Drumstick is a C++ wrapper around the ALSA sequencer interface, using
Qt4 objects, idioms and style. The ALSA sequencer provides software
support for MIDI on Linux. Complementary classes for SMF and WRK file
processing are also included. This library is used in KMetronome,
KMidimon and KMid2, and was formerly known as "aseqmm".
Changes:
* Removed the precompiled headers build option
* Fixed a bug that affected users running drumstick-based applications
with realtime priority enabled. There is a related problem in glib-2.22
that has not yet been fixed
(https://bugzilla.gnome.org/show_bug.cgi?id=599079). This issue
prevented FluidSynth from being launched from inside KMid at startup on
affected systems.
Copyright (C) 2009-2010, Pedro Lopez-Cabanillas
License: GPL v2 or later
Project web site
 http://sourceforge.net/projects/drumstick
Online documentation
 http://drumstick.sourceforge.net/docs/
Downloads
 http://sourceforge.net/projects/drumstick/files/0.4.1/
Hi,
this is all about making Linux Audio more useful.
The idea came about because, on the one hand, there are parts of Linux
audio that really need some coders' attention and, on the other hand,
there are coders who don't know where to start. I realize that there
are never enough coders, so this is mainly about bringing attention to
the parts that need it most.
To a degree that's what bug/feature trackers are there for, but those
are usually per application, and while category and priority systems
are in place, they are rarely used.
So this is also about bridging the gap between users and developers,
and between applications.
It would be quite simple really.
An easy to find, central place, possibly a wiki or a tracker.
Anyone, most likely a user, describes their workflow and what the
showstopper is. This could be applications not syncing properly, or an
essential but missing feature. The idea is to tackle mainly
infrastructure and cross-application problems, with the goal of making
a workflow actually work.
The user should specify all relevant information available, such as
version information, links, probably some kind of priority or urgency
indication, and how hard they believe it would be.
They could also put up a reward of sorts, not necessarily monetary.
Any developer could pick up the task and work on it, possibly leaving a
notice.
The possible benefits I see are:
a) A kind of overview of what's needed the most, one place where you can
see what's actually important to users.
b) A way to identify and fix problems between applications - something I
believe is very important for a system that encourages the use of
multiple applications at once. I believe there are numerous
synchronisation/transport issues for example which are never really tackled,
despite this being a very important part of the infrastructure.
c) Emphasis on actual workflow and usability.
d) It would work for any program, even those without a tracker and
those that aren't high-profile and aren't usually the center of
attention.
Could this work? What do you think?
--
Regards,
Philipp
--
"We stand disappointed ourselves, and watch dismayed / The curtain closed and all the questions open." Bertolt Brecht, Der gute Mensch von Sezuan (The Good Person of Szechwan)