Hi,
I was wondering:
With the new shiny -rt kernels and realtime scheduling available to
non-root users via the usual mechanisms, there's the possibility of
really fine-tuning an audio/MIDI system.
The main issue I am interested in is the interplay between MIDI and
audio in such a system. How to tune the audio side to get a very
reliable system is pretty easy these days, thanks to the great JACK
Audio Connection Kit, ALSA and the new -rt kernels.
But now I wonder how MIDI software fits into this. I'm interested here
in the special case of a software sequencer (e.g. Rosegarden) driving a
softsynth (e.g. om-synth or SuperCollider3) or whatever.
OK, on a normally audio-tuned, -rt equipped Linux system the SCHED_FIFO
priorities used for the different components look something like this:
99 - system timer
98 - RTC
81 - soundcard IRQ handler
80 - jack watchdog
70 - jack main loop
69 - jack clients' process loops
50 - the other IRQ handlers
Now, I wonder how MIDI threads would best fit into this scheme. Let's
assume our MIDI sequencer uses either sleep() or the RTC to get woken
up at regular intervals, and let's further assume that it properly
deals with these timing sources to get relatively jitter-free MIDI
output, given that it gets woken up often enough by the scheduler. I
further assume that the ALSA seq event system is used and that MIDI
events are not queued for future delivery but always delivered
immediately.
All this implies that for MIDI delivery timing not to be influenced by
audio processing on the system (which becomes a problem especially at
large buffer sizes, where quite a bit of work is done at a time), all
the stuff that handles MIDI should run with realtime priorities above
the JACK stuff (i.e. around 90). I wonder whether it also needs a
higher priority than the soundcard IRQ handler. Does the jackd main
loop "inherit" the priority of the soundcard IRQ handler?
Anyway, one more thing to note: for this to work nicely, the softsynth
needs an extra MIDI handling thread that also runs with a priority in
the 90s range, so it can timestamp each event properly when it arrives
(see the sketch below).
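To illustrate what I mean, here's a rough, untested sketch of such a
MIDI thread (event_fifo_push is made up and stands for whatever
lock-free queue hands the timestamped events to the audio thread):
#include <pthread.h>
#include <time.h>
#include <alsa/asoundlib.h>

struct timestamped_event {
    snd_seq_event_t ev;
    struct timespec arrival;
};

// hypothetical handoff to the audio thread; any lock-free FIFO will do
extern void event_fifo_push(const timestamped_event &e);

void *midi_input_thread(void *arg)
{
    // assumes the handle was opened with SND_SEQ_OPEN_INPUT in
    // blocking mode and a readable port was created on it
    snd_seq_t *seq = (snd_seq_t *)arg;

    // run above the JACK threads, as discussed above
    struct sched_param param;
    param.sched_priority = 90;
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);

    while (1) {
        snd_seq_event_t *ev;
        // blocks until an event arrives
        if (snd_seq_event_input(seq, &ev) < 0)
            continue;
        timestamped_event te;
        te.ev = *ev;
        // stamp the arrival time before anything else can delay us
        clock_gettime(CLOCK_MONOTONIC, &te.arrival);
        event_fifo_push(te);
        snd_seq_free_event(ev);
    }
    return 0;
}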
So I wonder now: assuming our system is set up as described above and
all MIDI handling is done from threads with sufficiently high
priorities not to get disturbed by audio stuff, will the ALSA event
system play nice?
I ask this because I have set up a system as above with a simple MIDI
generator (see code below) and some different softsynths, one of which
I have written myself and which does have its MIDI thread at an
appropriate priority. You can get a tarball here:
http://affenbande.org/~tapas/ughsynth-0.0.3.tgz
(Beware: it eats unbelievable amounts of CPU and is in no way
considered finished; it just happened to be lying around for this test
;)). But I still get some regular jitter in my sound.
Here's some recorded example output (running jackd at a period size of
1024, with test notes produced at a frequency of 8 Hz), first with
ughsynth, then with jack-dssi-host hexter.so. The effect is less
prominent with hexter, I suppose because the JACK load with it is only
2 or 3%, as opposed to ughsynth, which uses 50% here on my Athlon 1.2
GHz box. In case you don't hear what I mean: the timing of roughly
every 7th or 8th note is a little bit off.
http://affenbande.org/~tapas/midi_timing.ogg
So I wonder: what's going wrong? Is the priority setup described above
incorrect? Is ALSA seq handling somehow not done with RT priority?
What else could be wrong? Please enlighten me :)
And yeah, I do _not_ want to hear about JACK MIDI. It's a good thing,
and I'm all for it, as it will make at least some scenarios work great
(sequencer and softsynth both being JACK MIDI clients), but not all.
Thanks in advance,
Flo
midi_timer.cc:
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <cstdlib>
#include <iomanip>
#include <pthread.h>
#include <linux/rtc.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
#include <poll.h>
#include <signal.h>
#include <time.h>
#include <alsa/asoundlib.h>
#define RTC_FREQ 2048.0
#define NOTE_FREQ 8.0
#define RT_PRIO 85
int main()
{
    int fd;

    fd = open("/dev/rtc", O_RDONLY);
    if (fd == -1) {
        perror("/dev/rtc");
        exit(errno);
    }

    // program the RTC to deliver periodic interrupts at RTC_FREQ Hz
    int retval = ioctl(fd, RTC_IRQP_SET, (int)RTC_FREQ);
    if (retval == -1) {
        perror("ioctl");
        exit(errno);
    }

    std::cout << "locking memory" << std::endl;
    mlockall(MCL_CURRENT);

    snd_seq_t *seq_handle;
    int err, port_no;

    err = snd_seq_open(&seq_handle, "default", SND_SEQ_OPEN_OUTPUT, 0);
    if (err < 0) {
        std::cout << "error" << std::endl;
        exit(0);
    }

    // set the client name to something reasonable..
    std::string port_name = "midi_timer";
    err = snd_seq_set_client_name(seq_handle, port_name.c_str());
    if (err < 0) {
        std::cout << "error" << std::endl;
        exit(0);
    }

    // this is the port others can connect to. we don't autoconnect ourselves
    err = snd_seq_create_simple_port(seq_handle, "midi_timer:output",
        SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
        SND_SEQ_PORT_TYPE_MIDI_GENERIC);
    if (err < 0) {
        std::cout << "error" << std::endl;
        exit(0);
    }
    // on success the return value is our port number
    port_no = err;

    // switch this thread to SCHED_FIFO at RT_PRIO
    struct sched_param param;
    int policy;
    pthread_getschedparam(pthread_self(), &policy, &param);
    param.sched_priority = RT_PRIO;
    policy = SCHED_FIFO;
    pthread_setschedparam(pthread_self(), policy, &param);

    std::cout << "turning irq on" << std::endl;
    retval = ioctl(fd, RTC_PIE_ON, 0);
    if (retval == -1) {
        perror("ioctl");
        exit(errno);
    }

    snd_seq_event_t ev;
    unsigned long data;
    int ticks_passed = 0;

    while (1) {
        // block until the next RTC interrupt
        retval = read(fd, &data, sizeof(unsigned long));
        if (retval == -1) {
            perror("read");
            exit(errno);
        }
        if ((float)ticks_passed >= (RTC_FREQ / NOTE_FREQ)) {
            ticks_passed -= (long int)(RTC_FREQ / NOTE_FREQ);
            // play a note, delivered immediately to all subscribers
            snd_seq_ev_clear(&ev);
            snd_seq_ev_set_direct(&ev);
            snd_seq_ev_set_subs(&ev);
            snd_seq_ev_set_source(&ev, port_no);
            ev.type = SND_SEQ_EVENT_NOTEON;
            ev.data.note.note = 53;
            ev.data.note.velocity = 100;
            snd_seq_event_output_direct(seq_handle, &ev);
            snd_seq_drain_output(seq_handle);
        }
        // the low byte holds the interrupt type; the remaining bytes
        // count the interrupts since the last read
        data = (data >> 8);
        ticks_passed += data;
    }
    return 0;
}
--
Palimm Palimm!
http://tapas.affenbande.org
> are you using it in a professional environment? So far it's been used only
> for home/hobbyist situations, and I would be really interested to hear
> about any use in a more professional situation.
Denis,
My application is that of a "non-commercial" mastering studio. I do a bit of
mastering for my friends in Nashville, but so far I've only worked with the
rough mixes, nothing that's made it onto a record.
The reason I'm so excited about DRC is that it transforms my
not-mastering-quality Klipschorns into something that is very accurate, or at
least measures so, and in my opinion is comparable to the high-end mastering
and mixing studios I've visited.
For those who don't know, Klipschorns use a folded horn to load a 15" woofer
and conventional horn-loaded midrange and tweeter into a cabinet that fits
tightly in the corner of the room. They have extremely high efficiency and
low distortion, but abysmal phase and frequency response. Luckily, these are
EXACTLY the things that DRC is designed to fix. Most speakers have 10% or so
distortion at low frequencies and moderate listening levels. As far as I
know, this can't be removed by any sort of electronic correction. With this
setup and some modest room treatment, I've got very low distortion, AND flat
frequency response.
-Ben Loftis
http://www.harrisonconsoles.com
http://www.studiooutfitters.com
I am trying to write an application that will monitor the volume level
coming in through the line-in jack on my soundcard. Is this possible
using ALSA, and if so, does anyone know of any examples, or am I in the
wrong place?
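To make it concrete, here is the kind of thing I imagine (a rough,
untested sketch using the ALSA PCM API; the device name "default" and
all the parameters are guesses on my part):
#include <iostream>
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <alsa/asoundlib.h>

int main()
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 44100;
    short buf[1024]; // mono, 16 bit

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0) {
        std::cerr << "cannot open capture device" << std::endl;
        exit(1);
    }
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16);
    snd_pcm_hw_params_set_channels(pcm, hw, 1);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
    if (snd_pcm_hw_params(pcm, hw) < 0) {
        std::cerr << "cannot set hw params" << std::endl;
        exit(1);
    }

    while (1) {
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 1024);
        if (n < 0) {
            snd_pcm_prepare(pcm); // try to recover from an overrun
            continue;
        }
        // peak level of this block, printed in dBFS
        int peak = 0;
        for (snd_pcm_sframes_t i = 0; i < n; ++i)
            peak = std::max(peak, abs((int)buf[i]));
        double db = 20.0 * std::log10((peak + 1) / 32768.0);
        std::cout << "peak: " << db << " dBFS" << std::endl;
    }
    return 0;
}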
Greetings:
Last month I updated the VST/Linux tutorial at
http://www.djcj.org/LAU/quicktoots/toots/vst-plugins/. Due to
circumstances, the updated page has only recently gone on-line, but no
further material needed to be added.
Please note that this update is the last that will be done for that
tutorial. As you can read on the updated page, I now feel that the FST
and DSSI projects are the currently preferred solutions for using
VST/VSTi plugins under Linux. Both systems are easy to build and use, so
I'll leave further explication to the developers of those projects.
I suppose at some point someone should add some material regarding the
use of VST plugins with Ardour, but it should really go on the Ardour
wiki (or maybe it is already there?).
Thanks to Patrick Shirkey for hosting this tutorial on his djcj site.
Best regards,
dp
Hi all,
I hope this is a simple question. I'm trying to compile ReZound on OS X,
and I'm getting this:
../../../config/platform/platform.h:10:3: warning: #warning no platform
determined!
Which leads to some other troubles later on.
So I've created a config/platform/darwin.h file (based on the bsd.h
file) which starts like this:
#ifndef __rez_platform_darwin_H__
#define __rez_platform_darwin_H__
#if defined(__Darwin)
#define rez_OS_DARWIN
#endif
And added the rez_OS_DARWIN check to the platform.h file:
#ifndef __platform_H__
#define __platform_H__
#include "linux.h"
#include "solaris.h"
#include "bsd.h"
#include "darwin.h"
#if !defined(rez_OS_DARWIN) && !defined(rez_OS_LINUX) && \
    !defined(rez_OS_SOLARIS) && !defined(rez_OS_BSD)
#warning no platform determined!
#endif
#endif
But I still get the warning. In the darwin.h file, I have tried various
different spellings for the Darwin test:
(__darwin), (darwin), (__Darwin__), etc. etc.
With no result. Can anybody shed some light on the right way to detect
the platform in this situation?
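(One possibly relevant thing I came across but haven't tried yet: GCC
on Mac OS X is said to predefine __APPLE__ and __MACH__ rather than any
__Darwin variant, so perhaps darwin.h should test those instead:
#ifndef __rez_platform_darwin_H__
#define __rez_platform_darwin_H__

// GCC on Mac OS X predefines __APPLE__ and __MACH__
#if defined(__APPLE__) && defined(__MACH__)
	#define rez_OS_DARWIN
#endif

#endif
Corrections welcome if that's off the mark.)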
thx + happy new year,
derek
--
derek holzer ::: http://www.umatic.nl
---Oblique Strategy # 81:
"Go to an extreme, move back to a more comfortable place"
Hi,
I have a question for some audio professionals out there.
What is the smallest sensible gain control step in dB?
Is it 0.5 dB?
I am asking because, if one is using a digital gain control in a 24-bit
fixed-point DSP, one could use almost any step size, so I am looking
for the smallest sensible size to use.
Some people mentioned on a previous thread that there is something
called soft gain control, where the user moves the gain up a step, but
the mixer gradually (fairly quickly) adjusts the volume to the new
level, so no clicks are heard on the speakers. How do these soft gain
controls prevent the clicking? Do they wait for the zero crossing point
to adjust the gain?
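My current guess is that ramping the gain a little per sample avoids
the need to hunt for zero crossings entirely; something like this
rough, untested sketch (all the names are my own invention):
// soft_gain.cc (sketch): instead of jumping to the new gain, move a
// small fixed amount towards it every sample, so the change is smeared
// over a few milliseconds and no audible step (click) appears.
struct SoftGain {
    float current; // gain actually applied right now (linear)
    float target;  // gain the user asked for (linear)
    float step;    // max change per sample

    void set_target(float t) { target = t; }

    void process(float *buf, int nframes) {
        for (int i = 0; i < nframes; ++i) {
            // slew current towards target, one small step per sample
            if (current < target)
                current = (current + step < target) ? current + step : target;
            else if (current > target)
                current = (current - step > target) ? current - step : target;
            buf[i] *= current;
        }
    }
};
With e.g. step = 1.0f / (0.010f * 44100.0f), a full ramp from zero to
unity would take about 10 ms, which I imagine is fast enough to feel
immediate but slow enough to avoid a click. Is that roughly how it's
done?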
James
tom christie <christie.tom(a)gmail.com> writes:
> I've written a little template program that just reads from one
> audio file and writes to another, using the sox library stlib.a.
> It's pretty simple to change it to do what you want.
Thanks, I'll give it a whirl. I still think that the sox executable
should be able to trim just the zeros, at least at the start of the
file, and that the docs are hard to understand and missing material,
but listing these complaints is all I feel capable of doing. I strongly
suspect that the code does something different from what it is supposed
to do, but I find it impossible to figure out what it is supposed to
do, either. So I might be wrong about that.
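For what it's worth, the behavior I would expect, dropping only the
all-zero frames at the very start, seems simple enough; a rough sketch
(not sox code, just my reading of what "trim leading zeros" should
mean):
#include <vector>
#include <cstddef>

// return the index of the first frame in which any channel is nonzero;
// trimming would keep everything from that frame onwards
size_t first_nonzero_frame(const std::vector<short> &samples, int channels)
{
    size_t frames = samples.size() / channels;
    for (size_t f = 0; f < frames; ++f)
        for (int c = 0; c < channels; ++c)
            if (samples[f * channels + c] != 0)
                return f;
    return frames; // the whole file is silence
}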
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Hi list(s),
just a short reminder: The Call for Papers, Call for Music etc. for the
4th International Linux Audio Conference (LAC2006) is still running until
January 8th, 2006. That means for those of you who drive home for Christmas
like me - use the time wisely :-). Write a paper, compose a piece, think
of a software demo - and submit your entry in time.
See all the details at: http://lac.zkm.de
Hoping to hear from you (and then see you) soon,
Goetz Dipper & Frank Neumann
LAC2006 Organization
Some colleagues of mine need a tool for encoding and decoding
high-order Ambisonics for their research. They are aiming for seventh
order, played back over sixteen loudspeakers. They are now planning to
implement this using either JACK or LADSPA.
As I have some experience with Linux audio, I got involved. So I am
seeking some advice and answers here. The first thing I would like to
know is: does anything like this already exist?
(I think there are some LADSPA plugins for Ambisonics, but I am not
aware of how high an order they support.)
Secondly, what would be the better choice, JACK or LADSPA? We are
probably going to need some GUI on it to configure the thing. And we
would like the flexibility to alter the encoding/decoding coefficients,
turn some filtering on and off, and so on.
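To be explicit about what I have in mind (and please correct me if this
is wrong): for the horizontal-only case an order-M encoder produces
2M+1 channels, the m-th harmonic pair being cos(m*theta) and
sin(m*theta) of the source azimuth, ignoring the normalization
conventions (such as the 1/sqrt(2) on W), which vary between schemes. A
rough, untested sketch:
#include <cmath>

const int ORDER = 7;                  // seventh order
const int N_CHANNELS = 2 * ORDER + 1; // 15 channels, horizontal only

// encode a mono source at azimuth theta (radians) into out[0..14];
// out[0] is the omnidirectional W-like channel, then one cos/sin pair
// per harmonic m = 1..ORDER
void encode(const float *in, float **out, int nframes, float theta)
{
    for (int i = 0; i < nframes; ++i) {
        out[0][i] = in[i];
        for (int m = 1; m <= ORDER; ++m) {
            out[2 * m - 1][i] = in[i] * std::cos(m * theta);
            out[2 * m][i]     = in[i] * std::sin(m * theta);
        }
    }
}
If that is right, sixteen loudspeakers should be enough for a basic
horizontal decode at seventh order (at least 2M+1 = 15 are needed),
whereas full 3D at that order would need (M+1)^2 = 64 channels and
correspondingly many speakers, so I assume horizontal-only is what is
intended.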
With kind regards
Asbjørn