On Monday 29 September 2003 12:50, Rob wrote:
>On Monday 29 September 2003 12:50, Robert Jonsson wrote:
>> Anybody know of an application that allows streaming of midi
>> and/or audio over the net for the purpose of allowing several
>> people to jam together?
>I seem to remember something like that for Windows too, but
>remember that latency that would be more than acceptable for
>gaming (30-40ms) could make it impossible to jam as you're
Yes, I remember someone (non-technical) telling me about this great system
that allowed musicians on both coasts of North America to perform a "live"
piece together, across the internet. It was some kind of university project.
They even pulled out a newspaper or magazine article about it. I read it
through several times. I could not believe that you could get latencies (esp.
cross-continent) down low enough to allow "interactive jamming", where both
sides hear each other in real time.
Let's see (calculating on the back of an envelope): 3000 miles / 186000
miles/second... that's nominally 16msec... that could be tolerable... but
what about slowdown due to dielectric (speed of electric fields is less than
speed in vacuum)? what about delays in electronic circuits? what about
store/forward digital gear? At one point some traffic went via satellite,
which adds 2 x 22K miles (or about 1/4 second). Hmm, that's why the delay on
some speech circuits (like when I phone my sister in the Dominican Republic)
is very noticeable! Lately, I think ground fibre is cheaper (and faster) than
satellite. For the moment, ignoring costs, I'm not even sure the network
wizards are able to splice together a dedicated circuit coast-to-coast with
audio hi-fi stereo bandwidth, even for "proof of concept". Even if they could
(like a permanent phone call?), there would be little point, because in a
digital (internet) network there would be real traffic, and hence variability
(jitter), which can only be smoothed by buffering and delay. Stutter is
usually worse than delay. So, I conclude that I'm mystified! Huh?
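The back-of-envelope figures above can be checked in a few lines. Everything here uses the same assumptions as the text (straight-line distances, vacuum light speed), so treat it as a lower bound, not a measurement:

```python
# Back-of-envelope speed-of-light latencies. Real paths are longer and
# fibre carries light at roughly 2/3 c, so these are optimistic floors.
C_MILES_PER_S = 186_000

coast_to_coast_miles = 3_000
one_way_ms = coast_to_coast_miles / C_MILES_PER_S * 1000
print(f"coast-to-coast, one way: {one_way_ms:.1f} ms")        # ~16 ms

# A geostationary satellite hop adds ~22,000 miles up plus ~22,000 down
sat_hop_ms = 2 * 22_000 / C_MILES_PER_S * 1000
print(f"satellite up + down:     {sat_hop_ms:.0f} ms")        # ~237 ms

# The same coast-to-coast trip in fibre, at ~2/3 c
fibre_one_way_ms = one_way_ms * 3 / 2
print(f"same trip in fibre:      {fibre_one_way_ms:.1f} ms")  # ~24 ms
```

Even the fibre floor, before any router, buffer, or jitter-smoothing delay, is already a noticeable chunk of a 30-40ms "acceptable for gaming" budget.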
Now, if one is only concerned with one-way traffic, one can "cheat"! I
concluded from the article (and thinking hard about it) that they must have
used one site as a "reference site", piped their (partial) performance across
the continent (with whatever additional delay/buffering), and then had the
other orchestra "dub in" their part, and have that played at the 2nd site for
their "live" audience. Or the audience might have been at a 3rd site, at this
point it does not matter, just as long as it's not the 1st site! I seem to
recall that the audience was seated in an auditorium on the 2nd coast. So, yes
they were "playing together" in some sense. And the audience was hearing the
performance "live". However, I cannot believe that the orchestra at the 1st
site was able to hear the 2nd site "live" at the same time they were playing?
Has anyone else heard about this? Details? Thoughts?
p.s. I used to do some sound recording for 16mm newsreel film stuff, decades
ago. Have you (with headphones on) ever tried to speak into a Nagra tape
recorder, monitoring off the tape via a true read head placed after the write
head? You hear yourself about 1/2 second later. I had to take my headphones off, so I
wouldn't hear a delayed "echo". I think that is also true for "real" (long
delay) echo in recording? It can be paralyzing! Is that like stutterers?
p.p.s. I have recently been thinking a bit about psycho-acoustics, as I'm
(re)learning some guitar playing. If you consider nerve transmission speeds,
being able to play those real fast weedle-weedle-weedle guitar leads would
seem impossible. What must be happening is that you are telling your fingers
to move a fraction of a second before they actually move. Now add in the long
echo delay, and I suspect that's too much to handle: 3 time bases: what you
want to play, what you are playing (feel?), and what you hear. Comments?
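To put rough (and entirely assumed) numbers on that: with motor nerve conduction somewhere around 60 m/s and a brain-to-fingertip path under a metre, the command-to-motion delay is a sizeable slice of the gap between notes in a fast run:

```python
# All figures here are assumptions for illustration, not measurements.
nerve_speed_m_s = 60     # assumed motor nerve conduction velocity
arm_length_m = 0.8       # assumed brain-to-fingertip path length
motor_delay_ms = arm_length_m / nerve_speed_m_s * 1000

notes_per_second = 15    # a fast weedle-weedle-weedle lead run
note_interval_ms = 1000 / notes_per_second

print(f"brain-to-finger delay: {motor_delay_ms:.1f} ms")    # ~13 ms
print(f"gap between notes:     {note_interval_ms:.1f} ms")  # ~67 ms
# So each "play this note" command has to be issued a good fraction of
# a note ahead of when the finger actually moves.
```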
Hello again - this may be a better question for the Ardour list, but I can't
get subscribed to that, so...
Does anybody know - does Ardour support assigning tracks to outputs other than
just 1 and 2 - or is this an ALSA issue?
To explain, my goal is to have 24 tracks of audio in Ardour and not mix in
Ardour, but to split the tracks - track 1 goes out on output 1, track 2 on output
2, 3 on output 3, etc. (and the same with the inputs, if possible), so that I
can mix on my Behringer DDX3216 32-channel digital mixer - this is one major
reason why I have an HDSP 9652, because it has 24 channels of ADAT lightpipe
i/o, which I'm taking into the board...
but I don't know if ALSA supports this, and I don't know if Ardour supports
this. All I know is I'm configuring the system, and I just got the HDSP to
make sound for the first time, and when I open up Ardour to see how things
are, I expect to be able to go to the routing screen off the track and see
ALSA with 24 outputs, but still I only see two... I don't know if I'm not yet
properly configured, or if my goal isn't even supported...
when I start Jack (right now anyway), I've been using:
jackd -R -d alsa -p 2048
and it starts just fine, and gives me this message:
You appear to be using the ALSA software "plug" layer, probably
a result of using the "default" ALSA device. This is less
efficient than it could be. Consider using a ~/.asoundrc file
to define a hardware audio device rather than using the plug layer
I have a feeling this has something to do with question 1 - am I right? What
does it have to do with it?
Thanks in advance for any info! :)
For those that were following my HDSP 9652 thread: yep, I got sound. I
downgraded the firmware and have applied Thomas's patch. What's weird,
though, is that somewhere along the way, I don't remember when, I got the new
kernel from Planet. Apparently I kept the old one (the one that ends in
acpi), and was booting with that, and wasn't getting sound, and hadn't yet
really got a clean rebuild of the ALSA drivers. I was working on that, and
had emailed the Planet list with some questions about that. Then, I rebooted
and started what I THINK is the new kernel (it ends in .rh90 on my boot
loader) - it didn't want to deal with the ethernet card, so I couldn't get
online *laugh* - but I was deleting a command in terminal and all of a sudden
for the first time heard a "bloop". So I played back some stuff using
Audacity, Hydrogen and Ardour, and sure enough, sound. Understandably,
things were just coming out every channel, and Audacity played back some
low-res (22kHz) sounds all fuzzy (which is expected, since I don't think the HDSP
likes low-res audio like that), but there was sound. I guess the patch and
new ALSA drivers were talking with the new kernel but not the old. Now I
figure I need to just clean out all the kernels (except the original Red Hat
one, you know, so I can run), and rebuild all of it. Problem is, I'm afraid
of doing that with the new kernel, for fear that the ethernet problem isn't
connected to my dirty messy screwed up builds, and just has to do with the
I am trying to setup my new laptop to be a very friendly environment for
multimedia editing and software development. I have been googling for
several hours now and am having trouble coming to a conclusive decision
over which filesystem to run. I am leaning toward ReiserFS, but have
looked at XFS pretty closely. I have read a little on
ext3, but what I have found seems a little bit dated now. Any opinions
you guys might have would be very welcome. Here is a summary of the
strengths/weaknesses of each of them as far as I can tell.
XFS: Great for a multimedia server. Very fast performance with large
files. Not as fast as ReiserFS with smaller files (e.g. software
development and day-to-day use). Tendency to get corrupted on laptops.
ReiserFS: Seems to be a good balance. Pretty fast performance with large
files, excellent performance with smaller files, very fast at seeking out
the location of a file, robust in a laptop environment.
ext3: slow. Many tests online compared ext3 to XFS and Reiser and found
it failing regularly when used for large file read/writes.
Before I jump in and just do it...
Anyone have any horror stories of ReiserFS on a laptop, or ReiserFS with
Anyone have any anecdotes that illustrate that XFS is:
a. Suitable for use on a laptop now?
b. Suitable for use on a developers box and not just on a
Thanks for *any* insight into these two filesystems.
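Whichever filesystem wins, one mount-option tweak seems relevant to both the laptop and the multimedia cases - a hypothetical /etc/fstab sketch, with made-up device names:

```
# Hypothetical /etc/fstab lines -- device names are made up.
# noatime skips the inode write on every file read, which helps both
# laptop disks (fewer spin-ups) and streaming-audio workloads.
/dev/hda2   /        reiserfs   defaults,noatime   1 1
/dev/hda3   /audio   xfs        defaults,noatime   1 2
```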
> > Anybody know of an application that allows streaming of midi and/or
> > audio over the net for the purpose of allowing several people to jam
> > together?
> i recall ivica bukvic (sp ?) announcing such a tool a while ago, but i
> guess it was a very-high-bandwidth internet2 thing for universities, not
> for home users...
Yeah, it's called Soundmesh and your description is quite correct (including spelling). :-)
As far as the MIDI stuff is concerned, RTMix is capable of that, by relaying MIDI data either to a single client or to a list of clients. Alternately, it can also convert MIDI to OSC and/or to custom events and relay it that way.
Both apps can be found on my site (http://meowing.ccm.uc.edu/~ico/)
P.S. apologies if this is a dupe, been having some problems with webmail access
Hi experts and geniuses.
Please be gentle with a newbie who has been using Mandrake Linux for a couple
of years, but has steered clear of recompiling kernels etc, and just been a
Up until now I've been using Windows as my DAW platform, having a degree in
sound engineering and being a teacher at a teacher training college. Having
followed the progress of the music-apps pretty closely and finally found some
time, I decided to see if I might get my feet wet with linux as a DAW. Since
I'm familiar with Mandrake and found THAC's
RPMs (http://rpm.nyvalls.se/sound9.1.html), I decided to go that route. I've
been lurking on this forum for a couple of months now, and hope to get some
help here in the process.
I got me a new hard disk for my DAW machine so that I wouldn't lose anything.
I installed Mandrake 9.1 (just the basic stuff), then installed Ardour, JACK
and the kernel-multimedia packages from THAC. I then tried to start JACK with
'jackstart -d alsa -d hw:0'
and got the following response:
"jackstart: cannot get realtime capabilities, current capabilities are:
probably running under a kernel with capabilities disabled,
a suitable kernel would have printed something like "=eip"
I did a search on Google, and found out that someone had asked the same
thing on this list, but the answer was: "look in the FAQ" - which I then did.
And the FAQ says "recompile". I thought the idea of RPMs was that I didn't
have to recompile - but I'm probably wrong. I just want to make sure before
I jump into something which I'd rather not do - because it scares me... (I'm
a teacher and hobby sound engineer, not a Linux wizard, but I'd like to
learn.) So do I have to recompile?
Another thing: I have the ST Audio DSP24 C-Port soundcard which was recognised
and everything, but if anyone has a link or some tips on how to use this in
the most effective way under Linux I'd be very happy! In Windows we have a
virtual patchbay for attaching and rerouting the different ins and outs. How
do I do this in Linux?
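From my lurking so far it sounds like JACK's port graph plays the patchbay role under Linux - a sketch I haven't tried yet, with made-up port names:

```shell
# Untested sketch: with jackd running, JACK ports are the virtual
# patchbay (qjackctl gives the same thing graphically). The port names
# below are made up for illustration.
jack_lsp                                         # list every available port
jack_connect ardour:out_1 alsa_pcm:playback_1    # wire an output to a DAC channel
jack_disconnect ardour:out_1 alsa_pcm:playback_1 # and unwire it again
```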
I'd be very grateful for all help! Best regards
Does this card work ok with the latest ALSA? I had a quick look at this,
which says something about the S/PDIF on the front box not working, which
confused me slightly because I didn't realise the card had a front box. I
presume the S/PDIF works ok on the card itself?
thanks, just being careful,
I have it running under SuSE 8.1 with ALSA
I use the Envy24 control utility as a mixer. Seems to work fine with a stereo
this is sooo cute:
Could Ardour support it? The test's closing remarks are funny, too:
"All in all it's hard not to like the CS-32, something this small is
hard to criticize - kind of like being nasty to small children"
Frank Barknecht _ ______footils.org__