Hi,
Anybody know of an application that allows streaming of midi and/or audio over
the net for the purpose of allowing several people to jam together?
I seem to remember having heard of some such app for windows ages ago... but
that doesn't count does it? ;)
/Robert
trying to compile and install Thomas' hdspmixer 1.3 - anybody know what to do
about the following?:
[root@JamaisQuitteAudio hdspmixer-1.3]# ./configure
loading cache ./config.cache
checking for a BSD compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking whether make sets ${MAKE}... yes
checking for working aclocal... missing
checking for working autoconf... missing
checking for working automake... missing
checking for working autoheader... missing
checking for working makeinfo... missing
checking for c++... c++
checking whether the C++ compiler (c++ ) works... yes
checking whether the C++ compiler (c++ ) is a cross-compiler... no
checking whether we are using GNU C++... yes
checking whether c++ accepts -g... yes
checking whether make sets ${MAKE}... (cached) yes
checking how to run the C preprocessor... cc -E
checking for ANSI C header files... yes
./configure: line 1: fltk-config: command not found
./configure: line 1: fltk-config: command not found
checking for alsa/asoundlib.h... yes
checking for fltk-config... no
configure: error: fltk-config is required
[root@JamaisQuitteAudio hdspmixer-1.3]#
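(configure aborts because fltk-config, which ships with FLTK's development files, isn't on the PATH. A minimal check sketch - the package names in the comments are guesses and vary by distribution:)

```shell
# Does the shell that runs ./configure see fltk-config?
if command -v fltk-config >/dev/null 2>&1; then
    echo "found: $(command -v fltk-config)"
else
    echo "fltk-config not found on PATH"
fi
# If it's missing, install FLTK with its headers first, e.g.:
#   yum install fltk-devel            (Red Hat style - name may differ)
#   apt-get install libfltk1.1-dev    (Debian style - name may differ)
```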
--
--------------
Aaron Trumm
NQuit
www.nquit.com
--------------
Hi,
I run across this problem a lot in Linux. Maybe someone can help me
make my Gentoo system do audio better. All the straight Alsa stuff works
well. However, when I'm browsing around I come to web sites that
probably want to play some audio, but this is Linux and Alsa, so things
don't work easily. At the web site
http://www.skale.org
I start getting this repeating glitch noise every 8 seconds or so. It
hasn't happened every time I've visited the site today, but it's
happened a lot of times.
In hdspmixer I see this glitch on all 26 playback channels, by the
way, so I'm assuming that this means it's some sort of OSS problem, but
I'm not sure.
What's wrong with my setup of Alsa, which in most other ways works
just fine, and why can I also not get mp3 (well, really xmms) to work
under Linux?
BTW - much non-Alsa audio from games works fine, but it also drives all
26 playback channels. Should it really work this way?
Thanks,
Mark
Let me first say I'm not terribly used to compiling Linux apps. I
usually install Debian packages with apt-get. So I'm probably making a
very simple error, but I'm asking here because someone else might well
have built Audacity from source.
So here's the problem:
When I try to compile Audacity 1.2 beta,
I get screenfuls of messages like this:
libaudacity.a(PCMAliasBlockFile.o): In function
`PCMAliasBlockFile::BuildFromXML(wxString, char const **)':
/usr/include/wx/filename.h:100: undefined reference to
`wxFileName::Assign(wxFileName const &)'
/usr/include/wx/filename.h:100: undefined reference to
`wxArrayString::~wxArrayString(void)'
The system is Debian, mostly stable but I've installed quite a few bits
and pieces from testing. In response to earlier problems with
config and compilation I installed:
libwxbase2.4-dev
wxwin2.4-headers
libwxgtk2.4
zlib1g-dev
libwxgtk2.4-dev
But I can't see what's missing now, and I don't understand enough about the
error messages, not being intimately familiar with libwx (it must have
something to do with that).
Any ideas, please?
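(Undefined references to wxFileName/wxArrayString at link time usually mean the wx libraries being linked don't match the headers in /usr/include/wx - e.g. two wx builds installed side by side, or configure picking up the wrong wx-config. A hedged diagnostic sketch, assuming wx 2.4's wx-config script is how the flags get chosen:)

```shell
# Which wxWindows build will the linker actually see?
if command -v wx-config >/dev/null 2>&1; then
    wx-config --version   # should match the 2.4 headers in /usr/include/wx
    wx-config --libs      # the -l flags that get passed to the linker
else
    echo "wx-config not found on PATH"
fi
# Duplicate wx versions are a common culprit on Debian:
#   dpkg -l | grep wx
```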
--
Anahata
anahata(a)treewind.co.uk -+- http://www.treewind.co.uk
Home: 01638 720444 Mob: 07976 263827
On Monday 29 September 2003 12:50, Rob wrote:
>On Monday 29 September 2003 12:50, Robert Jonsson wrote:
>> Anybody know of an application that allows streaming of midi
>> and/or audio over the net for the purpose of allowing several
>> people to jam together?
>
>I seem to remember something like that for Windows too, but
>remember that latency that would be more than acceptable for
>gaming (30-40ms) could make it impossible to jam as you're
>envisioning.
Yes, I remember someone (non-technical) telling me about this great system
that allowed musicians on both coasts of North America to perform a "live"
piece together, across the internet. It was some kind of university project.
They even pulled out a newspaper or magazine article about it. I read it
through several times. I could not believe that you can get latencies (esp.
cross-continent) down low enough to allow "interactive jamming", where both
sides hear each other in real time.
Let's see (calculating on the back of an envelope): 3000 miles / 186000
miles/second... that's nominally 16msec... that could be tolerable... but
what about slowdown due to dielectric (speed of electric fields is less than
speed in vacuum)? what about delays in electronic circuits? what about
store/forward digital gear? At one point some traffic went via satellite,
which adds 2 x 22K miles (or about 1/4 second). Hmm, that's why the delay on
some speech circuits (like when I phone my sister in the Dominican Republic)
is very noticeable! Lately, I think ground fibre is cheaper (and faster) than
satellite. For the moment, ignoring costs, I'm not even sure the network
wizards are able to splice together a dedicated circuit coast-to-coast with
audio hi-fi stereo bandwidth, even for "proof of concept". Even if they could
(like a permanent phone call?), there would be little point, because in a
digital (internet) network there would be real traffic, and hence variability
(jitter), which can only be smoothed by buffering and delay. Stutter is
usually worse than delay. So, I conclude that I'm mystified! Huh?
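(The back-of-the-envelope numbers above can be checked in one line; the figures below are idealized free-space delays at the speed of light, ignoring routers, buffering, and the slower propagation in real fibre:)

```shell
# Idealized one-way propagation delays at 186000 miles/second
awk 'BEGIN {
    c = 186000
    printf "coast-to-coast (3000 mi): %.0f ms\n", 3000 / c * 1000
    printf "via geostationary satellite (2 x 22300 mi): %.0f ms\n", 2 * 22300 / c * 1000
}'
```

which confirms the ~16 ms coast-to-coast figure and the roughly quarter-second satellite hop.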
Now, if one is only concerned with one-way traffic, one can "cheat"! I
concluded from the article (and thinking hard about it) that they must have
used one site as a "reference site", piped their (partial) performance across
the continent (with whatever additional delay/buffering), and then had the
other orchestra "dub in" their part, and have that played at the 2nd site for
their "live" audience. Or the audience might have been at a 3rd site, at this
point it does not matter, just as long as it's not the 1st site! I seem to
recall that the audience was seated in an auditorium on the 2nd coast. So, yes
they were "playing together" in some sense. And the audience was hearing the
performance "live". However, I cannot believe that the orchestra at the 1st
site was able to hear the 2nd site "live" at the same time they were playing?
Has anyone else heard about this? Details? Thoughts?
p.s. I used to do some sound recording for 16mm newsreel film stuff, decades
ago. Have you (with headphones on) ever tried to speak into a Nagra tape
recorder with a true read head after the write head, monitoring off tape? You hear
yourself about 1/2 second later. I had to take my headphones off, so I
wouldn't hear a delayed "echo". I think that is also true for "real" (long
delay) echo in recording? It can be paralyzing! Is that like stutterers?
p.p.s. I have recently been thinking a bit about psycho-acoustics, as I'm
(re)learning some guitar playing. If you consider nerve transmission speeds,
being able to play those real fast weedle-weedle-weedle guitar leads would
seem impossible. What must be happening is that you are telling your fingers
to move a fraction of a second before they actually move. Now add in the long
echo delay, and I suspect that's too much to handle: 3 time bases: what you
want to play, what you are playing (feel?), and what you hear. Comments?
--
Juhan Leemet
Logicognosis, Inc.
GuyCLO~ wrote:
>On Mon, 29 Sep 2003 16:58:22 -0230, Juhan Leemet wrote:
>> Now add in the long
>> echo delay, and I suspect that's too much to handle: 3 time bases: what you
>> want to play, what you are playing (feel?), and what you hear. Comments?
>
>I have read your interesting text, and I think the points you bring are
>correct.
>But I think that the idea should not be dismissed just for the reason that
>it's not suitable for some people/sorts of music. Some people may want to
>play new-age in the same basement/city over IP :)
>In some cases, this app will be very valuable. Is it really hard to use
>aconnect, as Gustavo suggested?
Sorry. It is not my intention to dismiss the idea, but try to understand and
discuss the limits/boundaries. After all, we are constrained by physical
laws. If the (effective) transmission time from L.A. to New York is greater
than some perceptual delay (I don't know enough to say what that is), then
the technique is not going to work (between L.A. and New York, but maybe it
will work between L.A. and Santa Monica?), at least for streaming
high-bandwidth audio. Popular press often gets this stuff wrong, since they
prefer sensationalism. No point attacking a problem that can't be solved. In
the same city, I guess it could be feasible, maybe, but we'll have to try it.
I recall the hoopla about video conferencing, and my computer consulting
client (a telco) basically gave up (at least for their own development
projects) after a couple of years of trying. Calculations all seemed good.
Marketing guys were enthusiastically (over) selling, as usual. In practice,
just didn't work well enough. This client was a telco, so they had available
all the bandwidth they wanted! Delays, jitter, quality, "artifacts" (of
compression?), and just plain unreliability made it horrible. This was for 2
and 3 way tele-conferencing. I think one-way video streaming (such as some
instructional videos) does (sort of) work, and mostly because you can add as
much delay as you want to allow filtering, buffers to catch up, etc.. We are
all being "trained" to accept lower quality video, and jerkier motion, etc.
Hopefully we won't degrade our music into those lame MIDI demos of "blues" or
"jazz". Reminds me of some classical music professors I have heard trying to
form a jazz band: technically great, mathematically precise, theoretically
"faultless", but no soul, and therefore lame and uninteresting. Not trying to
slag classical music professors. I imagine there are some here. "In theory
there is no difference between theory and practice. In practice there is." As
we compress and process, we remove some soul and feel. A lot of contemporary
music seems "over-processed", and therefore sounds all much the same. Maybe
the fault is mine, and I'm being far too demanding (discriminating?).
Internet jamming is easier with MIDI traffic. I had not thought in those
terms, probably because I'm currently (re)learning some guitar, which does
not translate into MIDI very well (expensive rigs, tracking is a problem). My
conception was that jamming = (two-way) "real-time audio streaming" (hard!).
Steve mentioned that some universities have special high-bandwidth,
low-latency interconnections set up. They must be quite special, and I'm not
sure how easily those capabilities will be available to the rest of us.
I am finding this information interesting. Food for thought (and experiment).
I should get back to wrestling with my M-Audio Audiophile 2496, and getting
it to work with various applications. Too many distractions.
--
Juhan Leemet
Logicognosis, Inc.
Hi, I am very stuck. I have been trying hard for a long time to
get digital out sound working on my Gentoo Linux kernel 2.6 test-5
machine. I do not have any conventional speakers, just an optical
digital lead going from an S/PDIF out to an external Dolby Digital receiver.
Can anyone please describe the exact process of getting digital out
working with the ALSA drivers on a 2.6.x kernel? I am aware that the
2.6 kernels come with ALSA included, and I have enabled it and my
via82xx soundcard as modules.
Your advice is welcome.
Q
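(The usual 2.6/ALSA steps can be sketched as a setup fragment; the mixer control name 'IEC958' is an assumption - it varies by card, so check the output of amixer scontrols first:)

```shell
# Load the in-kernel ALSA driver for the VIA chip, then unmute S/PDIF out.
modprobe snd-via82xx
amixer scontrols              # list controls; look for an IEC958/S-PDIF entry
amixer sset 'IEC958' unmute   # 'IEC958' is a guess - use the name listed above
```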
jlc wrote:
>Erik de Castro Lopo wrote on Wed, 01-Oct-2003:
>> On Tue, 30 Sep 2003 13:56:27 +0000 (GMT)
>> glimt <glimt(a)littlebrother.org> wrote:
>>
>>> If people do this, it would be very cool to just have the remote peers
>>> function as inputs into jack.
>>
>> As cool as that sounds, there is something really weird about having
>> something with as much latency as the internet connected to something
>> as low-latency and sample accurate as JACK.
>
>Just think of it as a delay line :)
I think the biggest problem is that it is a (randomly?) _variable_ delay line.
Some early voice-over-internet work was done more than 20 years ago, and it
was deemed "not ready for prime time" mostly because of lack of "guarantee of
service/quality" (i.e. variable latency, and occasional dropout). The
underlying technology has not changed (that) much (if at all?). TCP/IP was
defined in RFCs when? 20 or 30 years ago? Some refinements, granted.
>Sooner or later, one of us will get around to implementing it....
That would be interesting. In general, viz. computers, I believe in the "*nix
credo" which goes something like "many ways to skin a cat". Choose your
favorite, or the one that best fits the situation. The more tools the better.
--
Juhan Leemet
Logicognosis, Inc.
Hello - fernando already asked this on the planet-ccrma list, but does anyone
know if the HDSP 9652 patch has been applied to the alsa cvs yet?
fernando said he rebuilt his alsa stuff from the cvs last night, but we don't
know if that patch had been applied...
--
--------------
Aaron Trumm
NQuit
www.nquit.com
--------------
Okay, I'm trying to be a little more fastidious about finding what's wrong. My HDSP/MF soundcard freezes with several applications, including play, mpg321, and Audacity when playing a soundfile. It always happens at the end of the soundfile, or if I hit Ctrl-C or push the stop button (in Audacity) while the soundfile is playing. It reports a segmentation fault, and then I can't get my card to respond to any app at all (they all say the device is busy or nonexistent). ps aux doesn't show any stuck process at all. When I try /sbin/service alsasound restart, I get this:
/sbin/service alsasound restart
Shutting down sound driver:
snd-pcm-oss: Device or resource busy
snd-mixer-oss: Device or resource busy
snd-hdsp: Device or resource busy
snd-pcm: Device or resource busy
snd-timer: Device or resource busy
snd-rawmidi: Device or resource busy
snd-seq-device: Device or resource busy
snd-hwdep: Device or resource busy
snd: Device or resource busy
[ OK ]
ALSA driver already running
Sound driver snd-hdsp is already loaded
So that doesn't do anything at all. I have none of these problems with my other soundcard (an onboard nvidia chip). Really the only way I've figured out how to fix it is to reboot. Is there a way to debug these things and find out how to fix the segmentation fault? Does anyone know what about the HDSP/MF (or its driver) could be causing it? Is there a way to force ALSA to restart?
Thanks,
M
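(A hedged recovery sketch for the "Device or resource busy" state above - it assumes some process still holds the sound devices open; the module names are taken from the restart log:)

```shell
# See what still has the sound devices open, and kill it if needed...
fuser -v /dev/snd/* /dev/dsp 2>/dev/null
# ...then unload the dependent modules first, the card driver last, and reload:
modprobe -r snd-pcm-oss snd-mixer-oss snd-hdsp
modprobe snd-hdsp
```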