Paul Winkler writes:
>> That sounds impressive. Someone correct me if I am wrong, but
>> Jack requires X windows for just about everything.
>
>Not at all. Many of the clients do, but the server certainly doesn't.
>Ecasound is a featureful client that doesn't either.
Thank you. I am glad to be wrong about things like that. I
started reading some about Jack and thought it was totally X. I guess
you can say I don't know Jack.
Martin McCormick
Malcolm Baldridge writes:
>Funny, I've written a simple program (derived from the Jack "simple_client")
>recently to do something similar.
That sounds impressive. Someone correct me if I am wrong, but
Jack requires X windows for just about everything.
As a computer user who is blind, X is just not quite ready to
productively use yet.
My program uses the raw /dev/dsp PCM audio device. Without
any ioctl directives, this device reads and writes an 8,000
sample-per-second stream of 8-bit audio.
It is great for communications audio and no good for anything
else. :-)
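A minimal sketch of that read loop, assuming the OSS defaults just
described (untested, error handling mostly omitted):

    /* read the raw device at its defaults: 8,000 samples/s, 8-bit mono */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char buf[4096];
        int fd = open("/dev/dsp", O_RDONLY);
        if (fd < 0) {
            perror("/dev/dsp");
            return 1;
        }
        for (;;) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0)
                break;
            /* buf now holds n unsigned 8-bit samples, 0x80 == silence */
        }
        close(fd);
        return 0;
    }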
My program idles by reading the samples and looking for any
that are a given value above or below 0x80, which is the value one gets
from the A/D converter under silence.
If there is a swing through 0 to the other side of the cycle,
a flag gets set. I make sure that several 0 crossings go by to avoid
transient trips, and then I set the VOX delay timer, which tells the
rest of the program to store samples in the output file.
There is a buffer between the input and output so as to
preserve the waveform and not chop it off.
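The core of that logic might look something like this sketch (constants
and names are hypothetical, not Martin's actual code):

    #include <stdio.h>

    #define THRESHOLD    8      /* counts away from 0x80 that count as signal */
    #define MIN_CROSSES  4      /* zero crossings required before tripping    */
    #define HANG_SAMPLES 16000  /* 2 seconds of VOX hang at 8,000 samples/s   */
    #define PREBUF       2048   /* history kept so the attack isn't chopped   */

    static unsigned char ring[PREBUF];  /* zero-filled until it wraps once */
    static int ring_pos;

    void vox_sample(unsigned char s, FILE *out)
    {
        static int last_side, crossings, hang;

        ring[ring_pos] = s;
        ring_pos = (ring_pos + 1) % PREBUF;

        if (s > 0x80 + THRESHOLD || s < 0x80 - THRESHOLD) {
            int side = (s > 0x80) ? 1 : -1;
            if (last_side && side != last_side)
                crossings++;            /* swung through "0" (0x80) */
            last_side = side;
        }

        if (crossings >= MIN_CROSSES) {
            if (hang == 0) {
                /* just tripped: flush the history, oldest byte first */
                fwrite(ring + ring_pos, 1, PREBUF - ring_pos, out);
                fwrite(ring, 1, ring_pos, out);
            } else {
                fwrite(&s, 1, 1, out);
            }
            hang = HANG_SAMPLES;        /* (re)arm the VOX delay timer */
            crossings = 0;
        } else if (hang > 0) {
            fwrite(&s, 1, 1, out);      /* still inside the hang time */
            if (--hang == 0)
                last_side = 0;          /* expired: back to idle */
        }
    }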
I burn roughly 25 megs for each hour of captured audio.
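Checking the arithmetic: 8,000 samples/second x 1 byte/sample x 3,600
seconds = 28,800,000 bytes, so an hour of continuous audio is closer to
28 MB; with the silent stretches squelched out, 25 MB per hour is
plausible.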
I don't see anything wrong with making a new file for each
recording, but I think there is a limit to the number of files one can
have in a directory. That is the only pitfall I can see.
There is one more bell to this program. If the sample read is
0 or 1 at one extreme or 0xFE or 0xFF at the other, I send the bell
character to warn that the audio is probably going into clipping and
that the input level should be reduced. The warning functions like a peak
indicator.
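That check is tiny; a sketch, assuming the same unsigned 8-bit samples:

    #include <stdio.h>

    /* peak warning: samples at the converter's rails probably clipped */
    void check_clipping(unsigned char s)
    {
        if (s <= 0x01 || s >= 0xFE)
            fputc('\a', stderr);   /* send the bell character */
    }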
>What it does (for now, I'm sure I'll be adding more to it) is:
>
>1) Monitor the command-line specified input port for a sound above the
>squelch level.
>
>2) Apply a configurable DC Bias adjustment [squelch comparison is performed
>after this]
>
>3) Keep track of the peak samples processed to pass a "scaling value" to
>normalise the sound data. [This is also post-DC Bias adjustment.]
>
>3a) I'm also looking at various dynamic compression [gain reduction] strategies.
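A rough sketch of steps 2 and 3 for Jack-style float samples in
[-1.0, 1.0] (hypothetical names, not Malcolm's actual code):

    #include <math.h>

    static float dc_bias = 0.0f;   /* step 2: configurable DC offset  */
    static float squelch = 0.02f;  /* open only above this level      */
    static float peak    = 0.0f;   /* step 3: running peak, post-bias */

    /* returns the scaling value that would normalise to full scale */
    float process_block(float *buf, int nframes)
    {
        for (int i = 0; i < nframes; i++) {
            buf[i] -= dc_bias;         /* DC bias adjustment           */
            float a = fabsf(buf[i]);
            if (a > peak)
                peak = a;              /* squelch/peak are post-bias   */
        }
        return (peak > squelch) ? 1.0f / peak : 1.0f;
    }

A real client would presumably reset the peak for each recording; Jack
hands the client one such float buffer per process callback.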
Hi Mark,
> No, in my experience it doesn't favor louder always.
I'm talking about historical trends, not individual experience, individual
instruments, every single genre, etc.
> had pictures of how volumes have grown over the years
That's it!
> I do not for a second think that this has all happened because the best
> producers were doing a bad job 15-20 years ago.
I agree. (Doesn't everybody?)
> I'm just saying that I've not heard this argued in any other forum.
That's why I brought it up.
> Where do these come from, and why?
See my earlier posting here (nonlinear processes in inner ear).
I'm not claiming that everybody prefers this under all circumstances, just
that there MAY be a general preference for denser spectra that drives this
loudness increase. I also suggest that a preference for better dynamics
occasionally reverses this trend. Other explanations (for the gradual
increase in loudness) I've seen are not very satisfactory, esp. "Oh,
it's just fashionable." I think it's related to the general trend towards
richer orchestration which occurs in many genres.
Hi,
Sorry, but a very non-linux question.
I wonder if anyone here can say anything about how the Windows
version of Audacity works. Is it stable? Does the VST enabler work, and is
it stable? Does it only provide for audio-suite/non-realtime operation
on Audacity audio, or does it allow Audacity to become a VSTi platform
of any type?
Thanks in advance,
Mark
hi everyone!
for those who have not heard it yet, the second international Linux Audio
Conference is taking place at the ZKM Karlsruhe/Germany from 29 April to
2 May 2004. see http://www.zkm.de/lad/ for details.
we have a number of very interesting presentations, all of which
will be streamed out live, for the unlucky folks who can't be here in
person. additionally, you will be able to download the presentation slides
in advance should you wish to follow a lecture.
there will be feedback channels on IRC, operated by folks who are in the
lecture rooms. they will relay questions from you to the live audience.
if all goes well, webcams will upload still images every 30 seconds to
give you an idea of the ambience and of which slide is currently up.
all important information on streaming relays, downloadable material, irc
channels etc. will be dumped to
http://linuxaudiodev.org/eventszkm2004.php3 .
this page will be updated very frequently during the next days.
the streams won't be up until tomorrow morning, but the chat rooms are
already there.
please forward this mail to any interested people. and no, we do not
fear the slashdot effect :)
enjoy,
joern
I have the beginnings of a program I have written that listens
to the digital stream from /dev/dsp and records sound to a file when
there is any sound. Communications types call this a VOX, or Voice
Operated relay, X being the abbreviation for relay.
I also want to make a log of the time each recording started,
which is relatively easy to do, especially if one uses the extensive
time and date functions in UNIX. I think I can even store the stamps
in the binary file with the audio.
Retrieving those data in a meaningful manner, however, seems
to be a problem. I want the playback program to display the time of
the start of each recording as it begins to play.
The /dev/dsp device is buffered, so what really happens when
you play a file is that the file pointer runs ahead of the actual byte
being pushed through the sound card at any given time.
The operating system blocks any more data from being stuffed into
/dev/dsp, but the byte being pushed in may be 64 Kbytes ahead of what
is playing right now. That's the problem. If I wrote a program to
display the time a recording started, you would see that stamp some
variable amount of time from instantly to maybe 8 seconds ahead of
when the corresponding sound came out the speakers.
I have been experimenting with methods of editing binary files
and one has the same problem there in that when you reach the place
you wanted to cut the virtual tape, the data which are flowing into
/dev/dsp are about 8 seconds past that point and it is hard to
tell exactly where to edit.
Is there any function that returns some offset or index value
as to how much buffer is left?
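Under OSS there are ioctls for roughly this: SNDCTL_DSP_GETODELAY
reports how many written bytes are still queued (i.e. how far the file
pointer is ahead of the speaker), and SNDCTL_DSP_GETOSPACE reports how
much room is left; not every driver supports both. A sketch:

    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    /* bytes written to /dev/dsp but not yet played */
    int unplayed_bytes(int fd)
    {
        int delay = 0;
        if (ioctl(fd, SNDCTL_DSP_GETODELAY, &delay) == -1)
            return -1;   /* older drivers may not support this */
        return delay;
    }

    /* room left in the output buffer before write() would block */
    int free_bytes(int fd)
    {
        audio_buf_info info;
        if (ioctl(fd, SNDCTL_DSP_GETOSPACE, &info) == -1)
            return -1;
        return info.bytes;
    }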
The trick to putting time stamps in with the audio is
never to allow a value of 0 to be stored from the A/D converter.
When a new recording starts, store a NULL or 0 in the file followed by
the 32-bit time and date value that UNIX systems store. Audio can
follow after that.
When playing back the file, look for the 0 and then use the
next 4 bytes to recover the time and date. This would be a piece of
cake if not for that variable buffer. The buffer is essential,
however, for proper sound.
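A sketch of that sentinel scheme (4-byte stamps in host byte order;
helper names are made up):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    void write_stamp(FILE *f)
    {
        uint32_t now = (uint32_t)time(NULL);
        fputc(0, f);                      /* 0 never appears as audio */
        fwrite(&now, sizeof now, 1, f);   /* 32-bit UNIX time         */
    }

    void write_sample(FILE *f, unsigned char s)
    {
        if (s == 0)
            s = 1;     /* clamp so 0 stays reserved for stamps */
        fputc(s, f);
    }

    /* on playback: a 0 byte means "the next 4 bytes are a stamp" */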
I think that this buffer is actually the DMA channel at work,
so an indicator that it had run out or would soon run out would also
be useful for those who want to synchronize sound with other
activities.
I have been playing with the VOX program for a couple of years
and it works pretty well for recording radio transmissions, but two
transmissions could be anywhere from a quarter of a second apart to
days apart. The program neatly cuts out the silence, but one has no
idea when each recording actually started. In amateur radio, such
information is useful when unusual skip or propagation conditions are
present or in cases of malicious interference or malfunctions when one
is trying to document when a particular event happened.
For those of you used to recording high fidelity, the 8
seconds I refer to is with mono audio at 8,000 samples per second.
Other sample rates and sample widths will certainly deplete the buffer
much faster.
Martin McCormick WB5AGZ Stillwater, OK
OSU Information Technology Division Network Operations Group
Hello all
I would like to know if anyone knows how I would set up a fluxbox/gentoo
system to stream 96 kHz wave files to multiple websites, remotely, without
installing too much audio software or hardware. There isn't room for an
actual machine, so I'm wondering if I can set up a computerless Linux
system, i.e. just software, no hardware: just gentoo, fluxbox, and the wave
files, that I could hopefully control remotely from my laptop via wireless
ethernet - but again, only wireless ethernet protocol software, not
hardware. I can't afford that hardware, and Linux is open source, so I
figured that if I did a totally soft system, had all my waves, and fluxbox
and gentoo, and controlled it via my laptop, then I wouldn't have to buy
more hardware and I wouldn't have to find space for the server, because the
server would be hardwareless. This would make the applications machine
independent too, I guess, so I was thinking I might develop using my
totally soft Linux system as well.
Also, I can't install fluxbox or gentoo, because I want it to run quickly
and quietly in the background: just a kernel and no X window system, just a
kernel EMULATING an X window system, again, remotely, via the All-Soft
wireless network connection (All-Soft meaning only software, no hardware).
Also, if I could be pointed toward a way to encode my 96 kHz wave files
at 92, 89, 43.2 and 5000.66 kHz, that would be great...
Thanks!
Tim,
> I don't think it's psycho-acoustics, distortion creates more harmonics,
> that's physics CMIIW.
"CMIIW" --- OK:
While it is well-known that distortion creates more frequency components,
in a classic experiment done in 1924 by Wegel and Lane [1], it was shown
that due to the nonlinear processing done by the inner ear, additional
tones could be heard provided that the intensity was loud enough.
That there are psychophysical effects which result in "louder produces more
harmonics" has long been established. If there are also other nonlinear
effects beyond this (nonlinear processing by the brain), then there may
also be psychoacoustic effects that result in "louder produces more
harmonics." This is very likely, not nonexistent.
--------------------
[1] R. Wegel and C. Lane, "The auditory masking of one pure tone by another
and its probable relation to the dynamics of the inner ear," Physical
Review, volume 23, pages 266-285, 1924. Cited in Curtis Roads, The Computer
Music Tutorial, MIT Press, 1996.
Hey Mark,
I don't think it's any accident that people who like distortion guitars also
like them loud. So I think there's more to it than "I'm louder than you."
But yes, there certainly seems to be some of what you say there, unfortunately.
It forces some of us to crunch our stuff more than we want to. Maybe we
should "Just say 'No'?" :-)
Some speculation on the preference for loudness:
I almost posted something on this previously, but decided not to. I suspect
that the "loud is better" actually comes from a desire for denser spectra
and that strategies for providing denser spectra have been thwarted by
some people simply increasing the volume. Increasing the volume can produce
the illusion of a denser spectrum by increasing the amplitude of some
frequencies so that they can be heard, or it may simply cause more things
in the room to rattle. But once you set the volume for the louder songs,
those that actually have denser spectra may sound as though they don't
because many frequencies fall below the threshold and/or don't rattle
anything. They sound thin again. So people creating dense spectra ALSO
need to increase the volume to keep up with the volume settings, sad to say.
There may also be a psychoacoustic effect that louder produces more
harmonics or an illusion of such.
Two trends in the history of pop music that seem to support this denser
spectra idea are 1) Electric guitars; 2) Wall of sound. Occasionally
people back off from these, but the pop charts are full of songs with
dense spectra. I don't know about others, but every time I hear a female
solo voice, I am expecting to hear it followed by something with a dense
spectrum such as power chords. There are those who, of course, do not like
this and prefer purer tones, but even these people often prefer richer tones
that have lots of harmonics, such as organs, over simple sinusoids. If
you look at the development of any particular genre, you can also see (hear)
an increasing spectral density. Often this is accomplished by simply adding
more instruments.