hi eric!
Eric Dantan Rzewnicki wrote:
> I just read through this again and a few questions popped up:
>
> ices runs on the sound source and connects to jackd, correct? (I think
> this is what you state below, but just want to double check.)
yes. there is a jack graph that is producing sounds. ices-jack is
just another jack client. it does the encoding to ogg and optional
resampling and then sends the encoded stream to an icecast server
(which can be on another machine or local, the transmission between
source and server is via http)
> Have you, or has anyone you know of, run jackd and ices on a box with
> no soundcard using the jackd dummy driver? (more on why below.)
yes, i did that during testing of the lac streaming setup this year.
works nicely.
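for reference, a dummy-backend invocation might look like this (the
sample rate and period size are just example values, not anything
specific to the lac setup):

```shell
# start jackd with the dummy backend -- no soundcard required.
# -r sets the sample rate in Hz, -p the period size in frames.
jackd -d dummy -r 44100 -p 1024
```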
> If I want to stream out ogg vorbis, at which point does the encoding
> happen, on the audio source/ices/jack box or the icecast box?
in the ices process. so if you use two boxen, the big iron should be
the jackd/ices box, since the encoding is computationally expensive.
the streaming can be done by a leftover box - i guess a pentium 233
running icecast can saturate a 100mbit link. you want some memory,
though.
> Other than ices and icecast, do both the icecast box and the ices box
> need all of the svn packages you list above?
the icecast box does not need the *-tools (nor does the ices box,
strictly speaking, but they are nice to have). i described a "bleeding
edge" best-of-xiph.org setup. if you don't want to play with video or
very-low-bandwidth speech streaming, you can omit the speex and theora
packages as well as flac. ices/icecast check for them at compile time -
i wanted all bells and whistles, but if the libs are not present, those
features will be left out with no problems.
you may even get away with using your distro's libvorbis, although i
think ices-kh does require ogg2 and won't compile with plain old ogg.
> > now fire up icecast, fire up ices, connect it to your jack graph,
> > and the fun starts.
> > the default config files are extensively commented, but here's my
> > config, in case you need some more inspiration:
> > http://spunk.dnsalias.org/download/ices.xml
> > http://spunk.dnsalias.org/download/icecast.xml
> > (the source and server run on different hosts, and icecast runs
> > chrooted and as user icecast)
>
> here source = ices, server = icecast? (again, just double checking.)
yes, sorry for my sloppy choice of words.
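in concrete terms, the "fire up and connect" step might look roughly
like this - the config paths and the jack client/port names are made
up for illustration, list your real ports with jack_lsp:

```shell
# on the server box: start icecast with its config file
icecast -c /etc/icecast/icecast.xml &

# on the source box: start ices, then patch it into the jack graph.
# "synth" and the port names are placeholders -- check jack_lsp.
ices /etc/ices/ices.xml &
jack_connect synth:out_1 ices:left
jack_connect synth:out_2 ices:right
```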
> I've planned a streaming self-feeding composition project to
> specifically make use of the repeated vorbis encode/decode artifact
> buildup.
when you do this, consider that under ideal circumstances, re-ogging
a pcm stream that has been ogg-encoded before will not add new
encoding artifacts, unless you reduce the bitrate.
the degradation you hear on my recording is mostly due to repeated
ad/da conversion, a mixer that picks up hard disk hum, and a reduction
of the level....
maybe ogg->wav->mp3->wav and back would be more interesting. the two
codecs should have different data reduction models and would ideally
not only take away information but also create artifacts that differ
from each other.
or maybe add an analog edge to your graph (pun intended :-).
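one generation of such a cross-codec loop could be scripted with the
usual command line tools (oggdec/oggenc from vorbis-tools, lame for
mp3; the file names and bitrate/quality settings are just placeholders):

```shell
# decode the vorbis file to pcm, squeeze it through mp3, re-ogg it.
oggdec -o gen_a.wav input.ogg          # ogg -> wav
lame -b 128 gen_a.wav gen_b.mp3        # wav -> mp3
lame --decode gen_b.mp3 gen_c.wav      # mp3 -> wav
oggenc -q 3 -o output.ogg gen_c.wav    # wav -> ogg, next generation
```

run it in a loop with output.ogg fed back in as input.ogg and the
artifacts should pile up nicely.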
> The basic structure will have:
> 1) a single original source audio file;
> 2) several instances of a script that selects a random section of the
> audio file, either stretches or squishes the section's time scale,
> possibly reverses it, and then loops and plays it out to jack ports
> with various randomly parameterized controllers controlling aspects
> of the sound (panning, volume, etc.);
> 3) ices connected to jack and streaming the output of the several
> instances of 2) to icecast;
you need one ices process per such instance. ices can encode to
different bitrates, but handles only *one* incoming stream (which
can have an arbitrary number of channels).
> 4) a recording script that records from jack ports and writes ogg
> vorbis files to a pool from which the 2)'s will select on subsequent
> iterations.
hmm. the internet transmission does not generate any interesting
additions, except for possibly varying round-trip times or maybe
dropouts....
> The script in 2) will have to decode the ogg files before selecting a
> chunk and manipulating it. I guess since jack only deals with floats
> 3) and 4) will have to encode to vorbis separately, unless 3) can be
> scheduled to start a new dump file, say, every hour.
i don't understand this, but yes, ices can be made to start a new
dump file at any time by sending it a USR1 signal. make it a cronjob
and there you go.
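assuming a single ices process running under your user (and that the
binary is actually called "ices" - the kh branch may differ), the
hourly rotation could be a crontab entry like:

```shell
# crontab -e: send SIGUSR1 to ices at the top of every hour,
# making it close the current dump file and start a new one
0 * * * * pkill -USR1 -x ices
```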
> The whole thing will be given a disk space quota. Probably 4) will
> check this and choose previous output files at random to delete in
> order to stay within the quota.
:)
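a minimal sketch of that pruning step, assuming the pool is a flat
directory of .ogg files (the function name and quota handling are my
invention, not part of ices or icecast):

```shell
#!/bin/sh
# prune_pool DIR QUOTA_KB: delete .ogg files from DIR, picked at
# random, until the directory's total size fits under QUOTA_KB.
prune_pool() {
    pool=$1
    quota_kb=$2
    while [ "$(du -sk "$pool" | cut -f1)" -gt "$quota_kb" ]; do
        # pick a random victim; give up if the pool is already empty
        victim=$(ls "$pool"/*.ogg 2>/dev/null | shuf -n 1)
        [ -n "$victim" ] || break
        rm -f -- "$victim"
    done
}
```

the recording script in 4) could call this right before it writes its
next file.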
> I plan to do 2) & 4) in ecasound python eci scripts. Each instance of
> 2) will, hopefully, run continuously, selecting the duration of its
> runtime on each iteration and simply starting over by selecting a new
> section of audio from the pool of files when it finishes. 4) will
> probably be scheduled hourly via cron. Hopefully I can get this thing
> to run continuously for years on end. Since it won't actually be
> sending any audio out to a soundcard, I would like to run it on my
> fileserver with jackd using the dummy driver.
>
> Anyway, that's the plan. We'll see if I ever get it implemented ...
good luck!
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
- Brian W. Kernighan
Jörn Nettingsmeier
Lortzingstr. 11, 45128 Essen, Germany
http://spunk.dnsalias.org (my server)
http://www.linuxaudiodev.org (Linux Audio Developers)