Hey,
In case you didn't know, the high-res timers patch (which Ingo calls "a
critical piece of RT infrastructure") has been merged into Ingo's RT
preempt patch. So I think it's safe to assume it will be going into the
mainline kernel too.
This is great news for Linux audio, as timers with resolution finer than
the kernel's HZ tick enable lots of cool features. We will be able to send
a MIDI clock that is as solid as you get from dedicated hardware.
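As a back-of-the-envelope illustration of why timer resolution matters here (my own sketch, not part of the patch): a MIDI clock at 120 BPM ticks 24 times per quarter note, i.e. roughly every 20.8 ms, and scheduling each tick against an absolute deadline keeps the clock from drifting:

```python
import time

PPQN = 24  # MIDI clock pulses per quarter note

def clock_interval(bpm):
    """Seconds between successive MIDI clock ticks."""
    return 60.0 / (bpm * PPQN)

def run_clock(bpm, n_ticks, send_tick=lambda: None):
    """Emit n_ticks MIDI clock ticks, sleeping until absolute
    deadlines so timing errors don't accumulate."""
    interval = clock_interval(bpm)
    deadline = time.monotonic()
    for _ in range(n_ticks):
        send_tick()
        deadline += interval  # absolute deadline, not a relative sleep
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
```

With a coarse HZ tick each of those sleeps can overshoot by milliseconds; with high-res timers the kernel can honour such deadlines far more precisely.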
Lee
Hi,
I played around with extra buffering on the input/output of libconvolve
(new tarball [1]) and updated jack_convolve [1] (it now understands the
--partitionsize=frames argument, which makes it use the specified size
for the partition size instead of the JACK buffer size), and as expected
this doesn't do CPU usage any good.
This is easy to see in an example:
jack buffer size = 1024
partition size = 2048
Now the convolution code is executed only every second JACK process()
cycle. If the DSP usage was previously around 20% in every process cycle,
it's now roughly 25% in every other cycle (estimate).
The solution to even out the load is to use an extra thread [2].
For best performance I would assume that the DSSI needs an extra thread
with RT scheduling (if available) and an RT priority that is lower than
all the other JACK and MIDI threads of, e.g., the DSSI host and other
JACK clients.
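A minimal sketch of that thread handoff (a Python stand-in for illustration; the real code is C, and convolve() here is just a placeholder): the process() callback only exchanges blocks with the worker through queues, and one pre-seeded block of silence provides the extra buffering mentioned in [2]:

```python
import queue
import threading

def convolve(block):
    """Placeholder for the real partitioned convolution."""
    return [2.0 * x for x in block]

in_q, out_q = queue.Queue(), queue.Queue()
out_q.put([0.0])  # one block of silence = the one extra block of latency

def worker():
    # This thread would run with a low RT priority; it does the heavy work.
    while True:
        block = in_q.get()
        if block is None:
            break
        out_q.put(convolve(block))

threading.Thread(target=worker, daemon=True).start()

def process(block):
    """The audio callback: hand work to the worker, return the previous
    result. Only blocks if the worker has fallen behind."""
    in_q.put(block)
    return out_q.get()
```

The first process() call returns the pre-seeded silence; from then on each call returns the worker's result for the previous block, so the convolution's load never lands inside the audio callback itself.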
So basically I have two questions:
a] Is it possible to use threading in a DSSI?
b] Would an RT priority of 1 (for the convolution thread) be an OK
compromise? It would be lower than all the audio stuff on a typical JACK
system. What is jackd's default RT priority again?
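Regarding b], requesting SCHED_FIFO priority 1 for the calling thread looks roughly like this (my own sketch, Linux-specific; it needs RT privileges, so it falls back gracefully without them):

```python
import os

def try_set_rt_priority(prio=1):
    """Request SCHED_FIFO at the given priority for the calling thread.
    Returns True on success, False if we lack RT privileges."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(prio))
        return True
    except PermissionError:
        return False
```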
Regards,
Flo [3]
[1] - http://tapas.affenbande.org/?page_id=5
[2] - Yes, I'm aware that this again needs some extra buffering ;) But
this whole larger-partition-size-than-JACK-buffer-size thing is all about
trading latency for CPU niceness. If the convolution is used as a non-RT
effect (like, e.g., in a DAW on prerecorded material), then latency
doesn't matter as long as the host compensates for it.
[3] - I'll probably be offline from the 12th on, as I can't pay my phone
bill, so be quick with answers ;)
--
Palimm Palimm!
http://tapas.affenbande.org
liboscqs is a library providing a Query System and Service Discovery for
applications using the Open Sound Control (OSC) protocol [1]. The initial
proposal for the OSC Query System was made by Andrew W. Schmeder and
Matthew Wright in July 2004 [2]. Their abstract follows:
A Query System is proposed for inter-application control scenarios. The
queries enable namespace exploration, documentation, type-signature,
return-type-signature and parameter constraint specification, current-value
polling, identification of common interpretation maps via osc-schema, and
error reporting.
See [2] for the full paper describing their proposal. This project is the
result [3] of the various discussions that followed, but remains very close
to the original proposal.
Besides a Query System, this library provides Service Discovery. This allows
applications to announce their presence locally and across a whole computer network.
For more information, source tarballs, RPM packages, and Debian packages please
see the homepage at:
http://liboscqs.sourceforge.net/
- Martin
Factfile:
- liboscqs uses liblo [4] as an OSC server. Thanks Steve!
- The liboscqs source uses the scons [5] build tool
- liboscqs supports Service Discovery using either Howl or Spread
- liboscqs has only been tested on Linux so far, but the intent is to support all
POSIX systems.
- liboscqs is FHS 2.3 compliant.
References:
[1] http://www.cnmat.berkeley.edu/OpenSoundControl/
[2] http://www.opensoundcontrol.org/papers/query_system/
[3] http://liboscqs.sourceforge.net/schema/OSCQS-schema-0.0.1.pdf
[4] http://plugin.org.uk/liblo/
[5] http://www.scons.org/
Howdy Folks:
Would anyone happen to have a link to and/or a copy of the WAVE-EX file format
spec handy? I've tried Googling, but the closest I've come is a (now broken)
link to one of the M$ sites. Searching M$ directly didn't work either.
Thanks!
|-------------------------------------------------------------------------|
| Frederick F. Gleason, Jr. | Director of Broadcast Software Development |
| | Salem Radio Labs |
|-------------------------------------------------------------------------|
| Easiest Color to Solve on a Rubik's Cube: |
| Black. Simply remove all the little colored stickers on the |
| cube, and each side of the cube will now be the original color of |
| the plastic underneath -- black. According to the instructions, this |
| means the puzzle is solved. |
| -- Steve Rubenstein |
|-------------------------------------------------------------------------|
Hi all,
I've got someone violating the license on Secret Rabbit Code. The
offending binary-only download is listed here:
http://pelit.koillismaa.fi/plugins/dsp.php
but I'm having trouble getting a contact email address for
pelit.koillismaa.fi and/or koillismaa.fi.
I've attempted a whois lookup, which directs me to:
https://domain.ficora.fi/
but I still can't find an email address.
Any help that someone may be able to offer would be appreciated.
Cheers,
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo nospam(a)mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
The main confusion about C++ is that its practitioners think
it is simultaneously a high and low level language when in
reality it is good at neither.
Hi,
I've been contemplating the problem of reconciling the different latencies
of various sound sources, but so far the only solutions I have been able
to think of seem somewhat awkward, and googling didn't help much either.
Specifically, here's the issue I'm looking at: I have a sequencer
application that has several MIDI output ports, each connected to some
device that accepts MIDI input. Those devices may have vastly different
latencies (for instance, I may want to use my digital piano, which has no
noticeable latency, together with timidity, which has serious latency even
when started with options like -B2,8, never mind the latency introduced
by my USB sound card), but of course I don't want to hear a time lag between
those devices. I don't mind a little overall latency; the piano may
wait for timidity as long as they're in sync.
I'm currently scheduling all my events through one queue (is that the
recommended method? I've been wondering whether it would make more sense
to have, say, one queue per output port, but I don't see how this would
help), and the only solution I have been able to think of is to explicitly
schedule events for faster devices at a later time. This is clumsy, and
it's exacerbated by the fact that I'd like to schedule events in terms of
ticks rather than milliseconds. Since latencies are usually measured in
milliseconds, that means I have to convert them to ticks, considering
the current tempo of the queue. There's gotta be a better way.
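For what it's worth, the ms-to-ticks conversion the clumsy approach requires is only a line or two (a hypothetical helper, assuming the usual ticks-per-quarter-note tempo model):

```python
def ms_to_ticks(latency_ms, bpm, ppq=480):
    """Convert a latency in milliseconds to sequencer ticks, given the
    current tempo (bpm) and the queue's ticks per quarter note (ppq)."""
    # one quarter note lasts 60000/bpm ms and spans ppq ticks
    return latency_ms * ppq * bpm / 60000.0
```

E.g. 100 ms at 120 BPM and 480 PPQ comes out at 96 ticks; the annoyance is that this value changes whenever the tempo does.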
Ideally, there are two things I'd like to do:
1. Assign a delay dt to each output port, so that an event scheduled
at time t0 will be sent at time t0+dt. Like this, I could compute the
maximum latency of all my devices, and the output port attached to a
device would get a delay of (max latency - latency of device), so
that everything would be in sync.
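Computing those per-port delays is straightforward once the latencies are known (hypothetical names, just to make point 1 concrete):

```python
def port_delays(latency_ms):
    """Map each output port to the delay dt that lines its device up
    with the slowest one: dt = max latency - device latency."""
    max_latency = max(latency_ms.values())
    return {port: max_latency - lat for port, lat in latency_ms.items()}
```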
2. Automatically determine the latencies of the devices I'm talking
to. In theory, this should be possible. For instance, if timidity is
connected to jack, it could get jack's current latency, add its own
latency, and report the result. Is this science fiction?
Any thoughts on these issues would be appreciated!
Best,
Peter
Hello all,
sorry for cross-posting to LAD -- this is just in case there are any
developers out there interested in this topic...
On Fri, 29 Jul 2005, Dan Mills wrote:
>> Imagine the ease and fun of having asterisk hooked up to jack and
>> doing voip;)
> Does anyone know of a SIP or asterisk client that does jack?
This is still very much a work in progress, but the FarSight project
(http://farsight.sf.net) is working to create a library for handling
audio/video calls and conferencing, with multi-protocol (SIP, MSN, etc.)
support, built on top of the GStreamer media framework. I'm involved
with adding SIP support (very much standards-compliant, and open source)
to the project.
And, as GStreamer has JACK support (btw, Andy Wingo from GStreamer was one
of the early members of the JACK team), you will be able to do lots of
nice stuff with this technology (i.e. with apps utilizing FarSight) once the
project matures a bit more. It is still open who will adopt FarSight
first, but it is targeted towards IM apps such as Gaim, aMSN, Kopete,
etc... and who knows what in the end.
If anyone is interested, come and take a look at the project and join the
fun! :) I'm not an official FarSight developer (at least not yet :)), so
detailed questions should probably be directed to the FarSight mailing
lists...
> Ideally something command line that can be controlled via tcp messages?
You can already do some basic audio/video streaming over RTP to/from JACK
using just gst-launch (put together chains of RTP, codec, and JACK
sinks/sources). This can easily be controlled from the command line. But,
as mentioned already, this is still a work in progress...
--
http://www.eca.cx
Audio software for Linux!
The first (alpha) release of JAPA is now available at
<http://users.skynet.be/solaris/linuxaudio>
From the README:
JAPA is a 'perceptual' or 'psychoacoustic' audio spectrum
analyser. This means that the filters that are used to
analyse the spectrum have bandwidths that are neither
constant (as in JAAA), nor proportional to the center
frequency (as in a 1/3 octave band analyser), but tuned
to human perception. With the default settings, JAPA uses
a filter set that closely follows the Bark scale.
In contrast to JAAA, this is more an acoustical or musical
tool than a purely technical one. Possible uses include
spectrum monitoring while mixing or mastering, evaluation
of ambient noise, and (using pink noise) equalisation
of PA systems.
JAPA allows you to measure two inputs at the same time,
compare them, store them to memory and compare them to
stored traces. It offers a number of resolutions, speeds,
and various display options. The dual inputs and memories
will find their way into future JAAA versions as well.
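For reference (my addition, not JAPA's actual filter design), the Bark scale the README mentions can be approximated with the Zwicker & Terhardt formula:

```python
import math

def hz_to_bark(f):
    """Zwicker & Terhardt approximation of the Bark scale:
    z = 13*atan(0.00076*f) + 3.5*atan((f/7500)^2)."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)
```

1 kHz lands near Bark 8.5, and the audible range spans roughly 24 Bark bands, which is why perceptually tuned analysers use on the order of two dozen filters.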
This is a source release. You will also need libclalsadrv,
libclthreads (from the same place), and libfftw3f.
Enjoy !
--
FA