Hi,
I played around with extra buffering of the input/output of libconvolve
(new tarball [1] and updated jack_convolve [1], which now understands the
--partitionsize=frames argument, making it use the specified size
for the partitions instead of the jack buffer size), and as
expected this doesn't do the CPU load any good.
Easy to see in this example:
jack_buffersize = 1024
partitionsize = 2048
Now the convolution code is executed only every second jack process()
cycle. If the DSP usage was previously ca. 20% in every process cycle,
it's now ca. 25% in every other cycle (estimate).
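To make the arithmetic explicit, here is a small sketch (Python; the 20% figure and the efficiency gain of larger FFT partitions are illustrative assumptions, not measurements):

```python
# Sketch: with partitionsize = 2 * jack_buffersize, the convolution runs
# only in every second process() cycle, so all of its work lands in that
# cycle. The numbers below are illustrative, not measured.

def load_profile(base_load, partition_factor, efficiency_gain):
    """base_load: DSP load per cycle when partition size == buffer size.
    partition_factor: partition size / jack buffer size.
    efficiency_gain: < 1.0, models the per-sample savings of larger FFTs.
    Returns (peak load in the busy cycle, average load per cycle)."""
    total = base_load * partition_factor * efficiency_gain
    return total, total / partition_factor

# 20% per cycle before; assume larger partitions save ~37% of total work:
peak, avg = load_profile(0.20, 2, 0.625)
print(f"peak {peak:.0%} in every other cycle, {avg:.1%} on average")
```

The average load drops, but the peak in the busy cycle is what the RT scheduler sees.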
The solution for evening out the load is to use an extra thread [2].
For best performance I would assume that the DSSI needs an extra thread
with RT scheduling (if available) and an RT priority lower
than those of all the other jack and midi threads, e.g. of the DSSI host
and other jack clients.
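A minimal sketch of the thread handoff, with Python's threading module standing in for pthreads; the FIFO names and the dummy workload are invented, and a real DSSI would use lock-free ring buffers and set SCHED_FIFO on the worker:

```python
# Minimal sketch of handing the convolution work to a separate thread so
# the audio callback's load stays even. A real DSSI would use a lock-free
# ring buffer and give the worker SCHED_FIFO priority (e.g. 1); here a
# plain queue.Queue stands in, and the "convolution" is a dummy doubling.
import queue
import threading

in_fifo = queue.Queue()    # audio thread -> worker
out_fifo = queue.Queue()   # worker -> audio thread

def worker():
    while True:
        block = in_fifo.get()
        if block is None:                     # shutdown sentinel
            break
        out_fifo.put([2 * x for x in block])  # dummy "convolution"

t = threading.Thread(target=worker, daemon=True)
t.start()

def process(block):
    """What the RT callback would do: push the new input block and pop a
    result the worker produced earlier; the extra period of buffering is
    the price of the decoupling."""
    in_fifo.put(block)
    return out_fifo.get()  # a real RT callback must never block here

result = process([1, 2, 3])
print(result)              # [2, 4, 6]
in_fifo.put(None)
t.join()
```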
So basically I have two questions:
a] Is it possible to use threading in a DSSI?
b] Would an RT priority of 1 (for the convolution thread) be an OK
compromise? It should be lower than all the audio stuff on a typical jack
system. What is jackd's default RT priority again?
Regards,
Flo [3]
[1] - http://tapas.affenbande.org/?page_id=5
[2] - yes, I'm aware that this again needs some extra buffering ;) But
this whole larger-partition-size-than-jack-buffer-size thing is all about
trading latency for CPU niceness. If the convolution is used as a non-RT
effect [e.g. in a DAW on prerecorded material], then latency
doesn't matter as long as the host compensates for it.
[3] - I'll probably be offline from the 12th on, as I can't pay my phone
bill, so be quick with answers ;)
--
Palimm Palimm!
http://tapas.affenbande.org
liboscqs is a library providing a Query System and Service Discovery for
applications using the Open Sound Control (OSC) protocol [1]. The initial
proposal for the OSC Query System was made by Andrew W. Schmeder and
Matthew Wright in July 2004 [2]. Their abstract follows:
A Query System is proposed for inter-application control scenarios. The
queries enable namespace exploration, documentation, type-signature,
return-type-signature and parameter constraint specification, current-value
polling, identification of common interpretation maps via osc-schema, and
error reporting.
See [2] for the full paper describing their proposal. This project is the
result [3] of the various discussions that followed, but remains very close
to the original proposal.
Besides a Query System, this library provides Service Discovery, which allows
applications to announce their presence both locally and across a whole
computer network.
For more information, source tarballs, RPM packages, and Debian packages please
see the homepage at:
http://liboscqs.sourceforge.net/
- Martin
Factfile:
- liboscqs uses liblo [4] as an OSC server. Thanks Steve!
- The liboscqs source uses the scons [5] build tool
- liboscqs supports Service Discovery using either Howl or Spread
- liboscqs has only been tested on Linux so far, but the intent is to support
all POSIX systems.
- liboscqs is FHS 2.3 compliant.
References:
[1] http://www.cnmat.berkeley.edu/OpenSoundControl/
[2] http://www.opensoundcontrol.org/papers/query_system/
[3] http://liboscqs.sourceforge.net/schema/OSCQS-schema-0.0.1.pdf
[4] http://plugin.org.uk/liblo/
[5] http://www.scons.org/
Howdy Folks:
Would anyone happen to have a link to and/or copy of the WAVE-EX file format
spec handy? I've tried Googling, but the closest I've come is a (now broken)
link to one of the M$ sites. Searching M$ directly didn't work either.
Thanks!
|-------------------------------------------------------------------------|
| Frederick F. Gleason, Jr. | Director of Broadcast Software Development |
| | Salem Radio Labs |
|-------------------------------------------------------------------------|
| Easiest Color to Solve on a Rubik's Cube: |
| Black. Simply remove all the little colored stickers on the |
| cube, and each of side of the cube will now be the original color of |
| the plastic underneath -- black. According to the instructions, this |
| means the puzzle is solved. |
| -- Steve Rubenstein |
|-------------------------------------------------------------------------|
Hi all,
I've got someone violating the license on Secret Rabbit Code. The
offending binary-only download is listed here:
http://pelit.koillismaa.fi/plugins/dsp.php
but I'm having trouble getting a contact email address for
pelit.koillismaa.fi and/or koillismaa.fi.
I've attempted a whois which directs me to:
https://domain.ficora.fi/
but I still can't find an email address.
Any help that someone may be able to offer would be appreciated.
Cheers,
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo nospam(a)mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
The main confusion about C++ is that its practitioners think
it is simultaneously a high and low level language when in
reality it is good at neither.
Hi,
I've been contemplating the problem of reconciling different latencies
of various sound sources, but so far the only solutions I have been able
to think of seemed sort of awkward, and googling didn't help much, either.
Specifically, here's the issue I'm looking at: I have a sequencer
application that has several MIDI output ports, each connected to some
device that accepts MIDI input. Those devices may have vastly different
latencies (for instance, I may want to use my digital piano, without any
noticeable latency, together with timidity, which has serious latency,
even when started with options like -B2,8, never mind the latency introduced
by my USB sound card), but of course I don't want to hear a time lag between
those devices. I don't mind a little bit of overall latency; the piano may
wait for timidity as long as they're in sync.
I'm currently scheduling all my events through one queue (is that the
recommended method? I've been wondering whether it would make more sense
to have, say, one queue per output port, but I don't see how this would
help), and the only solution I have been able to think of is to explicitly
schedule events for faster devices at a later time. This is clumsy, and
it's exacerbated by the fact that I'd like to schedule events in terms of
ticks rather than milliseconds. Since latencies are usually measured in
milliseconds, that means I have to convert them to ticks, considering
the current tempo of the queue. There's gotta be a better way.
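The ms-to-ticks conversion itself is only a few lines, though it does have to track the queue's current tempo; a sketch (the tempo and PPQ values are made-up examples):

```python
def ms_to_ticks(ms, tempo_bpm, ppq):
    """Convert milliseconds to sequencer ticks at the current tempo.
    ppq: ticks per quarter note (the queue's resolution)."""
    ms_per_tick = 60000.0 / (tempo_bpm * ppq)  # one quarter lasts 60000/bpm ms
    return round(ms / ms_per_tick)

# at 120 BPM and 96 PPQ a tick lasts ~5.2 ms, so 250 ms is 48 ticks:
print(ms_to_ticks(250, 120, 96))  # 48
```

The annoyance the text describes is exactly this: the result is only valid until the next tempo change, so every latency has to be re-derived whenever the tempo moves.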
Ideally, there are two things I'd like to do:
1. Assign a delay dt to each output port, so that an event scheduled
at time t0 will be sent at time t0+dt. Like this, I could compute the
maximum latency of all my devices, and the output port attached to a
device would get a delay of (max latency - latency of device), so
that everything would be in sync.
2. Automatically determine the latencies of the devices I'm talking
to. In theory, this should be possible. For instance, if timidity is
connected to jack, it could get jack's current latency, add its own
latency, and report the result. Is this science fiction?
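Point 1 above is mostly bookkeeping; a sketch with invented latency values (none of these numbers come from real devices):

```python
def port_delays(latencies_ms):
    """Give each output port a delay so every device lines up with the
    slowest one: an event scheduled at t0 is sent at t0 + delay."""
    slowest = max(latencies_ms.values())
    return {port: slowest - lat for port, lat in latencies_ms.items()}

# invented example latencies in ms:
latencies = {"piano": 5, "timidity": 180, "usb_card": 40}
print(port_delays(latencies))
# {'piano': 175, 'timidity': 0, 'usb_card': 140}
```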
Any thoughts on these issues would be appreciated!
Best,
Peter
Hello all,
sorry for cross-posting to lad -- this is just in case there are any
developers out there interested in this topic...
On Fri, 29 Jul 2005, Dan Mills wrote:
>> Imagine the ease and fun of having asterisk hooked up to jack and
>> doing voip;)
> Does anyone know of a SIP or asterisk client that does jack?
This is still very much a work in progress, but the FarSight project -
http://farsight.sf.net - is working to create a library for handling
audio/video calls and conferencing, with multi-protocol (SIP, MSN, etc.)
support, built on top of the gstreamer media framework. I'm involved
with adding SIP support (very much standards compliant, and open source) to
the project.
And, as gstreamer has JACK support (btw, Andy Wingo from gstreamer was one
of the early members of the JACK team), you will be able to do lots of
nice stuff with this technology (= with apps utilizing FarSight) once the
project matures a bit more. It is still open who will adopt FarSight
first, but it is targeted towards IM apps such as Gaim, aMSN, Kopete,
etc... and who knows what in the end.
If anyone is interested, come and take a look at the project and join the
fun! :) I'm not an official FarSight developer (at least not yet :)), so
detailed questions should probably be directed to the FarSight mailing
lists...
> Ideally something command line that can be controlled via tcp messages?
You can already do some basic audio/video streaming over RTP to/from JACK
using just gst-launch (put together chains of rtp, codec and jack
sinks/sources). This can easily be controlled from the command line. But,
as mentioned already, this is still a work in progress...
--
http://www.eca.cx
Audio software for Linux!
The first (alpha) release of JAPA is now available at
<http://users.skynet.be/solaris/linuxaudio>
From the README:
JAPA is a 'perceptual' or 'psychoacoustic' audio spectrum
analyser. This means that the filters that are used to
analyse the spectrum have bandwidths that are neither
constant (as in JAAA), nor proportional to the center
frequency (as in a 1/3 octave band analyser), but tuned
to human perception. With the default settings, JAPA uses
a filter set that closely follows the Bark scale.
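For reference, the Bark scale has a common closed-form approximation (Zwicker & Terhardt); whether JAPA's filter set uses exactly this formula is an assumption here, but it shows the shape of the scale:

```python
import math

def hz_to_bark(f):
    """Zwicker & Terhardt approximation of the Bark critical-band scale:
    roughly linear below ~500 Hz, roughly logarithmic above."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

# 1 kHz sits near Bark 8.5; the audible range covers about 24 Barks:
for f in (100, 1000, 10000):
    print(f, "Hz ->", round(hz_to_bark(f), 2), "Bark")
```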
In contrast to JAAA, this is more an acoustical or musical
tool than a purely technical one. Possible uses include
spectrum monitoring while mixing or mastering, evaluation
of ambient noise, and (using pink noise) equalisation
of PA systems.
JAPA allows you to measure two inputs at the same time,
compare them, store them to memory and compare them to
stored traces. It offers a number of resolutions, speeds,
and various display options. The dual inputs and memories
will find their way into future JAAA versions as well.
This is a source release. You will also need libclalsadrv,
libclthreads (from the same place), and libfftw3f.
Enjoy !
--
FA
Hi folks,
I'm having some trouble building a working asound.conf to get my soundcards
functional using jack.
The audio hardware consists of 3 Terratec ice1712 boards: 2x EWS88/D + 1x
Phase88, hooked up using the EWS connect clock synchronisation cables. My
goal is to get full duplex audio I/O for my home studio: 16 channels ADAT
in/out and 8 channels analogue in/out. I verified that this configuration
works from my old windows 2000 partition (using the terratec ASIO driver).
So far I have not succeeded in getting this gear fully working in my linux
environment; it seems that I'm having trouble with the asound.conf setup of
the bindings for the devices.
- running jack addressing the hw: devices directly works flawlessly; but that
way I can only use one card at a time
- as per descriptions from the mailing list archive I tried to set up an
aggregate device using the alsa multi type plugin. Since jack needs to
mmap the buffer, an additional route type plugin is needed to make the
capture stream interleaved. However, at this stage I run into a few
issues:
* due to the different number of ports the ice1712 has for
playback (12) and capture (10), I can't do full duplex I/O.
* when I do a capture-only from the route plugin I hear a distorted
sound (quiet, metallic, you can hear the rhythm of the original
track), which I suspect is caused by the route plugin. I can't
verify whether this effect is specific to my 64bit machine or also
present on 32bit
* I tried to set up different input/output for capture and playback
using the asym plugin type, but this attempt caused jackd to kill
the graph after a few seconds full of xruns.
If anyone has a pointer to or copy of a working asound.conf for multitrack
recording/playback using multiple ice1712 based cards I'd be very interested
to see those.
Some details about the software I'm running:
Advanced Linux Sound Architecture Driver Version 1.0.9b.
kernel 2.6.12-gentoo-r4 on AMD64 hardware
Jackd 0.99.0
Here's my non-functional attempt at an asound.conf for binding 2 Terratec
EWS88/D cards:
=============================================================================
# card0 is the onboard via82xx / ac97 sound
#
# card1 is a EWS88/D (ADAT 1-8)
# card2 is a EWS88/D (ADAT 9-16)
# card3 is a Phase88 (analog 17-24)
#
# card4 is a Midiman Midisport 4x4 USB Midi interface
#default
pcm.!default {
    type hw
    card 0
}
ctl.!default {
    type hw
    card 0
}
# Terratec EWS88/D hardware
#
pcm.ews_0 {
    type hw
    card 1
}
ctl.ews_0 {
    type hw
    card 1
}
pcm.ews_1 {
    type hw
    card 2
}
ctl.ews_1 {
    type hw
    card 2
}
# Aggregate of the 2 EWS88/D cards
#
ctl.ews_16 {
    type hw
    card 1
}
pcm.ews_16 {
    type multi
    slaves.a {
        pcm "ews_0"
        channels 12
    }
    slaves.b {
        pcm "ews_1"
        channels 12
    }
    bindings [
        # adat 1-8
        {slave a channel 0}
        {slave a channel 1}
        {slave a channel 2}
        {slave a channel 3}
        {slave a channel 4}
        {slave a channel 5}
        {slave a channel 6}
        {slave a channel 7}
        # spdif card 1
        #{slave a channel 8}
        #{slave a channel 9}
        #{slave a channel 10}
        #{slave a channel 11}
        # adat 9-16
        {slave b channel 0}
        {slave b channel 1}
        {slave b channel 2}
        {slave b channel 3}
        {slave b channel 4}
        {slave b channel 5}
        {slave b channel 6}
        {slave b channel 7}
        # spdif card 2
        #{slave b channel 8}
        #{slave b channel 9}
        #{slave b channel 10}
        #{slave b channel 11}
    ]
}
# jack can't mmap the streams above since they are
# scattered around at different memory locations
ctl.ews_jack {
    type hw
    card 1
}
pcm.ews_jack {
    # asym allows for different handling of in/out devices
    type asym
    playback.pcm {
        type route
        slave.pcm "ews_16"
        ttable.0.0 1
        ttable.1.1 1
        ttable.2.2 1
        ttable.3.3 1
        ttable.4.4 1
        ttable.5.5 1
        ttable.6.6 1
        ttable.7.7 1
        ttable.8.8 1
        ttable.9.9 1
        ttable.10.10 1
        ttable.11.11 1
        ttable.12.12 1
        ttable.13.13 1
        ttable.14.14 1
        ttable.15.15 1
        #ttable.16.16 1
        #ttable.17.17 1
        #ttable.18.18 1
        #ttable.19.19 1
    }
    capture.pcm {
        type route
        slave.pcm "ews_16"
        ttable.0.0 1
        ttable.1.1 1
        ttable.2.2 1
        ttable.3.3 1
        ttable.4.4 1
        ttable.5.5 1
        ttable.6.6 1
        ttable.7.7 1
        ttable.8.8 1
        ttable.9.9 1
        ttable.10.10 1
        ttable.11.11 1
        ttable.12.12 1
        ttable.13.13 1
        ttable.14.14 1
        ttable.15.15 1
        #ttable.16.16 1
        #ttable.17.17 1
        #ttable.18.18 1
        #ttable.19.19 1
        #ttable.20.20 1
        #ttable.21.21 1
        #ttable.22.22 1
        #ttable.23.23 1
    }
}
=============================================================================
Cheers,
Frank.
--
+---- --- -- - - - -
| Frank van de Pol -o) A-L-S-A
| FvdPol(a)coil.demon.nl /\\ Sounds good!
| http://www.alsa-project.org _\_v
| Linux - Why use Windows if we have doors available?
Hi all,
I remember there's a kernel pcmcia bug preventing the development of
the Audigy2 pcmcia notebook sound card driver.
See http://www.alsa-project.org/alsa-doc/index.php?vendor=vendor-Creative_Labs#…
Are there any new updates on the situation? Has the bug been fixed, or is
anyone working on it?
Thanks,
Raymond