I'm just asking these out of curiosity. I know you guys love these
kinds of questions :)
I know that the rule is to never block in a real time audio thread,
but what about blocking for resources shared between real-time
threads? In the case of a sequencing app that uses one thread per
audio/instrument track (as most do), is it OK to share a lock between
real time scheduled tracks? I ran into this question after
implementing a scripting engine for our commercial audio plugin using
Python, which uses a global lock to serialize access to its data
structures.
Also, what I've gathered is that adding app-thread => RT-thread
message-passing to avoid using locks while modifying the application's
basic shared data structures is useless, since the real-time thread
will have to wait for that code to execute one way or another, given
that the overhead of acquiring the lock is negligible. This would mean
that you should just use locks and assume the user will cope with
hearing small overruns when adding/removing audio components. True?
Not true?
I hope I worded these well enough. Cheers!
Let's stop this flame for a moment and see what LV2 misses in order to
let me kill EPAMP and live a happier life.
#1. Support for interleaved channels and non-float data
Input and output data is often found in these formats.
#2. Changing sample rate without re-instantiating all effects.
Gapless playback when changing songs, for example, should be possible
without performing black magic.
#3. Some serious connection logic thing (all the "equal channels" thing etc.).
This needs a thousand flame wars and *deep* thinking.
#4. Support for time stretching when using non real-time audio sources.
#5. Information about the delay introduced by the algorithm itself,
to allow syncing with video sources (for example).
#6. Some way for the host to make sense of the meaning of some
parameters and channels, to better support global settings and stuff.
#7. Global explicit initialization/finalization functions for more
exotic platforms (they wouldn't do any harm, so why not have them).
#8. Rules for finding plugins, possibly platform-specific and outside
of the specification; possibly one valid compile-time path.
#9. Maybe more strict requirements on both hosts and plugins
(especially about thread-safety).
I see there is some indication in the core spec, but I don't know
about extensions and/or other possible concurrency issues.
#10. Something (possibly a library) to make using all of these
features easy from the host author's POV.
Can we start discussing these issues and see whether they are already
solved/how to implement them/how to make them better?
Stefano
Greetings AudioScience linux customers and others,
This is to inform you that hpklinux version 3.10.00 is available from our
website http://audioscience.com/internet/download/linux_drivers.htm
The major change in this release is the addition of ASI89xx tuner series, and
removal of ASI4xxx (still supported by driver 3.08). Of course bugs have
been fixed, and new minor features added, see the release notes for details.
While I have your attention, if you are a user of our cards and have a few
moments please reply and let me know which distro(s) and kernel version(s)
you currently support, and whether you are using or intend to use HPI or ALSA.
thanks and regards
--
Eliot Blennerhassett
www.audioscience.com
Hello,
First of all, I'm sorry for the worst cross-posting ever.
My name's Stefano D'Angelo and I'm the author of NASPRO
(http://naspro.atheme.org), a project whose main aim is to build up a
system to make all sound processing APIs interoperable.
Since my project was recently accepted to become part of the Atheme
community (http://www.atheme.org) and I began working closely with the
audacious (http://audacious-media-player.org) developers, I could see
there actually were serious reasons behind the lack of standardization
for sound processing in the arena of media players.
Now, I wrote a first draft of a new sound processing API called EPAMP,
targeted especially at media players; you can find it here:
http://naspro.atheme.org/content/epamp-draft-1
With this mail, I'm trying to get some feedback (possibly in the Talk
page on the NASPRO wiki) about it and possibly to involve anyone who
is interested in its development.
Anyway, notice that the user interface part of the API is still
missing and that my project (NASPRO) will support this new API, giving
applications using it the possibility to access non-EPAMP plugins
without touching a single line of code (currently supported APIs are
audacious' and LADSPA, while DSSI, LV2, GStreamer and possibly VST are
more or less planned for the near future).
In case you're not interested and want to start a flame, please just ignore me.
This message is being sent to the communities/developers behind the
following projects:
- Amarok
- Aqualung
- Audio Overload
- BMPx
- Banshee
- cmus
- Decibel Audio Player
- Exaile
- FALF Player
- Freevo
- GStreamer
- Helix Player
- Herrie
- JaJuk
- JuK (KDE multimedia)
- Linux Audio Developers
- KPlayer
- Kaffeine
- lamip
- The LAMP
- MPlayer
- Miro
- mpg123
- mpg321
- Muine
- MPD
- music on console
- Noatun
- ogg123 (vorbis-tools)
- Ogle
- Quod Libet
- Rhythmbox
- Sipie
- SnackAmp
- Sonata
- Songbird
- UADE
- SMPlayer
- VLC
- wxMusik
- XMMS2
- Xfmedia
- Xine
- Zinf
If you know someone else who could be interested, feel free to forward this.
Best regards,
Stefano D'Angelo
zanga.mail(a)gmail.com
Hello :)
Ok, I am writing a remote and SysEx-editor for e-mu
synthesizers (http://ppcontrol.sf.net). I am asking for
help on how to use the alsa sequencer the way it was
intended to be used for this kind of application. You know,
there are so many options and flags and stuff to be used
and I feel a little lost.
Basically I send a lot of SysEx messages and need to make
sure to catch all the answers. My current implementation is
as follows:
I have a single sequencer (duplex, nonblocking), a queue
and two ports: one with read and one with write caps. To
send a SysEx command I just queue an event with that
message. (So far so good?).
Now I just wait for a constant time (usleep) before I
check the input queue. Problem: if the time is too short, I
may miss the answer; if I wait too long, I waste
time.
My question is: how would I make sure to catch all answers
from the device without using a thread and without just
waiting for a constant time before I check whether there is
an answer (which is also not very fail-safe; a message might
take a little longer if there is traffic on the MIDI bus)?
Thank you.
Jan
On 03/06/2008, Steve Harris <steve(a)plugin.org.uk> wrote:
> In LADSPA there's a "magic" control out port called "_latency" or
> something, that should apply to LV2 aswell, but I'm not sure if the
> spec says so.
For the record -- since this is something I've tried to search for in
the past and have had trouble finding a definitive answer to -- the
conventional LADSPA port name is apparently "latency", with no
underscore.
Some hosts (such as Rosegarden) will accept either "latency" or
"_latency"; in RG's case that's because I wasn't sure which was
supposed to be correct when I coded it. But others (such as Ardour)
will only accept "latency", as I discovered when I released a plugin
that used "_latency" and forgot to test it in Ardour first.
Chris
Quoting Stefano D'Angelo <zanga.mail(a)gmail.com>:
> It would work like this:
>
> stereo input -> hosts converts to 5.1 (the effect states it needs 6
> channels) -> apply effect with 6 in channels and 6 out channels -> 5.1
> output
How would the host convert the stereo input to 5.1? In my view, that is the
whole purpose of the effect.
Sampo
Radio Recommendation
For those interested in Linux Audio, Pd, SuperCollider, .... It's in
German, though. Streaming info: http://www.dradio.de/streaming/ The
HQ-Ogg-stream is the best:
http://www.dradio.de/streaming/dkultur_hq_ogg.m3u
Deutschlandradio Kultur - Neue Musik - 03.06.2008 - 00:05 CEST(!)
Musik aus dem digitalen Baukasten (Music from the Digital Construction Kit)
The Linux Audio Conference 2008 in Cologne
By Hubert Steins
Computers are affordable, universal music machines. But this apparent
freedom is deceptive, since software giants like Microsoft and Apple
dominate the inner workings of music computers as well. Artistic
autonomy, however, requires control over the means of production, say
many musicians and media artists, which is why they use the
open-source Linux operating system.
Once a year, Linux musicians meet at the Linux Audio Conference, most
recently in spring 2008 at the Kunsthochschule für Medien in Cologne.
The scene is colorful: the preference for Linux as an operating system
is the common denominator that brings composers of academic new music
together with techno musicians and the programmers of recording
software, at one table and on one stage. For Deutschlandradio Kultur,
Hubert Steins met the protagonists of the scene, such as the musician
and mathematician Miller Puckette.
http://www.dradio.de/dkultur/programmtipp/vorschau/793304/
Ciao
--
Frank Barknecht _ ______footils.org__
Hi all,
I think this is a network setup user error rather than anything
complicated, but anyone know why I get long pauses (5-10secs) when
calling lo_server_thread_new for the first time in a process?
Is it likely to be some timeout I can adjust?
cheers,
dave