Hello,
First of all, I'm sorry for the worst cross-posting ever.
My name's Stefano D'Angelo and I'm the author of NASPRO
(http://naspro.atheme.org), a project whose main aim is to build up a
system to make all sound processing APIs interoperable.
Since my project was recently accepted into the Atheme
community (http://www.atheme.org) and I began working closely with the
audacious (http://audacious-media-player.org) developers, I could see
that there actually are serious reasons behind the lack of
standardization for sound processing among media players.
Now, I have written a first draft of a new sound processing API called
EPAMP, especially targeted at media players; you can find it here:
http://naspro.atheme.org/content/epamp-draft-1
With this mail, I'm trying to get some feedback (possibly in the Talk
page on the NASPRO wiki) about it and possibly to involve anyone who
is interested in its development.
Anyway, note that the user interface part of the API is still
missing and that my project (NASPRO) will support this new API, giving
applications that use it access to non-EPAMP plugins
without touching a single line of code (the currently supported APIs are
audacious's and LADSPA, while DSSI, LV2, GStreamer and possibly VST are
more or less planned for the near future).
In case you're not interested and want to start a flame, please just ignore me.
This message is being sent to the communities/developers behind the
following projects:
- Amarok
- Aqualung
- Audio Overload
- BMPx
- Banshee
- cmus
- Decibel Audio Player
- Exaile
- FALF Player
- Freevo
- GStreamer
- Helix Player
- Herrie
- JaJuk
- JuK (KDE multimedia)
- Linux Audio Developers
- KPlayer
- Kaffeine
- lamip
- The LAMP
- MPlayer
- Miro
- mpg123
- mpg321
- Muine
- MPD
- music on console
- Noatun
- ogg123 (vorbis-tools)
- Ogle
- Quod Libet
- Rhythmbox
- Sipie
- SnackAmp
- Sonata
- Songbird
- UADE
- SMPlayer
- VLC
- wxMusik
- XMMS2
- Xfmedia
- Xine
- Zinf
If you know someone else who could be interested, feel free to forward this.
Best regards,
Stefano D'Angelo
zanga.mail(a)gmail.com
Ok, I am writing a remote and SysEx-editor for e-mu
synthesizers (http://ppcontrol.sf.net). I am asking for
help on how to use the alsa sequencer the way it was
intended to be used for this kind of application. You know,
there are so many options and flags and stuff to be used
and I feel a little lost.
Basically, I send a lot of SysEx messages and need to make
sure I catch all the answers. My current implementation is
as follows:
I have a single sequencer (duplex, nonblocking), a queue
and two ports: one with read and one with write caps. To
send a SysEx command I just queue an event with that
message. (So far so good?).
Now I just wait for a constant time (usleep) before I
check the input queue. Problem: if the time is too short, I
miss the answer; if I wait too long, I waste time.
My question is: how can I make sure to catch all answers
from the device without using a thread and without just
waiting for a constant time before checking whether there is an
answer (which is also not very fail-safe: a message might
take a little longer if there is traffic on the MIDI bus)?
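One way to avoid both a thread and a fixed delay, sketched here under the assumption of the standard ALSA sequencer API (the function name and the timeout are illustrative), is to poll() the sequencer's input descriptors with a timeout, so the caller wakes up as soon as a reply arrives and only pays the full timeout when no reply comes:

```c
#include <alsa/asoundlib.h>
#include <poll.h>
#include <stdlib.h>

/* Wait up to timeout_ms for a SysEx reply on an already-open duplex
 * sequencer handle.  Returns 1 if a SysEx event was received, 0 on
 * timeout; error handling is abbreviated for clarity. */
static int wait_for_sysex(snd_seq_t *seq, int timeout_ms)
{
    int npfd = snd_seq_poll_descriptors_count(seq, POLLIN);
    struct pollfd *pfds = malloc(npfd * sizeof(struct pollfd));
    snd_seq_poll_descriptors(seq, pfds, npfd, POLLIN);

    int got = 0;
    /* Sleep until input arrives or the timeout expires. */
    if (poll(pfds, npfd, timeout_ms) > 0) {
        snd_seq_event_t *ev;
        do {
            if (snd_seq_event_input(seq, &ev) >= 0 &&
                ev->type == SND_SEQ_EVENT_SYSEX) {
                /* ... handle the reply data here ... */
                got = 1;
            }
        } while (snd_seq_event_input_pending(seq, 0) > 0);
    }
    free(pfds);
    return got;
}
```

The same descriptors can also be fed into a select()- or event-loop-based design if the application already has one.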
Thank you.
Jan
On 03/06/2008, Steve Harris <steve(a)plugin.org.uk> wrote:
> In LADSPA there's a "magic" control out port called "_latency" or
> something, that should apply to LV2 aswell, but I'm not sure if the
> spec says so.
For the record -- since this is something I've tried to search for in
the past and have had trouble finding a definitive answer to -- the
conventional LADSPA port name is apparently "latency", with no
underscore.
Some hosts (such as Rosegarden) will accept either "latency" or
"_latency"; in RG's case that's because I wasn't sure which was
supposed to be correct when I coded it. But others (such as Ardour)
will only accept "latency", as I discovered when I released a plugin
that used "_latency" and forgot to test it in Ardour first.
Chris
Quoting Stefano D'Angelo <zanga.mail(a)gmail.com>:
> It would work like this:
>
> stereo input -> hosts converts to 5.1 (the effect states it needs 6
> channels) -> apply effect with 6 in channels and 6 out channels -> 5.1
> output
How would the host convert the stereo input to 5.1? In my view, that is the
whole purpose of the effect.
Sampo
Radio Recommendation
For those interested in Linux Audio, Pd, SuperCollider, .... It's in
German, though. Streaming info: http://www.dradio.de/streaming/ The
HQ-Ogg-stream is the best:
http://www.dradio.de/streaming/dkultur_hq_ogg.m3u
Deutschlandradio Kultur - Neue Musik - 03.06.2008 - 00:05 CEST(!)
"Musik aus dem digitalen Baukasten" (Music from the digital construction kit)
The Linux Audio Conference 2008 in Cologne
By Hubert Steins
Computers are affordable, universal music machines. But this apparent
freedom is deceptive: software giants like Microsoft and Apple dominate
the inner workings of music computers too. Artistic autonomy, however,
requires control over the means of production, say many musicians and
media artists, who therefore use the open-source Linux operating system.
Once a year, Linux musicians meet at the Linux Audio Conference, most
recently in spring 2008 at the Kunsthochschule für Medien in Cologne.
The scene is a colourful one: the preference for Linux as an operating
system is the common denominator that brings composers of academic new
music together with techno musicians and the programmers of recording
software, at one table and on one stage. For Deutschlandradio Kultur,
Hubert Steins met the protagonists of the scene, including the musician
and mathematician Miller Puckette.
http://www.dradio.de/dkultur/programmtipp/vorschau/793304/
Ciao
--
Frank Barknecht _ ______footils.org__
Hi all,
I think this is a network setup user error rather than anything
complicated, but anyone know why I get long pauses (5-10secs) when
calling lo_server_thread_new for the first time in a process?
Is it likely to be some timeout I can adjust?
cheers,
dave
Set_rlimits 1.3.0 has been released. This release integrates a Makefile
patch by Lucas C. Villa Real and adds an option to specify non-standard
library locations (which must be secured directories) via LD_LIBRARY_PATH to
the called executable.
It is available from
http://www.physics.adelaide.edu.au/~jwoithe/set_rlimits-1.3.0.tgz
Regards
jonathan
Hello,
DRC 2.7.0 is out and is available at:
http://drc-fir.sourceforge.net/
Here are the release notes:
A new method for the computation of the inverse of the excess phase
component, based on a simple time reversal, has been introduced. The sample
configuration files have been rewritten to take advantage of the new
inversion procedure. Sample configuration files for the 48 kHz, 88.2 kHz and
96 kHz sample rates have been added. The homomorphic deconvolution procedure
has been improved to avoid numerical instability. A new Piecewise Cubic
Hermite Interpolating Polynomial (PCHIP) interpolation method, providing
monotonic behaviour, has been introduced in the target response computation.
All the interpolation and approximation procedures have been rewritten from
scratch to provide better performance and accuracy.
Bye,
--
Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it
Hello,
I am trying to squeeze as much performance as possible out of my upcoming
Linux synthesizer and am experimenting with manual vectorization using the
following construct in C, mainly to vectorize away multiplications:
typedef float v4sf __attribute__ ((vector_size (16)));

union f4vector
{
    v4sf  v __attribute__((aligned (16)));
    float f[4] __attribute__((aligned (16)));
};
On an AMD 64-bit Turion (single core) running 64 Studio in 64-bit mode this
doesn't improve performance at all; it actually gets worse. Is GCC that good
at optimizing on its own? I have no access to Intel processors at the
moment, but would love to know how to benefit from SIMD optimization of
float operations.
Sources on the web are rather thin...
Cheers,
Malte
--
Malte Steiner
media art + development
-www.block4.com-