Hi, very noob C question. I do this:
renato@acerarch /usr/include/alsa $ grep snd_seq_open *
seq.h:int snd_seq_open(snd_seq_t **handle, const char *name, int streams, int mode);
seq.h:int snd_seq_open_lconf(snd_seq_t **handle, const char *name, int streams, int mode, snd_config_t *lconf);
so I see that the function snd_seq_open has its prototype declared in seq.h... but
where is the actual function definition?
cheers
renato
hi *!
sorry for the slightly off-topic post, but since spatial audio has been
a frequent topic lately, i think some people here might be interested.
linux or FLOSS won't be exactly in the limelight, but yours truly will
make sure there are at least 2-3 boxes with your favourite OS and audio
tools humming along in various places. oh, and you might come early and
watch a few high-end mixing consoles boot - the startup screen will
bring tears to your eyes (as will the price tag, unfortunately :)
unfortunately, there will have to be an admission fee, which we haven't
decided on yet. but we're trying to keep it reasonable. don't shout at
me when it turns out to be a bit more costly than LAC, though...
jörn
*.*
ICSA 2011 - International Conference on Spatial Audio
November 10 - 13, Hochschule für Musik, Detmold
Organizers:
Verband Deutscher Tonmeister (VDT), in cooperation with
Deutsche Gesellschaft für Akustik e.V. (DEGA), and
European Acoustics Association (EAA).
Contact/Chair:
Prof. Dr.-Ing. Malte Kob
Erich-Thienhaus-Institut
Neustadt 22, 32756 Detmold
Mail: icsa2011 (at) tonmeister.de
Phone: +49-(0)5231-975-644
Fax: +49-(0)5231-975-689
Summary:
The International Conference on Spatial Audio 2011 takes place from
November 10 to 13 at Detmold University of Music.
This experts' summit will examine current systems for multichannel audio
reproduction and complementary recording techniques, and discuss their
respective strengths and weaknesses.
Wavefield synthesis systems, a higher-order Ambisonics array, as well as
5.1/7.1 installations in diverse acoustic environments will be available
for comparative listening tests during the conference.
Structured plenary talks, paper sessions, and poster sessions will revisit
the fundamentals and present the latest research.
A series of workshops will be dedicated to practical implementations of
spatial sound capture and playback methods, and their aesthetic and
psychoacoustical implications for music perception.
Concerts that include music specially arranged for the conference will
let you experience various spatial sound systems in "live" conditions.
Call for papers and music:
Your contributions are welcome, either as presentations, posters, or
workshops. Submissions will undergo a review process, and accepted
contributions will be published in the conference proceedings.
The conference language is English.
We are planning structured sessions on the following topics:
* Multichannel stereo
* Wave field synthesis
* Higher-order Ambisonics / spherical acoustics
* 3D systems
* Binaural techniques
An additional session will be dedicated to related miscellaneous
contributions, such as hybrid systems and perception/evaluation of
spatial music reproduction.
Hi,
I am trying to get my own ALSA plug-in to work with some real-time controls. (Is this the right place to ask?)
I am successful with the PCM part of it. What I mean is from:
http://www.alsa-project.org/alsa-doc/alsa-lib/pcm_external_plugins.html
External Plugin: Filter-Type Plugin
static snd_pcm_sframes_t
myplg_transfer(snd_pcm_extplug_t *ext,
               const snd_pcm_channel_area_t *dst_areas,
               snd_pcm_uframes_t dst_offset,
               const snd_pcm_channel_area_t *src_areas,
               snd_pcm_uframes_t src_offset,
               snd_pcm_uframes_t size)
{
    // my PCM processing works
    // I want to add a control parameter that can be set in real time by an app
}

SND_PCM_PLUGIN_DEFINE_FUNC(myplg)
{
    // This all works for PCM processing just like the examples
    ...
    err = snd_pcm_extplug_create(&mix->ext, name, root, sconf, stream, mode);
    if (err < 0) {
        free(mix);
        return err;
    }
    ...
}

SND_PCM_PLUGIN_SYMBOL(myplg);
Now I want to add a simple real-time adjustment: an integer value that an application can send to adjust the sound (PCM samples) in real time. I tried a few ways of doing it, but without success; I am not understanding the basics.
I first looked at doing a ctl. Separately (without any PCM processing) I was able to get a control to work, but that looks like it's only for hardware control? I can't connect it to my PCM processing.
http://www.alsa-project.org/alsa-doc/alsa-lib/ctl_external_plugins.html
I looked at LADSPA but I'm not sure where that is going to take me.
I am now looking at
http://www.alsa-project.org/alsa-doc/alsa-lib/pcm_external_plugins.html
External Plugin: I/O Plugin
But I am confused, because I also see PCM-type callbacks there.
So, I do not have a specific coding question (yet), but I just need a general direction:
- Should I use ctl_external_plugins and figure out how to use it with my PCM?
- Should I use a LADSPA example?
- Should I go with External Plugin: I/O Plugin?
- Something else?
I hope I am clear enough about my question and thanks for any pointers you can provide.
Bob
Ross Bencina is the author of AudioMulch and has been extremely
involved in PortAudio, ReacTable, and other projects. His new article
on realtime audio programming is a MUST read for anyone new to the
area, and worth reading as a reminder even for experienced developers.
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-…
I'm off to count how many violations Ardour contains ...
--p
Hi All,
Just a quick post to let you know about a new JACK binding for Java,
called JNAJack, at http://code.google.com/p/java-audio-utils/
JNAJack is a minimal object-oriented wrapper to the JACK Audio
Connection Kit API. It uses Java Native Access (JNA) rather than
custom JNI to interface with the underlying Jack API, simplifying
development and deployment - no compilation is required and it (*should*)
work cross-platform. Use of JNA means that performance is not quite on
a par with the custom JNI code in JJack, but it is still fine for low
latency usage, and some further performance optimisations are in the
wings. Unlike JJack the aim of this project is to support full and
typesafe OOP access to the Jack API from Java, and nothing else. Most
important aspects of the audio API are included. MIDI and transport
support will be implemented in the future.
Well, I say this is new, it was mostly written quite a while back as
part of my Praxis InterMedia System project
http://code.google.com/p/praxis/ (a Java based cross-media patcher
environment). I'm finally getting around to releasing some of this
stuff separately as well.
Best wishes,
Neil
--
Neil C Smith
Artist : Technologist : Adviser
http://neilcsmith.net
Hi!
This question could have also been asked on jack-devel, but since LAD
probably has a broader audience:
I recently started hacking on a jack-driven matrix mixer (goals so far:
GUI, maybe network controls (OSC?), maybe LV2 host), and I wonder if
there are "frameworks" for test-driven development, so I can come up
with unit and acceptance tests while implementing new functionality.
Has anyone ever done test-first with jack? One could start jackd in
dummy mode with a random name, start some clients, wire inputs to
outputs and compare the generated signal to the expected result, maybe
with some fuzzy logic to allow for arbitrary delays.
OTOH, if there are existing mocking libraries for jackd, things might be
a bit more straightforward (provide an input buffer to be returned by
jack_port_get_buffer, call the process function and check the result
that's written to the output buffer).
Any pointers will be highly appreciated.
Cheers