This was just announced on sursound. It might be worth adopting for
anyone working with spatial audio datasets and files...
-------- Forwarded Message --------
Subject: [Sursound] AES69-2015 standard for file exchange - Spatial
acoustic data file format
Date: Sun, 15 Mar 2015 16:13:47 +0100
From: Markus Noisternig <Markus.Noisternig(a)ircam.fr>
Reply-To: Surround Sound discussion group <sursound(a)music.vt.edu>
To: Surround Sound discussion group <sursound(a)music.vt.edu>
Dear Sursounders,
We are pleased to announce the recent publication of the AES69-2015
standard for file exchange - Spatial acoustic data file format. See also
the AES press release at http://www.aes.org/press/?ID=293
The new AES69-2015 standard defines a file format to exchange
space-related acoustic data in various forms. These include head-related
transfer functions (HRTFs) as well as directional room impulse responses
(DRIRs). The format is
designed to be scalable to match the available rendering process and to
be sufficiently flexible to include source materials from different
databases.
This project was developed in AES Standards Working Group SC-02-08 and
standardizes the Spatially-oriented format for acoustics (SOFA), which
aims at storing and transmitting any transfer-function data measured
with microphone arrays and loudspeaker arrays. See
http://www.sofaconventions.org/ for
further information and ongoing format discussions.
Open-source application programming interfaces (APIs) for Matlab, Octave,
and C++ are available online at
http://sourceforge.net/projects/sofacoustics/
All the best,
Markus and Piotr
--
Markus Noisternig
Acoustics and Cognition Research Group
IRCAM, CNRS, Sorbonne Universities, UPMC
Paris, France
Piotr Majdak
Psychoacoustics and Experimental Audiology
Acoustics Research Institute
Austrian Academy of Sciences
Vienna, Austria
Srinivasan S wrote:
> $ aplay -f dat -D VOUTL new.wav
> Playing WAVE 'new.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo
> aplay: set_params:1087: Channels count non available
You are trying to play a two-channel file on a single-channel device.
Regards,
Clemens
Srinivasan S wrote:
> Could you please provide any inputs w.r.t. the loopback card using
> snd-aloop & alsaloop: how can this loopback card be used to connect
> the GSM two-way call simultaneously to the UDA1345TS codec on MCASP0
> of the am335x (the UDA1345TS, i.e., the real sound card)?
snd-aloop creates a virtual sound card; it is not used with a real sound
card.
> The codec has two output channels, VOUTL and VOUTR, and two input channels, VINL and VINR.
>
> With this I am able to achieve only a one-way call at a time, by running
> only one application at a time.
To allow a capture device to be shared, you need to use dsnoop. Your
asound.conf already does this.
To allow a playback device to be shared, you need to use dshare or dmix.
(dshare lets each client use _different_ channels; dmix mixes multiple
sources into the same channels.) Your asound.conf does not do this; it
uses "hw" instead.
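A sketch of what that could look like as an asound.conf fragment; the card number, ipc_key values, and device names here are assumptions, not taken from the poster's actual configuration:

```
# Hypothetical asound.conf fragment (card number and ipc_key assumed).

# dmix: several applications play into the SAME channels, mixed together.
pcm.mixed {
    type dmix
    ipc_key 1024
    slave.pcm "hw:0,0"
}

# dshare: each application owns its OWN channel of the stereo output.
# bindings maps this virtual device's channel 0 to hardware channel 0
# (left); a second dshare device with "bindings { 0 1 }" would own the
# right channel.
pcm.outL {
    type dshare
    ipc_key 2048
    slave {
        pcm "hw:0,0"
        channels 2
    }
    bindings {
        0 0
    }
}
```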
Regards,
Clemens
I've just been reading the ALSA Programming HOWTO by Matthias Nagorni and one
detail caught my attention immediately. This is the idea of using plughw instead
of directly addressing your soundcard.
On the face of it this seems *much* easier, but there surely must be a catch.
Can anyone explain what the downside of doing this might be?
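For context, "plughw" is simply the hw device wrapped in ALSA's plug plugin, which automatically inserts sample-rate, format, and channel conversion whenever the application's parameters don't match what the hardware natively supports. The usual catch is exactly that conversion: it costs some CPU, the resampler can degrade quality, and it can hide the card's real capabilities from the application. The same wrapping can be spelled out explicitly in a config fragment (card number assumed):

```
# Hypothetical ~/.asoundrc fragment: "converted" behaves like plughw:0,0.
# The plug plugin converts rate/format/channels as needed, at some CPU cost.
pcm.converted {
    type plug
    slave.pcm "hw:0,0"
}
```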
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Has anyone looked at OCA as a method of service discovery/remote control
for Linux audio? It is supposed to end up as another AES standard "real
soon now", but the current spec is already available for download at:
http://ocaalliance.com/technology/specifications/
There are some products out there that use it now (well, at least one,
anyway).
Some background: I have been looking at AoIP and reading what I could.
The biggest complaint about AES67 is that it has poor service discovery
(well, none, actually). I have been reading product manuals for various
AoIP formats, and I have found that some of the others do not have very
good discovery either, if any at all. I do not know whether this is the
fault of the protocol or of the product, but setting up a Ravenna AoIP
DAC/ADC box with a Ravenna PCIe card requires the user to know the IP
addresses of both units and then log in to both via HTTP(S) to set them
up in some sort of static configuration. That sounds no better than raw
AES67. (Some other AoIP products might be better.)
So along the way I stumbled on OCA. This is not another OSC, though it
could do that job too.
I will put this in terms of Linux/ALSA/Jack because that is what I know.
As an example, assume two Linux boxes, A and B: one with an audio
interface, the other with audio software. Box A is headless and boots up
with JACK running. Box B has no audio interface because it is new and
only has PCIe sockets; other than that, it has everything a normal
desktop DAW would have.
The way OCA would work: the user on Box B opens a window something like
qjackctl's "Connections" window. It would show all local connections, the
same as qjackctl does now, with "system" on both sides that can be
expanded; it would also show "Box A". When clicked to expand, a box would
pop up showing which lines are local there on Box A's JACK instance. A
dropdown (or whatever) would let the user set the number of lines to set
up between the boxes, and in which direction. So the user does that. Now
the user can connect whatever Box A internals to these I/O lines, and
"Box A" in the local window will expand to show those lines, labeled the
same as on Box A. All of this in one app.
But there would be more. Next, we want to set the actual ALSA device
levels, so we open an ALSA mixer; one of the devices will be Box A's ALSA
card, and the levels can be set.
Now, because Box A really isn't doing too much, we want to run a soft
synth on there as well. As long as the OCA server already understands
that software, it would already show as a capability of that box. The
I/Os would show up as if they were already available on the JACK graph,
but the app would not yet be running (because there may be a number of
different ones available); as soon as one of those ports was connected,
the OCA server would start that synth and make the connections (MIDI and
audio). Clicking on any of that synth's I/Os would give the user a
control interface for that synth.
This to me is the way remote discovery/control should work. Does that make
sense? Does this look like I have read the OCA spec right? Does this sound
worthwhile?
I have only scratched the surface to give some idea of what we are
talking about. OCA would not replace MIDI or OSC, but it could find them
and connect them from one box to another. Some of the kinds of controls
OCA has might be better for remote control of mixer-type things like
faders, sends, and EQ, because these things are already defined; where
they are not, any other controls are discoverable/queryable and could be
set up on the fly in software (not so much for hardware). This is the
same thing that already happens with ALSA controls and alsamixer.
Anyway, I am going to try making a server and client based on this spec.
--
Len Ovens
www.ovenwerks.net