Has anyone looked at OCA as a method of service discovery/remote control
for Linux audio? It is supposed to end up as another AES standard "real
soon now", but the current spec is already available for download at:
http://ocaalliance.com/technology/specifications/
and there are some products out there that use it now (well, at least
one anyway).
Some background: I have been looking at AoIP and reading what I could.
The biggest complaint about AES67 is that it has poor service discovery
(well, none actually). I have been reading product manuals for various
AoIP formats, and what I have found is that some of the others do not
have very good discovery either, or any at all. I do not know if this is
the protocol's fault or the product's, but the setup for a Ravenna AoIP
DAC/ADC box with a Ravenna PCIe card requires the user to know the IP of
both units and then log in to both via HTTP(S) to set them up in some
sort of static configuration. This sounds no better than raw AES67.
(Some other AoIP systems might be better.)
So along the way I stumbled on OCA. This is not another OSC, though it
could do that job too.
I will put this in terms of Linux/ALSA/Jack because that is what I know.
As an example, assume two Linux boxes, A and B: one with an audio IF and
one with the audio SW. Box A is headless and boots up with jack running.
Box B has no audio IF because it is new and only has PCIe slots; other
than that it has everything a normal desktop DAW would have.
The way OCA would work is for the user on Box B to open a window
something like the "Connections" window in qjackctl. It would show all
local connections the same as qjackctl does now: "System" on both sides,
expandable as usual, but it would also show "Box A". When clicked to
expand, a box would pop up showing which lines are available locally in
Box A's jack. There would be a dropdown (or whatever) that allowed the
user to set the number of lines to set up between the boxes, and in
which direction. So the user does that. Now the user can connect
whatever Box A internals they like to these I/O lines, and the "Box A"
entry in the local window will expand to show those lines, labeled the
same as on Box A. All of this in one app.
But there would be more. Next we want to set the actual ALSA device
levels, so we open an ALSA mixer; one of the devices will be Box A's
ALSA card, and the levels can be set.
Now, because Box A really isn't doing too much, we want to run a soft
synth there as well. So long as the OCA server already understands that
SW, it would already show as a capability of that box. The I/Os would
show up as if they were already available in the jack graph, but the app
would not yet be running (because there may be a number of different
ones available). As soon as one of those connections was made, the OCA
server would start that synth and make the connections once it was
started (MIDI and audio). Clicking on any of that synth's I/Os would
give the user a control interface for that synth.
This to me is the way remote discovery/control should work. Does that
make sense? Does it look like I have read the OCA spec right? Does this
sound worthwhile?
I have only scratched the surface to give some idea of what we are
talking about. OCA would not replace MIDI or OSC, but it could find them
and connect them from one box to another. Some of the kinds of controls
OCA has might be better for remote control of mixer-type things like
faders, sends, EQ and such, because these are defined already; where
they are not, any other controls are discoverable/queryable and could be
set up on the fly in SW (not so much for HW). This is the same thing
that already happens with ALSA controls and ALSA mixer.
Anyway, I am going to try making a server and client based on this spec.
--
Len Ovens
www.ovenwerks.net
Date: Thu, 26 Feb 2015 22:04:39 +0000
From: Fons Adriaensen <fons(a)linuxaudio.org>
To: Cedric Roux <sed(a)free.fr>
On Wed, Feb 25, 2015 at 11:46:37PM +0100, Cedric Roux wrote:
> can someone explain to me what this bandwidth computation means?
There is nothing magical about it, it's just a pragmatic
approximation that results in the 3dB BW (for high + or -
gain) being expressed in octaves.
Expressing the BW in a logarithmic unit (such as octaves)
makes sense because the magnitude response of a second-order
parametric is symmetric on a log(f) scale but not on a linear
one.
An absolute value in Hz or even a relative one (the same divided
by center frequency) is informative only for small BW, where the
linear distance to the -3dB points left and right would be more
or less equal.
For large BW values (as often used in sound engineering) this
is no longer true. For example for a center frequency of 1 kHz
and two octaves BW the -3dB points are at 500 Hz and 2 kHz.
For four octaves that would be 250 Hz and 4 kHz, giving a -3dB
BW in Hz that is larger than twice the center frequency, which
is somewhat counter-intuitive.
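That arithmetic is easy to check in a few lines (a sketch; `band_edges`
is just an illustrative name). Edges symmetric around f0 on a log(f)
scale sit at f0 * 2^(-B/2) and f0 * 2^(+B/2):

```python
def band_edges(f0, octaves):
    # Edges symmetric around f0 on a log(f) scale:
    #   f_lo = f0 * 2^(-B/2),  f_hi = f0 * 2^(+B/2)
    half = 2.0 ** (octaves / 2.0)
    return f0 / half, f0 * half

print(band_edges(1000.0, 2.0))  # (500.0, 2000.0)
print(band_edges(1000.0, 4.0))  # (250.0, 4000.0)
```

For the four-octave case the span is 4000 - 250 = 3750 Hz, indeed more
than twice the 1 kHz center frequency.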
The 'mathematical' way to define the BW of a second-order
allpass would be at the +/- 90 degree phase points. This also
corresponds to the -3 dB points for an infinite notch, as used
by M & R. But keeping this value fixed in a parametric results
in curves that are not even symmetric for + and - gains. The
'bumps' would be much wider than the 'notches'. The sqrt (gain)
factor restores this symmetry. But using this means that there
is no longer any simple relation to what looks like bandwidth
on a frequency response plot and the actual 'mathematical'
value.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
What are good ways to pull a list of JACK ports in python and detect their readiness to accept connections? And/or, is there a better way to detect 'ready' status of JACK-aware applications?
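One possible approach (a sketch under assumptions, not an endorsed
recipe): the third-party JACK-Client package ("jack-client" on PyPI) can
list ports, and treating an application as "ready" once it has
registered its ports is about the closest thing JACK exposes to a
readiness signal. The helper names below are illustrative:

```python
def ports_for_app(port_names, app_name):
    # Full JACK port names look like "client:port"; keep one client's.
    prefix = app_name + ':'
    return [n for n in port_names if n.startswith(prefix)]

def app_ready(port_names, app_name):
    # Treat an app as "ready" once it has registered at least one port.
    return bool(ports_for_app(port_names, app_name))

# With the third-party jack-client package, assuming a running server:
#   import jack
#   client = jack.Client('probe', no_start_server=True)
#   names = [p.name for p in client.get_ports()]
#   app_ready(names, 'system')
```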
--
Jonathan E. Brickman | jeb(a)ponderworthy.com | (785)233-9977
Ponderworthy | http://ponderworthy.com
Music of compassion; fire, and life!!!
Hi LAD,
I have a technical question regarding the FIL equalizer
(by Fons Adriaensen).
The code uses a Mitra-Regalia lattice filter (as described
in [1]). After reordering things here and there I see it's
indeed the case (surprise!).
[1] might be hard to get, but there is [2] with a lot of
details too, especially for bandwidth.
The only remaining point that I don't get is the bandwidth
manipulations. [1] uses for its parameter 'a' ('_s2' in FIL)
the formula:
a = (1 - tan(Omega/2)) / (1 + tan(Omega/2))
'Omega' being I'm not really sure what (the -3dB notch bandwidth
for a gain of 0, maybe, if I read the paper correctly).
FIL uses bandwidth expressed in octaves and does:
_s2 = (1-b)/(1+b)
with:
b = bandwidth * 7 * (f0/fs) / sqrt(gain)
('f0' is the center frequency of the equalizer, 'fs'
is the sampling rate)
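As a quick numeric sketch of the quoted formula (`fil_s2` is just an
illustrative name, not FIL's actual function):

```python
import math

def fil_s2(bandwidth_oct, f0, fs, gain):
    # The computation quoted from FIL's code:
    #   b   = bandwidth * 7 * (f0/fs) / sqrt(gain)
    #   _s2 = (1 - b) / (1 + b)
    b = bandwidth_oct * 7.0 * (f0 / fs) / math.sqrt(gain)
    return (1.0 - b) / (1.0 + b)

# One octave at 1 kHz with fs = 48 kHz and unity gain:
print(round(fil_s2(1.0, 1000.0, 48000.0, 1.0), 5))  # 0.74545
```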
Reading [2] we see the factor 'sqrt(gain)' ('gain' is 'K')
that we find in the FIL's formula (specifically the formula
for k2 at page 13, after equation (17)).
But the "bandwidth * 7 * (f0/fs)" part remains a total mystery
to me. It seems to be 'gamma' as found in [2], but 'gamma'
is way more complicated than what we see in FIL's code.
So the questions are:
- can someone explain to me what this bandwidth computation means?
- how is it derived, starting from a bandwidth expressed in octaves?
- and, using the notations of [1] and [2], how do we relate it to Omega,
or to the various versions found in [2]? (Which one is it, by the way? I
thought it was the one where "at the band-edge frequencies the gain is
gain/2 dB", but that does not seem to be the case: I wrote a little
program to plot things and, as far as my program is correct, the
band-edge frequencies don't have a gain of gain/2 dB.)
Regards,
Cédric.
[1] P. A. Regalia and S. K. Mitra, “Tunable Digital Frequency Response
Equalization Filters,” IEEE Trans. Acoust., Speech, Signal Process.,
vol. ASSP-35 (1987 Jan.).
[2] http://www.musicdsp.org/files/EQ-Coefficients.pdf
Hello all,
The version 1.1.0 of ams-lv2 is now available:
The two main "features" of this version are:
- Ported tons of additional plugins from AMS (VCEnv & VCEnv II,
Multiphase LFO, VC Organ, etc.)
- Ported the changes from AMS 2.1.1 (Bit Grinder, Hysteresis,
the bug fixes, etc.)
In addition, this release includes a lot of bug fixes and optimizations.
As a reminder, ams-lv2 is a set of plugins for creating modular
synthesizers, ported from Alsa Modular Synth.
Here is a demo of an older version of these plugins.
http://www.youtube.com/watch?v=LWfF71NerkQ
ams-lv2 1.1.0 can be downloaded here:
https://github.com/blablack/ams-lv2/releases/tag/v1.1.0
enjoy :)
Aurélien
Thanks,
I've created a job page on the GitHub job board now.
It sounds like a good place for it.
regards
hermann
Am 13.02.2015 um 12:44 schrieb Cillian de Róiste:
> Hi,
>
> You could try posting the details on:
> https://github.com/opensourcedesign/job-board or asking on freenode
> #opensourcedesign
>
> Good luck!
> Cillian
>
> 2015-02-12 7:13 GMT+01:00 Hermann Meyer <brummer-(a)web.de>:
>> Hi
>>
>> The topic says it all: is there any graphic designer around here who would
>> like to create a new, overall design for the guitarix project?
>> If so, please contact me.
>>
>> regards
>> hermann
>>
>
>
Hi
The topic says it all: is there any graphic designer around here who
would like to create a new, overall design for the guitarix project?
If so, please contact me.
regards
hermann
Hi!
My name's Álvaro, from Barcelona [it's my first post to this list, so I
say hello :) ]. I'm dealing with some issues with time scaling, and I'm
a little bit lost trying to find the problem.
I'm trying to record an analog audio input using *arecord*, in order to
do some fingerprinting (feature extraction). The problem is that after
every reboot of the recording machine, the recording has a different
time scale. As the fingerprint algorithm works with the relative
positions of energy peaks at different frequencies and the times at
which they occur, the algorithm sometimes works but sometimes fails.
Could you give me any guidance on where to look for a possible solution
to the time-scaling issue? Thank you very much in advance!
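For what it's worth (a guess, not a diagnosis): one common cause of a
per-boot time-scale change is the capture device ending up at a
different sample rate than the analysis assumes. Pinning the rate
explicitly, e.g. `arecord -D hw:0 -f S16_LE -r 48000 -c 2 take.wav`, and
then checking what actually landed in the file can rule that out. This
sketch uses only the Python standard library (`wav_sample_rate` is an
illustrative name):

```python
import wave

def wav_sample_rate(path):
    # Read the sample rate stored in the WAV header, to verify that
    # arecord really captured at the rate that was requested.
    with wave.open(path, 'rb') as w:
        return w.getframerate()

# wav_sample_rate('take.wav') should come back identical on every boot.
```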
Álvaro
---
Best wishes to everyone, big kisses, good luck ("in bocca al lupo"), goodbye and see you soon
pino
Hello guys,
I've followed this community for a while now, and this is my
contribution: a survey about LAD.
*Why ?*
I've made this survey to get a snapshot of what the LAD community is,
and to help understand how it works. This community has always worked in
an informal way, with ups and downs, and I thought it would be really
useful to look at it with objectivity.
*Who ?*
It is open to any contributor to any Linux audio software.
*What's it for?*
This study aims to stay within the LAD community and to provide very
practical information. It does not intend to be part of an essay or any
academic work. Its only purpose is to provide a solid ground for further
discussion about LAD's future. I've already started to gather
qualitative data (as some of you know) and, while you answer this, I'll
collect some more data about the projects themselves. Once it is all put
together, we should get a very precise overview of what this community
actually is.
*How ?*
*The survey itself takes around 30 minutes to complete, and the answers
will be anonymized* before any publication (if you wonder why I ask for
your name: it's to help me build a projects-interactions graph, as well
as a few other things).
*When ?*
I hope to get most answers within 2 or 3 months. After that it'll take a
couple of weeks to analyze the data and write something about them. I'll
keep you informed.
*Where ?*
*Here's the link : *
http://tumulte.me/lime/index.php/survey/index/sid/174631/token/bcdqg4tww7n6…
I've done my best to think about the most relevant questions and
possible answers, but if you think something is missing, wrong or
whatever, please contact me.
Thanks in advance,
Etienne/Tumulte