I have been playing with numbers (a little) for AoIP and AES67 (because I
can actually read the spec). The media clock (AKA word clock) is derived
from the wall clock, which is synced via PTP. There are three sample rates
supported: 44.1k (not sure any physical devices really use it), 48k and 96k.
The first thing I find is that it is not possible to get an even word clock
via simple math. The wall clock moves one tick per microsecond, which at 48k
works out to 20.8333... (repeating) microseconds per sample. (44.1k is a
mess.) I would suggest this is why AVB and AES67 at lowest latency already
use 6 sample frames, which is a nice even 125 usec.
It is obvious to me from this that the true media clock is derived by
external circuitry. The one "must support" format is 48k at 48
samples per packet (both 16 and 24 bit on the receive end of a stream). That
is a 1 ms packet time. (There are AoIP devices that only support AES67 with
this format.) So I am guessing that it must be easy to derive a media clock
from a 1 ms clock input. The computer (CPU board) would have a GPIO with a
1 ms clock out derived from the CPU wall clock, which in turn is kept in sync
with the network via PTP. I am guessing there is already a chip in
production that does this with almost no support circuitry.
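A quick back-of-the-envelope of the numbers above (nothing AES67-specific,
just arithmetic):

    // Microseconds per sample versus the 1 us tick of the wall clock.
    #include <cstdio>

    int main()
    {
        const double rates[] = { 44100.0, 48000.0, 96000.0 };
        for (double fs : rates) {
            printf("%6.0f Hz: %9.4f us/sample, 6 frames = %8.3f us, 48 frames = %8.1f us\n",
                   fs, 1e6 / fs, 6e6 / fs, 48e6 / fs);
        }
        // At 48 kHz a single sample is 20.8333... us, which never lands on a
        // whole microsecond, but 6 frames = exactly 125 us and 48 frames =
        // exactly 1000 us, so those packet times do line up with the wall clock.
        return 0;
    }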
The side effect of the 1 ms per packet standard is that streams are limited
to 8 channels at 48k and 4 channels at 96k, so for more channels more
streams are required. This maximum comes from the requirement that the full
packet must fit into one standard Ethernet frame with the MTU at 1500
(minus headers etc.); the transport does not reassemble fragmented packets
and basically drops them.
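A rough sanity check on those limits (the header sizes below are the usual
minimum of IPv4 + UDP + RTP; the arithmetic suggests the raw MTU would allow
a couple more channels, so the 8/4 figures are presumably the spec's
mandated receive capability rather than a hard MTU fit):

    // How many 24-bit channels fit in one packet at a 1 ms packet time?
    #include <cstdio>

    int main()
    {
        const int payload        = 1500 - (20 + 8 + 12);  // MTU minus IPv4/UDP/RTP headers
        const int bytes_per_samp = 3;                      // L24 (24-bit) samples
        const int rates[]        = { 48000, 96000 };

        for (int rate : rates) {
            int frames_per_packet = rate / 1000;           // 1 ms of audio
            int max_channels = payload / (frames_per_packet * bytes_per_samp);
            printf("%d Hz: %d frames/packet, %d channels fit in %d payload bytes\n",
                   rate, frames_per_packet, max_channels, payload);
        }
        // 48 kHz: 48 frames x 3 bytes = 144 bytes per channel -> 10 fit;
        // 96 kHz: 96 x 3 = 288 bytes per channel -> 5 fit.
        return 0;
    }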
1 ms of latency in the JACK world is the same as 16 frames x 3 periods (48
frames at 48k). This may explain why /3 works better for other formats such
as USB or FireWire. It may also be the bridge that works best with OPUS at 5 ms.
--
Len Ovens
www.ovenwerks.net
Is anyone in contact with Lars? I tried to send him an email but it immediately
bounced, so I think I've got an address that's no longer valid.
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hello all,
Today I got an email from a user asking me to help him make a plugin
called 'zitaretuner' work. I never wrote such a plugin, and I didn't
even know it existed. So I can't help this user.
Of course this made me curious, and I managed to get a copy of
the source code of this lv2 plugin. And I wasn't very amused.
As expected it's based on zita-at1, and again a complete disaster.
The DSP code of zita-at1 is written as a neat self-contained C++
class with a very clean interface, and this is done explicitly to
make it re-usable.
But instead of re-using it, the author of the plugin decided to
rewrite it in C and combine it in the same source file with parts
of libzitaresampler (instead of using that as a library, as it is
meant to be used), and with whatever is required to turn it into an
LV2 plugin. The whole thing is just a single source file.
The same author (who is known only as 'jg') didn't bother to add
a decent GUI, relying on the plugin host to create one. That means,
for example, that the note selection buttons (which also double as
'current note' indicators in zita-at1) are replaced by faders.
Only $GOD knows what they are supposed to control.
And as a final topping on the cake, that whole crappy thing is
presented as if I were the author of it all. No mention at all
that things have been modified, and by whom or why. This alone
is a clear violation of the license under which zita-at1 was
released. And whoever did it doesn't even have the courage to
identify him/herself.
I've complained about this sort of thing before, and this time
I'm really pissed. So let one thing be clear: I will never again
release any code under a license that allows this sort of thing
to happen.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be a utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
I have been doing some reading on audio over IP (or networking of any
kind), and one of the things that comes up from time to time is collisions.
Anything I read about Ethernet talks about collisions and how to deal with
them. When I was thinking of a point-to-point layer 2 setup, my first
thought was that there should be no collisions. Having read all the AES67
and other layer 3 protocols, there does not really seem to be any mention of
collisions. My thought is that on a modern wired network there should be no
collisions at all. The closest thing would be a delay at a switch while it
transmits another packet, which in a hub would have been a collision.
So my thought is that AoIP at low latencies depends on a local net where no
collision is possible. Am I making sense? Or am I missing something?
The various documents do talk about three kinds of switches: home,
enterprise and AVB. It is quite clear what makes an AVB switch, but what
does an "enterprise" switch have over any other switch aside from
speed? I am sure I am being small minded in my thoughts here. For example,
I am expecting very little non-audio bandwidth, and I am guessing that the
average home switch does not prioritize any style of packet over another.
I guess I am asking what parts of a switch are important for audio. I am
guessing that HW encryption (offloading the SSL from the server to the
switch) is not something that helps audio. Even with proprietary
protocols, it would not make sense to use encryption. There seems to be a
real plug-and-play emphasis where the audio-enabled LAN is firewalled off
from the rest of the world. The streams are multicast, so any box that
recognizes the packets can use those channels and offer its own audio
channels. Any box can send control messages to reroute audio... there must
be some kind of authentication for that... (light bulb) this is why AES67
does not include control, and perhaps not discovery either. I can just see
someone walking into a concert with a notepad and deciding they want
their own mix... I am guessing the promoter would not want people walking
away with their own "direct from the mixer" audio track of the event
either. Not quite plug and play then. The wireless router wants to limit
what traffic it deals with.
So AES67 allows control to be offloaded to a web interface with access
control (switch SSL offloading would not help here anyway). Discovery ends
up being manual at first glance. I am thinking it will not be long before
control and setup software automates logging into the web interface and
setting things up. A system-wide user/password and an interface that
defaults to streaming its own inputs as multicast is already discoverable.
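As an aside on the prioritization question above: as far as I can tell
AES67 relies on plain DiffServ, i.e. the sender marks its packets and a
QoS-aware switch honours the marking, while a cheap home switch typically
just ignores it. A minimal sketch of the sender side with plain POSIX
sockets; the address, port and DSCP value are assumptions (the spec's
default media class is AF41 if I read it right, while a lot of existing
gear uses EF):

    // Mark outgoing audio/RTP packets with a DiffServ class so a QoS-aware
    // switch can prioritize them. IPv4 only; address and port are made up.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        int dscp = 34;                 // AF41; pick whatever class your network prioritizes
        int tos  = dscp << 2;          // DSCP lives in the upper 6 bits of the TOS byte
        if (setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof tos) < 0)
            perror("setsockopt(IP_TOS)");

        sockaddr_in dst = {};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(5004); // a typical RTP port, nothing magic about it
        inet_pton(AF_INET, "239.69.1.10", &dst.sin_addr);

        const char dummy[] = "RTP payload would go here";
        sendto(sock, dummy, sizeof dummy, 0, (const sockaddr *) &dst, sizeof dst);
        close(sock);
        return 0;
    }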
--
Len Ovens
www.ovenwerks.net
> On 10-10-2014 21:14, Fons Adriaensen wrote:
> > [full text of the 'zitaretuner' message quoted above snipped]
I can understand you are very angry about this. Does the GPL
really allow someone to take someone else's GPL code,
release it, and pretend everything was written by the original
author?
Hi everybody,
Sorry for the lack of communication from my side. I've been ill and mostly
lying in bed the past 2-3 weeks (still am), and my team is on vacation.
Otherwise you'd have seen that announcement already. There will be a more
formal announcement with all the relevant information real soon now, but
let me first give you the most important bits so that you can already mark
the date in your calendars:
Our beloved Linux Audio Conference (LAC) will take place at the
Johannes Gutenberg University (JGU) in Mainz (Germany)
from Thu, April 9 to Sun, April 12 2015
Last year's LAC was at the ZKM in Karlsruhe, and Mainz (the capital of the
German Bundesland of Rhineland-Palatinate) is not very far from there. In
fact it's right in the vicinity of Frankfurt/Main airport (some 17 min with
the ICE fast train), so it should be easy for everybody to get here.
The LAC regulars among you probably know me, as I've been to most LACs
myself. But for those who don't: I'm the head of the computer music
research group at the JGU, a mathematician, computer scientist, open source
developer, and long-time Linux user and Linux audio enthusiast. I'm
organizing LAC 2015 on behalf of the LAC organization team together
with my team and some of my friends and colleagues, and we've already been
busy getting our hands on all the needed locations and technical equipment.
The entire conference and the concert events will take place on our campus
(which is fairly big, our university has some 35000 students, kind of like
a small city of its own). All the relevant locations are within walking
distance: lecture halls, concert venues, restaurants and cafeterias.
Ok, I guess that this is all I have to say right now. :) As I said, a more
formal announcement will follow later. I hope to meet many of you at LAC
2015 in Mainz, so that we can make it a great conference! If you have any
further questions at this point then please don't hesitate to contact me by
email or the usual social network facilities (I'm on both G+ and fb);
please see my signature below.
Thanks,
Albert
-------- Original Message --------
> Subject: Re: [LAU] Linux Audio Conference 2015?
> Date: Mon, 06 Oct 2014 19:20:03 +0200
> From: Giso Grimm <gg3137(a)vegri.net>
> To: linux-audio-user(a)lists.linuxaudio.org
>
> On 06/10/14 09:03, Jeremy Jongepier wrote:
> > On 10/04/2014 07:10 PM, Giso Grimm wrote:
> >> On 10/01/2014 01:00 PM, Giso Grimm wrote:
> >>> does anyone know when and where the next linux audio conference will
> >>> take place?
> >>
> >> No LAC in 2015?
> >
> > I think there will be an official announcement soon. Attentive
> > internauts have already spotted the location too.
> >
>
> Thanks! I am looking forward to it, and thanks to all organizers in the
> background!
>
> Giso
>
--
Dr. Albert Gräf
Computer Music Research Group, JGU Mainz, Germany
Email: aggraef(a)gmail.com
WWW: https://plus.google.com/+AlbertGraef
Hello
I managed to make my example plugin
<https://bitbucket.org/xaccrocheur/ksi> (based on the only LV2 synth
tutorial <http://www.nongnu.org/ll-plugins/lv2pftci/#A_synth> that I
could find) monophonic :)
Now the only thing that is left is a portamento function, to go from one
note to another (/exactly/ like so-404
<http://d00m.org/%7Esomeone/so404/> does).
I guess it has to do with the
void on(unsigned char key, unsigned char velocity) {
    m_key = key;
    m_period = m_rate * 4.0 / LV2::key2hz(m_key);  // note period derived from the MIDI key's frequency
    m_envelope = velocity / 128.0;                  // simple velocity-to-level mapping
}

void off(unsigned char velocity) {
    m_key = LV2::INVALID_KEY;                       // mark the voice as released
}
functions; I need to store the "note out" value to go from that to a
"note in"...
Does anyone know the algorithm to implement that? That would really help me.
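(One common way to do it, sketched under the assumption that m_period is
what drives the oscillator each sample; m_target_period and m_glide are
names I am inventing here, not part of the tutorial code:)

    // Hypothetical glide: on() only sets a *target* period, and the run()
    // loop slews m_period toward it a little every sample.
    void on(unsigned char key, unsigned char velocity) {
        m_key = key;
        m_target_period = m_rate * 4.0 / LV2::key2hz(m_key);
        if (m_period <= 0)                 // very first note: nothing to glide from
            m_period = m_target_period;
        m_envelope = velocity / 128.0;
    }

    // Call once per sample, before m_period is used to render:
    void glide() {
        // m_glide in [0,1): 0 glides instantly, values near 1 give a slow portamento
        m_period += (1.0 - m_glide) * (m_target_period - m_period);
    }

The same idea works on frequency rather than period; slewing the period is
just the smallest change relative to the code above.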
Thanks
PS - Excuse my poor English, I'm working on it
--
Philippe Coatmeur
Having read the AES67 spec, and reading all the various takes on it... I
have seen this thought a few times:
"AES67 is a good start in that it provides for a common timing method to
rate-lock everything on the network, but it does not announce the streams
or provide a common control protocol." [1]
Despite the quote above, it seems all interface makers do have an open
control protocol of sorts: HTTP. That is, all of them seem to offer a web
control interface as a method of control alongside whatever other network
control they use. This is not the same as interoperability, but it does
mean one app could control more than one device even though they use
different protocols.
Discoverability is also less of an issue than it is made out to be because
of the multicast nature of things. The audio packets are recognizable both
by their addressing and their content, and with the web control open, the
session parameters are also a known factor.
I have also seen this thought from every manufacturer or AoIP protocol
vendor:
We welcome an open standard and want to support it. Interoperability is
good for us. (my paraphrase)
I would suggest that no one wants to be the maker whose stuff works with
other products but requires extra attention to get set up. That means
extra support, which costs money. The AES67 document does suggest some of
the discovery types available:
Bonjour
SAP
Axia Discovery Protocol
Wheatstone WheatnetIP Discovery Protocol
Of these four, SAP seems to be the only one not attached to a particular
vendor (it is quite old as these things go), but all of them seem to be at
least somewhat open. That is, I suspect these were the choices put forward,
but there was no agreement on one. It would not be hard to support all
four, but I suspect one of them will just get used and become the de facto
standard. Any forum messages I have read just suggest the dev use SAP,
which is what happens.
In any case the format of the session information is in the standard, so
once it is found it can be used.
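For the curious, the session information is plain SDP (the same stuff SAP
carries); a stream in the mandatory 48 kHz / 24-bit / 1 ms format might be
announced with something roughly like this, where the addresses and PTP
clock identity are made up:

    v=0
    o=- 1 0 IN IP4 192.168.1.10
    s=Example AES67 stream
    c=IN IP4 239.69.1.10/32
    t=0 0
    m=audio 5004 RTP/AVP 96
    a=rtpmap:96 L24/48000/8
    a=ptime:1
    a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-00-00-00:0
    a=mediaclk:direct=0

The rtpmap line carries the format (L24, 48 kHz, 8 channels), ptime carries
the packet time, and the last two lines tie the stream to the PTP clock.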
As for control, the web interface may become more standard (if it isn't
already; these protocols seem to come with the firmware). HTTP does not
have to mean a human interface.
I think that in the same way these people got together for a common
transport protocol, a common control and announcement standard will show up.
Anyway, it is probably worthwhile creating an AES67 driver for Linux, as it
would obviously allow the use of the next batch of audio interfaces to
show up. Even if the user has to use a browser to set the interface up and
enter the session parameters, this is still better than the control Linux
drivers have over some of the audio interfaces that are "supported" in
ALSA now. The use of the interface's DSP power, mixing, EQ, etc. may not
affect DAW use so much (though I think it will), but it may suggest new
uses for the interface.
[1]
http://www.c2meworld.com/creation/new-protocols-enhance-console-networking/
--
Len Ovens
www.ovenwerks.net
Let's say we have a synth and a sequencer connected via JACK.
The sequencer spits out a command timed at position 4 (I'm deliberately
keeping the numbers generic), and the synth then knows that on the 4th
sample it has to do whatever the command says (again, I know it's a bit
more 'interesting' than that). All fine and dandy: the synth bundles up its
data and chucks the buffer full of audio to JACK.
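(For reference, the mechanics on the synth side usually look roughly like
this in a JACK process callback; render() and handle_event() are
placeholder names for the synth's own code, not a real API:)

    // Events arrive with a frame offset ("time") inside the current period;
    // the synth renders audio up to each offset before applying the event.
    #include <jack/jack.h>
    #include <jack/midiport.h>

    extern jack_port_t *midi_in;    // created elsewhere with jack_port_register()
    extern jack_port_t *audio_out;

    void handle_event(const jack_midi_event_t &ev);                    // placeholder
    void render(float *out, jack_nframes_t from, jack_nframes_t to);   // placeholder

    int process(jack_nframes_t nframes, void *)
    {
        void  *mbuf = jack_port_get_buffer(midi_in, nframes);
        float *abuf = (float *) jack_port_get_buffer(audio_out, nframes);

        jack_nframes_t n   = jack_midi_get_event_count(mbuf);
        jack_nframes_t pos = 0;

        for (jack_nframes_t i = 0; i < n; ++i) {
            jack_midi_event_t ev;
            jack_midi_event_get(&ev, mbuf, i);
            render(abuf, pos, ev.time);  // audio up to the event's sample position
            handle_event(ev);            // then apply the note-on/off etc.
            pos = ev.time;
        }
        render(abuf, pos, nframes);      // the rest of the period
        return 0;
    }

Whether the events in that buffer come from this period or the previous one
is exactly the ordering question below.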
But what happens if the synth was registered with JACK before the sequencer?
Presumably it is now going to get its MIDI data *after* it has already
processed that callback.
Say now I have this brilliant counter-melody idea (well, it could happen :) ).
However, I don't have a keyboard with me. Still, I can just fire up
jack-keyboard and link it to the sequencer. Only now the sequencer is pushing
out its MIDI data before it gets anything from the keyboard.
I know this sounds contrived, but I have frequently done exactly this using
ALSA MIDI, and I confess I've no idea what actually happens in that situation
either!
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hi Linux Audio Developers,
TL;DR: Discussing experience-driven design for Linux audio.
I'd like to discuss the "age of experiences". Allow me 10 minutes of
your time to watch a video of Aral Balkan talking about the development of
technology, FLOSS, design, and the future.
To start, please watch the following clip; I've skipped into the video
to the section I think is most interesting to discuss on this
list:
https://www.youtube.com/watch?feature=player_detailpage&v=ldhHkVjLe7A#t=1625
To bring this discussion to a productive start, I'd like to consider
the tools we have available as the linux-audio community: they
certainly have features, and they empower the user to own their tools and
the data used with those tools.
Should we improve experience for users?
Should we design "experience driven open" software?
Should we forward the UX of Linux Audio to the "age of experiences"?
What do users know, that developers might not?
What is it that needs to change? Are there even issues here?
If so, how do we (the community as a whole) try to solve this?
I hope this is a productive and inclusive discussion, and politely
request remaining on topic ;)
To a fruitful discussion, -Harry
--
www.openavproductions.com