[LAD] AoIP question

Len Ovens len at ovenwerks.net
Sat Oct 11 17:59:44 UTC 2014


On Sat, 11 Oct 2014, Winfried Ritsch wrote:

>>> http://www.rtnet.org/
>
>> This looks like the same idea I had in the beginning. I may look further
>> when I have time. Prioritizing audio (RT) through the net and tunnelling
>> the low priority packets through. It is nice to know I don't have to do
>> all the work and there is already something like this out there.
>>
> [...]
>
> Note: rtnet uses time slots and works at kernel level. This means there are
> reserved times for each device, which guarantee very low latency (lower than
> one sample time) and no packet drops even at very low latency. The interface using

In my case I only envisioned one RT time slot for audio, but it was a 
kernel driver idea as well. While it is possible to get latency below one 
sample time, with a 100M NIC a full-sized packet takes longer than one 
sample time.
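To put rough numbers on that (a sketch, assuming a 100 Mbit/s link and a full-size Ethernet frame of 1538 bytes on the wire, counting preamble, SFD and inter-frame gap):

```python
# Illustrative arithmetic only: how many sample times one full-size
# Ethernet frame occupies on a 100 Mbit/s link.

WIRE_BYTES = 1538          # 1500 B payload max + ~38 B of on-wire framing
LINK_BPS = 100_000_000     # 100 Mbit/s NIC

frame_us = WIRE_BYTES * 8 / LINK_BPS * 1e6   # ~123 us per frame

for rate in (44100, 48000, 96000):
    sample_us = 1e6 / rate
    print(f"{rate} Hz: sample time {sample_us:.1f} us, "
          f"one frame = {frame_us / sample_us:.1f} sample times")
```

At 48 kHz a sample time is about 20.8 us, so a full frame is roughly six sample times; sub-sample latency is only possible with small packets or a faster link.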

> rtnet could do like 3-samples latency. No collisions without buffering, which
> switches use to prevent them. Anyhow it works only in a dedicated Ethernet
> zone... so it is easy to use to implement customized dedicated solutions
> (multichannel speaker systems or multichannel microphone arrays) but hard to
> fit any standard for use with every device or an unpredictable device count

Yes, that was my realization as well. It is a single-purpose solution. AES67 
is higher latency, but very usable. One just has to look at the latencies 
we are using now: 128 samples (64/2) is very common and about as low as 
USB can go, the Intel HDA devices seem to bottom out at 64/3 or 128/2, and 
the lowest setting most GUIs will show is 16/2. It is one thing to build an 
Ethernet AI (audio interface) and make it work, but making it also work 
with lots of other gear, where there is a standard, is (I think) worthwhile. 
The thing with AES67 is that while the standard itself is quite new, there 
is already HW out there, and it builds on older standards for just about 
everything that Linux already understands: Linux already has PTP, multicast, 
IGMP, etc. It seems all that is needed is to write the glue so these 
streams can be seen in JACK or ALSA.
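For comparison, the buffer settings mentioned above translate to latency as frames-per-period times periods divided by sample rate (a sketch, assuming 48 kHz):

```python
# Illustrative arithmetic only: buffer latency for the period/nperiods
# configurations mentioned in the text, at an assumed 48 kHz sample rate.

RATE = 48000  # Hz, assumed

for frames, periods in [(64, 2), (64, 3), (128, 2), (16, 2)]:
    ms = frames * periods / RATE * 1000
    print(f"{frames}/{periods}: {frames * periods} samples = {ms:.2f} ms")
```

So the common 64/2 setting is about 2.67 ms of buffering, and 16/2 is well under a millisecond, which frames what "usable" latency means for an AES67 path.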

Making an AoIP audio IF in open HW that uses an open standard makes sense 
too, as it is immediately usable by the whole computing community on most 
OSs. The first step is to make an open AES67 kernel driver. This is not as 
easy as it might seem: the network landscape for AoIP can change 
constantly, with new IFs showing up and others leaving. The PCIe audio 
card method of dealing with AoIP presents a fixed number of channels to 
the computer and then lets the SW connect those channels to whatever is 
available on the network. This might be a good way to structure a SW 
driver as well. In the case of a HW AI, the number of audio ports is 
known, so it is an easier problem. Of course in the HW case an ALSA/JACK 
client/driver need not be written, as it may be easier to just speak AES67 
directly from the audio HW. This would limit the use of the HW for DSP :)

On the computer that is using the AoIP IF, having JACK deal with 64 (put 
whatever number in here) i/o channels when only 8 are in use does not make 
sense. 8 is a pretty common number (even a lot of HDA internal sound chips 
have 8 outputs), so maybe make the first 8 i/os the lowest-numbered ALSA 
device and add further sets of 8 as needed. These higher sets of channels 
could be added to JACK as clients with no resampling needed, because they 
are in sync. I do not know JACK well enough to say whether adding AIs to 
JACK this way would introduce some delay between the first 8 ports and the 
next 8+ ports.
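The bank-of-8 idea can be sketched in a few lines. This is only an illustration of the bookkeeping, not a real ALSA or JACK API; the function names are invented:

```python
# Hypothetical sketch: expose a large, variable AoIP channel count as
# fixed 8-channel banks, adding banks only as channels come into use.
# Nothing here is a real ALSA/JACK call; names are invented.

BANK = 8

def banks_needed(channels_in_use: int) -> int:
    """How many 8-channel devices/clients must be exposed (at least one)."""
    return max(1, -(-channels_in_use // BANK))   # ceiling division

def bank_map(total_channels: int):
    """Split a stream's channels into consecutive banks of up to 8."""
    return [list(range(start, min(start + BANK, total_channels)))
            for start in range(0, total_channels, BANK)]

print(banks_needed(8))    # 1: the common 8-channel case fits one bank
print(bank_map(20))       # three banks of 8, 8 and 4 channels
```

The first bank would map to the lowest-numbered ALSA device; each further bank would appear as an extra JACK client, sharing the same clock so no resampling is needed.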

--
Len Ovens
www.ovenwerks.net
