[LAD] Open Source Audio Interface
len at ovenwerks.net
Thu Sep 11 21:24:56 UTC 2014
On Thu, 11 Sep 2014, Arnold Krille wrote:
> Just some cents...
> On Thu, 11 Sep 2014 12:44:45 -0700 (PDT) Len Ovens <len at ovenwerks.net> wrote:
>> Unlike MADI, empty channels would not be filled with null data, but
>> rather just not sent. In MADI, 64 channels are always sent even for a
>> payload of 1. In this case we should send multiples of two. There is
>> no reason that the channel count should not change from frame to
>> frame, but of course the receiving software would have to have time
>> (non-real-time) to reset itself, and audio software that doesn't know
>> how to deal with more channels all of a sudden might need restarting too :)
>> In Jack's case, if the first two were set up as the sound card, the
>> rest could be added as jack clients. On the AI end they would all be
>> jack clients anyway and jack would see the whole complement of codecs
>> all the time. I feel it is worth while having jack in the AI because
>> this would allow routing. There would be no having the outputs be
>> channels 9 and 10 because that happens to be how s/pdif appears
>> because those could look to the host like 1 and 2.
> Varying channel count per packet/frame means a varying number
> of malloc/free per packet -> bad for realtime. It's probably better to
> stay with one channel count once the stream is set up.
> Thinking about it some more, I think one of the reasons why several
> vendors use dedicated network-devices with dedicated drivers is to
> reduce the need for malloc/free in the kernel/driver as much as
> possible and just use fixed buffers once the channel-count (and
> word-size) are known.
I have worded things badly. The audio packet size will always be the same
for a given channel count, whether the channels are all audio or some mix
of audio and MIDI. What I meant (if I did not really say it) is that the
receiving end would detect a change in packet size and reset its buffers
etc. to deal with it. Perhaps an example would help.
The user wishes to record a scratch track - rough voice and guitar (or
keys) to a beat box (click). They set the AI to send only tracks 3 and 7
as tracks one and two and record. Then they want to record an 8-track drum
take, so they route the 8 mics to 8 channels. The AI starts sending 8
tracks. The host sets up larger buffers and starts seeing 8 tracks.
That is really a bad example though: why would anyone do that? It makes
more sense to use the same number of tracks throughout a session so that
mic preamps stay with the same tracks. A better example: the user plugs a
second AI into the first AI's second NIC. Now the first unit looks like it
has 16 channels rather than 8, so the packet size increases and this
change is recognized by the host at that time. Or perhaps the user plugs
an s/pdif source into the AI and we only want to send those channels when
they carry data.
Where I see your point as being useful, though, is this: if I choose to
send data (which could arrive as varying-sized packets), I may still want
to fix the data packet size based on what is left over from the audio. The
whole network end of things needs to be decoupled from real time.
To be honest, I have spent a lot less time thinking about the network
traffic part of things than the audio part of things. The reason for this
is that the use cases for allowing general network traffic through our
link are few. As has already been commented, a desktop can have a second
NIC and a laptop generally has wireless. The main use case for network
traffic through the same interface might be radio work, where one might
have a SIP connection (phone line) going over the net. (Anyone see
others?) Because the AI has a second NIC and an OS, it should be possible
to set up the SIP session on that box.