That is such good news. What (low-cost) hardware would this development be
used on to support the developers with testing/debugging, and maybe even:
* MOTU LP32 (Preferred)
* MiniDSP https://www.minidsp.com/products/network-audio/avb-dg (I think
MOTU's switch uses MiniDSP switch hardware)
I hope someday it will be possible to connect 4 or more 8-channel ADAT
modules (32 channels) to a PC under Ubuntu via AVB with low latency. Right
now the only option to get this done under Windows is the Focusrite
Dante-based RedNet 3, because Thunderbolt is not really available there
either. I plan to get a RedNet 3, but that does not help the Linux
environment, which I prefer. I would love to be able to use the RedNet 3
under Linux, but since Dante is proprietary, that seems unlikely.
My two wishes:
[a] Multi (16+) channel low latency audio I/O using ADAT audio AD/DA
[b] Bitwig supporting LV2 plugins.
With those two, the Linux Audio environment would be perfect and the world
a better place.
*(Apologies for the re-sends; please ignore the previous edits. Web-based
Gmail is such an annoyance and illogically structured.)*
On Mon, October 23, 2017 11:53 am, Philippe Bekaert wrote:
> wireshark reveals that dante is using ptp v1 - it's not that
Correct. I have not looked up the Audinate patent(s) yet, so I do not know
what was unique enough to be awarded a patent. In any case, ALC NetworX did
not seem to think that the Ravenna approach of using PTPv2 (IEEE 1588-2008)
for synchronization, and RTP/RTCP (IEEE 1733-2011, IETF RFC 3550 and
RFC 3551) for media streams, would be covered by any patents.
> Especially rtp is not difficult : it's just one packet format, and we
> need none of the special features, essentially just the time stamp
> in the header and the audio data.
Yes, the time stamp calculations and latency adjustments seemed like the
only part that might be a little tricky. The rest seems like just arranging
sample values into the correct order in the packet. JACK uses floating
point as its sample format and RTP uses integers, so the usual conversion
between integer and floating point that you would do when sending samples
to a hardware sound card applies.
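As a sketch of that conversion (the helper name is my own; AES67 streams
commonly carry 24-bit samples in the L24 payload format, most significant
byte first, and I assume simple clipping at full scale):

```cpp
#include <cstdint>
#include <cmath>
#include <algorithm>

// Convert one JACK float sample (nominally in [-1.0, 1.0]) to a 24-bit
// signed integer in network byte order, as used by the L24 payload format.
// Out-of-range input is clipped. Hypothetical helper, not an existing API.
inline void float_to_l24_be(float s, uint8_t out[3]) {
    s = std::max(-1.0f, std::min(1.0f, s));           // clip to full scale
    int32_t v = (int32_t)std::lrintf(s * 8388607.0f); // scale by 2^23 - 1
    out[0] = (uint8_t)((v >> 16) & 0xff);             // most significant
    out[1] = (uint8_t)((v >> 8) & 0xff);              // byte goes first
    out[2] = (uint8_t)(v & 0xff);                     // (big endian)
}
```

Going the other direction (received L24 bytes back to float for JACK)
would be the sign-extending inverse of the same arithmetic.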
> I'm sticking with linux in the first place, but using platform
> independent tools whereever available.
> Hence also my consideration of the asio C++11 library for networking
OK, at first I misread that as an ASIO library, i.e. the Steinberg-designed,
Windows-specific multi-channel audio specification, not the asio C++
library for asynchronous I/O.
I checked my Fedora installation and it is easily available as the
asio-devel package; I think libasio-dev is the equivalent in Debian, so
that should not be a problem for any recent distribution. Actually, now
that I look at the details, it seems to be just headers (asio can in fact
be used as a header-only library), so I'm not sure where any actual
compiled library code would be. Something to investigate.
> I'm considering to have a user set up system time synchronisation
> with the audio network clock master and use system time as
> reference indeed, as you also proposed.
One thing I do not know is what the clock will be set to when using one of
the standalone audio devices as the PTP master. I'm not sure those devices
have a battery-backed clock, so if you set your system clock from the PTP
clock on a network that contains only a Ravenna device, for example, your
system clock could jump to some unexpected value, potentially far in the
past. For the moment that is just something to note in the documentation;
hopefully I will find a way to get access to some hardware for testing.
> It won't work if you have different audio devices synchronised with
> different master clocks
That is always the case, whether using network audio, USB, AES/EBU, or
S/PDIF connections, so again I think that is just a note for the
documentation. Typically, any devices on the same network segment or VLAN
will end up using the same master clock because of the PTP best master
clock algorithm, so in practical installations this should not be a
problem. I think you would have to specifically change the default clock
domain from 0 to some other value to run into it.
A second consideration is that two devices using the same master clock
could still be configured for different sample rates, in which case they
could not be used together as a single aggregate audio device. That could
be checked manually at the beginning, but eventually it should be part of
whatever configuration method exists: compatibility of stream parameters
should be verified before streams are allowed to be combined into the
aggregate device.
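A minimal sketch of such a pre-aggregation check (the structure and its
field names are hypothetical, not taken from any existing driver):

```cpp
#include <cstdint>

// Hypothetical summary of the parameters two network streams must agree on
// before they can be combined into one aggregate JACK device.
struct StreamParams {
    uint32_t sample_rate;     // e.g. 48000
    uint32_t frames_per_pkt;  // packet time expressed in frames
    uint8_t  clock_domain;    // PTP domain number (default 0)
};

// Streams qualify for aggregation only when they follow the same master
// clock (same PTP domain) and agree on sample rate and packet timing.
inline bool streams_compatible(const StreamParams& a, const StreamParams& b) {
    return a.clock_domain == b.clock_domain &&
           a.sample_rate == b.sample_rate &&
           a.frames_per_pkt == b.frames_per_pkt;
}
```

A real configuration tool would report *which* parameter mismatched rather
than just refusing, but the gate itself is this simple.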
On Mon, October 23, 2017 4:32 am, Philippe Bekaert wrote:
> ALC Networx and audinate (several times) concerning patent issues
My limited understanding is that the original Dante protocol is or was
covered by a patent on the non-standard clocking method it used, but
Ravenna specifically uses only IEEE and IETF specifications, which should
not be covered by any current patents, with the exception of features that
belong to the hardware (e.g. if the Ethernet adapter vendor licenses a
patent for some hardware feature, we should not need to care). I have not
heard of any patents asserted against IEEE 1588-2008, and protocols like
RTP and RTCP have been implemented for many years by browsers and media
players, so I think there should be no patent problems there.
> many choices for the implementation of each of the required components
> (full implementation generic libraries, or ad hoc pieces of open source
My preference would be to re-use anything available; the days are too short
to re-implement software that already works. That presumes that a library
for what is needed is well maintained, and that the function is not so
simple that understanding how to use the library is more work than just
implementing it directly in the JACK driver.
One thing that will influence which libraries you choose is whether you
want to make the software cross-platform or Linux-only. My thinking is
that virtual sound card implementations already exist for Windows and
macOS, so JACK on those platforms can use the existing sound system
interface.
If you implement it as Linux-only, you could take certain shortcuts, such
as requiring the user to install linuxptp and synchronize the system clock
to the audio PTP domain first. That way you could use the system clock for
scheduling the net driver.
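For example, once the system clock follows the PTP master, the driver's
period wakeups could be scheduled with absolute sleeps on CLOCK_REALTIME.
This is only a sketch under that assumption, with illustrative names:

```cpp
#include <time.h>
#include <stdint.h>

static const int64_t NSEC_PER_SEC = 1000000000LL;

// Advance an absolute deadline by one JACK period (frames at a given rate).
inline void next_deadline(timespec& t, uint32_t frames, uint32_t rate) {
    int64_t period_ns = (int64_t)frames * NSEC_PER_SEC / rate;
    t.tv_sec  += period_ns / NSEC_PER_SEC;
    t.tv_nsec += period_ns % NSEC_PER_SEC;
    if (t.tv_nsec >= NSEC_PER_SEC) { t.tv_nsec -= NSEC_PER_SEC; t.tv_sec++; }
}

// The driver loop would then look roughly like:
//   timespec deadline;
//   clock_gettime(CLOCK_REALTIME, &deadline);
//   for (;;) {
//       next_deadline(deadline, frames_per_period, sample_rate);
//       clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &deadline, nullptr);
//       // run one JACK cycle here
//   }
```

Using TIMER_ABSTIME against an accumulated deadline avoids drift from the
per-cycle scheduling jitter that relative sleeps would accumulate.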
> Right now, I'm using the free merging AES67 sound card driver on a
> macbook pro to generate AES67 streams, and later also to receive.
I do not have a Mac, so I did not even think of that. That is a good
choice, essentially the equivalent of the free Lawo R3lay driver I was
testing on Windows. The only disadvantage I can see is that the driver is
limited in the buffer sizes and sample rates it supports, so you can only
test a subset of what is possible to support, but it definitely should
allow getting all the basic work completed with just software
implementations.
> I'm running ptp4l (hardware timestamping support, on a server) or ptpd
> (if you don't have hw ts - on my laptop)
I believe ptp4l also supports software timestamping (its -S option).
I can run ptp4l on my workstation to synchronize to my BeagleBone PTP
master, even though ethtool reports that there is no PTP hardware clock
available:
Time stamping parameters for enp3s4f0:
PTP Hardware Clock: none
Hardware Transmit Timestamp Modes:
Hardware Receive Filter Modes:
I am continuing to copy the jack-devel mailing list for now; hopefully
there are some others who will be interested in participating. If not, we
can eventually take this to a private distribution until something is
ready for wider testing.
I am interested to know more about how JACK threaded signalling works ...
I know that jackd2 allows multithreading - what is the mechanism on
Linux for synchronisation?
Is it using pthread condition variables, futexes, or something else?
Sorry, due to posting with the wrong email address, I was told the post
wouldn't stick. Sorry to double dip.
Hello all, I have installed both versions of ZynAddSubFX (JACK and ALSA)
on Mint 18.2 64-bit. I am using QjackCtl. I have gotten both amsynth and
Qsynth to work fine. When I start ZynAddSubFX (JACK) I can make MIDI
connections in the QjackCtl ALSA tab and I see the meters move in
ZynAddSubFX, but on the Audio tab in QjackCtl I do not see ZynAddSubFX
listed and cannot make the audio connections.
If I start ZynAddSubFX (ALSA) I see the program on the QjackCtl Audio
tab but not on the ALSA tab, and cannot make MIDI connections.
Has anyone gotten it to work? Thanks. JoeF.