Hi *,
I wanted to start a discussion about what kind of AVB connectivity makes the most sense for JACK.
But please keep in mind what AVB is and isn't.
a) Fully functional Backend
b) Media Clock Backend with AVB jack clients
pro a)
- the AVDECC connection management could be done seamlessly in a jack way
- out-of-the-box AVB functionality
con a)
- only one talker/listener, single audio interface
- huge programming effort
- no dynamic audio mapping
pro b)
- multiple talkers/listeners with multiple audio interfaces using the ALSA API
- avoids a huge code addition to the backend, thus much easier to maintain
- AVDECC handling per client for dynamic audio mapping
con b)
- CPU load... the price for multiple talkers/listeners
I'm excited to hear your opinions!
Best,
Ck
--
This message was sent from my Android device with K-9 Mail.
Hello all.
I have embarked on a long journey of porting my radio automation and
mixing system from OSX (making extensive use of Apple's CoreAudio API)
to a Linux/JACK/gstreamer system and potentially, making it a cross
platform application. The application is actually a collection of
programs, ranging from a studio UI, library management UI, database
back-end, and an faceless audio-engine/mix system. I am currently
working on the audio-engine port, since potentially I could port that
and continue to use the old UIs on Macs until I can get those ported as
well.
In order to make this manageable, I decided to pull some audio features
out of my existing audio-engine, notably AudioUnit effects hosting, IAX
phone support, and multiple audio-device support. I should be able to
add LV2 effects hosting and IAX back in at some point. But for the
short term, I can use JACK to host effects in another program, and
just give up IAX/Asterisk integration. The multiple audio-device
support, via my own clock-drift measurement and re-sampling scheme, is
a loss for now, but it looks like the alsa_in and alsa_out programs
could be a stop-gap approach, as they appear to use a similar
underlying technique.
So far, the porting is going much faster than I had expected. Jack2 is
a very clean and well-thought-out API. It is not unlike CoreAudio, only
it's MUCH easier to use. Leveraging JACK's inter-application audio
routing, I have further broken my audio-engine down into a mixing
system and control session host, with separate programs executed for
playing and recording/streaming audio content. This is a very nice,
clean way to break things up, made possible by JACK. Thanks.
That is the big picture. I have come across some questions that I need
a little help with, all regarding the start-up process of the jackd
server:
1) At first, JACK seems to be designed for a single jackd server
running on a system, with the server control API lacking, as far as I
can tell, a method to control multiple servers by name. But with
the name command-line option, I can actually run multiple servers tied
to different audio devices. Does the server control API simply not
support names at this point? Maybe this is just how JACK evolved:
originally one jackd per machine, then named-server support was added,
but not fully implemented across the whole API?
2) Again, with the named server: I can start a named jackd server. I
can connect to it using QjackCtl by setting the name property in
the advanced tab of QjackCtl settings. But when I try to connect to the
named server from my audio-engine application, via a jack_client_open()
call, passing the name char string, my application instead tries to
start up a jackd server instance, using the command noted in the jackd
man page (first line of $HOME/.jackdrc). Is this a bug, or am I
missing some detail?
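For concreteness, my understanding of how a client is supposed to reach a named server is the JackServerName open option, roughly as below; the client name "audio-engine" and server name "myserver" are placeholders, and I add JackNoStartServer here only to rule out the auto-start path:

```c
#include <stdio.h>
#include <jack/jack.h>

int main(void)
{
    jack_status_t status;
    /* JackServerName makes jack_client_open() take an extra argument,
       the server name; JackNoStartServer suppresses the fallback that
       would exec the first line of ~/.jackdrc. */
    jack_client_t *client = jack_client_open("audio-engine",
                                             JackServerName | JackNoStartServer,
                                             &status,
                                             "myserver");
    if (client == NULL) {
        fprintf(stderr, "jack_client_open failed, status = 0x%x\n", status);
        return 1;
    }
    jack_client_close(client);
    return 0;
}
```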
3) Regarding the jack_client_open() behavior when no server is yet
running: the function call does seem to execute the first line of
$HOME/.jackdrc and start the jackd server running. However, it
appears to inherit all the file descriptors from my application. This
is a problem because my application is designed to self-restart on a
crash. With jackd holding my application's TCP control socket open, my
application can't restart (bind again to the desired TCP port) until
after I kill the jackd process. I assume the auto-jackd startup code
is forking and execing, and the code simply isn't closing the parent's
file descriptors. Is this a bug or intentional? Is there a way I can
detect ahead of time whether a jackd server is running, so I can start
the server myself using my own fork/exec, closing my descriptors in
the child, and then, once I know jackd is running, call
jack_client_open() in my app?
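A sketch of the detection idea, assuming the JackNoStartServer open option behaves as documented (it reports failure instead of launching jackd):

```c
#include <stddef.h>
#include <jack/jack.h>

/* Returns 1 if a JACK server is reachable, 0 otherwise.  With
   JackNoStartServer the library never forks a jackd of its own, so no
   file descriptors can leak into a child process here. */
static int jack_server_available(void)
{
    jack_status_t status;
    jack_client_t *c = jack_client_open("probe", JackNoStartServer, &status);
    if (c == NULL)
        return 0;          /* status has JackServerFailed set */
    jack_client_close(c);  /* server is up; drop the probe client */
    return 1;
}
```

If this returns 0, the application could fork/exec jackd itself with FD_CLOEXEC (or an explicit close loop) on its own descriptors, then retry.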
4) because my audio-engine is a faceless application, and can be run
without a desktop session, I need it to be able to connect to a jackd
server run by other users, or to start a jackd server that can be used
by other users. I can verify that on Ubuntu 18.10 I cannot start
jackd from one user account and then connect to it from my application
running as a different user, even if I run my app as root (not that I
intend to do that as a real-world workaround). Is there some approach,
group permissions possibly, to allow other users to access a jackd
server?
5) Wishful thinking: Give jackd the ability to read QjackCtl config
files, so I could configure things from the GUI, then stop jackd and be
able to restart it from the server-control API or command line with a
command-line option pointing to a config file. Better yet, make a
persistent JACK-aware place in the file hierarchy to store such files.
Thanks,
Ethan Funk
Hi there,
before I go and open a ticket in Jack 2's GitHub issue tracker,
requesting a new release, I thought I'd ask whether this is already in
the making and ready in the short term?
IMHO the metadata support should be made available to Jack 2 users as
soon as possible, so developers can rely on it being available with a
broader user base.
Also, the latest release is now almost 1 1/2 years old.
Cheers, Chris
Hi, I am just starting out trying to get a Linux-based studio up and
running. I am having a bit of trouble getting my sound card to work. I
have been trying to solve the problem myself, reading forums etc., but I
get the feeling I'm just going around in circles because I don't
actually know what I am doing. I manage to change the error messages in
JACK but not actually solve them. This is the latest output from JACK
when I try to start it:
10:11:25.234 Statistics reset.
10:11:25.245 ALSA connection change.
10:11:25.329 D-BUS: Service is available (org.jackaudio.service aka
jackdbus).
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping
unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping
unlock
10:11:25.407 ALSA connection graph change.
10:12:01.086 D-BUS: JACK server is starting...
10:12:01.199 D-BUS: JACK server was started (org.jackaudio.service aka
jackdbus).
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping
unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping
unlock
Wed May 15 10:12:01 2019: Starting jack server...
Wed May 15 10:12:01 2019: JACK server starting in realtime mode with
priority 10
Wed May 15 10:12:01 2019: self-connect-mode is "Don't restrict self connect
requests"
Wed May 15 10:12:01 2019: ERROR: Cannot lock down 82280346 byte memory area
(Cannot allocate memory)
Wed May 15 10:12:01 2019: Acquired audio card Audio0
Wed May 15 10:12:01 2019: creating alsa driver ...
hw:PCH|hw:PCH|1024|2|48000|0|0|nomon|swmeter|-|32bit
Wed May 15 10:12:01 2019: configuring for 48000Hz, period = 1024 frames
(21.3 ms), buffer = 2 periods
Wed May 15 10:12:01 2019: ALSA: final selected sample format for capture:
32bit integer little-endian
Wed May 15 10:12:01 2019: ALSA: use 2 periods for capture
Wed May 15 10:12:01 2019: ALSA: final selected sample format for playback:
32bit integer little-endian
Wed May 15 10:12:01 2019: ALSA: use 2 periods for playback
Wed May 15 10:12:01 2019: graph reorder: new port 'system:capture_1'
Wed May 15 10:12:01 2019: New client 'system' with PID 0
Wed May 15 10:12:01 2019: graph reorder: new port 'system:capture_2'
Wed May 15 10:12:01 2019: graph reorder: new port 'system:playback_1'
Wed May 15 10:12:01 2019: graph reorder: new port 'system:playback_2'
Wed May 15 10:12:01 2019: New client 'PulseAudio JACK Sink' with PID
1324
Wed May 15 10:12:01 2019: Connecting 'PulseAudio JACK Sink:front-left' to
'system:playback_1'
Wed May 15 10:12:01 2019: Connecting 'PulseAudio JACK Sink:front-right' to
'system:playback_2'
Wed May 15 10:12:01 2019: New client 'PulseAudio JACK Source' with PID 1324
Wed May 15 10:12:01 2019: Connecting 'system:capture_1' to 'PulseAudio JACK
Source:front-left'
Wed May 15 10:12:01 2019: Connecting 'system:capture_2' to 'PulseAudio JACK
Source:front-right'
Wed May 15 10:12:02 2019: Saving settings to
"/home/vinnie/.config/jack/conf.xml" ...
10:12:03.253 JACK connection change.
10:12:03.254 Server configuration saved to "/home/vinnie/.jackdrc".
10:12:03.254 Statistics reset.
10:12:03.262 Client activated.
10:12:03.263 Patchbay deactivated.
10:12:03.356 JACK connection graph change.
Wed May 15 10:12:03 2019: New client 'qjackctl' with PID 2727
Anyway, if anyone can point me in the direction of where to start
reading so I can get some understanding of what I am doing/should be
doing, that would be much appreciated.
Thanks
Hi,
I'm writing a macOS app that is able to capture audio from the speakers and save it to disk as a WAV file (or whatever format) for later replay.
I googled a lot and found out that JACK might help me do this kind of thing.
Are there any tutorials for this? Should I download JackOSX? I have cloned the jack2 source code and successfully built example-clients/capture_client.c.
Could you give me some guidance?
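Not an answer to the speaker-capture part (on macOS, getting system output into JACK needs JackRouter or a loopback device), but a minimal capture client might look like the sketch below. It dumps raw 32-bit floats; a real client would hand the data to a ringbuffer and write to disk in a separate thread, as example-clients/capture_client.c does. All names here are placeholders:

```c
#include <stdio.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *in_port;
static FILE *out;

/* Real-time callback: copy one period of input to the file.  Demo
   only -- disk I/O does not belong in the RT thread; use a ringbuffer. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *buf = jack_port_get_buffer(in_port, nframes);
    fwrite(buf, sizeof(*buf), nframes, out);
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("capture-sketch",
                                             JackNoStartServer, NULL);
    if (client == NULL) {
        fprintf(stderr, "no JACK server running\n");
        return 1;
    }
    out = fopen("capture.f32", "wb");
    in_port = jack_port_register(client, "input", JACK_DEFAULT_AUDIO_TYPE,
                                 JackPortIsInput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    jack_connect(client, "system:capture_1", "capture-sketch:input");
    sleep(5);               /* record roughly five seconds */
    jack_deactivate(client);
    jack_client_close(client);
    fclose(out);
    return 0;
}
```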
Thank you very much
I am using a Python API
(https://github.com/spatialaudio/jackclient-python) to instantiate JACK
clients in independent Python processes. With reasonable audio block
lengths (256 to 4096) it works very well to do pretty demanding
real-time signal processing tasks (just FYI and to understand the context).
The API allows me to detect xruns as they occur, with a delay value
reported by the JACK library. Sometimes xruns occur and the reported
delay is 0 ms, while the audio stream clearly experiences a dropout.
I've also seen higher delays reported, but right now I cannot tell
under what conditions.
I just wonder how the delay times are gathered, how reliable or
conclusive they are in general, and what an xrun "without delay" might
mean.
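For context on where that number probably comes from: in the C API a client installs an xrun callback and then reads jack_get_xrun_delayed_usecs(), which reports the delay the backend attributed to the most recent xrun. My reading (an assumption, not verified against the wrapper) is that 0 can mean the backend detected the overrun but could not attribute a measured delay to it, even though the dropout is real. A sketch:

```c
#include <stdio.h>
#include <unistd.h>
#include <jack/jack.h>
#include <jack/statistics.h>

static jack_client_t *client;

/* Invoked by libjack (not in the RT thread) when the server reports
   an xrun; print the delay the backend measured for it. */
static int on_xrun(void *arg)
{
    (void)arg;
    fprintf(stderr, "xrun, delayed %.3f ms\n",
            jack_get_xrun_delayed_usecs(client) / 1000.0f);
    return 0;
}

int main(void)
{
    client = jack_client_open("xrun-watch", JackNoStartServer, NULL);
    if (client == NULL)
        return 1;
    jack_set_xrun_callback(client, on_xrun, NULL);
    jack_activate(client);
    for (;;)
        pause();            /* just sit and report xruns */
}
```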