Hi Team,
I hope you are doing well. I am facing a problem using JACK in my application and would appreciate your help.
I am able to loop audio back through one headphone connected to the PC. Now I would like to switch the loopback to a different headphone. Please guide me on how to perform this.
I am able to connect to the jackd server as a client using the jack_client_open() API. As I understand it, I need to pass the device name as a command-line argument to the jackd executable when starting the server, but it always uses the default PortAudio device for audio processing.
I am working on a Windows 10 system.
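For reference, with the PortAudio backend the device is chosen by the driver's own -d parameter, given after -d portaudio. A sketch (the device name below is a placeholder, and listing support depends on the jackd build):

```shell
# List the devices the portaudio driver can see (if your build supports it):
jackd -d portaudio -l
# Start jackd on a specific device instead of the PortAudio default:
jackd -d portaudio -d "Windows WASAPI::Speakers (USB Audio Device)"
```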
Please let me know if any other information is required from my side. I appreciate your input in advance.
Thanks.
Regards
Ankur Sharma
9990304661
Hi,
I am a complete newbie to JACK programming. I am trying to use JACK to send MIDI events to other applications. I am used to Mac OS X programming and the CoreMIDI model, and from what I have read about JACK, it seems like it should be a fairly easy transition. I am still programming on Mac OS, but I am trying to make the code easily portable to Linux, hence my choice of JACK. The problem is that I can't find good resources for learning JACK. The API reference is good and example code is provided, but there is not enough explanatory text about how to put the pieces together, nor a good tutorial to follow. Most of what I found was about the audio part of JACK, not so much about MIDI. So I have many, many questions, like:
- I believe I understand the basic procedure for (1) opening a client, (2) registering a port and (3) installing callbacks to deal with data. That part is similar to CoreMIDI. But I can't find any call I could make to just send MIDI bytes to the JACK server, except from a process callback. The callback model seems natural to me for an input port, when we are waiting for data to arrive from other parts of the system and don't know when it will come. But I want to send MIDI at any arbitrary time that I decide within my application. How do I do that?
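(Answering partly for the archive.) The usual JACK answer is that the application thread pushes MIDI bytes into a lock-free ring buffer (jack_ringbuffer_t) at any time, and the process callback drains it and emits the events with jack_midi_event_write(). A minimal, language-agnostic sketch of that producer/consumer pattern, with Python's queue standing in for the JACK ring buffer:

```python
import queue

# Stand-in for jack_ringbuffer_t: the application thread may enqueue raw
# MIDI bytes at any time; the process callback drains the queue each period.
midi_queue = queue.Queue()

def send_midi(message):
    """Callable from anywhere in the application, at any arbitrary time."""
    midi_queue.put(bytes(message))

def process_callback(nframes):
    """Runs once per period; in real JACK code each drained message would go
    out via jack_midi_event_write(port_buffer, 0, msg, len(msg))."""
    out = []
    while True:
        try:
            msg = midi_queue.get_nowait()
        except queue.Empty:
            break
        out.append((0, msg))  # (frame offset within the period, raw bytes)
    return out

send_midi([0x90, 60, 100])       # note-on, middle C, from the app thread
events = process_callback(256)   # the next process cycle picks it up
print(events)
```

The point of the pattern is that the real-time callback never blocks: it only pops whatever is already queued.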
- It is not completely clear to me how JACK deals with time. It seems everything is done in numbers of samples. Is that meant to keep it in sync with the audio? In that case, do I need to convert timestamps to numbers of samples and constantly keep track of changes in the sampling rate? What is the sanctioned way of doing that? (I couldn't find it in the examples.)
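For the archive: yes, JACK timestamps everything in frames (samples) precisely so that events stay sample-locked to the audio stream. Converting a wall-clock offset means multiplying by the current sample rate, which a client can obtain with jack_get_sample_rate() and track via the sample-rate callback. The arithmetic itself is just:

```python
def seconds_to_frames(seconds, sample_rate):
    """Convert a time offset in seconds to a frame count at the given rate."""
    return int(round(seconds * sample_rate))

def frames_to_seconds(frames, sample_rate):
    """Convert a frame count back to seconds."""
    return frames / sample_rate

# Half a second at 48 kHz is 24000 frames:
print(seconds_to_frames(0.5, 48000))
```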
- I tried to compile and run the examples in Xcode with the provided project and source files, but it couldn't even get past the #includes. I didn't see any reference to the Jack framework in the provided project (the one that came with the Extras folder in the JackOSX installation). I tried to add it myself, but Xcode would still not see the JACK headers. If I include the JACK header with a full hardcoded path, it finds the first file ('jack/jack.h') but fails on the other includes. Maybe the project file is outdated. Could anyone help with that?
Any help, tip or link would be highly appreciated. Sorry about the long message. I promise I'll keep them shorter from now on (if I don't get kicked off the list first…) ;-)
Carlos.
Hello! I use Jack to multitrack out of Akai MPC software into my DAW
(Cakewalk). My audio interface supports 32-bit audio, and I use 32-bit audio
driver bit depth when not using Jack (such as recording vocals into a
finished instrumental in Cakewalk) so I know it works well.
The problem is that when I use JACK to connect the MPC to Cakewalk, the audio driver bit depth in Cakewalk's settings is stuck at 24 bit and greyed out, so I can't change it. I want to be able to record my MPC beats into Cakewalk at 32 bit. Some say it's overkill, but it's impossible to make 32-bit audio clip and distort, and Cakewalk does all its internal work at 32 bit. With 24-bit input, Cakewalk has to convert the audio to 32 bit in real time, which causes a performance hit. My CPU headroom is precious because I know I'll fill my projects with resource-hungry plug-ins. For those and a few other reasons, I really want to use 32 bit.
Is there a way to force Jack to use 32-bit audio? Thanks for your time!
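A note for the archive: JACK's ports themselves carry 32-bit float samples internally (JACK_DEFAULT_AUDIO_TYPE is 32-bit float), so the 24-bit figure is what the driver layer reports to Cakewalk rather than a JACK processing limit. The no-clipping property of 32-bit float mentioned above can be illustrated like this (a sketch of the two sample formats, not Cakewalk's actual pipeline):

```python
import struct

def clip_int24(sample):
    # 24-bit signed integer audio hard-clips at the format's limits.
    lo, hi = -(1 << 23), (1 << 23) - 1
    return max(lo, min(hi, sample))

def roundtrip_float32(sample):
    # 32-bit float keeps values beyond nominal full scale (1.0), so an
    # over-level signal can be pulled down later without distortion.
    return struct.unpack("<f", struct.pack("<f", sample))[0]

print(clip_int24(9_000_000))     # clipped to 8388607
print(roundtrip_float32(1.5))    # 1.5, headroom preserved
```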
--
Sent from: http://jack-audio.10948.n7.nabble.com/Jackit-f3.html
Hello everyone,
I was wondering whether I could use JACK to create a virtual microphone. I have a hosted server, which obviously has no sound card, and I would like to stream sound over the network to the server and process it with some Python packages. At the moment my script fails because it looks for a sound card, so I would like to set up a virtual microphone to feed my application. Is this possible?
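In case it helps anyone searching the archives: JACK itself does not need a sound card. The dummy backend runs the processing graph with no hardware at all, and the netjack1 backend takes its audio from the network instead of a device (the sample rate below is an example):

```shell
# Run JACK on the headless server with no sound card (dummy backend):
jackd -d dummy -r 48000
# Or let JACK receive its audio over the network (netjack1 backend):
jackd -d net
```

A JACK-capable client could then read from a JACK port instead of trying to open a sound card.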
Greetings
The FFADO project is pleased to announce FFADO version 2.4.2. This is a bug
fix release which mainly addresses some lingering issues associated with
python3. While most of the fixes relate to the build process, one could
affect the mixer GUI at runtime.
FFADO 2.4.2 is a source-only release which can be downloaded from the
project's website at http://ffado.org/, or directly from
http://ffado.org/files/libffado-2.4.2.tgz.
Thanks to those who contributed towards this version: Jinke Fan, Hector
Martin, Orcan Ogetbil, David Runge and Jonathan Woithe.
On Thu, September 12, 2019 8:05 am, crocket wrote:
> I've spent months on perfecting the bridge between JACK and ALSA.
I think you have created the most complex monitor controller ever. Is this the system you described earlier, with S/PDIF from the motherboard audio going to the digital input of a USB-connected interface?
-----------------------------
--> is a connection that doesn't cross the boundary of a computer.
==> is a connection that occurs between two computers.
Anything that doesn't cross ==> occurs on my desktop computer. Anything after ==> occurs on a Raspberry Pi 3 B+.
netjack1 runs on my desktop computer. An Ethernet cable is the only cable connecting the two machines.
1. ALSA dmix --> SPDIF out of Realtek ALC887 onboard soundcard --> SPDIF
In of X-Fi HD USB --(alsa_in or zita-a2j)--> netjack1 ==(ethernet
cable)==> ALSA jack backend --> I2S Amplifier(HiFiBerry Amp2) --> Speakers
2. netjack1 --(alsa_out or zita-j2a)--> headphone jack of X-Fi HD USB on
my desktop computer --> headphones
-----------------------------
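For readers of the archive, the ALSA-to-JACK bridging steps in those chains correspond to commands roughly like the following (the device and client names are assumptions; the real ones come from `arecord -l` / `aplay -l`):

```shell
# Chain 1, desktop side: bring the X-Fi HD USB's S/PDIF capture into JACK
zita-a2j -d hw:XFi -j xfi_spdif_in
# Chain 2, desktop side: send JACK playback to the X-Fi headphone output
zita-j2a -d hw:XFi -j xfi_phones
```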
Is this being used for music production, or just listening to audio files?
PulseAudio can do something similar in a self-contained fashion, at the
expense of higher latency and the occasional dropped sample. And of
course the old fashioned way to do it would be with an analog monitor
controller or small mixer. The more modern, audio-production-focused way would be to use Dante or similar if you really want network connectivity, the ability to add and remove devices, large distance separation, etc.
My recollection is that the questions you posed were somewhat isolated, mostly about how to accomplish various specific tasks with jackd or ALSA. It may have been better to start with a more general description of what you were trying to accomplish and ask about the best way to get to that result, because I can't imagine anyone recommending that convoluted signal chain just for listening to audio from your desktop and a separate computer (an RPi in your case) at the same time. Using an onboard audio interface just to send audio over S/PDIF to a USB interface that drives headphones seems pretty bizarre; I never could make sense of why you did that rather than just sending audio to the USB interface in the first place.
--
Chris Caudle
I've spent months on perfecting the bridge between JACK and ALSA.
I've perfected it, but the perfection is still worse than using ALSA alone.
Plus, the maintenance cost of the bridge is high in both the short term and the long term.
Just don't bridge different audio systems. Your life is too precious for that kind of time waster.
Hi all,
I'm new to this list and would like to ask the people here for some advice.
I'm searching for a tool to transfer hundreds of audio channels from an
existing application to some code of our own.
The existing application is SuperCollider, which as I understand it supports connections via JACK.
But is JACK suitable for connecting hundreds of audio channels? We need low latency. Will JACK be low latency? How many times are the samples copied internally?
Is it true that all audio will be routed through some network layers?
How difficult will it be to add audio input to our own application to
receive audio from JACK?
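For what it's worth while waiting for replies: within one machine, JACK exchanges audio through shared-memory port buffers rather than network layers, and a one-to-one connection adds essentially no extra sample copy; the latency you observe is dominated by the period size. A back-of-the-envelope calculation (the figures are examples, not a guarantee):

```python
def buffering_latency_ms(frames_per_period, sample_rate, periods=2):
    """Rough buffering latency: periods times the period length, in ms."""
    return 1000.0 * periods * frames_per_period / sample_rate

# 128 frames per period, 2 periods, at 48 kHz:
print(round(buffering_latency_ms(128, 48000), 2))
```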
I hope you can help me a little further with these questions.
Greetings.
Hi,
the latest available JACK library, 1.9.11, crashes any application accessing the JACK library if that application has been built with ASLR enabled. ASLR has been part of Windows since 2007 and has been enabled by default in Microsoft's C++ toolchain since 2012.
With this feature enabled in more and more applications over the years, the current situation is that many applications crash silently when JACK is installed on the system, without giving the user any clue about what is going on. This includes audio middleware such as FMOD (which was affected until v1.10.14 added a safeguard against misbehaving ASIO drivers) and Wwise, and likely other popular DAWs.
On top of that, it forces other developers whose software uses the JACK
library to disable ASLR in their build settings.
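For anyone affected, the workaround referred to here is typically the MSVC linker switch that opts an executable or DLL out of ASLR (shown purely to illustrate the burden, not as a recommendation; the file names are placeholders):

```shell
# MSVC: link with ASLR (dynamic base relocation) disabled
link /DYNAMICBASE:NO /OUT:myapp.exe myapp.obj jack.lib
```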
It should also be noted that this is not always possible, for instance when a developer uses the JACK library to build a plugin that runs inside a host application whose build process the developer has no control over.
Is there anything that can be done to fix the Windows version of JACK?
Best,
Jörg