When I tried to use it some time ago, puzzling through the docs that came
with it left me just puzzled, without anything convolved. (Sorry, maybe
I'm just stupid.)
--
David W. Jones
gnome(a)hawaii.rr.com
authenticity, honesty, community
http://dancingtreefrog.com
On Tue, February 23, 2016 11:40 am, Jonathan Brickman wrote:
> and short chain -- IP inputs to zita-njbridge to hardware -- which has
> to be synchronous with the physical audio chipset. So the DSP %
> usage on the hardware-connected chain becomes low, because
> it is as simple as it is, and each of the chains in action also has a
> lot less, distributing the work carefully.
This seems like a pretty complicated scheme. What is your actual end goal?
What do you want to happen differently than your current situation?
Is it just that at 85% DSP usage your system is not capable of running
anything additional, and you want to run additional processes?
If that is all, then first try adjusting the latency of your current setup
to be equivalent to the latency you will have with the Rube Goldberg setup
of a stack of dinky little ARM processors connected together with
resampling through a network.
If you read the zita-njbridge man page you see that the minimum latency is
the sum of the periods of each asynchronous connection, so at minimum
double your current latency, plus resampling latency, plus network delay.
So for a start set your system latency to 2x or 3x what you currently have
and see how that works.
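As a rough back-of-envelope check of that latency sum (all numbers below are illustrative assumptions, not measurements):

```python
# Rough latency estimate for a chain of asynchronous zita-njbridge hops.
# Each asynchronous connection adds at least one period of buffering,
# on top of resampler and network delay. All numbers are assumptions.

def hop_latency_ms(period_frames, sample_rate):
    """One period's worth of buffering, in milliseconds."""
    return 1000.0 * period_frames / sample_rate

period = 128          # frames per period (assumed)
rate = 48000          # sample rate in Hz (assumed)
n_hops = 2            # number of asynchronous connections (assumed)
resample_ms = 1.0     # resampler delay (assumed)
network_ms = 0.5      # one-way LAN delay (assumed)

total_ms = n_hops * hop_latency_ms(period, rate) + resample_ms + network_ms
print(f"minimum added latency ~ {total_ms:.2f} ms")
```

With these assumed settings the added latency already approaches 7 ms before the existing system latency is counted, which is why trying a larger period size on the current single machine first is the cheaper experiment.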
--
Chris Caudle
Eight-core, 4 GHz AMD, 8 GB RAM. Running a lot of things all at once: 60
high-demand tones out of two Yoshimis (120 total). JACK DSP % usage is
85%, but CPU usage in htop tells a different story: one CPU is held at
64% by one Yoshimi, the rest are below 15%, some down to 1-2%. JACK is
not stressing any CPU!
So my primary question is, what is JACK DSP % usage, what are its
limiting factors?
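For what it's worth, JACK's DSP load is usually described as the fraction of each period's wall-clock budget consumed by one run of the whole process graph, which a single slow client can dominate even while most cores sit idle. A minimal sketch of that relationship, with all numbers assumed:

```python
# JACK's DSP load is roughly: the time one cycle of the whole process
# graph took, divided by the wall-clock time one period represents.
# Because each cycle runs the graph as a chain, one slow client can push
# DSP load high while most CPU cores stay idle.
# All numbers below are illustrative assumptions.

def period_budget_ms(frames_per_period, sample_rate):
    """Wall-clock time one JACK period represents, in milliseconds."""
    return 1000.0 * frames_per_period / sample_rate

budget = period_budget_ms(256, 48000)   # per-cycle budget (assumed settings)
graph_time_ms = 4.5                     # time the graph took this cycle (assumed)
dsp_load = 100.0 * graph_time_ms / budget
print(f"DSP load ~ {dsp_load:.0f}%")
```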
--
Jonathan E. Brickman jeb(a)ponderworthy.com (785)233-9977
Hear us at http://ponderworthy.com -- CDs and MP3 now available!
<http://ponderworthy.com/ad-astra/ad-astra.html>
Music of compassion; fire, and life!!!
Greetings,
This piece is a longer one that develops a bit slowly. I made up a
tag for this: AcousticWaveRock. The instruments are acoustic guitar
and acoustic bass guitar, to which a few synth sounds were added, as
well as drums and percussion sampled from a Korg Microstation
JazzBrush Kit.
What's new in this project, as I progressively learn, is the use of
what is called 'multing' tracks, i.e. taking bits of a track and
copying/moving them to another track. This was used to move some
guitar parts to another track where a delay, for instance, was
applied. It could have been done with automation, although using
another track is simpler IMHO, especially when a different EQ is
chosen: no automation curves to make.
I also recently got a headphone amp (Behringer HA400) that's connected
to JACK's playback outputs 3 and 4 (1 and 2 remain connected to the
M-Audio Studiophile speakers), the audio card being a 1010LT. It is
now possible to listen in stereo! Previously the headphone jack on
the M-Audio speakers was used, and it did not give much of a stereo
effect. The switch from one playback device to the other is done
using Ardour's monitor output choices.
Also, two mics are now used for each of the acoustic instruments,
each mic recording to its own track: an AT2020 in omni mode, no cut,
and an M-Audio Pulsar (not Pulsar II).
This piece 'goes forward' in the sense that themes occur only once.
When they occur they are repeated, but they do not come back later.
The supporting structure stays the same, though.
Cheers.
https://soundcloud.com/nominal6/c2015-19a
Hello,
I have written a generative composition with a Chomsky grammar parsed
by an *old* program called gramophone2. Unfortunately, my test
compositions are multitrack, but gramophone2 generates MIDI type 0
files (1 merged track).
So all the MIDI sequencers I tried only import track #1, while there
are 3 or 4 in the MIDI file with different speeds (I'm playing with yet
another phasing music). It may be a gramophone2 bug, but before diving
into the code (if I find it), how could I convert the generated MIDI
type 0 into type 1 so that the track speeds can be aligned?
With the hope the question is well asked and not off topic.
- Benoît
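A type 0 file stores every channel's events merged into one track; the usual fix (which tools like Python's mido library or midicsv let you do) is to split the merged track by channel into separate tracks and rewrite the header as type 1. A minimal sketch of the core splitting logic, working on (delta_time, channel, event) tuples rather than real SMF bytes; the event names here are illustrative assumptions:

```python
# Split a merged type-0 event stream into per-channel tracks (type-1 style).
# Events are modeled as (delta_time, channel, name) tuples; a real converter
# would read and write actual SMF files (e.g. with the mido library).

from collections import defaultdict

def split_by_channel(merged):
    """merged: list of (delta, channel, event) in type-0 file order.
    Returns {channel: [(delta, event), ...]} with per-track delta times."""
    abs_time = 0
    per_channel_abs = defaultdict(list)
    for delta, channel, event in merged:
        abs_time += delta                  # delta -> absolute time
        per_channel_abs[channel].append((abs_time, event))
    tracks = {}
    for channel, events in per_channel_abs.items():
        deltas, prev = [], 0
        for t, event in events:            # absolute -> per-track deltas
            deltas.append((t - prev, event))
            prev = t
        tracks[channel] = deltas
    return tracks

merged = [(0, 0, "note_on C4"), (120, 1, "note_on E4"),
          (120, 0, "note_off C4"), (0, 1, "note_off E4")]
tracks = split_by_channel(merged)
print(tracks)
```

The key detail is converting deltas to absolute times before splitting, then back to deltas per track; otherwise the tracks drift apart, which is exactly the "speeds not aligned" symptom.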
On Mon, Feb 15, 2016 at 5:52 PM, Takashi Sakamoto <o-takashi(a)sakamocchi.jp>
wrote:
> Why did you add additional comments to the fixed-and-released bug? It was
> not my intention to mention the bug. I think your behaviour is unwelcome
> to developers. You should have used the 'This bug affects you' button or
> something similar...
>
I'm not sure what the point of your lecture was/is. But thank you for
initially pointing me to the bug-report where my "inappropriate" comments
got this bug correctly resolved. Furthermore, hopefully this means people
updating to 16.04LTS in the near future will still have use of their Envy24
sound cards. The end result is that the problem was acknowledged, a new
build was done, and the issue was resolved.
See
https://bugs.launchpad.net/ubuntu/xenial/+source/mudita24/+bug/1534647/comm…
Specifically, I can now confirm working ICE1712 cards, and associated
mudita24(1) application, on
4.4.0-6-generic #21-Ubuntu SMP Tue Feb 16 20:32:27 UTC (*).
Working Audio:
card 0: DMX6Fire [TerraTec DMX6Fire], device 0: ICE1712 multi [ICE1712
multi]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 3: M66 [M Audio Delta 66], device 0: ICE1712 multi [ICE1712 multi]
Subdevices: 1/1
Subdevice #0: subdevice #0
Working Midi:
16:0 TerraTec DMX6Fire MIDI-Front DMX6fire 0
16:32 TerraTec DMX6Fire Wavetable DMX6fire 0
(*): 4.4.0-6.21 kernel retrieved and manually installed onto Ubuntu
14.04LTS Skylake-based system:
https://launchpad.net/ubuntu/xenial/amd64/linux-headers-4.4.0-6/4.4.0-6.21
https://launchpad.net/ubuntu/xenial/amd64/linux-headers-4.4.0-6-generic/4.4…
https://launchpad.net/ubuntu/xenial/amd64/linux-image-4.4.0-6-generic/4.4.0…
https://launchpad.net/ubuntu/xenial/amd64/linux-image-extra-4.4.0-6-generic…
........
> Well, if you improve something you use, it's better to follow the rules of
> development.
>
> In your case, at first, you should watch the release schedule of Ubuntu 16.04.
> https://wiki.ubuntu.com/XenialXerus/ReleaseSchedule
>
> You can see the date of KernelFreeze is Apr. 7th 2016. In my opinion, until
> then, what is expected of you is:
> - Read wiki page about Ubuntu kernel development cycle and understand it
> - https://wiki.ubuntu.com/Kernel
> - Test with daily build image
> - http://cdimages.ubuntu.com/daily-live/current/
> - When you still find your bug, seek it in launchpad.net
> - https://launchpad.net/bugs
> - When you find a similar bug, watch it by subscribing or something like it.
> - When you cannot find a similar bug, register it as a new one,
> - but you should take more days to investigate duplicate bugs.
>
> At least, additional comments without enough consideration seem not to be
> helpful to you, contrary to your expectation. And you should not test with
> packages in a PPA. A PPA is just a Personal Package Archive. The packages
> in a PPA do not always go into the official release.
>
> Although I lectured you a bit, I hope this bug will be fixed in the Ubuntu
> 16.04 release.
>
Instead of wishing, I'd rather make things happen and get things fixed.
--Niels.
http://www.nielsmayer.com
Hi,
after an off-list correspondence with Hermann, I noticed something that
might explain the difficulties those new to jconvolver have had.
[rocketmouse@archlinux ~]$ pacman -Ql jconvolver
jconvolver /usr/
jconvolver /usr/bin/
jconvolver /usr/bin/fconvolver
jconvolver /usr/bin/jconvolver
jconvolver /usr/bin/makemulti
"Ralf_Mardorf commented on 2016-02-20 16:22
Hi,
as it turns out by two LAU thread
http://lists.linuxaudio.org/pipermail/linux-audio-user/2016-February/104030…
http://lists.linuxaudio.org/pipermail/linux-audio-user/2016-February/104075…
some users experienced difficulties using jconvolver.
I couldn't understand why, since I've got examples of how to do this,
but then I noticed that the AUR package doesn't install the examples
from the tarball.
Please consider adding those examples.
Regards,
Ralf" - https://aur.archlinux.org/packages/jconvolver/#news
This might be an issue for other distros' packages too.
Regards,
Ralf
--
This message was sent from my Android mobile phone using WEB.DE Mail.
Hermann Meyer <brummer-(a)web.de> wrote:
What you are talking about is so-called multi-file support, and true, that isn't implemented, but that doesn't mean it is not stereo. Just use a stereo IR file, and you have stereo reverb. Note that the majority of reverb IR files come as stereo files.
Jc-gui may be useful to get some working config files for jconv; they could give you a hint how to write them, and you could start by editing them to create your own, with multi-file support, when you need it.
Ralf Mardorf <ralf.mardorf(a)alice-dsl.net> wrote:
On Sat, 20 Feb 2016 08:54:40 +0100, Hermann Meyer wrote:
>@david
>There is the old jc-gui, which generates config files for jconvolver.
>I don't know if it still works, I don't use it any more, but you
>could try it if you wish.
>
>https://github.com/zzzzrrr/jcgui
One issue I already know from at least the old guitarix (I didn't test
whether this is still an issue in the new guitarix) also affects
$ Jc_Gui -v
Jc_Gui version 0.8
It's _not_ really stereo, since there seems to be no way to use a
left.wav and a right.wav. So assuming you want to use jconvolver as a
reverb, it will not sound as fantastic as it does when you write the
config with a text editor.
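For reference, a hand-written true-stereo config using separate left/right IR files looks something like the sketch below. The paths, partition sizes, and gains are assumptions to illustrate the format; the examples shipped in the jconvolver source tarball are the authoritative reference.

```
# Minimal sketch of a jconvolver stereo reverb config (values assumed).

/cd /home/me/impulses          # directory containing the IR files (assumed)

# /convolver/new: inputs outputs partition-size maximum-IR-length
/convolver/new   2   2   256   204800

# /impulse/read: input output gain delay offset length channel file
/impulse/read    1   1   0.5   0   0   0   1   left.wav
/impulse/read    2   2   0.5   0   0   0   1   right.wav
```

This per-file routing is exactly what a GUI without multi-file support cannot express, which is why the text-editor route can sound better.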
_____________________________________________
Linux-audio-user mailing list
Linux-audio-user(a)lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-user
Hi,
if I choose "Audio System: JACK" I can start a new session, but if I
choose "Audio System: ALSA", I get "Failed to open audio device".
Perhaps I'm making a mistake.
There's no special reason that I want to use ALSA instead of JACK, I
just want to test it. I never tried to do it before, with any other
version of Ardour.
$ pacman -Q ardour
ardour 4.7-1
$ grep jack /var/cache/aur/current/ardour-4.7-1-PKGBUILD
--with-backends="jack,alsa" \
--libjack=weak \
$ amidi -l;aplay -l;arecord -l
Dir Device Name
IO hw:0,0 HDSPMx579bcc MIDI 1
**** List of PLAYBACK Hardware Devices ****
card 0: HDSPMx579bcc [RME AIO_579bcc], device 0: RME AIO [RME AIO]
Subdevices: 1/1
Subdevice #0: subdevice #0
**** List of CAPTURE Hardware Devices ****
card 0: HDSPMx579bcc [RME AIO_579bcc], device 0: RME AIO [RME AIO]
Subdevices: 1/1
Subdevice #0: subdevice #0
Regards,
Ralf