liquidsfz-0.1.0 has been released
The main goal of liquidsfz is to provide an SFZ sampler implementation
library that is easy to integrate into other projects. A standalone jack
client is also available.
liquidsfz is implemented in C++ and licensed under the GNU LGPL version
2.1 or later. The release tarball can be downloaded here:
https://github.com/swesterfeld/liquidsfz#releases
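A minimal usage sketch of how embedding could look (the class and method
names here are modeled on the project README, so treat the details as a
sketch and check liquidsfz.hh for the exact API):

    // Hypothetical usage sketch; names modeled on the liquidsfz README.
    #include <liquidsfz.hh>

    int main ()
    {
      LiquidSFZ::Synth synth;
      synth.set_sample_rate (48000);
      if (!synth.load ("/path/to/instrument.sfz"))  // load an SFZ file
        return 1;

      // time offset 0, channel 0, middle C, velocity 100
      synth.add_event_note_on (0, 0, 60, 100);

      float left[1024], right[1024];
      float *outputs[2] = { left, right };
      synth.process (outputs, 1024);                // render a stereo block
    }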
--
Stefan Westerfeld, http://space.twc.de/~stefan
Since I originally wrote audiorack using Apple's CoreAudio, I made
design decisions based on the functioning of that API. Many of those
choices needed to be reconsidered as I adapted the design to the jack
API.
A very big structural difference between the APIs is how "rendering"
sample buffers is accomplished. CoreAudio provides a separate callback
to get or deliver a block of samples for EVERY audio interface your
program attaches to. For my program using that API, my design is based
on one audio interface being chosen as the "master." This is the low
latency interface that my code's core mixer is driven by. Other
interfaces, both ins and outs, get an extra buffer between them and the
core mixer. In the callback code of these "slave" interfaces, my code
compared the time stamps between the core mixer buffer and the slave
buffer to make a "phase" adjustment, using Apple's varispeed resampling
audiounit plugin with a PID error loop controlling the resampling
ratio. This keeps the buffers, on average, in sync, but with extra
delay to handle kernel callback scheduling jitter, i.e. there is no
guarantee what order the OS will schedule the callbacks in, even if
they are on the same sample clock. So with this scheme, I could use any
number of interfaces I wanted, each with its own slightly different
clock rate and drift, with one interface selected as the low latency
master. After years of tweaking the PID filter, I had it working very
well, with no cost (other than the processor overhead of the
resampling) to the master interface.
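To make that concrete, here is a hypothetical sketch of the idea (not
my actual code; the class name, gains and clamp limits are all invented
for illustration):

    // PID loop turning the measured phase error (slave buffer timestamp
    // minus master buffer timestamp) into a varispeed resampling ratio.
    #include <algorithm>

    class RatioPID
    {
    public:
        RatioPID (double kp, double ki, double kd) : kp_ (kp), ki_ (ki), kd_ (kd) {}

        // phase_error and dt in seconds; returns a ratio near 1.0
        // (e.g. 1.0001 means "read the slave slightly faster to catch up")
        double update (double phase_error, double dt)
        {
            integral_ += phase_error * dt;
            double derivative = (phase_error - prev_error_) / dt;
            prev_error_ = phase_error;
            double c = kp_ * phase_error + ki_ * integral_ + kd_ * derivative;
            // clamp so a timestamp glitch can't cause a wild pitch excursion
            return 1.0 + std::clamp (c, -0.001, 0.001);
        }

    private:
        double kp_, ki_, kd_;
        double integral_   = 0.0;
        double prev_error_ = 0.0;
    };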
Jack, on the other hand, has a single callback from which samples are
received from jack source ports and new sample data is delivered to
jack destination ports. A very nice and clean approach, driven by a
single clock source. And appropriate for interconnecting audio streams
between programs. I like it a lot.
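For anyone who hasn't seen it, a bare pass-through client sketches the
model (error handling omitted, not production code):

    // Minimal pass-through client: one process() callback reads the input
    // port and writes the output port in the same call, on one clock.
    #include <jack/jack.h>
    #include <string.h>
    #include <unistd.h>

    static jack_port_t *in_port, *out_port;

    static int process (jack_nframes_t nframes, void *arg)
    {
        float *in  = (float *) jack_port_get_buffer (in_port,  nframes);
        float *out = (float *) jack_port_get_buffer (out_port, nframes);
        memcpy (out, in, nframes * sizeof (float));
        return 0;
    }

    int main ()
    {
        jack_client_t *client = jack_client_open ("passthru", JackNullOption, NULL);
        if (!client)
            return 1;
        in_port  = jack_port_register (client, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
        out_port = jack_port_register (client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
        jack_set_process_callback (client, process, NULL);
        jack_activate (client);
        for (;;)
            sleep (1);   // audio happens in the callback thread
    }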
I have lost the ability to have my software handle resampling across
multiple clock domains. I was thinking that zita-a2j, etc., was my path
to getting that functionality back. If I didn't have it working so well
on OSX, I wouldn't lament the loss with jack. But it's hard to give up
on it!
Thanks for the Ubuntu Studio Control links.
Ethan...
On Sun, 2019-11-17 at 00:21 +0100, Ralf Mardorf wrote:
> On Sat, 16 Nov 2019 15:49:34 -0700, Ethan Funk wrote:
> > > Why do you need zita-a2j/j2a anyway? Using a single
> > > multichannel card is usually the better solution.
> >
> > I have one multichannel audio interface for everything
> > important: program out, studio monitors, headphones, guest mic, host
> > mic, etc. But it sure is nice to be able to use the built-in audio
> > for a cue channel and talkback mic, where latency is not
> > important. Also handy for USB turntables, and other random devices
> > that are occasionally used in a radio show without latency being
> > important.
>
> You are mistaken. Try to avoid anything that affects "everything
> important". To sync different devices by software affects "everything
> important".
> On Sat, 16 Nov 2019 14:47:26 -0700, Ethan Funk wrote:
> > Does anyone know where I can find the source code for Ubuntu Studio
> > Control?
>
> https://packages.ubuntu.com/source/eoan/ubuntustudio-controls
> https://launchpad.net/ubuntustudio-controls
>
On Sun, Nov 17, 2019 at 12:21:47AM +0100, Ralf Mardorf wrote:
> You are mistaken. Try to avoid anything that affects "everything
> important". To sync different devices by software affects "everything
> important".
It doesn't affect anything on the sound card that is directly
controlled by Jack. From Jack's POV, a2j and j2a are nothing
special, just regular clients.
--
FA
I'd like to announce the initial release of Xmonk.lv2, version 0.1.
Xmonk is a sound generator based on the Faust `SFFormantModelBP` from
physmodels.lib by Mike Olsen.
It's a monophonic formant/vocal synthesizer.
The source is a sawtooth wave and the "filter" is a bank of resonant
bandpass filters.
Xmonk provides an interface to drive the filter/synthesizer via mouse
movement.
Additionally, Xmonk provides a MIDI in port to drive it with any MIDI
controller.
On top of that, Xmonk provides a virtual keyboard to drive it with your
PC keyboard/mouse combination.
Of course you could also use your DAW's automation to drive it the way
you want.
Xmonk can play in two different modes: one is the usual note on/off,
the other is sustain, meaning the last note plays forever while you
play with the vowel filter settings.
Xmonk can be used with a freely scaled temperament, or with selectable
equal tempered scales from 12 to 53 divisions per octave.
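For the curious, here is a toy illustration of that
sawtooth-into-resonant-bandpass idea. This is not Xmonk's actual DSP
(that is Faust's `SFFormantModelBP`); the formant frequencies and all
names below are invented:

    // Toy formant synth: naive sawtooth through three resonant bandpass
    // biquads (RBJ cookbook) tuned near an "ah" vowel. Illustration only.
    #include <cmath>
    #include <vector>

    struct Bandpass
    {
        double b0, b2, a1, a2, x1 = 0, x2 = 0, y1 = 0, y2 = 0;
        Bandpass (double freq, double q, double rate)
        {
            const double pi = 3.14159265358979323846;
            double w0 = 2.0 * pi * freq / rate;
            double alpha = std::sin (w0) / (2.0 * q);
            double a0 = 1.0 + alpha;
            b0 = alpha / a0;                      // b1 is zero for this filter
            b2 = -alpha / a0;
            a1 = -2.0 * std::cos (w0) / a0;
            a2 = (1.0 - alpha) / a0;
        }
        double process (double x)
        {
            double y = b0 * x + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x;
            y2 = y1; y1 = y;
            return y;
        }
    };

    int main ()
    {
        const double rate = 48000.0;
        // rough first/second/third formants of an "ah" vowel
        std::vector<Bandpass> formants = { { 700, 10, rate },
                                           { 1220, 12, rate },
                                           { 2600, 14, rate } };
        double phase = 0.0, freq = 220.0;          // A3
        for (int n = 0; n < 48000; n++)            // one second of audio
        {
            double saw = 2.0 * phase - 1.0;        // naive sawtooth in [-1, 1]
            phase += freq / rate;
            if (phase >= 1.0) phase -= 1.0;
            double out = 0.0;
            for (auto &f : formants)
                out += f.process (saw);
            (void) out;                            // send to a sound card / file here
        }
    }

The scale selection then just maps note n to f = f_ref * 2^(n/N), with
N between 12 and 53 divisions per octave (N = 12 gives the usual
Western scale).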
Here you get it:
https://github.com/brummer10/Xmonk.lv2
So, have some fun with it,
hermann
Dear all,
This weekend the linuxaudio.org VMs will have to be moved to a
different cloud region which could lead to some downtime of services
provided by linuxaudio.org (i.e. mail and web sites). We'll try to keep
impact as low as possible and are aiming at a seamless migration. This
migration is necessary because the cloud region linuxaudio.org now lives
in is being phased out. We'll keep you posted about the progress.
Best regards,
Jeremy
I'm using raspbian-buster-lite and, once up and running, the performance is
seriously impressive. However, it's dog-slow to start up. The last message I
see before the longest delay (about 30 seconds) is:
Raspberry Pi service to switch to ondemand cpu governor (unless shift key is pressed) during boot up.
I suspect the *actual* delay is connected with networking, but with systemd
I'm totally lost as to where everything is, so any help would be greatly
appreciated.
On the Pi 3 I'm able to get a devuan image, and that's a doddle to sort out,
but there isn't an image for the Pi 4 :(
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.