liquidsfz-0.1.0 has been released
The main goal of liquidsfz is to provide an SFZ sampler implementation
library that is easy to integrate into other projects. A standalone jack
client is also available.
liquidsfz is implemented in C++ and licensed under the GNU LGPL version
2.1 or later. The release tarball can be downloaded here:
https://github.com/swesterfeld/liquidsfz#releases
--
Stefan Westerfeld, http://space.twc.de/~stefan
Since I originally wrote audiorack using Apple's CoreAudio, I made
design decisions based on the functioning of that API. Many of those
choices needed to be reconsidered as I adapted the design to the jack
API.
A very big structural difference between the APIs is how "rendering"
sample buffers is accomplished. CoreAudio provides a separate callback
to get or deliver a block of samples for EVERY audio interface your
program attaches to. For my program using that API, my design is based
on one audio interface being chosen as a "master." This is the low
latency interface that drives my code's core mixer. Other interfaces,
both ins and outs, get an extra buffer between them and the core mixer.
In the callback code for these "slave" interfaces, my code compares the
time stamps of the core mixer buffer and the slave buffer to make a
"phase" adjustment, using Apple's varispeed resampling AudioUnit plugin
with a PID error loop controlling the resampling ratio. This keeps the
buffers, on average, in sync, but with extra delay to absorb kernel
callback scheduling jitter, i.e. there is no guarantee in what order
the OS will schedule the callbacks, even if they are on the same sample
clock. With this scheme, I could use any number of interfaces, each
with its own slightly different clock rate and drift, with one
interface selected as the low latency master. After years of tweaking
the PID filter, I had it working very well, with no cost (other than
the processor overhead of the resampling) to the master interface.
Jack, on the other hand, has a single callback in which samples are
received from jack source ports and new sample data is delivered to
jack destination ports. A very nice and clean approach, driven by a
single clock source, and appropriate for interconnecting audio streams
between programs. I like it a lot.
I have lost the ability to have my software handle resampling across
multiple clock domains. I was thinking that zita-a2j, etc., was my
path to getting that functionality back. If I didn't have it working so
well on OSX, I wouldn't lament the loss with jack. But it's hard to
give up on it!
Thanks for the Ubuntu Studio Control links.
Ethan...
On Sun, 2019-11-17 at 00:21 +0100, Ralf Mardorf wrote:
> On Sat, 16 Nov 2019 15:49:34 -0700, Ethan Funk wrote:
> > > Why do you need zita-a2j/j2a anyway? Using a single
> > > multichannel card is usually the better solution.
> >
> > I have one multichannel audio interface for everything
> > important: program out, studio monitors, headphones, guest mic, host
> > mic, etc. But it sure is nice to be able to use the built-in audio
> > for a cue channel and talkback mic, where latency is not
> > important. Also handy for USB turntables, and other random devices
> > that are occasionally used in a radio show without latency being
> > important.
>
> You are mistaken. Try to avoid anything that affects "everything
> important". To sync different devices by software affects "everything
> important".
> On Sat, 16 Nov 2019 14:47:26 -0700, Ethan Funk wrote:
> > Does anyone know where I can find the source code for Ubuntu Studio
> > Control?
>
> https://packages.ubuntu.com/source/eoan/ubuntustudio-controls
> https://launchpad.net/ubuntustudio-controls
>
On Sun, Nov 17, 2019 at 12:21:47AM +0100, Ralf Mardorf wrote:
> You are mistaken. Try to avoid anything that affects "everything
> important". To sync different devices by software affects "everything
> important".
It doesn't affect anything on the sound card that is directly
controlled by Jack. From Jack's POV, a2j and j2a are nothing
special, just regular clients.
--
FA
I'd like to announce the initial release of Xmonk.lv2, version 0.1.
Xmonk is a sound generator based on the Faust `SFFormantModelBP` from
physmodels.lib by Mike Olsen.
It's a monophonic formant/vocal synthesizer.
The source is a sawtooth wave and the "filter" is a bank of resonant
bandpass filters.
Xmonk provides an interface to drive the filter/synthesizer via mouse
movement.
Additionally, Xmonk provides a MIDI in port so you can drive it with
any MIDI controller.
On top of that, Xmonk provides a virtual keyboard so you can drive it
with your PC keyboard/mouse combination.
Of course, you can also use your DAW's automation to drive it any way
you want.
Xmonk can play in two different modes: one is the usual note on/off,
the other is sustain, meaning the last note plays forever while you
play with the vowel filter settings.
Xmonk can be used with a freely scaled temperament, or with selectable
equal-tempered scales from 12 to 53 tones per octave.
Here you get it:
https://github.com/brummer10/Xmonk.lv2
So, have some fun with it,
hermann
Dear all,
This weekend the linuxaudio.org VMs will have to be moved to a
different cloud region, which could lead to some downtime of services
provided by linuxaudio.org (i.e. mail and web sites). We'll try to keep
the impact as low as possible and are aiming for a seamless migration.
This migration is necessary because the cloud region linuxaudio.org now
lives in is being phased out. We'll keep you posted about the progress.
Best regards,
Jeremy
I'm using raspbian-buster-lite and once up and running the performance is
seriously impressive. However, it's dog-slow to start up. The last message I
see before the greatest delay (about 30 seconds) is:
Raspberry Pi service to switch to ondemand cpu governor (unless shift key is pressed) during boot up.
I suspect the *actual* delay is connected with networking, but with systemd I'm
totally lost as to where everything is, so any help would be greatly
appreciated.
On the Pi 3 I'm able to get a devuan image, and that's a doddle to sort out,
but there isn't an image for the Pi 4 :(
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hey everyone!
My music "career" started with mod music. Nope, not the music and fashion
subculture from the late 1950s, but this mod music
<https://en.wikipedia.org/wiki/Module_file>, when people used programs
called "trackers" to produce stuff. It was either this or buying expensive
hardware.
Although people mostly associate the MOD music scene with chiptunes, it was
much more than that, and it has its own pantheon of musical gods
<https://modarchive.org/index.php?request=view_chart&query=topartists> who
produced tracks ranging from synth pop
<https://modarchive.org/index.php?request=view_by_moduleid&query=35280> to
jazz
<https://modarchive.org/index.php?request=view_by_moduleid&query=135135>,
from orchestral
<https://modarchive.org/index.php?request=view_by_moduleid&query=120901> to
realistic folk instrumentals
<https://modarchive.org/index.php?request=view_by_moduleid&query=155605>.
Immersed in this music, I severed my link to the mainstream idea of songs
with their standard verse/chorus, and endless drivel about relationships.
That link has not been restored. My mind was opened to music that was so
unlike anything I'd heard before that it felt a bit like walking through
that door in the wall. (Is this H.G. Wells reference
<https://www.encyclopedia.com/education/news-wires-white-papers-and-books/do…>
too obscure? :) )
My heroes were Elwood, DRAX, Awesome. Who even knows these names? I once
created a Wikipedia page for Elwood and it stayed up for many years, but
recently I discovered that it was removed. And yet Elwood
<https://modarchive.org/index.php?request=view_artist_modules&query=69004>
is a legendary musician and producer in the MOD scene, who has inspired and
awed several generations of fellow tracker musicians.
Some names have gotten enough traction to stay on Wikipedia. Purple Motion
<https://en.wikipedia.org/wiki/Jonne_Valtonen> is one clear example.
Several producers, famous today, started out using trackers. Here is
an incomplete
list <https://en.wikipedia.org/wiki/Category:Tracker_musicians>. It mostly
lacks artists who made their names in the tracking scene, but did not
become notable outside of it.
It's been a while since I went on a nostalgia tour, but due to my recent
project of putting out an album of old tunes called "Only Slightly
Embarrassing"
<https://louigiverona.com/?page=projects&s=music&t=slightly_embarrassing>,
I decided to cross further into the continent of "back in my days", which
brought me straight to ModArchive <https://modarchive.org/>. Eventually, I
was convinced that I should try making more tracked works, at the very
least because my early works were so shitty that I felt I had to make up
for that.
Long story short, I realized that MOD music is the true Open Source Music.
I mean, think about it. The most widely used software today is GPLed (
OpenMPT <https://openmpt.org/>). The modules you release are open source
too, just like JavaScript. You open your XM or IT file and inspect how the
tune was created. And you learn.
And there is surely stuff to learn. Not all of it is even tracker-specific.
People had no EQs, no compressors, no reverbs. And yet so much of tracked
music sounds just incredible
<https://modarchive.org/index.php?request=view_by_moduleid&query=134387>.
How did they do it? It turns out, there are ways.
Of course, all of that leads to a bit of self promotion. I would like to
draw your attention to the two tunes that I've written in the past month
with OpenMPT and which you can download and see how they were made. (Or
don't. You can instead explore ModArchive's Top Favorites
<https://modarchive.org/index.php?request=view_top_favourites>.)
- Lid
<https://modarchive.org/index.php?request=view_by_moduleid&query=186854>
- Twizzy II
<https://modarchive.org/index.php?request=view_by_moduleid&query=186855>
You can just use an online player to listen to them in a browser, or you
can use almost any modern player to play them; Audacious or VLC, for example.
An interesting thing is that the MOD scene has its own cultural backdrop:
it is primarily melodically oriented, and having melodies means a lot. If
you don't like melodies, you go for trance. I am putting out minimal house,
rominimal even. So I am sure I will get little love.
But for those of you who enjoy this style of music, I think you might like
these. I am personally very happy with the sound and how both of these
turned out. And yet - no EQing, no nothing. Just volume envelopes, volume
levels and panning work. **a little proud**
It's somehow interesting to me that this is open source minimal house
music. Not a lot of those out there.
p.s.: fuck my tracks, listen to this
<https://modarchive.org/index.php?request=view_by_moduleid&query=34427>
Louigi Verona
https://louigiverona.com/