Hey everybody,
I've got this audio app I'm writing which uses message passing to
communicate between threads (similar to the actor model). A message
channel consists of a ring-buffer for the actual message storage, and
then an eventfd so that a thread can block on its channel (or,
importantly, several).
At the moment, when the audio thread (the JACK callback) needs to send a
message over a channel to another thread, it follows the common codepath
of appending the message to the channel's ring-buffer and then
write()ing to the eventfd. I suspect this is not real-time safe, but is
it something I should lose sleep over?
-w
Hi,
QMidiArp 0.5.2 has just seen the light of day. It brings mainly
two improvements. One is a comeback, that of tempo changes on the fly,
which now also includes tempo changes from a potential Jack Transport
master. The Jack Transport starting position is also finally taken into
account, so that QMidiArp should stay in sync even when the transport
master is started at a position other than zero.
The second is Non Session Manager support, mainly thanks to the work done by Roy Vegard Ovesen!
Note that compiling with NSM support now requires liblo as a dependency.
Enjoy, and enjoy LAC in Graz this year
Frank
________________________________
QMidiArp is an advanced MIDI arpeggiator, programmable step sequencer and LFO.
Everything is on
http://qmidiarp.sourceforge.net
qmidiarp-0.5.2 (2013-05-09)
New Features
o Tempo changes are again possible while running, both manually and by
a Jack Transport Master
o Jack Transport position is now taken into account when starting;
QMidiArp used to always start at zero
o Muting and sequencer parameter changes can be deferred to pattern
end using a new toolbutton
o Modules in the Global Storage window have mute/defer buttons
o Global Storage location switches can be set to affect only the pattern
o Non Session Manager support with "switch" capability (thanks to
Roy Vegard Ovesen)
General Changes
o NSM support requires liblo development headers (liblo-dev package)
Hello Linux Audio Users and Developers
In less than an hour the MOD Duo Kickstarter campaign will go live, and so it is with great pleasure that the MOD Team announces the desktop versions of our entire software suite.
Some of this software has already been announced in the past but, as part of our Kickstarter campaign, we put in the necessary effort to have it running in a regular Linux environment and not just inside the MOD. All instructions on GitHub were also updated so that following them yields working results.
Most of this software has been under development for almost two years, and its history is tied to the development of the MOD itself. Because of that, it carries some differences in workflow compared to other LV2 programs, and our current effort is going into correcting those differences.
The software packages are:
MOD Client - run your LV2 plugins using the MOD interface.
https://github.com/portalmod/mod-client
——————————————————————————————————————————
MOD SDK - plugin interface creator
Use this program to create the HTML interface required by the MOD Client. If you don’t create an interface the plugins still work, but their icons will be a tuna fish can with just the ON/OFF button. When you click on the gear symbol on the upper right side of the icon you have access to the Plugin Settings Screen in which all parameters are visible.
The MOD SDK is Python based and can be installed by typing “pip install modsdk”
Like the MOD Client, it runs in your browser and requires a mod-workspace folder (or link) in which you place your LV2 bundles.
Just run "modsdk" in your terminal and point your browser to localhost:9000
There is also post on our blog about the SDK: http://portalmod.com/blog/2014/09/the-mod-sdk
——————————————————————————————————————————
LV2BM - a tool for analyzing and benchmarking LV2 plugins
Allows selecting which URIs to test
Uses minimum, maximum and default control values to run the plugins
Uses several control combinations in the full test mode
The output shows the JACK load percentage
——————————————————————————————————————————
Plugins
- CAPS-LV2
LV2 port of the CAPS suite of LADSPA plugins.
- TAP-LV2
LV2 port of the TAP suite of LADSPA plugins.
- Pitch shifters - http://github.com/portalmod/mod-pitchshifter
Capo - up to 7 semitones up pitch shifting
SuperCapo - up to 24 semitones up pitch shifting
Drop - up to 12 semitones down pitch shifting
SuperWhammy - continuous pitch shifting from -12 to 24 semitones
Harmonizer - scale interval generator
- Utilities - https://github.com/portalmod/mod-utilities
Switchbox - A/B box for audio signal routing
SwitchTrigger - 4-channel exclusive selector
ToggleSwitch - 4-channel non-exclusive selector
Gain (mono and stereo)
Filters (LP, HP and BP) - 1st, 2nd and 3rd order
Two way mono crossover - 1st, 2nd and 3rd order
Three way mono crossover - 1st, 2nd and 3rd order
- Distortions - mathematical simulations of classic distortion circuits
BigMuff
DS-1
Muff Fuzz
- SooperLooper
A simplified LV2 port of SooperLooper.
All plugins from our repository have the HTML MOD GUI included.
In our GitHub repository - www.github.com/portalmod - we also have plugins that were forked from the original repositories.
One of our aims is to open a dialogue with the developers, deprecate our forks and add the MOD interface to the original plugins, but that depends on the developers and creators and shall be discussed on a one-to-one basis.
Wish you all the best
Kind regards
Gianfranco
The MOD Team
Ah, the equinox...
Twice a year a cherished planetary alignment checks in on schedule,
once again.
The little rock gets another round from its warm solar furnace, from
which we were forged. The pale blue dot gets yet another round and, to
no surprise, another tinier dot comes around here:
Qtractor 0.6.3 (armed hadron beta) is now released!
Release highlights:
* Revamped mixer (un)dockable panels (NEW)
* Plugin preset selection sub-menu (NEW)
* LV2 Time position/transport event support (NEW)
* Constrained plugin multi-instantiation (FIX)
* Automation curve node resolution (FIX)
Qtractor is an audio/MIDI multi-track sequencer application written
in C++ with the Qt4 framework. Target platform is Linux, where the Jack
Audio Connection Kit (JACK) for audio and the Advanced Linux Sound
Architecture (ALSA) for MIDI are the main infrastructures to evolve as a
fairly-featured Linux desktop audio workstation GUI, specially dedicated
to the personal home-studio.
nb. Despite the old Qt4 stance (still the recommended one), Qtractor
has built, run and done it all on Qt5 for quite some time now. However,
the former recommendation prevails, as the despicable LV2 plugin GUI
X11/embedding support through libSUIL just does NOT work on modern Qt5.
Website:
http://qtractor.sourceforge.net
Project page:
http://sourceforge.net/projects/qtractor
Downloads:
http://sourceforge.net/projects/qtractor/files
- source tarball:
http://download.sourceforge.net/qtractor/qtractor-0.6.3.tar.gz
- source package (openSUSE 13.1):
http://download.sourceforge.net/qtractor/qtractor-0.6.3-13.rncbc.suse131.sr…
- binary packages (openSUSE 13.1):
http://download.sourceforge.net/qtractor/qtractor-0.6.3-13.rncbc.suse131.i5…
http://download.sourceforge.net/qtractor/qtractor-0.6.3-13.rncbc.suse131.x8…
- quick start guide & user manual (still outdated, see wiki):
http://download.sourceforge.net/qtractor/qtractor-0.5.x-user-manual.pdf
- wiki (help wanted!):
http://sourceforge.net/p/qtractor/wiki/
Weblog (upstream support):
http://www.rncbc.org
License:
Qtractor is free, open-source software, distributed under the terms
of the GNU General Public License (GPL) version 2 or later.
Change-log:
- Make the mouse-wheel scroll the plugin list views when not hovering
over a direct-access parameter slider.
- Mixer widget gets (un)dockable Inputs and Outputs panels, also with
their respective title captions.
- Plugin instantiation is now constrained so as to prevent any audio
channel output overriding.
- Existing plugin presets may now be selected right(-click) from plugin
list context-menu (ticket by Harry van Haaren, thanks).
- So-called "painting" over multiple selected event values, in the
MIDI clip editor view pane below the main piano-roll (eg. note
velocities, controller values, etc.), is now split into two similar
painting modes, depending on whether the sub-menu Edit/Select Mode/Edit
Draw is set on (free-hand) or off (linear).
- Drag-and-copy of plug-in instances across tracks or buses (ie.
cloning) now also copies the direct access parameter setting (ticket by
Holger Marzen, thanks).
- File/Save As... now prompts and suggests an incremental backup name
for existing session files.
- Zooming in/out increment is now augmented by whether shift /ctrl
keyboard modifiers are set (on a ticket request by Holger Marzen, thanks).
- LV2 Time position event messages for plugin atom ports that support
them are now implemented.
- Attempt to break extremely long audio file peak generation on session
close or program exit (as reported by EternalX, thanks again).
- MIDI Controllers Hook and Invert properties are now properly saved for
tracks (after bug report by Nicola Pandini, thanks).
- A segmentation fault when closing with VST plugins open has been
hopefully fixed (after a patch by EternalX, thanks).
- Messages standard output capture has been slightly improved as for
non-blocking i/o, whenever available.
- Automation curve node editing has been slightly improved in regard to
time positioning and resolution.
See also:
http://www.rncbc.org/drupal/node/818
Enjoy && have fun.
--
rncbc aka. Rui Nuno Capela
Just trying to get a clearer understanding, so I hope these aren't too noob-ish.
If you have several processes wanting to send audio, does the callback go to all
of them at (essentially) the same time, or does Jack poll them?
If the former, is it the quickest to complete that gets processed first?
If the latter, does Jack maintain the same order (assuming none stop) or might
it change between buffers?
I've a few more questions but they depend on whether either of these
scenarios is correct!
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
On Sat, Sep 20, 2014 at 04:10:13PM -0400, Mark D. McCurry wrote:
> On 09-20, Fons Adriaensen wrote:
> > Having to do 256 1024-point FFTs just to start a note is insane.
> > It almost certainly means there is something fundamentally wrong
> > with the synthesis algorithm used.
>
> I agree with that notion.
> In typical patches something between 2-10 IFFTs is expected and even this
> cost strikes me as too high (zero IFFTs for pure PAD/SUB synth based).
> In terms of worst case scenarios ZynAddSubFX can have some rather insane
> characteristics given multiple parts, kits, voices, etc.
> For instance if a user decided to use all padsynth instances at max quality,
> they would need 12GB of memory just to store the resulting wavetables.
>
> Such extremes are not really seen in practice, but things are slowly getting
> optimized to avoid them when possible.
You should really look at this from an information theory POV,
combined with some psycho-acoustics.
Suppose you have to deliver 256 samples in a period when a note
starts. That amounts to around 5.3 ms at 48 kHz. That time limits
the amount of spectral detail that can be detected in the output
of the first period, which means that there is no point in
generating more detail in the first period of a note.
Even on sustained notes the amount of spectral detail that can be
detected by a human listener is limited by the critical bandwidth
of human hearing (which increases with frequency). That means that
any set of harmonics that fall within a critical bandwidth can be
replaced by a single one with the same energy and nobody would be
able to hear the difference. All this means that you *never* need
256 harmonics, not even on bass notes below Fs / (2 * 256).
And if the final output is a weighted sum of those IFFT outputs
you can as well compute the weighted sum of the inputs and then
do a single IFFT - it's a linear transform after all.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
----- End forwarded message -----
On 09-20, Fons Adriaensen wrote:
> Having to do 256 1024-point FFTs just to start a note is insane.
> It almost certainly means there is something fundamentally wrong
> with the synthesis algorithm used.
I agree with that notion.
In typical patches something between 2 and 10 IFFTs is expected, and even
this cost strikes me as too high (zero IFFTs for purely PAD/SUB-synth-based
patches).
In terms of worst case scenarios ZynAddSubFX can have some rather insane
characteristics given multiple parts, kits, voices, etc.
For instance if a user decided to use all padsynth instances at max quality,
they would need 12GB of memory just to store the resulting wavetables.
Such extremes are not really seen in practice, but things are slowly getting
optimized to avoid them when possible.
The biggest flaw in the algorithm, besides the current high cost of
initialization, is the use of some adaptive harmonics routines which do
frequency-dependent spectral modifications before the IFFT.
More or less, that chunk of math will end up producing some inaccuracies
when converting to any normal wavetable representation that avoids the IFFT.
Changing over to a wavetable approach will make other parts of the algorithm
more correct than they currently are, but there are tradeoffs between
correctness and consistency between versions.
--Mark
(Oops, it looks like I missed the ML the first time I sent this)
Hi, as mentioned in the subject: production release 0.4.2 of Advanced
Gtk+ Sequencer is planned.
Now comes your part: I would like you to test functionality. I'll put
together a wiki page of aspects to test for you.
Please visit http://ags.sf.net and wiki page
https://sourceforge.net/p/ags/wiki/testing-cheatsheet/
Happy to hear from you pretty soon!
Joël
I thought I would move this over here. I know there is already work being
done on this hardware-wise. These thoughts are for a point-to-point raw
Ethernet audio transport that still allows some normal network traffic as
well.
On Tue, 2 Sep 2014, Len Ovens wrote:
> My thought is something like this:
> We control all network traffic. Let's try for 4 words of audio. For sync
> purposes, at each word boundary a short audio packet of 10 channels is sent.
> This would be close to minimum enet packet size. Then there should be room
> for one full-size enet packet; in fact, even at 100m the small sync packet
> could contain more than 10 channels (I have basically said 10m would not be
> supported, but if no network traffic was supported then 10m could do 3 or 4
> channels with no word sync). So:
> Word 1 - audio sync plus 10 tracks - one full network traffic packet
> Word 2 - audio sync plus 10 tracks - one full audio packet, 40 tracks
> split between words 1 and 2
> Word 3 - audio sync plus 10 tracks - one full audio packet, 40 tracks
> split between words 2 and 3
> Word 4 - audio sync plus 10 tracks - one full audio packet, 40 tracks
> split between words 3 and 4
Nobody commented that this could not work :) 4 samples on a 100 Mbit link
is still less than one full 1500-byte data packet. The reason I am thinking
about this right now is that my studio has been flooded :P and so I
have no access to work on my control surface project right now.
So some background thinking:
- The idea is to replace FW audio interfaces with something at least as
good, maybe better.
- Really low latency available (even if a lot of uses don't need it)
- Really stable operation. On a desktop/rack computer where the
user has access to a PCIe slot, it is obvious that a second NIC would be
the best solution. Laptops should work too.
- Normal network traffic will make it through this mess without ever
disturbing the audio. A laptop may be used with only one NIC and still
need to access network traffic.
- It should "just work" on newer network hw as it is developed.
- It should handle a switch in the middle for use as a range extender, but
never as a network traffic mixer. (on a 1gbit link other traffic may not
disturb things with low enough channel count) What I have thought of so
far would tend to ignore other traffic anyway and a switch should end up
just sending our traffic through our ports. This situation may require
some user intervention such as pointing out which box they wish to connect
to.
- It should deal well with hot plugging. This feels messy, but it should
be possible to let things like network managers play with things first and
still be able to detect that an audio IF has been connected, and reinit
the interface for this use.
In the end, the kernel module for this device should be loaded any time a
NIC is detected (detected and has a connection). It should create both an
eth* device and an ALSA device, but should only do so if it detects an
audio IF online. To begin with, this would mean the user would have to do
some setup to get around all the auto-setup stuff already running (DHCP
etc.). But as the IF saw more use, normal networking software might be
expected to detect an audio IF and leave it alone. Maybe the audio IF
could have a DHCP server that refuses an IP, so that the DHCP client gives
up and goes away. That is, use network protocols that are already
available when possible.
This whole effort assumes the Ethernet device is connected by twisted
pair from the host computer to the audio device, with a separate path for
each direction. This is very important, as it _should_ make for a
collision-free environment. This interface will control both audio and
data flow. Any network traffic will only be sent during times audio
sending is not needed. This can be done because all network data will go
through the audio driver. There are still a lot of 100 Mbit networks out
there, and lots of new equipment still has them too. I have chosen a
4-sample frame at 48k (which happens to be 16 samples at 192k... if you
must) because it seems to be the lowest latency with reasonable use of
overhead. (On Gbit and faster lines this is no longer true; the internet
still runs on a 1500-byte MTU, and more than one full packet will travel
in one sample's time.) I have done all my calculations based on a 100 Mbit
line, because that happens to be what I have to play with, and it appears
able to handle up to 60 audio channels with some left over for control. I
understand that there are venues that use more, but Gbit links will handle
lots more (600 plus, but realize the systems on each end have to be able
to deal with the data as it comes in; it is not just about the link
capabilities).
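The 100 Mbit figures above can be checked with a quick back-of-the-envelope calculation (a sketch; the 24-bit sample size and the standard Ethernet framing overheads are my assumptions, not from the proposal):

```c
/* Per-direction budget for one 4-sample frame on fast Ethernet. */
#define LINK_BPS         100000000.0       /* 100 Mbit/s */
#define SAMPLE_RATE      48000.0
#define FRAME_SAMPLES    4
#define BYTES_PER_SAMPLE 3                 /* assuming 24-bit audio */
#define ETH_OVERHEAD     (8 + 14 + 4 + 12) /* preamble + header + FCS + gap */

/* Bytes the link can carry in the time spanned by one 4-sample frame. */
static double frame_budget_bytes(void)
{
    return LINK_BPS / 8.0 * FRAME_SAMPLES / SAMPLE_RATE;
}

/* Bytes needed to ship `channels` channels of one frame in one packet. */
static double frame_payload_bytes(int channels)
{
    return channels * FRAME_SAMPLES * BYTES_PER_SAMPLE + ETH_OVERHEAD;
}
```

The budget works out to roughly 1040 bytes per 4-sample window, which is why a full 1500-byte packet does not fit, and why about 60 channels of 24-bit audio fit with room left for a data packet.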
So, my thought is that each group of packets sent will be timed by a group
of 4 samples. The driver will attempt to send a packet with 4 samples'
worth of audio for all channels at the end of those 4 samples. The driver
then calculates how much time it has before another 4 sample times are up,
and sends as much data as it has time for. This calculation should only
need to happen once for any given channel setup, if the hw is not
doing anything too fancy (like waiting for more than one packet before it
sends). It assumes the hardware/driver uses standard-size guard bands,
etc. So for each 4 samples there would be two packets minimum (probably
maximum too, for a 100 Mbit link): one audio and one data. On a 100 Mbit
link these packets would always have an MTU of less than 1500. It would
not be possible to use the arrival of an audio packet as a sync signal;
sync, as always, would be an external line if two audio interfaces needed
to be used.
I expect to have an Atom-based MB to play with soon that has two NICs on
board, as well as an audio IF. This will not be a true low-latency test,
because the onboard AI has higher latency to begin with (runs at p64/n3
min), and so I will have 192 samples to play with at a time, which I will
still try sending in 4-sample bundles. (I may try putting an older PCI AI
in to see if I can get that down a bit.) My thought is to make the AI side
a JACK client (I think I can do that much :) and the host side an ALSA
device (something new to learn).
All control will be MIDI-able. Because there are two NICs, and one of them
expects to do real IP-based networking, OSC is possible as well as
web-based control. In the end this is also a general computer running
Linux that can be SSHed into (even ssh -Y), so almost anything is
possible... but I would guess the first box that has a real DIY digital
CODEC, S/PDIF, ADAT or MADI IF will be pretty basic... but looking at the
R-Pi, basic seems to be pretty powerful these days.
In the long run this could be a very interesting device. There is no
reason this could not also be an effects box (both with local analog
ports and through the net), a softsynth (most of these boards have at
least one serial port or USB), a remote mixer... drop the box at FOH and
use a networked control surface, Android pad... even a browser to control
it, a FOH snake box, or even a standalone recording device.
Price point? Considering Ethernet switches, USB audio devices, Ethernet
storage controllers, set-top... I hesitate to call them boxes, some of
them are so small. Even development boards look OK. I don't think it
would be worthwhile to make a two-I/O box, but by the time we hit 8 or so
it begins to look good.
--
Len Ovens
www.ovenwerks.net