QMidiArp 0.5.3 fixes a number of bugs and should replace 0.5.2 from now on. It also brings some minor functional improvements; everything is listed below.
With thanks to all reporters, contributors and translators.
QMidiArp is an advanced MIDI arpeggiator, programmable step sequencer and LFO for Linux with ALSA and JACK MIDI backends.
Downloads are available at
o Random functions for sequencer and LFO steps and arp repeat mode
(feature request #5 Keith Milner)
o NSM support now handles import/export/clear to facilitate
getting started (Roy Vegard Ovesen)
o Tempo is now MIDI-controllable (MIDI-learn)
o Sequencer transpose slider is now MIDI controllable (MIDI-learn)
(feature request #7)
o Sequencer pattern maximum length extended to 32 bars
(feature request #6)
o LFO offset jumped back to fixed value when MIDI controlled
(bug #6 distrozapper)
o Arp trigger behavior was not practical with chords pressed on keyboard
(bug #7 Burkhard Ritter)
o JACK Transport no longer worked when no JT Master tempo was present
(bug #5 Barney Holmes)
o Deleting an arp pattern in text window while running caused crash
o Note lengths were not consistent between ALSA and JACK backends
o Note lengths did not account for current tempo
o Sequencer did not honor "D" button when MIDI controlled
o Seq note length is now a 16th at half slider scale
2013/11/14 Carlos sanchiavedraz <csanchezgs(a)gmail.com>:
> 2013/11/14 Robin Gareus <robin(a)gareus.org>:
>> On 11/14/2013 01:20 PM, Carlos sanchiavedraz wrote:
>>> Hello dear all LAUers.
>>> Some time ago I did some research about open source/free software
>>> licences: types, pros and cons, etc. I'm reviewing it and, given that
>>> I follow and love many of the great projects and applications coded by
>>> members of this list, I would love to hear your opinions (pros, cons)
>>> and experience in practice, and why you chose licence X for your
>>> project(s) (business model or enterprise view in mind, or just because
>>> you like it...).
>> Software concerning infrastructure and inter-operation should *provide
>> freedom to the developer*. Less restrictive licensing (eg. MIT, BSD,
>> public-domain) is important to promote standards (in particular network
>> or communication protocols.)
>> Application software aimed at end-users should *protect the freedom of
>> the user*. Here GPL is appropriate. It ensures that any user will be
>> free to run it (which must include the freedom to modify it e.g. to make
>> it work on future systems,...) amongst other freedoms. From a developer
>> point of view the GPL also provides continuity and allows software to
>> Personally I choose either the MIT or the GPLv2+ license for all of my
>> projects. The former for libs, the latter for apps (with the usual
>> exceptions, mainly due to re-using code and inheriting licenses). The
>> reason for those two is that they're the only two licenses that I have
>> read, understand and agree with.
>> I have no intention to spend any time reading all of the other
>> licenses cover-to-cover, and I believe that any developer who is using
>> a given license should at least have a basic understanding of [the
>> implications of] the license, which mandates reading it completely.
>> I keep an open eye on [new] licenses but have not had any reason to
>> investigate any of them any further.
>>> I see that the most common are GPL2 (some still don't like v3) and
>>> GPL3. And nowadays, with so many services in the cloud, also AGPL, and
>>> Thanks as always for sharing your work and knowledge.
> Well and clearly explained.
> Thanks so much, Robin.
> Carlos sanchiavedraz
> * Musix GNU+Linux
I've done a quick search on SourceForge to see the number of projects
with each licence, among those related to audio:
* GPL2/GPL2+: these are the vast majority, maybe just because v2 is
older than v3. Here you can see some of the projects that we love at
Musix and that I love myself: Ardour (in its own repo), Qsynth,
Qjackctl (all the rncbc stuff), Rakarrack, Hydrogen, LMMS
* GPL3: Here we have Guitarix (also has GPL2 and BSD), Virtual MIDI
* AGPL: not many
You can also check the "Adoption" section of the Wikipedia article on
the GPL. There you can read:
In 2011, four years after the release of the GPLv3, according to Black
Duck Software data, 6.5% of all open-source licensed projects are GPLv3
while 42.5% are GPLv2. Google open-source programs office manager
Chris DiBona reported that the number of open-source software projects
that had moved to GPLv3 from GPLv2 was 50% in 2009,
At the beginning I thought that choosing GPLv3 was the way to go
nowadays: it's newer, and takes into account problems like
"tivoization", patents and such.
And also AGPL is one for me to consider because many of my projects
could benefit from its protection against being "cloud-servified"
against your will, let's say.
But then I see that big projects (reference for me in the FLOSS world)
like Ardour, Jackd, Qjackctl, Qsynth, Rakarrack are GPL2+ and others
just GPL2, so I wonder whether they just keep going with what was
chosen in the first place (backwards compatibility, I guess) or they
just don't like GPLv3 yet, even with those mentioned potential benefits.
I'd very much appreciate knowing in particular the experience of the
projects I mentioned, and that of people who actually make a living
developing FLOSS software and have a business model that supports
and/or benefits from it.
Thanks so much anyway, you all.
* Musix GNU+Linux
Nama has a sort of multiband compressor using three parallel tracks, all
processed by filters. Now we've discovered that our current filter settings
aren't good for multiband compression as used in a mastering setup. Yes, I
know, people debate whether to use it or not, but it's always nice to have
the option and leave it up to the users.
We currently use the Glame highpass, lowpass and bandpass IIR LADSPA plugins
with unique IDs 1890-1892. The current settings are:
lowpass: 106Hz 2 stages
bandpass: 520Hz (center) 800Hz (bandwidth) 2 stages
highpass: 1030Hz 2 stages
These settings give a good full band, but I've heard that the bands aren't
the best choices. I've found some references to using
160Hz as the first divider and 3500Hz as the second divider.
Could someone suggest good settings to achieve this? Either with these
plugins or with different filters, if easily available and in the form of a
LADSPA plugin. Ecasound has LV2 support, but it's not capable of all the LV2
I'd also do just fine with a formula to calculate it myself, if there is
such a thing that really meets the audible requirements.
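For what it's worth, one common way to derive a bandpass setting from the two dividers is to take the geometric mean of the crossover frequencies as the center and their difference as the bandwidth. A minimal sketch; the choice of geometric mean is my assumption, not something from the Glame plugins' documentation:

```python
import math

# Crossover frequencies ("dividers") between the three bands, in Hz.
low_divider = 160.0
high_divider = 3500.0

# Geometric-mean center keeps the band symmetric on a log-frequency axis.
center = math.sqrt(low_divider * high_divider)
bandwidth = high_divider - low_divider

print(f"lowpass:  cutoff {low_divider:.0f} Hz")
print(f"bandpass: center {center:.0f} Hz, bandwidth {bandwidth:.0f} Hz")
print(f"highpass: cutoff {high_divider:.0f} Hz")
```

With those dividers this gives a bandpass centered near 748 Hz with a 3340 Hz bandwidth; whether the Glame bandpass really wants the arithmetic difference as its "bandwidth" parameter is something to verify against the plugin.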
Fons, last time you were kind enough to supply the settings. How did you
arrive at these values? Did you check graphically? I'm pretty sure that you
Warm regards and thanks a lot
Music, creative writing, technical information:
I can see this being a problem if the multiple devices were all input devices, such as the "multiple Soundblasters" mentioned in a previous post, but if there is a single device used for input, and another device that is used strictly for listening, what problems can be caused? I fail to see how it could cause a problem, even if the clock on the monitor audio chain drifts.
michael noble <looplog(a)gmail.com> wrote:
I was recently stuck on an issue that I could not readily find
solutions for online. So I thought documenting the solution here may
be of use to someone in the future.
Problem: when attempting to start two independent instances of
fluidsynth in server mode from the command line, I get the following
error:
fluidsynth: error: Failed to bind server socket
After a bit of research, it seems that the problem is a result of the
fact that in server mode (and probably in other modes as well, though
I am not sure) fluidsynth communicates with the shell through a socket
on the local machine. From what I saw in the source, there is no
automatic check to see if the default port is already bound, so the
second instance tries to grab the same socket as the first and fails
with the above error. There is a setting "shell.port", which needs to
be set explicitly in this case (see man page). The -o command line
option allows one to define settings. In the end, the following two
commands give me the kind of environment I need (notice the "-o
shell.port" settings):
$ fluidsynth -a jack -m alsa_seq --portname=fluidsynth_piano
--no-shell -s -g 1 \
-o audio.jack.id=fl_piano -o shell.port=9800 piano.sf2 &
$ fluidsynth -a jack -m alsa_seq --portname=fluidsynth_drums
--no-shell -s -g 1 \
-o audio.jack.id=fl_drums -o shell.port=9801 drums1_harris.sf2 &
I do not know if this problem occurs when using Qsynth, as I am doing
this in a text-only environment.
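Since fluidsynth itself does not check whether the port is already bound, one can probe before launching. A minimal sketch in Python; the 9800 starting value matches the commands above, and treating TCP bindability as "free" is my assumption about how the shell socket is bound:

```python
import socket

def is_port_free(port, host="127.0.0.1"):
    """Return True if a TCP socket can be bound to (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def pick_shell_port(start=9800, tries=16):
    """Return the first free port at or above `start`, for -o shell.port."""
    for port in range(start, start + tries):
        if is_port_free(port):
            return port
    raise RuntimeError("no free shell.port found")
```

Each new instance could then be started with `-o shell.port=$(python -c '...')` or the check folded into a launcher script.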
$ uname -a
Linux mervag 3.12.1-1-ARCH #1 SMP PREEMPT Thu Nov 21 08:18:42 CET 2013
$ jackd -V
Hi all. I'm still having occasional audio glitch problems, and I frequently get 1 or more errors in a log file like this:
ERROR: a2j_process_incoming: threw away MIDI event - not reserved at time 157016825
Is this an A2JMIDI bridge error? If so, should it cause a complete audio drop-out at that point? It seems I only have difficulties when recording audio with Nama from a softsynth playing from MIDI data. I have no problems with playback of MIDI sequences, playback of audio, or recording audio directly from a live sound source. There were some posts a while back about A2JMIDI not providing proper MIDI time stamps. Could this be part of the issue? Is there any reason to be concerned with Nama/Ecasound creating MIDI ports even when they're not being used? I'm just throwing out thoughts here, because I'm not sure where to look for answers or how to test further. It even occurs when recording a softsynth live, but of course I'm still using MIDI data to drive it. Any suggestions would be appreciated. Thanks much.
jpmidi is a MIDI file player that uses JACK MIDI and synchronises to
JACK transport.
It is currently hosted by Julien here: http://juliencoder.de/jpmidi/
After using it for a while with the command line, I needed an advanced
feature: being able to control it remotely with UDP or TCP commands.
This is what I call a server mode.
I've been modifying the code and added a '-s' option, so jpmidi is
started in server mode and waits for commands on port 2013.
Well, the whole thing is still buggy sometimes, especially the
port, which is not released correctly, but it works.
I plan to continue developing this feature. Should you be interested
in contributing, let me know.
The project is now hosted on sourceforge :
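Lacking the announcement's command list, here is only a minimal TCP client sketch for talking to such a server; the newline-terminated text protocol and the "play" command name are assumptions, not documented jpmidi commands:

```python
import socket

JPMIDI_PORT = 2013  # the port mentioned in the announcement

def send_command(cmd, host="localhost", port=JPMIDI_PORT, timeout=2.0):
    """Send one text command to the jpmidi server and return the raw reply."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(cmd.encode("ascii") + b"\n")
        return s.recv(4096).decode("ascii", "replace")

# Example (assumed command name):
# print(send_command("play"))
```

The same one-connection-per-command shape works for a UDP variant with `socket.SOCK_DGRAM`, if that is what the server ends up speaking.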
Hi all! I'm wanting to use Aeolus's MIDI controller 98 function to change individual stops. However, in order to easily send single controller values, I'm wondering about using program change instead. Can I configure it that way now, or would that require code changes? I know program change is currently used for presets, but could this be user-selectable perhaps, so that one could use it for registration instead? Or again, is that already possible? Since I can't see, I'll have to have sighted assistance with it, so if this can be done currently, any guidance would be helpful.
I very much appreciate the text interface! Thank you very much for that. I'm hoping to be able to include registrations in my song files using MIDISH as well. I hope to have a recording to share shortly. Thank you for this instrument! I'm enjoying it very much.
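For reference, whichever controller ends up being used, the raw bytes of a MIDI Control Change message are easy to build; the specific value encoding Aeolus expects on controller 98 for stop control is Aeolus's own and not shown here. A minimal sketch:

```python
def control_change(channel, controller, value):
    """Build the three raw bytes of a MIDI Control Change message."""
    if not (0 <= channel < 16 and 0 <= controller < 128 and 0 <= value < 128):
        raise ValueError("channel 0-15, controller and value 0-127")
    # Status byte 0xBn (Control Change on channel n), then controller, value.
    return bytes([0xB0 | channel, controller, value])

# Controller 98 on MIDI channel 0, value 5 -> bytes B0 62 05
msg = control_change(0, 98, 5)
print(msg.hex())  # -> "b06205"
```

These bytes could then be sent through whatever MIDI path is in use, e.g. written to an ALSA rawmidi port with amidi.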