Hi,
QMidiArp 0.5.2 has just seen the light of day. It brings mainly
two improvements. One is a comeback: tempo changes on the fly,
which now also follow tempo changes of a potential Jack Transport
master. The Jack Transport starting position is also finally taken into
account, so QMidiArp should stay in sync even when the transport
master is started at a position other than zero.
The second one is Non Session Manager support, mainly thanks to the work done by Roy Vegard Ovesen!
Note that for compiling in NSM support you will now need liblo as a dependency.
Enjoy, and enjoy LAC in Graz this year!
Frank
________________________________
QMidiArp is an advanced MIDI arpeggiator, programmable step sequencer and LFO.
Everything is on
http://qmidiarp.sourceforge.net
qmidiarp-0.5.2 (2013-05-09)
New Features
o Tempo changes are again possible while running, both manually and by
a Jack Transport Master
o Jack Transport position is now taken into account when starting;
QMidiArp used to always start at zero
o Muting and sequencer parameter changes can be deferred to pattern
end using a new toolbutton
o Modules in the Global Storage window have mute/defer buttons
o Global Storage location switches can be set to affect only the pattern
o Non Session Manager support with "switch" capability (thanks to
Roy Vegard Ovesen)
General Changes
o NSM support requires liblo development headers (liblo-dev package)
Hey Everybody,
I'm happy to announce OpenAV productions: http://openavproductions.com
OpenAV productions is a label under which I intend to release my
Linux audio software projects. The focus of the software is on the workflow
of creating live electronic music and video.
The release system for OpenAV productions is based on donations and
time; details are available at http://openavproductions.com/support
Sorcer is a wavetable synth, and is ready for release. Check out the
interface and demo reel at http://openavproductions.com/sorcer
Greetings from the LAC, -Harry
Hello everyone,
lately I had to fight big XRUN troubles, and thanks to this forum I
finally solved them. This excellent thread saved me:
http://linuxaudio.org/mailarchive/lau/2012/9/5/192706
On my long quest, I tried to see in a little more detail what was
happening with the IRQs on my system. I searched for a kind of 'top'
utility to monitor the interrupts, but the only apps I found were either
deprecated or lacked some features I wanted.
So, I ended up writing my own tool to monitor the file /proc/interrupts.
It's available at this address:
https://gitorious.org/elboulangero/itop
As its name indicates, it behaves pretty much like top, but for interrupts.
It's quite a simple thing that I tried to enhance a bit with some useful
features:
+ the refresh period can be specified.
+ two display modes: display interrupts for every CPU, or only the sum
across all CPUs.
+ display every interrupt (sorted as in /proc/interrupts), or only
active interrupts (sorted by activity).
+ if the number of interrupts changes while itop is running (due to an
rmmod/modprobe), it is handled without any fuss.
+ command-line options are also available as hotkeys for convenience.
+ finally, the program displays a summary on exit. The idea is that
this summary can be copied and pasted into emails to help debugging.
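For the curious, the core idea boils down to something like the
following sketch (illustrative Java, not itop's actual code): read
/proc/interrupts, sum the per-CPU counters of each IRQ line, wait one
refresh period, read again, and print the deltas.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of an interrupt 'top': diff the /proc/interrupts
// counters between two reads. Not itop's code, just the idea.
public class IrqDiff {
    static Map<String, Long> snapshot() throws IOException {
        Map<String, Long> counts = new LinkedHashMap<>();
        for (String line : Files.readAllLines(Paths.get("/proc/interrupts"))) {
            String[] f = line.trim().split("\\s+");
            // skip the CPU header line; IRQ lines start with "<name>:"
            if (f.length < 2 || !f[0].endsWith(":")) continue;
            long sum = 0;
            for (int i = 1; i < f.length && f[i].matches("\\d+"); i++)
                sum += Long.parseLong(f[i]); // sum over all CPUs
            counts.put(f[0], sum);
        }
        return counts;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Long> before = snapshot();
        Thread.sleep(1000); // the refresh period
        Map<String, Long> after = snapshot();
        for (Map.Entry<String, Long> e : after.entrySet()) {
            long delta = e.getValue() - before.getOrDefault(e.getKey(), 0L);
            if (delta > 0)
                System.out.printf("%-8s %6d/s%n", e.getKey(), delta);
        }
    }
}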
If anyone is interested, feel free to try it and comment!
Cheers
Hello all,
I'm trying to add ALSA support to an application, with mixed results.
When using the default audio output it sounds fine (my DAC displays 48kHz
as the sampling rate, which doesn't match the 44.1kHz source signal), but
when using hw:0 it is heavily distorted, and with plughw:0 the sound
stutters at a steady interval of about 2Hz (in these last two cases my
DAC displays 44.1kHz as the sampling rate). When using libao, plughw:0
sounds fine (and my DAC always displays 44.1kHz).
Can anyone point me in a direction (or give an answer) as to where I
might be going wrong here?
Unfortunately I'm a bit out of my league here and I do not yet
understand the data that fills buf[]. However, I do know that the data
it supplies works fine with libao, so I suspect the problem is in
the code below.
Thanks in advance, Maarten
PS: it is supposed to add ALSA output to the following:
https://github.com/abrasive/shairport/tree/1.0-dev
static void start(int sample_rate) {
    if (sample_rate != 44100)
        die("Unexpected sample rate!");
    int ret, dir = 0;
    snd_pcm_uframes_t frames = 32;
    ret = snd_pcm_open(&alsa_handle, alsa_out_dev,
                       SND_PCM_STREAM_PLAYBACK, 0);
    if (ret < 0)
        die("Alsa initialization failed: unable to open pcm device: %s\n",
            snd_strerror(ret));
    /* hw params live on the stack; configure interleaved 16-bit stereo */
    snd_pcm_hw_params_alloca(&alsa_params);
    snd_pcm_hw_params_any(alsa_handle, alsa_params);
    snd_pcm_hw_params_set_access(alsa_handle, alsa_params,
                                 SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(alsa_handle, alsa_params,
                                 SND_PCM_FORMAT_S16);
    snd_pcm_hw_params_set_channels(alsa_handle, alsa_params, 2);
    snd_pcm_hw_params_set_rate_near(alsa_handle, alsa_params,
                                    (unsigned int *)&sample_rate, &dir);
    snd_pcm_hw_params_set_period_size_near(alsa_handle, alsa_params,
                                           &frames, &dir);
    ret = snd_pcm_hw_params(alsa_handle, alsa_params);
    if (ret < 0)
        die("unable to set hw parameters: %s\n", snd_strerror(ret));
}

static void play(short buf[], int samples) {
    /* note: snd_pcm_writei() expects the count in frames, not in
       individual samples (one stereo S16 frame = two shorts) */
    int err = snd_pcm_writei(alsa_handle, (char *)buf, samples);
    if (err < 0)
        err = snd_pcm_recover(alsa_handle, err, 0);
    if (err < 0)
        die("Failed to write to PCM device: %s\n", snd_strerror(err));
}
Quoting Conrad Berhörster <beat.siegel.vier(a)gmx.de>:
> maybe this helps
> http://welltemperedstudio.wordpress.com/code/lemma/
lemma looks cool. I will definitely give it a try as soon as I get my
DAW back from the repair shop...
> and take a look at
> impro-visor
> http://www.cs.hmc.edu/~keller/jazz/improvisor/
I also worked with impro-visor a lot. Basically it has too many
features for my requirements. I just want an easy-to-use tool that
provides a playback track for practice sessions, for those of us who
lack the skill or the extra hands to accompany ourselves on the piano
during practice. I remember the playback quality being very good,
though. I never got around to using all the melody-centered
functionality; it's really a different use case. That's also one of my
design goals: focus on the central requirement, keep everything out
that's not absolutely necessary, make it as easy to use as possible,
and in the end really focus on the quality of the output (i.e. the
music that can be heard).
> And maybe you can explain a little bit about your ideas about the
> pattern-less approach
Happy to, though it might get a bit long from here on. Bear with
me. My reasoning goes like this: take the bass part of a very
simplistic straight 4-beat groove. In a fictional
pseudo-pattern-defining format you might specify something like this:
-) Play the bass note of the chord on beat 1.
-) Play the bass note of the chord on the off-beat after 2.
-) Play the bass note of the chord on beat 3.
-) Play the bass note of the chord on the off-beat after 4.
This will sound fine on straightforward one-chord-per-bar tunes. Even
if you change chords every two beats it will sound reasonably good. But
what about bars with a chord change on every beat, as happens often
in jazz, especially in turnarounds and the like? The bass player will
miss every other chord! Sure, you could add a rule like
-) In addition, play the bass note of the chord at every chord change.
assuming your pseudo-pattern-defining format allows this. But then the
notes on the off-beats defined earlier will not sound too good. In such
turnaround bars a real-life bass player would probably play a single
note on every chord change and leave out the off-beat notes. Or
consider off-beat chord changes, also a common thing. Just think about
what a typical pattern definition will make of them, compared to what
a real player would play.
Still, it's possible to express all this in a pattern definition,
provided the engine reading the pattern files supports it! And
I think that sooner rather than later you reach the point where
understanding the syntax of a sufficiently powerful pattern format and
actually writing good patterns in it is not much easier than
programming in a well-structured contemporary programming language.
Bottom line: I think everyone able to define patterns so complex that
they actually sound really good is just as able to program them, given
a well-structured framework to work in.
What you gain by actually programming a groove is that you are no
longer limited by the abilities of the pattern interpreter, but have
the full expressive power of the programming language at your
disposal. Random elements, variations in timing and volume to make it
sound more human, even the occasional wrong note: all of that is
possible without any changes to the engine itself, as in the sketch
below.
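For instance, a bar hook could humanize the simple bass figure from
above by adding a little random jitter to timing and volume. This is
only a sketch: Bar, addNote(), getVoicing() and getChordAt() are the
same hypothetical helpers used in the example further down, and the
jitter ranges are made up.

import java.util.Random;

void createEvents(Bar bar) {
    Random rnd = new Random();
    // the four positions of the straight groove, as fractions of the bar
    double[] positions = {0.0, 0.375, 0.5, 0.875};
    for (double pos : positions) {
        double jitter = (rnd.nextDouble() - 0.5) * 0.01; // push/drag a little
        double volume = 0.30 + rnd.nextDouble() * 0.10;  // between 0.30 and 0.40
        addNote(getVoicing(getChordAt(pos)).get(0), pos + jitter, 1.0, volume);
    }
}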
BTW, the same holds for the voicing of chords. Consider the ugly but
simple progression D7 C9 Bb6. Defining for the bass to
-) Play the root note of the chord in octave #3
will give you a not only ugly but also quite unrealistic jump of a
seventh up at the last change. It gets worse if the voicing of the
pianist's right hand just builds the chords up from the root. Playing
the above progression with all the chords in root position will not
only violate a lot of voicing rules, it will also sound pretty crappy.
A real-life pianist would probably play all three chords with the D at
the bottom, or maybe even move up through D E F or something similar.
Actually programming such rules is doable; defining them in a
declarative fashion is much, much harder. A sketch of one such rule
follows.
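As an illustration of how such a rule could be programmed (a sketch
with made-up names, not my actual voicing engine), here is a helper
that picks, for a given pitch class, the octave placement closest to
the previous bass note:

// Pitches are MIDI note numbers (0-127); a pitch class is 0-11.
// Among all octave placements of the root, pick the one nearest
// to the previous bass note, minimizing the melodic leap.
static int nearestRoot(int previousPitch, int rootPitchClass) {
    int best = rootPitchClass;
    for (int candidate = rootPitchClass; candidate < 128; candidate += 12) {
        if (Math.abs(candidate - previousPitch) < Math.abs(best - previousPitch))
            best = candidate;
    }
    return best;
}

In the D7 C9 Bb6 example, with the previous bass note at C, this picks
the Bb a second below rather than the one a seventh above.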
So how does this actually work? At the lowest (and most convenient)
level you just define your Groove as a number of Players, each
typically representing one instrument. In your Player you have two
hook methods, one that gets called once for every bar and one that
gets called once for every chord change. Within these methods you
define (actually, you program) which notes should be added to the
final music. For that you have not only methods to actually add notes
to the track, but also a (hopefully still growing) number of
convenience methods, for example to get the proper notes for the
current chord from a voicing engine, or to get not only the current
chord but also the previous and next chord(s) (to program cool walking
bass lines, for example, you need the full context) as well as their
suggested voicings, etc. In addition you can do basically anything
that can be programmed.
In the simplest case, programming in that layer is no harder than
defining patterns. The simple pattern from above would look something
like this:
void createEvents(Bar bar) {
    // bass note of the current chord at beat 1, the off-beat after 2,
    // beat 3 and the off-beat after 4 (positions as fractions of the bar)
    addNote(getVoicing(getChordAt(0.0)).get(0), 0.0, 1.0, 0.35);
    addNote(getVoicing(getChordAt(0.375)).get(0), 0.375, 1.0, 0.12);
    addNote(getVoicing(getChordAt(0.5)).get(0), 0.5, 1.0, 0.35);
    addNote(getVoicing(getChordAt(0.875)).get(0), 0.875, 1.0, 0.12);
}
The difference is that you are not limited anymore. You can even skip
the convenience layer and directly implement your own Player object,
which receives a list of music information (bars, chord changes,
volume changes, tempo changes), an assigned MIDI channel number and
the configured MIDI resolution, and must simply return an arbitrary
list of MIDI events.
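Spelled out as code, such a low-level interface could look roughly like
this. All type and method names here are hypothetical; the actual API
may differ:

import java.util.ArrayList;
import java.util.List;

// Illustrative placeholder types; the real project defines its own.
class MusicEvent { /* a bar, chord change, volume change or tempo change */ }
class MidiEvent  { long tick; int status, data1, data2; }

interface Player {
    // music:      bars, chord changes, volume and tempo changes, in order
    // channel:    the MIDI channel assigned to this player
    // resolution: MIDI ticks per quarter note
    List<MidiEvent> createTrack(List<MusicEvent> music,
                                int channel, int resolution);
}

// A trivial Player that emits nothing; a real one would translate the
// music information into note-on/note-off events.
class SilentPlayer implements Player {
    public List<MidiEvent> createTrack(List<MusicEvent> music,
                                       int channel, int resolution) {
        return new ArrayList<>();
    }
}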
To everyone who made it this far: Thanks for listening ;-)
Mike
--
Michael Niemeck
Krausegasse 4-6/3/6
1110 Wien
michael(a)niemeck.org
+43 1 9417017
+43 660 9417017
No, not the umpteenth person looking for one. I want to actually set
out to try and build one. I'm still in the early stages of scoping,
architecting and prototyping (read: not ready to share anything
yet), but there are still a few questions I would like feedback on:
1) Would there be any demand for such an open-source initiative at
all, or is everyone who seriously needs this type of application using
The Real Thing anyway?
2) Would anyone be interested in collaborating? The core and GUI code
are not really suitable for collaborative development (yet), and are
not the real issue anyway (although I'm open to any suggestions and/or
wishes regarding the user experience). What the project _does_ need
in order to get serious at some point is a lot of brains put
into the actual music-creation routines, which I'm designing from the
start to be easy to contribute to. It's in Java, and I'm trying to put
in as many convenience methods as possible, so a lot of programming
know-how is not really necessary to contribute. What I'd rather need
is people with a lot of musical know-how, like voicing theory or
in-depth style knowledge, and just enough formal thinking to be able
to put their ideas into some form of algorithm (if Java is a problem,
I'm willing to accept any form of pseudo-code as well...).
3) Is this the right place to ask? (Probably should have put this as
1)...). Seriously, if you have any suggestions where else I might put
this up, please let me know as well.
If there are some 'yes's out there, I'll be happy to post more information.
Cheers
Mike
Hi Geoff
On Sun, 16 Jun 2013 20:49:26 +1000 you wrote:
> Going to finally build a new machine. It's going to be Intel this time -
> AMD for 15 years or so - can anyone here give some advice as to how
> many cores are optimal given current kernel >3.8 performance? Any
> install/operational issues? Any pitfalls?
I can't provide much in the way of scientific evidence; there are others who
know far more about the technical realities of this - and push their
machines much harder - than I do.
Personally, I have always run with the idea that as long as one has N
independent processes whose links to other processes are limited to the
consumption or production of streams, one could theoretically max out N
CPU cores. However, in most practical cases involving audio and video one
is running the streams at real-world speed, and this obviously limits the
extent to which each process requires a CPU. The only time a single
process would have the chance to max out a single CPU core would be when
freewheeling with jack, for example, or when doing a final render of a
video.
If one spends most of the time interacting with their AV software, this
means that there's no simple answer to the "how many cores are optimal"
question. It depends on the precise mix of software you're running, what
each process requires of a CPU core, how much I/O each process instigates,
and so on. Another caveat is jackd: my current understanding is that jack2
can better utilise multiple cores, but I'm happy to be corrected on this
point by anyone with more knowledge than I have (I really haven't looked
into this recently because for my current situation it's academic).
My system has been based on a first-generation i7 for the last couple of
years and I've noticed no major issues. However, in terms of audio work
I'm not really pushing the system all that hard during real work (I
generally don't have soft-synths running, and the plugins I use tend to
be fairly frugal with CPU requirements). This gives me 4 cores with
hyperthreading, and when I've done tests to see what it could handle, an
audio-like workload was able to push well above 400% loading (so the
hyperthreading seems to be doing something useful).
Having said all that and knowing the sort of work you do, I would probably
err on the side of getting as many cores as you can reasonably afford. As
time goes on they won't go astray; you'll have the flexibility to experiment
with new ways of doing things without being too constrained by the number of
cores at your disposal.
A final comment is that with the release of Intel's Haswell-based CPUs we
are at an interesting point in time. These new cores are certainly a big
win for mobile computing due to their lower power consumption for a given
performance level. However, whether the increase in outright performance -
the primary metric for a desktop - justifies the "new product" premium
these will attract for the next 6-12 months remains to be seen, especially
since one would also expect some runout discounting on the previous
generation of CPUs in the coming months.
Regards
jonathan
Has anyone else noticed this?
Ardour 3.1-3-g1606996: changing the meter source (in/pre/post/custom)
in a mixer strip generates clicks in the audio output.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
Hi LADs,
I installed Linux on a new MacBook Retina and have been getting some
trouble running jalv.
It gives me this error:
gian@gian-MacBookPro:~/mod/LV2/jalv-1.4.0$ jalv.gtk
http://guitarix.sourceforge.net/plugins/gx_amp#GUITARIX
Plugin: http://guitarix.sourceforge.net/plugins/gx_amp#GUITARIX
UI: http://guitarix.sourceforge.net/plugins/gx_amp#gui
JACK Name: GxAmplifier-X
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jackdmp 1.9.10
Copyright 2001-2005 Paul Davis and others.
Copyright 2004-2013 Grame.
jackdmp comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
no message buffer overruns
no message buffer overruns
no message buffer overruns
JACK server starting in realtime mode with priority 10
audio_reservation_init
Acquire audio card Audio1
creating alsa driver ... hw:1,0|hw:1,0|256|2|44100|0|0|nomon|swmeter|-|32bit
configuring for 44100Hz, period = 256 frames (5.8 ms), buffer = 2 periods
ALSA: final selected sample format for capture: 24bit little-endian
ALSA: use 2 periods for capture
ALSA: final selected sample format for playback: 24bit little-endian
ALSA: use 2 periods for playback
Block length: 256 frames
MIDI buffers: 32768 bytes
Comm buffers: 131072 bytes
Update rate: 2 Hz
using block size: 256
MasterGain = -15,000000
PreGain = 0,000000
Distortion = 20,000000
Drive = 0,250000
Middle = 0,500000
Bass = 0,500000
Treble = 0,500000
Cabinet = 10,000000
Presence = 5,000000
model = 0,000000
t_model = 1,000000
c_model = 0,000000
Inconsistency detected by ld.so: dl-open.c: 684: _dl_open: Assertion
`_dl_debug_initialize (0, args.nsid)->r_state == RT_CONSISTENT' failed!
JackEngine::XRun: client = GxAmplifier-X was not finished, state = Triggered
JackAudioDriver::ProcessGraphAsyncMaster: Process error
gian@gian-MacBookPro:~/mod/LV2/jalv-1.4.0$ Unknown error...
terminate called after throwing an instance of
'Jack::JackTemporaryException'
what():
Due to the new Retina display I've installed both KDE and GNOME. I wonder
if the problem comes from that.
Any help is appreciated.
kind regards
Gian