I would like to announce that I've written a small
command-line utility to store/restore all current JACK
and/or ALSA (MIDI) connections to/from an XML file.
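In its simplest form it works like this (the README in the tarball
documents the full set of options):

    aj-snapshot mysetup.xml        # store all current connections
    aj-snapshot -r mysetup.xml     # restore them from the file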
I've released it under GPLv3, and you can download
the tarball at:
http://sourceforge.net/projects/aj-snapshot/
You can also clone the git repository with:
git clone git://gitorious.org/aj-snapshot/aj-snapshot.git
Bugs can be reported on SourceForge, and I'll be around for any
questions on this list.
Since I'm not an experienced developer, I would appreciate
feedback on the actual code, if anyone cares to have a quick look...
Greetings,
Lieven Moors
Incidentally, there is an excellent app called jm2cv, written by
Peter Nelson, that does a good job of "converting" CV data to JACK
MIDI (and vice versa). (See the non-mixer webpage for details.)
So if your particular app of choice has MIDI out for automation, you
can direct it to a CV-in port for, as an example, fader control in
non-mixer.
Just start jm2cv, and connect ports in and out of it in your favourite patchbay.
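For example, from the command line (the port names here are only
illustrative, check what jm2cv actually registers):

    jack_connect yourapp:midi_out jm2cv:midi_in
    jack_connect jm2cv:cv_out non-mixer:cv_in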
Simple stuff.
Alex.
--
www.openoctave.org
midi-subscribe(a)openoctave.org
development-subscribe(a)openoctave.org
(forgot to copy this to LAD)
On Wed, Mar 24, 2010 at 06:45:00PM -0400, Paul Davis wrote:
> On Wed, Mar 24, 2010 at 5:59 PM, <fons(a)kokkinizita.net> wrote:
>
> > When connected via a loopback on a HW interface
> > I expected the worst case to be events quantised
> > to Jack period (256 frames). Actually it's 10 times
>
> what bridge were you using between JACK and ALSA?
It was -X raw. I just repeated the test with -X seq
and that provides a completely different picture.
Jitter is +/- 2 frames, with occasional outliers
at around 160 frames, nothing in between.
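(For reference, that selects the MIDI driver on jackd's ALSA backend
command line, i.e. something like "jackd -d alsa -p 256 -X seq".)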
That is with a 30-track Ardour session running in
the background, DSP load 15%, CPU load 27%.
Unpatched 2.6.32 kernel (which may explain the
occasional larger error).
Not bad at all.
Ciao,
--
FA
O you, running so, what do you bring?
Both war and death!
> From: David Olofson <david(a)olofson.net>
> a simple synth for a small prototyping testbed I hacked the other day:
What I like about this concept is that very concise code provides full
control of a note's pitch. You can play it in any scale. Your synth supports
microtuning. You can 'bend' a specific note while it's playing. You can play
two voices in unison.
These features take pages of code in my MIDI synths. Microtuning is seldom
supported by soft-synths. With your model it's difficult NOT to support it.
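Something like this is all it takes when pitch is a continuous
per-voice control (a rough sketch, names invented):

    #include <math.h>

    typedef struct { float hz; } Voice;

    /* 60.0 = middle C, 60.5 = a quarter-tone up; any value is
     * valid at any time, even while the note is playing. */
    void voice_set_pitch(Voice *v, float pitch)
    {
        v->hz = 440.0f * powf(2.0f, (pitch - 69.0f) / 12.0f);
    }

Microtuning, per-note bends and unison all come down to the caller
passing a different float.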
Jeff McClintock
Nick Copeland wrote:
> [snip, because you sent your reply off-list, but I guess this should
> be sent to the list too]
If my broken English doesn't fool me, then the more I learn about CV,
the more I suspect it's a bad idea to bring it to Linux.
Fons: "Another limitation of MIDI is its handling of context, the only
way to do this is by using the channel number. There is no way to refer
to anything higher level, to say e.g. this is a control message for note
#12345 that started some time ago."
I don't know how this would be possible for CV without much effort, but
assuming it were easy to do, there would then be a need to record
all MIDI events as CV events too, right? Or rather, Linux would then only
record all events as CV events, and apps would translate them to MIDI.
I ask myself: if CV has advantages compared to MIDI, what is the
advantage for the industry in using MIDI? OK, when MIDI was established
the technology was different, e.g. RAM was slower and more expensive;
today we have serial buses that are faster than parallel buses, so
thinking about reforming MIDI, or about something new, is reasonable.
The HD protocol? RTP-MIDI? USB-MIDI? Some keyboards already have new
ports, e.g. USB.
The proprietary software on MacOS and Windows has some advantages
that Linux needs too. Now you coders say that this is true, but that
it's better to realise them in a different way, so that Linux would
gain advantages over the proprietary software without keeping the
disadvantages.
Sounds nice in theory, but in practice I don't believe that this is true.
There is fierce competition between proprietary software developers, so
why don't they use CV for their products? Because they are less gifted
than all the Linux coders?
I hope the Linux coders don't lose sight of hardware that is mainly
based on MIDI. If I buy keyboards, effect processors etc., this hardware
is compliant with industry standards. It doesn't matter whether those
standards are good or bad; I need to be able to use affordable equipment.
At the moment, Linux on my computer and on the computers of around 30
other people I know can't use hardware MIDI equipment because of MIDI
jitter. On the same machines there is less jitter under Windows, so
using Windows would solve this problem for most of them. For musicians
who don't have this jitter issue there are still other issues: e.g. if
you loop-play a passage again and again, the events sent inside Linux
and the events sent to external hardware drift apart on the timeline,
while for normal playback everything is OK.
Even though this is a PITA for me, I'll stay with Linux. But musicians
need to know which way Linux will go. Are Linux coders interested in
taking care of such issues, or do they want all musicians to buy special
Linux-compatible computers, instead of solving issues like the jitter
issue on nearly every computer? Are they interested in being compatible
with industry standards, or will they do their own thing? An answer
might be that Linux coders will do their own thing and in addition stay
compatible with industry standards. I don't think that this will be
possible, because it isn't solved now and the valid excuses right now
are time and money, so how would implementing a new standard defuse the
situation?
It seems that Rui will enable automation in Qtractor by using MIDI and
saving the automation to MIDI files; at least this is an idea he has
spoken about. Isn't this a good idea?
Having CV in addition is good, no doubt about it. My final question, the
only question I really want answered, is this: even today MIDI is
treated as an orphan by Linux; if we get CV, will there be any effort to
solve the MIDI issues with ordinary products from the industry? Or do we
need to buy special motherboards, special MIDI interfaces etc., and
still have fewer possibilities using Linux than are possible with
ordinary industry products?
We won't be dealing with the devil just by using the possibilities of
MIDI. Today Linux doesn't use the possibilities of MIDI, and I wonder
whether a Linux standard such as CV would solve any issues while the
common MIDI standard still isn't supported in a sufficient way.
I do agree that everybody I know, me included, sometimes has problems
when using MIDI hardware because of some limitations of MIDI, but OTOH
this industry standard is a blessing. Networking sequencers, sound
modules, effects, master keyboards, syncing to tape recorders, hard-disk
recorders etc. is possible, for little money, regardless of which vendor
a keyboard, an effect or a motherboard comes from. Linux is an
exception: we do have issues when using MIDI. But is it really MIDI that
is bad? I guess MIDI on Linux needs more attention.
Inside Linux most things are OK, but networking with the ordinary MIDI
equipment that musicians and audio and video studios own is still a
PITA. Would CV solve that?
2 Cents,
Ralf
> From: David Olofson <david(a)olofson.net>
> These issues seem orthogonal to me. Addressing individual notes is just a
> matter of providing some more information. You could think of it as MIDI
> using
> note pitch as an "implicit" note/voice ID. NoteOff uses pitch to "address"
> notes - and so does Poly Pressure, BTW!
Not exactly note pitch. That's a common simplification/myth.
MIDI uses 'key number'. E.g. key number 12 is *usually* tuned to C0, but
it is easily re-tuned to C1; two keys can be tuned to the same pitch yet
still be addressed independently.
It's a common shortcut to say the MIDI key number 'is the pitch'; it's
actually an index into a table of pitches. Synths can switch that tuning
table to handle other scales.
A MIDI note-on causes a synth to allocate a physical voice. That physical
voice is temporarily mapped to that MIDI-key-number so that subsequent note
control is directed to that voice. The mapping is temporary. Once the note
is done the mapping is erased. Playing the same key later will likely
allocate a different physical voice.
The MIDI-key-number is therefore an 'ID' mapping a control-source to a
physical-voice.
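A rough sketch of that model in code (all names invented):

    #include <math.h>
    #include <stdio.h>

    #define NUM_KEYS   128
    #define NUM_VOICES 8
    #define NO_VOICE   (-1)

    static float tuning[NUM_KEYS];    /* Hz per key; retune at will  */
    static int   voice_of[NUM_KEYS];  /* temporary key->voice map    */
    static int   next_voice = 0;      /* trivial voice allocator     */

    static void init_tuning(void)     /* default: 12-TET, A4 = 440   */
    {
        for (int k = 0; k < NUM_KEYS; k++) {
            tuning[k] = 440.0f * powf(2.0f, (k - 69) / 12.0f);
            voice_of[k] = NO_VOICE;
        }
    }

    static void note_on(int key, int velocity)
    {
        int v = next_voice++ % NUM_VOICES;  /* likely a different   */
        voice_of[key] = v;                  /* voice every time     */
        printf("voice %d: %.2f Hz, vel %d\n", v, tuning[key], velocity);
    }

    static void note_off(int key)
    {
        printf("voice %d: release\n", voice_of[key]);
        voice_of[key] = NO_VOICE;   /* mapping erased when done     */
    }

    int main(void)
    {
        init_tuning();
        tuning[61] = tuning[60];    /* two keys, same pitch, still  */
        note_on(60, 100);           /* independently addressable    */
        note_on(61, 100);
        note_off(60);
        note_off(61);
        return 0;
    }

The key number never determines the pitch directly; it only selects a
table entry and, while the note sounds, addresses the voice playing it.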
> Anyway, what I do in that aforementioned prototyping thing is pretty much
> what
> was once discussed for the XAP plugin API; I'm using explicit "virtual
> voice
> IDs", rather than (ab)using pitch or some other control values to keep
> track of notes.
I agree that addressing notes unambiguously regardless of pitch (or any
other arbitrary property) is the ideal. I wish more sequencers were not
locked into a narrow 'western pop music' mode of operation.
But many MIDI alternatives have been proposed without looking deeply
enough to realise that MIDI already supports very flexible note control.
MIDI's significant flaw is its grossly outdated 7-bit resolution; the
underlying voice model is sound.
> Virtual voices are used by the "sender" to define and
> address contexts, whereas the actual management of physical voices is done
> on the receiving end.
You have re-invented MIDI with different nomenclature ;-).
Best Regards,
Jeff McClintock
I finally couldn't resist anymore so I bought a TASCAM US-1641. At $299
it's hard to pass up. OK, so now the fun part. I loaded (shudder)
Windows and I've got Cubase 4 LE with the latest driver (2.00) and
firmware (1.02). I realize that there is no driver for the 1641 on
Linux. I've got a ton of programming experience (including some hard
real-time work) but I've never written a driver for Linux. I have no
idea where to start. I've read the following pages:
http://www.reactivated.net/weblog-content/20050806-reverse-0.2.txt
http://www.linux-usb.org/
http://www.lrr.in.tum.de/Par/arch/usb/usbdoc/
http://www.pps.jussieu.fr/~smimram/tascam/
I'm assuming that since ALSA is part of the kernel now I'll have to
compile my own kernel to play with things. If there's another way I'd
sure like to know about it. Any help would be greatly appreciated. If
anyone has any advice to help me get started, you can either post it here
or send it to me off-list (that will at least save everyone else from
having to wade through all of my ignorant questions ;-). I may not be
able to figure this out but at least I'll give it the old college try.
Cheers,
Jan
> "There is no way to refer to anything higher level, to say e.g. this is a
> control message for note #12345 that started some time ago" could be done
> by using SysEx.
FYI MIDI sysex does support that already....
[UNIVERSAL REAL TIME SYSTEM EXCLUSIVE]
KEY-BASED INSTRUMENT CONTROL
F0 7F <device ID> 0A 01 0n kk [nn vv] .. F7
F0 7F Universal Real Time SysEx header
<device ID> ID of target device (7F = all devices)
0A sub-ID#1 = "Key-Based Instrument Control"
01 sub-ID#2 = 01 Basic Message
0n MIDI Channel Number
kk Key number
[nn,vv] Controller Number and Value
:
F7 EOX
SOME COMMONLY-USED CONTROLLERS
CC# nn Name vv
-----------------------------------------------------------
7 07H Note Volume 00H-40H-7FH
10 0AH *Pan 00H-7FH absolute
33-63 21H-3FH LSB for Controllers 01H-1FH
71 47H Timbre/Harmonic Intensity 00H-40H-7FH
72 48H Release Time 00H-40H-7FH
73 49H Attack Time 00H-40H-7FH
74 4AH Brightness 00H-40H-7FH
75 4BH Decay Time 00H-40H-7FH
76 4CH Vibrato Rate 00H-40H-7FH
77 4DH Vibrato Depth 00H-40H-7FH
78 4EH Vibrato Delay 00H-40H-7FH
91 5BH *Reverb Send 00H-7FH absolute
93 5DH *Chorus Send 00H-7FH absolute
120 78H **Fine Tuning 00H-40H-7FH
121 79H **Coarse Tuning 00H-40H-7FH
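Packing that message is trivial, for what it's worth (a rough sketch):

    #include <stdio.h>
    #include <stddef.h>

    /* Build a Key-Based Instrument Control message exactly as laid
     * out above; buf must hold 8 + 2*npairs bytes. */
    static size_t key_control_sysex(unsigned char *buf,
                                    unsigned char device_id, /* 7F=all */
                                    unsigned char channel,   /* 0..15  */
                                    unsigned char key,       /* 0..127 */
                                    const unsigned char *nn_vv,
                                    size_t npairs)
    {
        size_t i = 0;
        buf[i++] = 0xF0;            /* SysEx start                  */
        buf[i++] = 0x7F;            /* Universal Real Time header   */
        buf[i++] = device_id;
        buf[i++] = 0x0A;            /* sub-ID#1: Key-Based Control  */
        buf[i++] = 0x01;            /* sub-ID#2: Basic Message      */
        buf[i++] = channel & 0x0F;  /* 0n                           */
        buf[i++] = key & 0x7F;      /* kk                           */
        for (size_t p = 0; p < npairs; p++) {
            buf[i++] = nn_vv[2*p]   & 0x7F;  /* controller nn       */
            buf[i++] = nn_vv[2*p+1] & 0x7F;  /* value vv            */
        }
        buf[i++] = 0xF7;            /* EOX                          */
        return i;
    }

    int main(void)
    {
        /* Set Note Volume (CC 7) of key 60 on channel 0 to 40H. */
        unsigned char pairs[] = { 0x07, 0x40 };
        unsigned char msg[16];
        size_t n = key_control_sysex(msg, 0x7F, 0, 60, pairs, 1);
        for (size_t i = 0; i < n; i++)
            printf("%02X ", msg[i]);
        printf("\n");
        return 0;
    }

So per-note volume, pan, brightness etc. are one short SysEx away.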
Best regards,
Jeff
In the spirit of "release early, release often," I am pleased to
announce the release of Composite 0.006. This release marks the
completion of the LV2 Sampler Plugin, which supports Hydrogen drum
kits.
STATUS
------
Composite is a project with a large vision. Here is the status of the
different components:
composite-gui: Alpha (i.e. "a broken version of Hydrogen")
composite_sampler (LV2): production/stable, no GUI
libTritium: Not a public API, yet.
LINKS
-----
Composite: http://gabe.is-a-geek.org/composite/
Plugin Docs: file:///home/gabriel/code/composite-planning/plugins/sampler/1
Tarball: http://gabe.is-a-geek.org/composite/releases/composite-0.006.tar.bz2
Git: http://gitorious.org/composite
     git://gitorious.org/composite/composite.git
HOW TO USE THE PLUGIN
---------------------
To use the plugin, you need the following:
* A program (host) that loads LV2 plugins.
* A MIDI controller.
* An audio output device. :-)
The following LV2 hosts are known to work with this plugin:
Ingen http://drobilla.net/blog/software/ingen/
lv2_jack_host http://drobilla.net/software/slv2/
The following is known to _not_ work:
zynjacku (Uses a different MIDI port type)
If you don't have a hardware MIDI controller, I suggest using
jack-keyboard (http://jack-keyboard.sourceforge.net/).
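For example (the plugin URI below is inferred from the docs path above;
check the installed .ttl bundle for the authoritative one):

    lv2_jack_host "http://gabe.is-a-geek.org/composite/plugins/sampler/1"

Then connect jack-keyboard's MIDI output to the host's MIDI input port.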
The first time you run the sampler, it will create a file
~/.composite/data/presets/default.xml, which will set up presets on
Bank 0 for the two default drum kits (GMkit and TR808EmulationKit).
Sending MIDI PC 0 and PC 1 will switch between the two kits. See
composite_sampler(1) for more information on setting up presets.
ACKNOWLEDGEMENTS
----------------
With this release, I would especially like to thank:
Harry Van Haaren - For help with testing. This release has much
more polish because of you.
David Robillard - For general help with Ingen and LV2. I also
stole a lot of great design ideas from Ingen.
Peace,
Gabriel M. Beddingfield