miniloop is a simple live looping program. It can load a number of
stereo audio loops of equal length from disk and play them in sync
with each other, sending each loop to a different pair of JACK audio
outputs. These outputs are intended to be fed into an external
software mixer, such as Ardour. For live performance, you will want
to control the mixer using a MIDI control surface.
miniloop is similar in intent to Stephen Sinclair's LoopDub. I
actually created miniloop to explore some design ideas that I had
while working on LoopDub. Given that, it is appropriate to provide a
comparison between the two programs. The most important difference is
that LoopDub uses a built-in mixer, while miniloop uses an external
mixer. This means that miniloop is more flexible, but requires a more
complex software setup. Another important difference is the user
interface, which is radically different, and, I hope, somewhat easier
to use. Finally, LoopDub has many features that miniloop lacks;
miniloop is currently quite small (~500 SLOC) and quite feature-poor,
and I intend to keep it that way.
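Since all loops share one length, keeping them in sync only needs a
single shared playhead. A minimal sketch of that idea (hypothetical,
not miniloop's actual code; fill_from_loop is a name invented here):

```c
#include <stddef.h>

/* Copy nframes samples of one channel from a loop buffer into out,
 * wrapping at loop_len. Every loop reads from the same global frame
 * counter, so all loops stay phase-locked to each other. */
void fill_from_loop(float *out, const float *loop,
                    size_t loop_len, size_t global_frame,
                    size_t nframes)
{
    for (size_t i = 0; i < nframes; i++)
        out[i] = loop[(global_frame + i) % loop_len];
}
```

Each JACK process callback would then call this once per loop channel
with the same global_frame, advancing the counter by nframes afterwards.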
Project homepage here:
http://code.google.com/p/miniloop/
Download here:
http://code.google.com/p/miniloop/downloads/detail?name=miniloop-0.0.zip
Not sure whether this is any good with respect to real-time audio.
"LatencyTOP is a Linux tool for software developers (both kernel and
userspace), aimed at identifying where system latency occurs, and what
kind of operation/action is causing the latency to happen. By identifying
this, developers can then change the code to avoid the worst latency
hiccups."
<http://www.latencytop.org/>
It's part of Intel's OSS initiative:
<http://oss.intel.com/>
Hi all,
The somewhat provocative title is not meant to start a flame war but
to spark a constructive discussion about the viability and future of
the LV2 plugin standard in the professional audio application market.
Some background:
as you probably know, Steinberg just released VST3, and developers do
not seem happy with it: it is not backwards compatible, brings few new
features, and seems less portable across platforms than VST2.4.
Users are unhappy and started a long discussion:
http://www.kvraudio.com/forum/viewtopic.php?t=204080&postdays=0&postorder=a…
The discussion was picked up on the Reaper forum too.
For anyone not familiar with Reaper, it is a very good audio/MIDI sequencer.
http://www.reaper.fm
It is a Windows app, but it runs very well under Wine, and the company
says as much on their page.
Users like Alex Stone are using it under Wine in conjunction with LinuxSampler.
The authors are Justin Frankel of Nullsoft and others. For those who
don't know the name, he is the one who wrote Winamp, Gnutella, the
Nullsoft installer, etc.
On a forum he said he has played with the idea of open-sourcing Reaper.
It is being ported to OS X and probably will get ported to Linux too, given
the very good performance it achieves on wine.
Now back to LV2:
The VST3 discussion on the Reaper forum resulted in users proposing a
new plugin standard in order to "break free" from proprietary ones, so
they are proposing to add LV2 support to Reaper.
Justin from Reaper answered the following on the forum:
-------
I looked at LV2, there's a lot of stuff which I disliked.. for example,
"ports" being for parameters and audio buffers (and presumably MIDI events),
and all having the possibility of colliding, isnt well thought out.
Also if you want to add parameters to a new revision of a plug-in, then you
have to change the URI? ick.
Or what if you want to change the I/O of a plug-in on the fly..
-Justin
---
see here for the full thread:
http://www.cockos.com/forum/showthread.php?t=17198&page=2
I have not looked at LV2 myself so I cannot judge, but I think LV2
developers should discuss these issues and concerns with Justin in
order to sort out any problems and, given the young nature of LV2,
change the specs a bit in case important design flaws are uncovered.
Reaper is rapidly building up a large user base (users are switching
away from Cakewalk Sonar and Cubase to Reaper) because the application
provides excellent performance, is easy to use, and incorporates new
features at a fast pace.
Dave Phillips seems to love the app too, as he mentioned it in Linux
Journal and posts frequently on the forum.
http://www.linuxjournal.com/node/1005911
I think the LV2 and Reaper developers should join forces, because
together it may be possible to establish a new open plugin standard
that other commercial applications will adopt too, superseding VST2.4
over time.
I am not sure whether the Reaper devs read LAD, so perhaps LV2
developers should answer on the Reaper forum too in order to sort out
the issues Justin raised.
Everyone is invited to add their own point of view, and I hope the
outcome will be a positive collaboration between LV2 and Reaper.
thanks everyone,
Benno
On Jan 22, 2008, at 2:45 PM, M-.-n wrote:
> more linux music on the go ?
> http://www.openpandora.org
> http://pandora.bluwiki.com/
definitely one to watch .. i hear a rumour that devboards are out and
about for this already .. should be good to go in march if all goes
well, and the world doesn't slide into a mass depression in the
meantime .. ;)
also, the neo1973-GTA02 version should be a nice little machine for
portable music-making, soon enough. i'm having a blast with
pulseaudio on my gta01 .. gonna be nice to have some power increase
(and new graphics) in the new version. insmod gadget-audio for the
win, w00t!
--
Jay Vaughan
Hello all, my name's Alex Stone, and Benno from LinuxSampler pointed me
in this direction to see what you guys who build stuff are up to.
Frankly, I've been amazed at the talent on display. Linux audio has
come a long way since I last peeked in some 8 or 10 years ago. I'm
currently using a programme called Reaper, under Wine on Linux, but I
am keen to see a complete native solution come to maturity. I've had
the privilege of communicating with some of you already, and as my
Linux journey is a recent one, I've been assisted and encouraged by
some great fellas who know their stuff.
Anyway, the introductions are done and the handshakes are made. I hope
I can contribute something here from a user's perspective, and as
someone fairly well versed in using audio and production programmes on
the 'other side of the fence'. I've finally seen the light, and gained
my freedom.
As a former orchestral player, I'm inclined to use programmes from a
classical composer's perspective, and as some of you know already, I've
tried to convey what it's like to write classical music with a
computer, and the challenges that go with that. The biggest of these is
usually the number of audio or MIDI ports available, and the capacity
for using a large number of instruments, often with several per section
to cover variations in articulation.
My respects and thanks to those who have helped and encouraged me so
far, and I hope I can give something back as my knowledge continues to
grow.
Alex Stone.
Hi again!
Well, Phil, you said that the new protocol wouldn't be understood by
anything. So why not take it easy first and look at what already
exists? OSC was mentioned here a couple of times; might that be an
idea?
And about the two APIs: why not? If you think about it thoroughly, I'm
sure one can come up with a protocol of which MIDI is a subset. Thus
some interaction between two apps (one using MIDI, one using the new
thing) would be possible.
At the moment we have audio and MIDI ports; add a third kind of port.
Thus if you use plain old MIDI there's no more overhead than there
should be.
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
Dear Linux Audio users,
A new jack release (0.109.0) is available:
http://sourceforge.net/project/showfiles.php?group_id=39687
Enjoy,
Pieter
(Releaser-ad-interim)
Changelog
=========
API changes:
* add jack_thread_wait API
* remove port_(un)lock functions
* add new time APIs
* add port aliases
* add new client registration callback
* add port connect callback
Backends:
* ALSA: fix for use of snd_pcm_link
* ALSA: hardware jack-midi support
* ALSA: fix for enabling big-endian 16bit format discovery
* FreeBoB: fix deallocation segfault
* FireWire: add 'firewire' backend for use with FFADO
* OSS: add support for proper triggering in OSS driver when in full
duplex mode
* ALSA: fix illegal use of ALSA API
* OSS: disable software mixing and samplerate conversions on OSS 4.x
* CoreAudio: fix sample rate management
Other
* add JACK_PROMISCUOUS_SERVER handling
* make /dev/shm the default tmpdir
* add -Z flag to cancel zombification on timeout
* add per-port total latency updates
* increment default watchdog timeout to 10sec
Fons Adriaensen wrote:
> Seriously, there are three things that I profoundly dislike in MIDI.
>
> 1. The limited precision of almost all values, 7 bits or 14 with a
> kludge (but even this kludge is not available in any standard
> way for e.g. individual note frequencies).
>
> 2. Note events are identified by their frequency.
>
> 3. The only thing that can actually refer back to a note on event
> is its corresponding 'note off' message. It's not possible to
> send a controller value that refers to a previous note-on event.
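For reference, the 14-bit "kludge" in point 1 pairs two 7-bit
controllers: CC 0-31 carry the most significant 7 bits and CC 32-63
the least significant 7 bits of one value. A minimal sketch
(cc14_combine is a name invented here, not a standard API):

```c
/* Combine the two 7-bit data bytes of a paired controller
 * (CC N = MSB, CC N+32 = LSB) into one 14-bit value, 0..16383. */
int cc14_combine(int msb, int lsb)
{
    return ((msb & 0x7f) << 7) | (lsb & 0x7f);
}
```

Even with the pairing, each half still arrives as a separate message,
which is part of what makes the mechanism a kludge.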
I'm quite sure many of us would like to see these limitations go away,
including Dave. Being able to work with MIDI hardware is reason enough
to have MIDI sequencing. That doesn't rule out having something else
additionally.
How about working out a proposal? Or better yet a draft
implementation, as we have all seen how far one gets with talking ...
I guess you have the technical and musical expertise, and maybe the
motivation, thanks to your specific interests. Once you have completed
something, perhaps someone will step up to criticise it.
--
Thorsten Wilms
Phil Rhodes wrote:
> It seems to me dangerously like you're running off down a road familiar
> to Linux - create something that's a fantastic engineering solution and
> runs at a million miles an hour on a 286, but which is completely
> pointless and unusable because it won't talk to anything.
indeed, but do you have an alternative? It's sort of a chicken-and-egg
issue. Back when jack was conceived, there were no jack clients
either, so you didn't have anything to 'talk to' (important exception:
a soundcard).
If we can come up with something 'better than MIDI', it could be
interesting, especially since these days a lot of audio material never
leaves the PC anyway. The only thing tying us to MIDI is the external
devices. And if the 'better than MIDI' can be downgraded to plain
MIDI, you have something to talk to.
Greets,
Pieter
With the release of Jack 0.109 we now have a (hopefully) stable API
for midi-over-jack. This set me considering what would be required to
modify Aeolus to use this system, and I did not like the conclusions.
Is it a good idea to insert a 30-year-old data format that mixes
real-time and general data on a single stream into a real-time audio
processing API? I don't think so.
1. Note on/off and controller events can now be 'sample accurate'.
That's nice to have. But a) they are not and never will be 'sample
accurate' if they come from a HW midi device, and b) if they are
generated in software then you can have, and for some applications you
actually want, much finer-grained timestamps and controller values.
So it's a solution that in one case is plain overkill, and in the
other is just not good enough.
2. *All* MIDI data now has to pass through the RT audio thread, even
if it's not related to any RT operations at all and, in many cases,
not even meant for the receiving application. What is the poor process
callback to do with sysex memory dumps, sample downloads, and in most
cases even program changes? The only thing it can do is get rid of
this ballast as fast as possible, dumping it in a buffer to be
processed by some lower-priority process and hoping it will not be
blocked or forced to discard data. Forcing a critical RT thread to
waste its time in this way is IMHO not good program design, not even
if the overhead can be tolerated. That sort of data should never go
there.
Some sort of solution could be to let the MIDI backend do
at least part of the filtering - it could have several
jack-midi ports corresponding to a single raw midi input
and e.g. separate note on/off, controller events, and
sysex messages. An app that wants to receive all can
still do so without any significant overhead, it just
needs a few more ports.
And once e.g. note on/off and controller updates have dedicated ports,
there is no more need to keep the MIDI format with all its
limitations.
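The backend-side filtering proposed above could start from the status
byte alone. A hypothetical sketch (nothing the jack backend actually
implements; classify_status and the class names are invented here):

```c
enum midi_class {
    MIDI_VOICE,            /* note on/off, aftertouch, pitch bend, ... */
    MIDI_CONTROLLER,       /* control change */
    MIDI_SYSEX_OR_SYSTEM   /* sysex and other system messages */
};

/* Route a raw MIDI status byte to one of several per-class jack-midi
 * ports, so bulk data like sysex dumps never enters a client's RT
 * process callback unless that client subscribes to it.
 * Assumes the byte really is a status byte (high bit set). */
enum midi_class classify_status(unsigned char status)
{
    if (status >= 0xF0)              /* 0xF0-0xFF: system/sysex */
        return MIDI_SYSEX_OR_SYSTEM;
    if ((status & 0xF0) == 0xB0)     /* 0xBn: control change */
        return MIDI_CONTROLLER;
    return MIDI_VOICE;               /* remaining channel voice messages */
}
```

The backend would then write each incoming event only to the port(s)
matching its class; a client wanting everything just connects to all
of them.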
This will be enough for some flames, so I'll stop here
:-)
--
FA
Laboratorio di Acustica ed Elettroacustica
Parma, Italia
Lascia la spina, cogli la rosa.