Hi there,
I have written a MIDI step sequencer that uses the JACK MIDI APIs, and
I'm just about ready to unleash it on the world.
However, I'm no expert on packaging for Linux - I use Ubuntu personally,
and can probably get an Ubuntu .deb together easily enough, but I'm
wondering what people do to get their apps packaged and supported on
other distros?
If anyone is interested - http://qtribe.sourceforge.net/
If you want help compiling, installing or using it, just drop me an email.
-Pete
Hi everyone!
(This was first posted on Linux-Audio-Users, but it probably belongs
more on this list.)
I just compiled a sound generator which can be controlled via MIDI; it uses
simple raw MIDI input and JACK as output, via bio2jack.
Now I get horrible latency, 0.2-0.5 sec (rough estimate). Where should I
start looking in my setup? I know others have run the software without problems.
Sorry for being so abstract, but I'm not allowed to mention the software. :-(
I already tried setting the internal audio buffer to 128 samples and reducing
the message queues for the tone generator and MIDI. Still there is far too
much latency. Does bio2jack have some settings for reducing latency? Does
bio2jack do samplerate conversion, and is it costly? The sound engine's
samplerate is set to 44100 or 22050 while my jackd runs at 48000.
But I could post parts of the source if necessary.
My setup is:
Debian Lenny with custom built kernel 2.6.24-rt1
JACK subversion 0.109.0
jack-commandline:
jackd --timeout 4500 -R -d alsa -d hw:1 -r 48000 -p 128 -H -M -z shaped
ALSA version 1.0.15
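(For reference: 128 frames at 48000 Hz is only about 2.7 ms per period, so
the 0.2-0.5 sec must be buffered somewhere above JACK, probably inside
bio2jack. A minimal snippet to double-check the actual period, using only
standard JACK API calls:)

#include <stdio.h>
#include <jack/jack.h>

int main(void)
{
	jack_client_t *client = jack_client_open("latency-check",
	                                         JackNullOption, NULL);
	if (client == NULL)
		return 1;

	jack_nframes_t frames = jack_get_buffer_size(client);
	jack_nframes_t rate   = jack_get_sample_rate(client);

	/* e.g. 128 frames at 48000 Hz prints ~2.67 ms */
	printf("period: %u frames @ %u Hz = %.2f ms\n",
	       (unsigned)frames, (unsigned)rate,
	       1000.0 * frames / rate);

	jack_client_close(client);
	return 0;
}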
Any idea where to optimise?
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
Hello,
First of all, I want to announce that the next version of NASPRO will
shift to LV2 as its internal model, which means that it will contain a
collection of bridges from other APIs to LV2 and vice versa.
In order to accomplish this (writing wrappers around LV2), a special
extension is needed to make the host call some function in a shared
object file to generate a new manifest.ttl-like file (possibly using
tmpfile() or similar). The host will call this function and read the
file to learn about bridged plugins.
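To make the intended flow concrete, this is roughly what I imagine on the
host side (the dlopen()-based lookup and the names are just illustrative):

#include <stdio.h>
#include <dlfcn.h>

typedef FILE *(*dyn_manifest_fn)(void);

static int read_dynamic_manifest(const char *libpath)
{
	/* load the bridge library and look up the proposed entry point */
	void *lib = dlopen(libpath, RTLD_NOW);
	if (lib == NULL)
		return -1;

	dyn_manifest_fn gen = (dyn_manifest_fn)dlsym(lib, "lv2_dyn_manifest");
	if (gen == NULL) {
		dlclose(lib);
		return -1;
	}

	/* read back the generated manifest.ttl-like data */
	FILE *manifest = gen();
	if (manifest != NULL) {
		char line[1024];
		while (fgets(line, sizeof line, manifest))
			fputs(line, stdout);	/* feed to the RDF parser instead */
		fclose(manifest);
	}

	dlclose(lib);
	return 0;
}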
Now, I wrote something (taking "inspiration" from other extensions,
etc.), but since I'm a total beginner with RDF and LV2, could someone
please check whether the following is correct?
lv2_dyn_manifest.ttl:
@prefix : <http://naspro.atheme.org/ext/dyn-manifest#> .
@prefix lv2: <http://lv2plug.in/ns/lv2core#> .
@prefix dman: <http://naspro.atheme.org/ext/dyn-manifest#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix doap: <http://usefulinc.com/ns/doap#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://naspro.atheme.org/ext/dyn-manifest#> a lv2:Specification ;
doap:license <http://usefulinc.com/doap/licenses/bsd> ;
doap:name "LV2 Dynamic Manifest" ;
rdfs:comment "" ;
doap:maintainer [
a foaf:Person ;
foaf:name "Stefano D'Angelo" ;
foaf:homepage <http://zanga.netsons.org/>
] .
:DynManifest a rdfs:Class ;
rdfs:label "Dynamic Manifest" ;
rdfs:comment """
The class which represents a dynamic generator of a manifest.ttl-like file.
""" .
lv2_dyn_manifest.h:
#ifndef LV2_DYN_MANIFEST_H_INCLUDED
#define LV2_DYN_MANIFEST_H_INCLUDED
#include <stdio.h>
/* This function shall create a temporary file containing the dynamically
* generated manifest.ttl-like file and return a FILE pointer or NULL in case
* of failure.
*
* The generated file must not implement DynManifest classes. */
FILE * lv2_dyn_manifest(void);
#endif /* LV2_DYN_MANIFEST_H_INCLUDED */
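And a minimal sketch of how a bridge could implement it, assuming the
tmpfile() approach (the plugin URI below is made up):

#include <stdio.h>
#include "lv2_dyn_manifest.h"

FILE * lv2_dyn_manifest(void)
{
	FILE *f = tmpfile();	/* removed automatically on fclose() */
	if (f == NULL)
		return NULL;

	/* write the manifest.ttl-like data describing the bridged
	 * plugins, then rewind so the host reads from the start */
	fputs("@prefix lv2: <http://lv2plug.in/ns/lv2core#> .\n"
	      "<http://example.org/some-bridged-plugin> a lv2:Plugin .\n",
	      f);
	rewind(f);
	return f;
}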
I hope this is enough, since I'm guessing that manifest.ttl can also
contain plugin-related information (the LV2 wiki suggests using
separate files, but maybe it's possible/valid nonetheless).
Then, some kind of API property could be associated with a plugin class
to indicate the original plugin API (does that need an extension or
something?).
In the end, I'm using FILE * because of file locking + the standard C
library. If anyone is against that, please say so (and why).
Thanks in advance,
Stefano
hey,
When I request a lot of parameters in one function:
static void
get_channel_setup_settings(void)
{
	pmesg(100, "midi get_channel_setup_settings()\n");
	int i;
	/* first block of parameters: requests sent back to back */
	for (i = 367; i <= 415; i++)
	{
		param_request(i);
	}
	/* second block: wait 100 ms after each request */
	for (i = 512; i <= 639; i++)
	{
		param_request(i);
		usleep(100000);
	}
}
(param_request() just sends a sysex message for the requested
parameter.)
Then I see some weirdness:
midi get_channel_setup_settings()
midi param_request()
midi nibble(367, 1)
midi sysex_delivery()
midi midi_callback() ENTERING
midi unibble(111, 2)
midi unibble(4, 0)
midi midi_callback() got param: 367, value: 4
midi param_request()
midi nibble(368, 1)
midi sysex_delivery()
midi param_request()
midi nibble(369, 1)
midi sysex_delivery()
midi param_request()
....
parameter requests only, until the last parameter:
...
midi param_request()
midi nibble(639, 1)
midi sysex_delivery()
midi set_spinbutton() <- rest of midi_callback() from above
midi midi_callback() ENTERING
midi unibble(112, 2)
midi unibble(0, 0)
midi midi_callback() got param: 368, value: 0
...
(midi_callback() catches the received answers; it's
running in another thread.)
...
midi unibble(51, 4)
midi unibble(51, 0)
midi midi_callback() got param: 563, value: 51
midi set_spinbutton()
.
Now there are two things I don't understand:
First: why is the midi_callback() thread called once
at the beginning, right after the very first parameter
request, and then never again until the very last request?
Second: why does the midi_callback() thread only receive
the parameters up to 563, when the last request was for
parameter 639? I am missing the remaining parameters! :-)
About the second weirdness: this can only be a problem on
the receiver's side (ALSA), as I gave the device I am sending
to more than enough time to answer before its "message
receive buffer" could ever fill up (see the usleeps after
every request). I would guess that some ALSA-side buffer
fills up and ALSA just drops the rest.
Possible solutions would be:
1. call the midi_callback() thread more often
2. resize the ALSA MIDI input buffer to something bigger.
How would I do either of these, preferably 1.?
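(For 2., I would guess at something like this, though I'm not sure these
are the right knobs:)

#include <alsa/asoundlib.h>

static int enlarge_input_buffers(snd_seq_t *seq)
{
	int err;

	/* local (alsa-lib side) input buffer, in bytes */
	err = snd_seq_set_input_buffer_size(seq, 65536);
	if (err < 0)
		return err;

	/* kernel-side input pool of the sequencer client, in cells */
	return snd_seq_set_client_pool_input(seq, 2000);
}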
thank you,
Jan
Hi there,
In my quest to compile and package qTribe, I thought I'd use my iBook to
see how much of a pain it will be to run it under OS X.
(I know Jack OS X doesn't support CoreMIDI port-bridging yet, so it won't
be too useful on OS X.)
I can compile it quite happily with the Qt I downloaded from Trolltech,
but when it comes to linking against the Jack Framework provided by
JackOSX 0.77 I get this:
g++ -o ../bin/qtribe qtribe.o main.o jackIO.o sequencerCore.o
stepSequence.o stepsequencerwidget.o stepsequencerwidgetbase.o
moc_qtribe.o moc_stepsequencerwidget.o moc_stepsequencerwidgetbase.o
-L/Developer/qt/lib -ljack -lqt-mt -lm -lpthread
/usr/bin/ld: /usr/local/lib/libjack.dylib load command 4 unknown cmd field
collect2: ld returned 1 exit status
make[1]: *** [../bin/qtribe] Error 1
Now, I'm guessing that JackOSX was compiled on Leopard using whatever
fancy shiz Apple bundles with 10.5, and that this is not backwards
compatible by default.
But is there any way around this issue? I can try downloading XCode 2.5
(I'm on 2.2 here), but that's 900MB of stuff just to link JACK. The
previous version of JACK I had installed (which put a dylib directly in
/usr/local/lib instead of a symlink to a Framework) links fine, but
doesn't include JACK MIDI, so it fails there.
Can I recompile Jack with my older gcc etc.? Will upgrading to XCode 2.5
definitely fix the problem? Is there a compiler flag I can use to make
the error disappear?
Any help appreciated,
-Pete
"Patrick Stinson":
>
> I know that the rule is to never block in a real time audio thread,
> but what about blocking for resources shared between real-time
> threads? In the case of a sequencing app that uses one thread per
> audio/instrument track (as most do), is it OK to share a lock between
> real time scheduled tracks?
Yes, that is OK. But beware that an RT thread running with priority x
that waits on an RT thread running with priority y will in practice
only run with priority min(x, y). This is called priority inversion;
there's probably a Wikipedia article about it. In short: if all
threads run with the exact same priority, then it's OK.
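If you can't guarantee equal priorities, a priority-inheritance mutex
limits the damage. A minimal sketch, assuming your platform supports the
POSIX mutex protocol attribute (_POSIX_THREAD_PRIO_INHERIT):

#define _XOPEN_SOURCE 700
#include <pthread.h>

static pthread_mutex_t track_lock;

/* With PTHREAD_PRIO_INHERIT, a lower-priority holder of the lock is
 * temporarily boosted to the priority of the highest-priority RT
 * thread waiting on it, bounding the inversion. */
static int init_track_lock(void)
{
	pthread_mutexattr_t attr;
	int err;

	err = pthread_mutexattr_init(&attr);
	if (err != 0)
		return err;

	err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
	if (err == 0)
		err = pthread_mutex_init(&track_lock, &attr);

	pthread_mutexattr_destroy(&attr);
	return err;
}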
> Also, I've gathered are that adding app-thread => RT-thread
> message-passing to avoid using locks while modify the application's
> basic shared data structures is useless, since the real-time thread
> will have to wait for execution of that code one way or another, given
> the overhead of acquiring the lock is negligable. This would mean that
> you should just use locks and assume the user will cope with hearing
> small overruns when adding/removing audio components. True? Not true?
>
I don't understand the question ("i have gathered are (...) a=>b
message-passing (...) while (... useless, since (...), given (...)
is negligable. (...)" ???), but it seems like others have, so
hopefully their answers are correct.
Howdy!
I am pleased to announce this new and long-overdue release of QjackCtl,
the Qt GUI for the awesome JACK Audio Connection Kit.
Release highlights are mainly about final JACK-MIDI support for the
"evil" Patchbay, new Messages file logging, and the most intriguing
application window instance uniqueness, which will make X11 desktop life
easier for everyone (i.e. no more duplicates, with the JACK server
auto-started as a candy bonus :)
You can grab it from the project source as usual:
http://qjackctl.sourceforge.net
http://sourceforge.net/projects/qjackctl
In case you need an upstream hug, you're free to visit my own forum:
http://www.rncbc.org
The change-log doesn't say much but... here it goes:
- Attempt to load Qt's own translation support and get rid of the usual
startup warning message, unless built in debug mode (translation by
Guido Scholz, while on qsynth-devel, thanks).
- Messages file logging makes its first long-overdue appearance, with
user-configurable settings in Setup/Options/Logging.
- Only one application instance is now allowed to be up and running,
with immediate but graceful termination upon startup if an already
running instance is detected, which will then see its main widget shown
and the JACK server started automatically (Qt/X11 platform only).
- Finally, full JACK MIDI support sneaks into the patchbay; socket types
now differ in Audio, MIDI and ALSA, following the very same nomenclature
as found on the Connections widget tabs.
- Sun driver support (by Jacob Meuser).
- The 'Delay window positioning at startup' option is now disabled, on
the Setup/Misc tab, when 'Minimize to system tray' is enabled.
- Cosmetic fix: Setup/Settings tab, 'Input Device' text label was using
a slightly smaller font than the rest of the application (bug#1872545,
reported by Jouni Rinne).
Cheers && Enjoy
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org
I'm just asking these out of curiosity. I know you guys love these
kinds of questions :)
I know that the rule is to never block in a real time audio thread,
but what about blocking for resources shared between real-time
threads? In the case of a sequencing app that uses one thread per
audio/instrument track (as most do), is it OK to share a lock between
real time scheduled tracks? I ran into this question after
implementing a scripting engine for our commercial audio plugin using
python, which uses a global lock to serialize access to its data
structures.
Also, I've gathered are that adding app-thread => RT-thread
message-passing to avoid using locks while modify the application's
basic shared data structures is useless, since the real-time thread
will have to wait for execution of that code one way or another, given
the overhead of acquiring the lock is negligable. This would mean that
you should just use locks and assume the user will cope with hearing
small overruns when adding/removing audio components. True? Not true?
I hope I worded these well enough. Cheers!
Let's stop this flame war for a moment and see what LV2 is missing in
order to let me kill EPAMP and live a happier life.
#1. Support for interleaved channels and non-float data
Input and output data is often found in these formats.
#2. Changing the sample rate without re-instantiating all effects.
Gapless playback when changing songs, for example, should be possible
without performing black magic.
#3. Some serious connection logic thing (all the "equal channels" thing etc.).
This needs a thousand flame wars and *deep* thinking.
#4. Support for time stretching when using non real-time audio sources.
#5. Information about the delay time introduced by the algorithm itself,
to allow syncing with video sources (for example).
#6. Some way for the host to make sense of the meaning of some
parameters and channels, to better support global settings and stuff.
#7. Global explicit initialization/finalization functions for more
exotic platforms (they wouldn't harm, so why not have them).
#8. Rules to find plugins, possibly platform-specific and outside of
the specification; possibly one compile-time valid path.
#9. Maybe more strict requirements on both hosts and plugins
(especially about thread-safety).
I see there is some indication in the core spec, but I don't know
about extensions and/or other possible concurrency issues.
#10. Something (possibly a library) to make all of these features easy
to use from the host author's POV.
Can we start discussing these issues and see whether they are already
solved, how to implement them, or how to make them better?
Stefano
Greetings AudioScience linux customers and others,
This is to inform you that hpklinux version 3.10.00 is available from our
website http://audioscience.com/internet/download/linux_drivers.htm
The major change in this release is the addition of the ASI89xx tuner
series and the removal of ASI4xxx (still supported by driver 3.08). Of
course bugs have been fixed and new minor features added; see the
release notes for details.
While I have your attention: if you are a user of our cards and have a
few moments, please reply and let me know which distro(s) and kernel
version(s) you currently support, and whether you are using or intend to
use HPI or ALSA.
Thanks and regards
--
Eliot Blennerhassett
www.audioscience.com