The Generalized Music Plug-In Interface (GMPI) working group of the MIDI
Manufacturer's Association (MMA) is seeking the input of music and audio
software developers, to help define the technical requirements of GMPI.
The objective of the GMPI working group is to create a unified
cross-platform music plug-in interface. It is hoped that this new interface
will provide an alternative to the multitude of plug-in interfaces that
exist today. Among the many benefits of standardization are increased
choice for customers, lower cost for music plug-in vendors and a secure
future for valuable market-enabling technology.
Like MIDI, GMPI will be license free and royalty free.
Phase 1 of the GMPI working group's effort is to determine what is required
of GMPI: What sorts of capabilities are needed to support existing products
and customers? What are the emerging new directions that must be addressed?
Phase 1 is open to any music software developer and is not limited to MMA
members. It will last a minimum of three months, to be extended if deemed
necessary by the MMA. Discussions will be held on an email reflector, with
possible meetings at major industry gatherings such as AES, NAMM and Musikmesse.
Following the collection of requirements in Phase 1, the members of the MMA
will meet to discuss and evaluate proposals, in accordance with existing MMA
procedures for developing standards. There will be one or more periods for
public comment prior to adoption by MMA members.
If you are a developer with a serious interest in the design of this
specification, and are not currently a member of the MMA, we urge you to
consider joining. Fees are not prohibitively high even for a small
commercial developer. Your fees will pay for administration, legal fees and
marketing. Please visit http://www.midi.org for more information about membership.
To participate, please email gmpi-request(a)freelists.org with the word
"subscribe" in the subject line. Please also provide your name, company
name (if any) and a brief description of your personal or corporate domain
of interest. We look forward to hearing from you.
GMPI Working Group Chair
I'm currently embarking on a project to make an interface between Q, a
functional programming language
(http://www.musikwissenschaft.uni-mainz.de/~ag/q/), and SuperCollider. I
think the OSC interface will be fairly straightforward to do, but I
haven't been able to find any documentation (besides the sc sources,
which I haven't grokked yet ;-) on the format of the synth definition
file. Does anyone here know more about this?
Many thanks in advance,
Dr. Albert Gräf
Email: Dr.Graef(a)t-online.de, ag(a)muwiinfa.geschichte.uni-mainz.de
Hi, I've been playing a lot with bristol synth and really love it. So
much so that I've been trying to 'Jackify' it. Actually, I'm pretty
much done, but can't figure out the internal audio format. It's
interleaved floats, I think, but not normalised to [-1,1]. If any of
the developers are here could you help me out? I can hear noise, but I
need to tune the maths. TIA.
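For what it's worth, once the scale factor is known, converting such buffers for JACK is a small loop. A minimal sketch in C, assuming stereo interleaved floats; the scale factor (guessing a 16-bit-style range) and the function name are made up:

```c
#include <stddef.h>

/* Hypothetical: deinterleave a stereo buffer of non-normalised floats
 * into two JACK-style mono buffers, scaling into [-1, 1].
 * BRISTOL_SCALE is a guess -- e.g. 1/32768 if the engine happens to
 * work in a 16-bit-style range. */
#define BRISTOL_SCALE (1.0f / 32768.0f)

void deinterleave_scale(const float *in, float *left, float *right,
                        size_t nframes)
{
    for (size_t i = 0; i < nframes; i++) {
        left[i]  = in[2 * i]     * BRISTOL_SCALE;
        right[i] = in[2 * i + 1] * BRISTOL_SCALE;
    }
}
```

If the output is noisy but recognisable, the scale factor is usually the only thing left to tune.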
Here is a problem that is probably already solved in another context,
so I would like to know some opinions on it:
I am trying to implement a (hopefully :-) ) simple general-purpose event
system. The actual application will probably be something like a network of
modules that can do arbitrary filtering, generation and manipulation of
midi events in real time.
As different kinds of external event sources are probably involved (several
MIDI ports, maybe a joystick device and of course a GUI), how would one
efficiently organize the delegation of events passed between the modules,
so that everything is still thread-safe?
The three main ideas I can currently think of are:
A: don't do it at all; that is, everything is implemented as simple
subject/observer patterns, so that the communication is a pure function
call. Mutexes et al. would have to be managed by each individual plugin.
B: use a global, mutexed event queue. This could be a priority queue for
time-stamped events, or a simple FIFO.
C: use local queues. As above, but for each individual module.
Each of the above approaches seems to have its advantages and
disadvantages. E.g. if queues are used, this would, as far as I can judge,
make it easy to have feedback cycles.
OTOH this would probably introduce some overhead which approach A
wouldn't have. C would probably involve a single thread for each module,
or a global "clock" thread that periodically calls a "process_queue"
method on each module.
Is there a general "best" way to do this?
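To make option B concrete, here is a minimal sketch of a global mutex-protected FIFO in C. The event layout and names are invented; note also that a real-time audio thread would more likely want a lock-free ring buffer, since pthread_cond_wait() can block:

```c
#include <stddef.h>
#include <pthread.h>

/* Sketch of approach B: one global, mutex-protected FIFO of
 * time-stamped events.  The event payload is made up. */
typedef struct event {
    unsigned long timestamp;   /* e.g. frame time of the event */
    unsigned char data[3];     /* raw MIDI bytes */
    struct event *next;
} event_t;

typedef struct {
    event_t *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t  ready;
} event_queue_t;

void queue_init(event_queue_t *q)
{
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->ready, NULL);
}

/* Producer side: any source (MIDI port, joystick, GUI) pushes here. */
void queue_push(event_queue_t *q, event_t *ev)
{
    ev->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = ev;
    else
        q->head = ev;
    q->tail = ev;
    pthread_cond_signal(&q->ready);
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: blocks until an event is available. */
event_t *queue_pop(event_queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL)
        pthread_cond_wait(&q->ready, &q->lock);
    event_t *ev = q->head;
    q->head = ev->next;
    if (q->head == NULL)
        q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return ev;
}
```

The blocking pop is fine for worker threads, but inside an audio callback it risks priority inversion, which is one argument for per-module queues (approach C) with non-blocking polling.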
On linux, which is faster, pipe, FIFO, or socket? What about shared
memory - is it faster, and if so is it faster enough to warrant the
extra programming overhead?
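One way to answer this empirically is a micro-benchmark. A rough sketch for the pipe case; swapping in socketpair() or a mkfifo()-created FIFO gives comparable numbers for the other mechanisms. Absolute values are machine-dependent, only the relative cost matters:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>

/* Push n small messages through a pipe and return the average cost
 * per message in microseconds (or -1.0 on error).  The messages fit
 * well inside the pipe buffer, so each write can be read back
 * immediately without blocking. */
double bench_pipe(int n)
{
    int fds[2];
    char buf[64];
    struct timeval t0, t1;

    if (pipe(fds) < 0)
        return -1.0;
    memset(buf, 0, sizeof buf);

    gettimeofday(&t0, NULL);
    for (int i = 0; i < n; i++) {
        if (write(fds[1], buf, sizeof buf) != (ssize_t)sizeof buf)
            return -1.0;
        if (read(fds[0], buf, sizeof buf) != (ssize_t)sizeof buf)
            return -1.0;
    }
    gettimeofday(&t1, NULL);

    close(fds[0]);
    close(fds[1]);
    return ((t1.tv_sec - t0.tv_sec) * 1e6
            + (t1.tv_usec - t0.tv_usec)) / n;
}
```

Shared memory avoids the per-message syscall entirely, which is where its advantage comes from, at the price of doing your own synchronisation.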
Hans Fugal | De gustibus non disputandum est.
http://hans.fugal.net/ | Debian, vim, mutt, ruby, text, gpg
http://gdmxml.fugal.net/ | WindowMaker, gaim, UTF-8, RISC, JS Bach
GnuPG Fingerprint: 6940 87C5 6610 567F 1E95 CB5E FC98 E8CD E0AA D460
Sort of off-topic, except that I want to do it on Linux and it
does have to do with audio :)
I've been wanting to tinker a bit with analog audio circuit
design. I've built a few boxes from schematics but I've
never actually designed a circuit.
What I'd REALLY like is some GUI software in which I can construct
a circuit diagram and then feed in a test audio signal and
through some DSP magic, hear (more or less) what the result would sound
like before I start using my big clunky hands to ruin physical objects :)
Does such a thing exist? Or some combination of software that would
do the above? I've been looking at some SPICE resources online
but I can't tell if you can feed an arbitrary digitized audio signal into
a SPICE simulation.
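For what it's worth, most SPICE variants accept arbitrary waveforms as a PWL (piece-wise linear) voltage source, so one approach is to render your audio samples into a PWL line of a netlist. A sketch in C; the node names ("in", ground) and the idea of writing one netlist line are assumptions about how you would wire it into your deck:

```c
#include <stdio.h>

/* Write a block of audio samples as a SPICE PWL voltage source line,
 * e.g.  Vin in 0 PWL(0s 0V 2.26e-05s 0.5V ...)
 * Node names and the source name "Vin" are placeholders.
 * Returns 0 on success, -1 on error. */
int write_pwl(const char *path, const float *samples, size_t n,
              double rate)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "Vin in 0 PWL(");
    for (size_t i = 0; i < n; i++)
        fprintf(f, "%s%.9gs %.6gV", i ? " " : "", i / rate, samples[i]);
    fprintf(f, ")\n");
    fclose(f);
    return 0;
}
```

For long signals the netlist gets huge, so this is more practical for short test clips than whole songs.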
Look! Up in the sky! It's PURPLE KABUKI WARRIOR BOOMERANG!
(random hero from isometric.spaceninja.com)
I'm a little green regarding the time-measurement facilities available in Linux.
In my host, I would like to be able to measure how much time each
individual LADSPA plugin takes. Assuming I do realtime work with the
smallest buffer sizes my system can handle, what options are available
to me to measure time on this small scale? I know that the typical
calls such as times() only have a resolution of one timer tick (10 ms),
which is too coarse for this purpose. Is there any way to do this? Is there a
high-res time howto?
Has anyone done any low-latency audio testing with the new native pthread
implementation (comes with rh9.0 at least)...?
I was just reading through  and noticed the following (in section 8):
"Realtime support is mostly missing from the library implementation. The
system calls to select scheduling parameters are available but they have
no effects. The reason for this is that large parts of the kernel do not
follow the rules for realtime scheduling. Waking one of the threads
waiting for a futex is not done by looking at the priorities of the [...]"
If I've understood right, SCHED_FIFO semantics do not have any meaning
between threads of one process if NPTL is used! Hopefully I've
misunderstood, as otherwise this would cause quite a lot of problems (GUI
and disk-I/O threads could freely block audio processing even when using
SCHED_FIFO). :(
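One way to find out on a given system is to request SCHED_FIFO for a thread and then read the scheduling parameters back. A hedged sketch (needs root or suitable rtprio limits; the function name and the 0/-1 return convention are mine):

```c
#include <pthread.h>
#include <sched.h>
#include <string.h>

/* Ask for SCHED_FIFO on the calling thread and verify that it
 * actually took effect.  Returns 0 if the policy is really active,
 * -1 on failure (e.g. insufficient privileges, or a library that
 *  silently ignores the request). */
int set_realtime(int priority)
{
    struct sched_param p;
    memset(&p, 0, sizeof p);
    p.sched_priority = priority;

    if (pthread_setschedparam(pthread_self(), SCHED_FIFO, &p) != 0)
        return -1;

    /* Read the policy back: if the implementation ignored it,
     * this is where we would notice. */
    int policy;
    if (pthread_getschedparam(pthread_self(), &policy, &p) != 0)
        return -1;
    return (policy == SCHED_FIFO && p.sched_priority == priority) ? 0 : -1;
}
```

Even if the call reports success, the quoted text suggests the wake-up order may still not honour priorities, so an empirical test (a low-priority busy thread vs. a SCHED_FIFO audio thread) is the only real proof.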
Hi Steve, hi others!
I'm just adding lrdf support to glame, but apart from default values I don't
see more information than in the ladspa descriptor. Can we agree on some
more metatags? I'd like to have
- a Category (or multiple ones?)
- short description of ports and the plugin itself
- URI to the documentation of the plugin
Am I right that getting such information would be done via the lrdf call
lrdf_get_setting_metadata()? So we need to define element labels for the
above. I'd suggest