The Generalized Music Plug-In Interface (GMPI) working group of the MIDI
Manufacturers Association (MMA) is seeking the input of music and audio
software developers, to help define the technical requirements of GMPI.
The objective of the GMPI working group is to create a unified
cross-platform music plug-in interface. It is hoped that this new
interface will provide an alternative to the multitude of plug-in interfaces that
exist today. Among the many benefits of standardization are increased
choice for customers, lower cost for music plug-in vendors and a secure
future for valuable market-enabling technology.
Like MIDI, GMPI will be license free and royalty free.
Phase 1 of the GMPI working group's effort is to determine what is required
of GMPI: What sorts of capabilities are needed to support existing products
and customers? What are the emerging new directions that must be addressed?
Phase 1 is open to any music software developer and is not limited to MMA
members. It will last a minimum of three months, to be extended if deemed
necessary by the MMA. Discussions will be held on an email reflector, with
possible meetings at major industry gatherings such as AES, NAMM and Musik
Messe.
Following the collection of requirements in Phase 1, the members of the MMA
will meet to discuss and evaluate proposals, in accordance with existing MMA
procedures for developing standards. There will be one or more periods for
public comment prior to adoption by MMA members.
If you are a developer with a serious interest in the design of this
specification, and are not currently a member of the MMA, we urge you to
consider joining. Fees are not prohibitively high even for a small
commercial developer. Your fees will pay for administration, legal fees and
marketing. Please visit http://www.midi.org for more information about
membership.
To participate, please email gmpi-request(a)freelists.org with the word
"subscribe" in the subject line. Please also provide your name, company
name (if any) and a brief description of your personal or corporate domain
of interest. We look forward to hearing from you.
Sincerely,
Ron Kuper
GMPI Working Group Chair
Hi, I've been playing a lot with bristol synth and really love it. So
much so that I've been trying to 'Jackify' it. Actually, I'm pretty
much done, but can't figure out the internal audio format. It's
interleaved floats, I think, but not normalised to [-1,1]. If any of
the developers are here could you help me out? I can hear noise, but I
need to tune the maths. TIA.
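For the record, here's the kind of conversion I'm attempting - the interleaved layout and the 32768 full scale are my guesses about bristol's internals, which is exactly the part that needs confirming:

```c
#include <stddef.h>

/* Hypothetical conversion: bristol's buffers are assumed here to be
 * interleaved floats in 16-bit integer range (+/-32768) -- a guess, not
 * confirmed by the bristol developers.  JACK wants one non-interleaved
 * float buffer per port, normalised to [-1.0, 1.0]. */
static void deinterleave_and_normalise(const float *in, float *left,
                                       float *right, size_t nframes)
{
    const float scale = 1.0f / 32768.0f;  /* assumed internal full scale */
    for (size_t i = 0; i < nframes; i++) {
        left[i]  = in[2 * i]     * scale;
        right[i] = in[2 * i + 1] * scale;
    }
}
```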
--ant
>> That said, I think Patrick is right to start thinking about this now.
Thanks.
>I think he's completely right - I'm not sure about this bank account
>thing but I do think that now is the time to be demoing, talking up and
>generally approaching people and companies about Linux music software.
>I wrote us up (and mentioned a few other apps) in the latest edition of
>Linux User - John at mstation.org has been very kind so far as well.
>Now is the right time to be talking to people and getting the
>"products" out there. If it works - why not tell people about it?
The reason I believe we need to have various bank accounts is that
we cannot afford to waste money on excessive service charges and not
everyone has access to credit cards. If we have the accounts in the
right countries then people can just donate cash.
From a professional perspective we need to show our prospective clients
that we have sound financial thinking. It's mostly a subconscious need
that consumers have. They want to know that the money they are investing
is being given to people/companies/organisations who use it. Most people
don't really care how it is used although we have the moral
justification on our side too.
This is from the Sound on Sound advertising package.
"The main target market of Sound On Sound is the professional
and semi-professional musician who is the kind of person that will have
the spending ability to purchase a large range of products from
synthesizers to samplers, mixing desks to microphones, multitracks to
monitors, effects to expanders and computer hardware and software.
They are not time wasters who do not know their profession - they are
serious and mature individuals working with a reasonable budget."
If we want to appeal to this audience we need to prove to them that they
are investing in professional audio. We need to wine them and dine them
(metaphorically). If they look into our community and say these are
just amateur geeks who have made some interesting things happen, it won't
work. If we take the initiative and lead them into our world they will
come at it from the perspective that we are professionals who have
created a very credible concept that we are proud of and want them to
enjoy using.
They will ask "What kind of cash have you invested?", and if we come back
with "Ahh, well, we don't actually have a scope on the financial side of
our open community", they are just going to look around for a while and
leave.
If we can show them that not only are we mathematics and logic wizards
but that we also have solid business sense then they are going to stick
around and see what we have to offer. A lot of them will probably invest
just to test the waters or to keep up with the play.
I want to see an advertising campaign happen that will educate and
encourage the mass of potential users to take the step. I also want to
make sure that we have covered our asses when they finally walk in
through the doors.
It's a choice between being amateur enthusiasts or professionals.
If we come across as professionals, people won't give a toss about
geekiness.
--
Patrick Shirkey - Boost Hardware Ltd.
For the discerning hardware connoisseur
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
========================================
"Um...symbol_get and symbol_put... They're
kindof like does anyone remember like get_symbol
and put_symbol I think we used to have..."
- Rusty Russell in his talk on the module subsystem
All - here are some early scribbles of a XAP spec. I took the doc that I
wrote as an overview and started formalizing it. Please read it over and
point out the places where it is missing stuff. The Table of Contents
needs a lot more meat. Once I have that, I can write a lot or a little on
each subject.
Again - EARLY SCRIBBLES :) That said, the ToC, content, organization,
structure, and spelling are all up for abuse.
Have at it
The XAP Audio Plugin API
Specification
$Id: xap_overview.txt,v 1.1 2003/01/15 10:55:50 thockin Exp $
0.0 Meta
0.1 Guilty Parties
0.2 Goals
0.3 Terminology
0.4 Conventions
1.0 Overview (introduce ideas, no details)
1.1 Controls
1.2 Events
1.3 Channels
1.4 Ports
2.0 The Descriptor
2.1 Meta-Data
2.2 State Changes
2.2.1 Create
2.2.2 Destroy
2.2.3 Activate
2.2.4 Deactivate
2.3 Channels
2.4 Port Setup
2.5 Controls and EventQueues
2.6 Processing Audio
2.7 Errorcodes
Host struct
- Threading
Control struct
- RT
- we have SYS controls
Port struct
Events
- ramping
- EVQ and Events
Channels
Ports
Instruments and Voices
We have Voice Controls
Special controls
- sys controls
- MIDI controls
Tempo and Meter
Sequencer Controls
Macros
0.1 Guilty Parties
XAP is a combined effort of many people on the linux-audio-dev
(linux-audio-dev(a)music.columbia.edu) mailing list. The discussion is
open, and anyone interested is welcome to join in.
0.2 Goals
The main goal of this project is to provide an API that is full-featured
enough to be the primary plugin system for audio creation and playback
applications, while remaining as simple, lightweight, and self-contained
as possible. The focus of this API is flexibility tempered with
simplicity.
0.3 Terminology
In order to read this document without having your head spin, you should
probably understand the following terms, first.
* Plugin:
A chunk of code, loaded or not, that implements this API (e.g. a .so
file or a running instance).
* Host
The program responsible for loading and controlling Plugins.
* Instrument/Source:
An instance of a Plugin that supports the instrument API and is used
to generate audio signals. Many Instruments will implement audio
output but not input, though they may support both and be used as an
Effect, too.
* Effect:
An instance of a Plugin that supports both audio input and output.
* Output/Sink:
An instance of a Plugin that can act as a terminator for a chain of
Plugins. Many Outputs will support audio input but not output,
though they may support both and be used as an Effect, too.
* Voice:
A playing sound within an Instrument. Instruments may have multiple
Voices, or only one Voice. A Voice may be silent but still active.
* Event:
A time-stamped notification of some change of something.
* Control:
A knob, button, slider, or virtual thing that modifies behavior of
the Plugin. Controls can be master (e.g. master volume),
per-Channel (e.g. channel pressure) or per-Voice (e.g. aftertouch).
* Port:
An audio input or output.
* EventQueue
A control input or output. Plugins may internally have as many
EventQueues as they deem necessary. The Host will ask the
Plugin for the EventQueue for each Control.
FIXME: what is the full list of things that have a queue?
Controls, Plugin(master), each Channel?
* VVID
Virtual Voice ID. Part of a system that allows sequencers and
the like to control synth Voices without having detailed
knowledge of how the synth manages Voices.
* Channel
A grouping of Controls and Ports, similar to MIDI Channels.
* Tick
The unit of musical time, used with the tempo and
meter interfaces. The unit is decided by the
maintainer of the timeline, in order to keep musical
time calculations exact as far as possible.
* Cue Point
A virtual marker on the musical time line, marking a
position to which the plugin should be able to jump
at any time, without delay. This is used by hard disk
recorders, and other plugins that may need to perform
time consuming and/or nondeterministic processing as
a result of timeline jumps.
0.4 Conventions
//FIXME: datatypes (XAP_foo) and documentation style
1.0 Overview
XAP Plugins live in shared object files. A shared object file holds
one or more plugin descriptors, accessed by index. Each descriptor holds
all the information about a single plugin - its identification,
meta-data, capabilities, controls, and access methods. The Plugin
descriptors are retrieved by an exported function in each shared object.
XAP plugins are always in one of two states - ACTIVE or IDLE. IDLE
Plugins are not capable of processing audio, and must be activated.
Plugins will spend most of their time in the ACTIVE state. After loading
Plugins, the Host instantiates them, establishes Port and EventQueue
connections, and activates them. Once ACTIVE, a Plugin expects to be run
repeatedly on small blocks of data. When audio processing is done, the
Host can deactivate the Plugin.
XAP is designed to be used in realtime scenarios. XAP plugins specify
their realtime capabilities, and Hosts can allow or disallow operations
based on that information.
All XAP audio data is processed in 32-bit floating point form. Values are
normalized between -1.0 and 1.0, with 0.0 being silence.
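As a minimal illustration of that convention (function name invented, not part of the draft), a Plugin's inner loop just works on buffers of floats in that range:

```c
#include <stddef.h>

/* Sketch: apply a gain to one mono buffer of XAP-style audio.
 * Samples are 32-bit floats, nominally in [-1.0, 1.0], 0.0 = silence. */
static void apply_gain(float *buf, size_t nframes, float gain)
{
    for (size_t i = 0; i < nframes; i++)
        buf[i] *= gain;
}
```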
1.1 Controls
XAP uses the idea of Controls as abstract carriers of Plugin parameters
and other information. Controls can represent things like knobs and
buttons, but they can also represent things like filenames, MIDI
aftertouch or channel pressure. Not only do they represent audio
parameters, but they can represent chunks of system information, such as
tempo or transport position.
As with audio hardware, knobs and other controls can be global to a Plugin.
However, XAP also allows Instrument Plugins to provide per-Voice controls.
Controls come in a few datatype flavors, and can have min/max limits and
default values, as well as hints to the host about what they are,
semantically. Hosts can use the hints to automatically connect things,
where appropriate.
Controls get their data via Events. Events can immediately set the value
of a Control, or they can establish a ramp - a target and duration for the
Plugin to change the control more smoothly.
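A possible shape for ramping, purely as a sketch - none of these structs or names exist in the draft yet:

```c
#include <stddef.h>

/* Hypothetical ramp Event: a target value and a duration.  The Plugin
 * moves the Control toward the target linearly over 'frames' sample
 * frames. */
typedef struct {
    float  target;   /* value to reach */
    size_t frames;   /* sample frames to get there */
} RampEvent;

/* Advance a ramping control by one sample frame; returns current value.
 * Dividing by the remaining frame count yields a linear ramp. */
static float ramp_step(float *value, const RampEvent *ev, size_t *done)
{
    if (*done < ev->frames) {
        *value += (ev->target - *value) / (float)(ev->frames - *done);
        (*done)++;
    }
    return *value;
}
```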
1.2 Events
Almost everything that happens during the ACTIVE state is communicated via
Events. The Host can send Events to Plugins, Plugins can send Events to
the Host, and Plugins can send Events to other Plugins (if they are so
connected).
All Events are timestamped. That means that any Control change, or any
other Event is sample-accurate. XAP hosts have a running timer which
counts sample-frames - this is what the timestamp is based on.
Events are passed to Plugins on EventQueues. This allows a Plugin to
receive any number of Events with a minimal per-Event overhead.
1.3 Channels
//FIXME:
1.4 Ports
This is an audio API, so it wouldn't be complete without some mechanism to
transport audio data. This is a Port. Each Port carries a single stream
of mono audio data.
Plugins may allow the Host to disable Ports. If a Port is disabled, the
Plugin will not read from or write to it. If a Port is not disabled, it
must be connected by the host to a valid buffer.
2.0 The Plugin Descriptor
The Plugin descriptor is a static(*) data structure provided by the Plugin
to describe what it can do. Descriptors are retrieved by calling the
xap_descriptor() function of a XAP shared object. This function is called
repeatedly with an index parameter, starting at 0 and incrementing by one
on each call. The function returns the Plugin descriptor for each index,
up to the number of Plugins in the shared object file, at which time it
returns NULL. Plugin indices are always sequential, and once a NULL is
returned, the host can assume there are no more Plugin descriptors to be
queried.
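The enumeration loop a Host would run might look like this; the descriptor layout and the stub standing in for the shared object's exported entry point are assumptions for illustration (a real Host would fetch it with dlsym()):

```c
#include <stddef.h>

typedef struct { const char *label; } XAP_Descriptor;

/* Stub standing in for a shared object's exported xap_descriptor()
 * entry point.  The name is from the text; the exact signature and
 * descriptor contents are assumptions. */
static const XAP_Descriptor *xap_descriptor(unsigned long index)
{
    static const XAP_Descriptor plugins[] = { { "gain" }, { "delay" } };
    return index < 2 ? &plugins[index] : NULL;
}

/* Walk indices from 0 until NULL, as the text describes. */
static unsigned long count_plugins(void)
{
    unsigned long i = 0;
    while (xap_descriptor(i) != NULL)
        i++;
    return i;
}
```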
(*) The descriptor for wrapper Plugins may change upon loading of a
wrapped Plugin. This is a special case, and the Host must be aware of it.
//FIXME: how?
2.1 Meta-Data
The Plugin descriptor provides several fields for Plugin meta-data. The
fields are as follows:
id_code: the vendor and product encoding
api_code: the XAP API version code of this Plugin
ver_code: the Plugin version code - used to identify Plugin data
flags: Plugin-global flags
label: a short, unique Plugin identifier
name: the user-friendly Plugin name string
version: the version string
author: the author string
copyright: the copyright string
license: the license string
url: the URL string
notes: notes about the Plugin
2.2 State Changes
As mentioned above, Plugins exist in one of two states - ACTIVE and
IDLE (well, three if you count non-existence as a state). The Plugin
descriptor holds the methods to create and destroy instances of Plugins,
and to change their state.
2.2.1 Create
A Plugin is instantiated via its descriptor's create() method. This
method receives two key pieces of information, which the Plugin will use
throughout its lifetime. The first is a pointer to the Host structure
(see below), and the second is the Host sample rate. If the Host wants to
change either of these, all Plugins must be re-created. This is where the
Plugin's internal structures can be allocated and initialized. There is
no required set of supported sample rates, but Plugins should support the
common sample rates (44100, 48000, 96000) to be generally useful. If the
Plugin does not support the specified sample rate, this method should
fail. Hosts should always check that all Plugins support the desired
sample rate.
Once created, the Plugin instance is in the IDLE state.
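A sketch of a create() that enforces the sample-rate requirement, with invented type and function names:

```c
#include <stdlib.h>

/* Placeholder instance type -- not from the draft. */
typedef struct { double sample_rate; } MyPlugin;

/* Fail (return NULL) for unsupported sample rates, as the text requires;
 * otherwise allocate and initialize the instance's internal structures. */
static MyPlugin *my_create(double sample_rate)
{
    if (sample_rate != 44100.0 && sample_rate != 48000.0 &&
        sample_rate != 96000.0)
        return NULL;              /* method fails; Host must check */
    MyPlugin *p = calloc(1, sizeof *p);
    if (p)
        p->sample_rate = sample_rate;
    return p;                     /* instance is now in the IDLE state */
}
```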
2.2.2 Destroy
Plugins are destroyed via the descriptor's destroy() method. Plugins
can only be destroyed when in the IDLE state. All Plugin-allocated
resources must be released during this method. After this method is
invoked, the Plugin handle is no longer valid. This function cannot
fail.
2.2.3 Activate
From the IDLE state, a Plugin can be changed to the ACTIVE state via the
activate() method. Passed to this method are two arguments which are
valid for the duration of the ACTIVE state - quality level and
realtime state. The quality level is an integer between 1 and 10, with 1
being lowest quality (fastest) and 10 being highest quality (slowest).
Plugins may ignore this value, or may provide less than 10 discrete
quality levels. The realtime state is a boolean value: TRUE if the
Plugin is in a realtime processing net, FALSE if it is running offline.
This method can only be called from the IDLE state. Once ACTIVE, a Plugin
may process audio.
2.2.4 Deactivate
From the ACTIVE state, a Plugin can be changed to the IDLE state via the
deactivate() method. This method can only be called from the ACTIVE
state.
2.3 Channels
//FIXME:
2.4 Port Setup
//FIXME: kinda needs Channels
Once a Plugin is loaded, the Host must connect the audio Ports. All Ports
in a Plugin must be connected or disabled. The Plugin descriptor provides
a connect_port() method, which the host must call to connect a buffer
pointer to a Port. Once connected, a Port remains connected to the
specified buffer until the Host disables it or connects it to a different
buffer. All Plugins are assumed to not use the same input buffers as
output buffers, unless the Plugin flags indicate that it safely handles
in-place processing.
Plugins may allow the Host to disable Ports, rather than connect them.
The Plugin descriptor provides an optional disable_port() method. If this
method is provided, and it returns successfully, the Host can ignore this
port. Once disabled, a Port remains disabled until the Host connects it,
at which point it becomes enabled.
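A minimal sketch of the bookkeeping a Plugin might do behind connect_port() and disable_port() - the struct layout and port count are invented:

```c
#include <stddef.h>

#define MAX_PORTS 8               /* invented; a real Plugin declares its own */

typedef struct {
    float *port_buf[MAX_PORTS];   /* NULL = disabled */
} MyPlugin;

/* Connect a Port to a buffer; it stays connected until changed. */
static int my_connect_port(MyPlugin *p, unsigned port, float *buf)
{
    if (port >= MAX_PORTS)
        return -1;
    p->port_buf[port] = buf;
    return 0;
}

/* Disable a Port; the Plugin will not read from or write to it. */
static int my_disable_port(MyPlugin *p, unsigned port)
{
    if (port >= MAX_PORTS)
        return -1;
    p->port_buf[port] = NULL;
    return 0;
}
```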
2.5 Controls and EventQueues
//FIXME: kinda needs Channels
In order to be of any use, a Plugin must provide Controls. Control
changes are delivered via Events. Events are passed on EventQueues.
Each Control has an associated EventQueue, on which Events for that
Control are delivered. In addition, there is an EventQueue for each
Channel and a master Queue for the Plugin. The Plugin can internally use
the same EventQueue for multiple targets.
The Host queries the Plugin for EventQueues via the get_input_queue()
method. In order to allow sharing of an EventQueue, the get_input_queue()
method also returns a cookie, which is stored in each Event as it is
delivered. This allows the plugin to use a simpler EventQueue scheme
internally, while still being able to sort incoming Events.
Controls may also output Events. The Host will set up output Controls
with the set_output_queue() method.
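Queue sharing with cookies could look roughly like this (all types and names hypothetical):

```c
#include <stdint.h>

typedef struct { int dummy; } EventQueue;  /* opaque placeholder */

typedef struct {
    EventQueue shared;            /* one internal queue for everything */
} MyPlugin;

/* Hand back the same internal queue for every Control, with a distinct
 * cookie per Control; the Host stores the cookie in each Event it
 * delivers, so the Plugin can sort incoming Events apart. */
static EventQueue *my_get_input_queue(MyPlugin *p, int control_index,
                                      uint32_t *cookie)
{
    *cookie = (uint32_t)control_index;
    return &p->shared;
}
```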
2.6 Processing Audio
A Plugin which has been activated and properly set up may be called upon
to process audio. This is done through the descriptor's run() method.
This method gets the current sample-frame timestamp as an argument. In
this method the Plugin is expected to examine and handle all new Events,
and to read from or write to its Ports.
This method may only be called from the ACTIVE state.
2.7 Errorcodes
All this stuff needs to be integrated somewhere...
Tempo and Meter
----
XAP uses Controls for transmitting tempo and meter information. If a
Plugin defines a TEMPO control, it can expect to receive tempo Events on
that control. The Host must define some unit of musical-time measurement (Tick),
which represents the smallest granularity the host wants to work with.
This is the basis for tempo and meter. The host publishes the current count of
Ticks/Beat via the host struct.
Control: TEMPO
Type: double
Units: ticks/sec
Range: [-inf, inf]
Events: Hosts must send a TEMPO Event at Plugin init and when tempo
changes.
Control: METER
Type: double
Units: ticks/measure
Range: [0.0, inf]
Events: Hosts must send a METER Event at Plugin init and when meter
changes. Hosts should send a METER Event periodically, such as
every measure or once per second.
Control: METERBASE
Type: double
Units: beats/whole-note
Range: [1.0, inf]
Events: Hosts must send a METERBASE Event at Plugin init and when meter
changes.
This mechanism gives Plugins the ability to be aware of tempo and meter
changes, without forcing information into plugins that don't care. A
Plugin can easily sync to various timeline events.
The Host struct also provides a mechanism to query the timestamp of the
next Beat or Bar, and to convert timestamps into the following time formats:
* Ticks
* Seconds
* SMPTE frames
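A worked example of the units involved, with invented numbers: at 480 ticks/beat and 120 beats/min, TEMPO would be 480 * 120 / 60 = 960 ticks/sec, and converting a sample-frame timestamp goes through seconds:

```c
/* Sketch (function name invented): convert a sample-frame timestamp to
 * Ticks, given the Host sample rate and the current TEMPO control value
 * in ticks/sec. */
static double frames_to_ticks(double frames, double sample_rate,
                              double tempo_ticks_per_sec)
{
    double seconds = frames / sample_rate;
    return seconds * tempo_ticks_per_sec;
}
```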
Sequencer Control
----
XAP plugins may be aware of certain sequencer events, such as transport
changes, positional jumps, and loop-points. These data are received on
Controls.
Control: POSITION
Type: double
Units: ticks
Range: [0.0, inf]
Events: Hosts must send a POSITION Event at Plugin init, when transport
starts, and when transport jumps. Hosts should send a POSITION
Event periodically, such as every beat, every measure or once per
second.
Control: TRANSPORT
Type: bool
Units: on/off
Events: Hosts must send a TRANSPORT Event at Plugin init and when
transport state changes.
Control: CUEPOINT
Type: double
Units: ticks
Range: [0.0, inf]
Events: Hosts must send a CUEPOINT Event when Cuepoints are added,
changed, or removed.
Control: SPEED
Type: double
Units: scalar
Range: [-inf, inf]
Events: Hosts must send a SPEED Event at Plugin init and when play speed
changes.
Instruments and Voices
----
XAP instruments can be either voiced or non-voiced. Non-voiced
instruments are essentially always on, and their output is controlled
purely by controls, such as the gate control of a modular synth or the
hand-distance of a theremin. Non-voiced instruments are monophonic
per-channel.
Voiced instruments are more structured. They must handle Virtual Voice IDs
(VVIDs). VVIDs are unsigned 32 bit integers, which are allocated by the
host and passed to instruments via the XAP_EV_VVID_* events. Instruments
may use VVIDs as a direct index into the host structure's VVID table,
which is an array of unsigned 32 bit integers. The data in the host VVID
table is for use by the instrument.
VVIDs must be allocated before use and de-allocated before re-use.
Instruments can maintain an internal mapping between VVIDs and actual
voices, which allows them to handle voice allocation in a purely
abstracted manner from the host. Once allocated, the host can send events
for a VVID until the VVID is de-allocated.
A VVID has two states - active and inactive. After allocation, the VVID
is inactive. Control events received while inactive can be assumed to set
the control state for the activation event. A VVID is activated via the
VOICE control, which all instruments must provide. Once in the active
state, the instrument may produce sound for the voice. A VVID is
deactivated via the VOICE control, which puts the VVID in the inactive
state. It is important to note that just because a VVID has received a
VOICE OFF event, it is not necessarily silent. It may have a long release
phase, which is dependent on the instrument.
Control: VOICE
Type: bool
Units: on/off
Events: Hosts send VOICE Events any time after a VVID has been allocated
but before it has been deallocated.
Some instruments will choose to only examine some controls at activation
time (such as velocity for MIDI-like instruments) or at deactivation time
(such as release velocity). These controls are referred to as latched.
Setting an init-latched control after activation may or may not set the
value, and may or may not have any effect - this is instrument dependent.
A VVID may be re-used. That is to say, the host can leave a VVID
allocated after deactivation, and re-activate it later. This can be used
for emulation of MIDI synths, where the VVID is akin to the MIDI pitch.
A VVID may be deallocated before the associated voice is done playing.
The instrument should continue to play the voice, as no VOICE OFF event
has been received. Likewise, a voice may end before it has received a
VOICE OFF (such as a drum hit). The VVID can change to the inactive state
and track control changes until another VOICE ON is received.
Because VVIDs and voices have no exact correlation, instrument plugins
have a great deal of control over their voice operations. An Instrument
can be mono, and cut the voice and restart for each new VOICE ON, or it
may be massively polyphonic. The host always alerts the plugin to the
state of a VVID, and to changes of that state. Once a VVID has been
deallocated, it can be re-used.
Plugins can assume that VVIDs are at least global per-Plugin. The host
will not activate the same VVID in different Channels at the same time.
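A sketch of the VVID-to-voice mapping an instrument might keep, using the host table as described - the table size, names, and allocation policy are assumptions:

```c
#include <stdint.h>

#define VVID_TABLE_SIZE 32        /* host-chosen size; an assumption */
#define NO_VOICE 0xFFFFFFFFu

/* Host-side VVID table: the instrument uses the VVID as a direct index,
 * and the data stored there is for the instrument's own use, per the
 * text. */
typedef struct {
    uint32_t vvid_table[VVID_TABLE_SIZE];
} Host;

/* Map a VVID to one of the instrument's voices, allocating on first use.
 * Voice management stays entirely inside the instrument. */
static uint32_t voice_for_vvid(Host *h, uint32_t vvid, uint32_t *next_voice)
{
    if (h->vvid_table[vvid] == NO_VOICE)
        h->vvid_table[vvid] = (*next_voice)++;
    return h->vvid_table[vvid];
}
```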
Hey,
I just got a new box and two hammerfall 9636 cards for a project at work.
The motherboard is a KT4 Ultra that has some onboard audio chip I don't
care about but can't turn off either in the bios.
I've put in the first hammerfall card, got alsa 0.9.0 rc7 from the
freshrpms site (it's a redhat box), and started configuring things.
I've got the feeling I'm missing some things, so here are some simple
questions.
a) It looks like the Hammerfall driver doesn't have a mixer interface, is
this correct ?
Here's an ls in the relevant dir:
[root@framboos asound]# ls card0/
id pcm0c pcm0p rme9652
b) It looks like the onboard audio chip is controlled by an OSS driver, it
doesn't show up in the alsa drivers either, which is fine by me, since I'm
not going to use it. Is there any problem with OSS modules being loaded
at the same time as ALSA modules ?
c) I bought the card so I could record optical S/PDIF. The manual says I
need to tell the card that I want the ADAT1 Input source to be Optical
S/PDIF.
Right now I get, in /proc/asound/card0/rme9652:
[root@framboos card0]# cat rme9652
RME Digi9636 (Rev 1.5) (Card #1)
Buffers: capture cdc00000 playback cda00000
IRQ: 5 Registers bus: 0xde000000 VM: 0xd088a000
Control register: 44008
Latency: 1024 samples (2 periods of 4096 bytes)
Hardware pointer (frames): 0
Passthru: no
Clock mode: autosync
Pref. sync source: ADAT1
ADAT1 Input source: ADAT1 optical
IEC958 input: Coaxial
IEC958 output: Coaxial only
IEC958 quality: Consumer
IEC958 emphasis: off
IEC958 Dolby: off
IEC958 sample rate: error flag set
ADAT Sample rate: 44100Hz
ADAT1: No Lock
ADAT2: No Lock
ADAT3: No Lock
Timecode signal: no
Punch Status:
1: off 2: off 3: off 4: off 5: off 6: off 7: off 8: off
9: off 10: off 11: off 12: off 13: off 14: off 15: off 16: off
17: off 18: off
Which I read as the Input source being ADAT optical instead of S/PDIF.
How do I set it to S/PDIF ?
d) the card came with a sub-D-connector that connects to the card's 15 pin
port, which branches off two RCA jacks. I don't suppose these RCA jacks
provide an analogue output by any chance on which I can monitor for sound
?
I'm sure I'll bug you with more questions later on, but these are the most
pressing ones at this point.
Any help is greatly appreciated.
Thomas
--
The Dave/Dina Project : future TV today ! - http://davedina.apestaart.org/
<-*- thomas (dot) apestaart (dot) org -*->
Oh, baby, give me one more chance
<-*- thomas (at) apestaart (dot) org -*->
URGent, the best radio on the Internet - 24/7 ! - http://urgent.rug.ac.be/
Think this needs to go to LAD too...
--- Steve Harris <S.W.Harris(a)ecs.soton.ac.uk> wrote:
> I think there was one more too, but I can't think what it was...
I remember one about sidechains or something. I don't know what a sidechain is,
though, I just recall seeing it somewhere :)
> I agree AUDIO_RATE_CONTROL should be renamed.
>
> There was a suggestion on ardour-dev that a hint to say whether control outs
> were supposed to be informative or a source of control data might help,
> but I'm not sure about it.
Not sure what 'informative' means here... what information do we get if we
ignore the control data on the output?
> Does someone want to reword these in a more meaningful way? If not I'l do
> it, then you'l be sorry ;).
I'll have a go :)
(The LADSPA_IS_* things will need to be added too)
/* Hint MOMENTARY indicates that a control should behave like a
momentary switch, such as a reset or sync control. LADSPA_HINT_MOMENTARY
may only be used in combination with LADSPA_HINT_TOGGLED. */
#define LADSPA_HINT_MOMENTARY 0x40
/* Hint RANDOMISABLE indicates that it's meaningful to randomise the port
if the user hits a button. This is useful for the steps of control
sequencers, reverbs, and just about anything that's complex. A control
with this hint should not result in anything too surprising happening to
the user (eg. sudden +100dB gain would be unpleasant). */
#define LADSPA_HINT_RANDOMISABLE 0x80
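The LADSPA_IS_* test macros mentioned above might look like this, following the pattern of the existing ones in ladspa.h (hint values repeated from the proposal; the macro definitions themselves are a sketch, not yet in any header):

```c
/* Proposed hint bits, repeated from above. */
#define LADSPA_HINT_MOMENTARY    0x40
#define LADSPA_HINT_RANDOMISABLE 0x80

/* Matching test macros in the style of the existing LADSPA_IS_HINT_*
 * macros in ladspa.h. */
#define LADSPA_IS_HINT_MOMENTARY(x)    (((x) & LADSPA_HINT_MOMENTARY) != 0)
#define LADSPA_IS_HINT_RANDOMISABLE(x) (((x) & LADSPA_HINT_RANDOMISABLE) != 0)
```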
/* Plugin Ports:
Plugins have `ports' that are inputs or outputs for audio or
data. Ports can communicate arrays of LADSPA_Data (for audio
or continuous control inputs/outputs) or single LADSPA_Data values
(for control input/outputs). This information is encapsulated in the
LADSPA_PortDescriptor type which is assembled by ORing individual
properties together.
Note that a port must be an input or an output port but not both
and that a port must be one of either control, audio or continuous. */
[...]
/* Property LADSPA_PORT_CONTINUOUS_CONTROL indicates that the port is
a control port with data supplied at audio rate. */
#define LADSPA_PORT_CONTINUOUS_CONTROL 0x10
-
Mike
>
> - Steve
>
> > On Wed, 15 Jan 2003 18:13:35 +0000
> > Steve Harris <S.W.Harris(a)ecs.soton.ac.uk> wrote:
> >
> > > There have been a few suggestions recently, I'l try to summarise them
> > > for comment.
> > >
> > > MOMENTARY. A hint to suggest that a control should behave like a
> > > momentary switch, eg. on for as long as the user holds down the
> > > key/mouse button/whatever. Useful for reset or sync controls for
> > > example. Would be useful in the DJ flanger. Only applies to TOGGLED
> > > controls.
> > >
> > > AUDIO_RATE_CONTROL. Hints than an audio control should/could be
> > > controlled by a high time res. slider or control data, but shouldn't
> > > be connected to the next audio signal by default. I can't think of any
> > > simple examples off hand, but combined with MOMENTARY it could be used
> > > for sample accurate tempo tapping.
> > >
> > > RANDOMISABLE. Hints that its useful/meaningful to randomise the port
> > > if the user hits a button. This is useful for the steps of control
> > > sequencers, reverbs, and just about anything that's complex. Allows
> > > you to specify which controls can be randomised without anything too
> > > supprising happening to the user (eg. sudden +100dB gain would be
> > > unpleasent).
I have been working on an audio processing/synthesis application that
should have the power of Reason and Buzz plus the ability to be used
effectively in live performance. Some of you may have read my earlier
emails on this subject. I call it Voltage.
The main features that I want but are not available in other software
are:
- Linux and Open-Source
- the ability to route and process sequence data (MIDI like data). This
would allow very powerful chordization and arpeggiation.
- the ability to create sequence loops in a flexible way (like Reason
Matrix except without the limitations). I'd also like to be able to
record/modify them live without stopping the playback.
- the ability to control things in complex ways using scripting of some
sort (maybe as a machine that is scripted to create certain sequence
events)
What I am looking for is someone to help me (an inexperienced software
designer) design this program. That person doesn't necessarily need to
help code the program. I've decided I need help after multiple failed
attempts. I think my current ideas are closer to a working design than
past ones, but I don't want to code it and then find out it doesn't work
(like I did last time round).
If you are willing to help or have any tips or books I should read, etc.
please email me.
Thanks much.
-Arthur
--
Arthur Peters <amp(a)singingwizard.org>
Hi!
On Tue, Feb 25, 2003 at 07:48:11PM +0200, Kai Vehmanen wrote:
> Date: Tue, 25 Feb 2003 12:20:22 -0500
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Reply-To: linux-audio-dev(a)music.columbia.edu
> To: linux-audio-dev(a)music.columbia.edu
> Subject: Re: [linux-audio-dev] Fwd: CSL Motivation
>
> >There are discussions on kde-multimedia about
> >the future of Linux/Unix multimedia (especially sound).
> >This is one of the most interesting messages.
>
> CSL is proposed primarily as a wrapper layer around existing APIs. as
> such, it seems to me to have no particular merits over PortAudio,
> which has the distinct advantages of (1) existing
CSL also "exists".
> (2) working on many platforms already and
You're right about that one: CSL is not as complete as PortAudio with
respect to portability.
> (3) using well-developed abstractions.
I do not believe that something is ever a "well-developed abstraction" by
itself. Something is always a "well-developed abstraction" from something,
designed to achieve a certain purpose. From the PortAudio homepage:
| PortAudio is intended to promote the exchange of audio synthesis software
| between developers on different platforms, and was recently selected as the
| audio component of a larger PortMusic project that includes MIDI and sound
| file support.
This clearly states the purpose: if you want to write audio synthesis software,
then you should use PortAudio. Then, I assume, the abstraction is
well-developed. However, it does not state:
"... is intended to play sound samples with sound servers easily. Or: ... is
intended to port existing applications easily. Or: ... is intended to let
the application choose its programming model freely."
No. PortAudio makes a lot of choices for the software developer, and thus
provides an easy abstraction. This will mean, however, that actually porting
software to PortAudio will probably be hard (compared to CSL), whereas
writing new software for PortAudio might be convenient, _if_ the software
falls in the scope of what the abstraction was made for.
> CSL was
> written as if PortAudio doesn't exist. I don't know if this a NIH
> attitude, or something else, but I see little reason not use consider
> PortAudio as *the* CSL, and by corollary, little reason to develop Yet
> Another Wrapper API.
Well, I gave you some. The paper gives some more. Basically, CSL is intended
for porting _most_ free software, whereas PortAudio is intended for portable
synthesis software.
I think PortAudio would benefit in _supporting_ CSL, rather than aRts for
instance, because CSL is more generic; once new sound servers (like MAS)
come up, you need not patch PortAudio all the time, but just one place: a
CSL driver. The same is valid for other meta-frameworks like SDL.
> the only reason i was happy writing JACK was
> precisely because its not another wrapper API - it specifically
> removes 90% of the API present in ALSA, OSS and other similar HAL-type
> APIs.
I am glad you did write JACK, although back then I thought it was just another
try to redo aRts (and we had some heated discussions back then), because some
people seem to like it. If some people will like CSL, why not?
If you added CSL support to JACK right now, you would never need to bother
with any of the "sound server guys" like me again, because you could always
say: "support CSL in your sound server thing, and then JACK will support your
sound server".
On the other hand, if you added JACK support to CSL, you could also mix the
output of all of these "sound servers" into JACK, without endangering your
latency properties.
Cu... Stefan
--
-* Stefan Westerfeld, stefan(a)space.twc.de (PGP!), Hamburg/Germany
KDE Developer, project infos at http://space.twc.de/~stefan/kde *-
Hallo,
with the LAD meeting getting closer, I'm getting a bit curious about
what the plans are for the open "Linux Sound Night" on 15.3. Will we
hear some of you guys perform, with Paul recording it?
ciao
--
Frank Barknecht _ ______footils.org__