>Anyone remember the project mentioned on LAD before about an online
>Linux Audio tech database? Cheers.
That was me.
http://www.djcj.org
I will be working on improving the functionality soon. I have been
overloaded recently actually playing music. Makes a nice change from
wanting to but not having the correct resources. All open source of course.
Anyway, I would like to have more feedback on the way the database is
set up, displayed...
Feel free to add your name. I originally intended it to be for paid
support work but if it evolves into a more general database then I'm
happy to do my part to get it there.
I will focus on making it user configurable over the next few weeks. I
have been contemplating it for the past few days. Sometimes it's nice not
to get sucked into the net. Almost like taking a vacation :)
--
Patrick Shirkey - Boost Hardware Ltd.
For the discerning hardware connoisseur
Http://www.boosthardware.com
Http://www.djcj.org - The Linux Audio Users guide
========================================
Being on stage with the band in front of crowds shouting, "Get off! No!
We want normal music!", I think that was more like acting than anything
I've ever done.
Goldie, 8 Nov, 2002
The Scotsman
> Why not? Shouldn't these sorts of discussions be wiiiiide open, Paul?
> Or is it just a matter of it being too soon, and thus there being
> nothing to talk about?
The discussions will of course be wide open, most likely held on an email
listserv, once one gets set up. To encourage more participation, it is also
likely that the discussion will be overseen by the IASIG rather than the
MMA.
Right now the whole proposal is in the "idea" stage. There is no definite
spec. I think the ultimate aim would be to cherry pick the best of what's
out there, and try to build a standard that is both "best of breed" and
"lean and mean" -- if such a thing is possible. :-)
-----------
Ron Kuper
VP of Engineering
Cakewalk
http://www.cakewalk.com
If anyone can get to this, it would be a great idea. I might even
consider using a frequent flyer ticket for this. not sure yet.
--p
----------------------------------------------------------------------
To: various folks
cc: mma(a)midi.org
From: RonKuper(a)Cakewalk.com
Subject: FW: [mma-members] Announcement: Item #183 Unified Audio Plug-In Architecture
Hi folks,
The following announcement may be of interest to you. The initial
discussion on this proposal will be held at NAMM in about 2 weeks. Note
that this meeting is not restricted to MMA members, so I would urge anyone
who is interested and would like to attend, to please do so. Also, as far
as I know you do not need a NAMM convention badge to attend this meeting.
The meeting will be held on Sunday, January 19 from 4:30 PM - 5:45 PM at the
Anaheim Marriott. I'm not sure what the actual conference room will be; so
far it's only been designated as "MMA Meeting Room A".
I hope to see you there!
----------
Ron Kuper
Cakewalk
********************************************
MIDI MANUFACTURERS ASSOCIATION MEMBERS FORUM
********************************************
A new MMA working group has been formed for discussion of a Unified
Audio Plug-In Architecture. The proposal from Ron Kuper of Cakewalk
is attached below.
The Working Group Chair is Ron Kuper (Cakewalk). The TSB
Representative is David Miller (Microsoft).
For the moment, discussion will take place on the mma-members mailing
list. If the email traffic gets heavy, a new separate working group
mailing list will be created. A discussion session will take place
following the AGM on Sunday January 19th.
Please respond to this message if you would like to join the working group.
=====================================================
Item # 183 - Unified Audio Plug-In Architecture
Submitted by: Ron Kuper
Company: Cakewalk
The professional audio market offers a variety of audio plug-in
formats, some hardware based, some software based, all entirely
incompatible. These plug-in formats include Audio Units (Apple),
DirectX (Microsoft), DXi (Cakewalk), JACK (Linux), LADSPA (Linux),
MAS (MOTU), MFX (Cakewalk), OPT (Yamaha), ReWire (Propellerheads),
RTAS (Digidesign), TDM (Digidesign), VST (Steinberg), VSTi
(Steinberg).
While these are touted as standards, they are in fact proprietary,
and the companies responsible for their development assume a heavy
documentation and support burden. Furthermore, unlike true standards
such as MIDI, they do not actually enable interoperability between
vendors. Instead, they fragment the music software industry into
"tribes" of vendor allegiance.
The large number of competing formats means that audio plug-in
developers must either incur the high cost of developing for
multiple formats, or else take the business risk of focusing
on a single format. Host application vendors face the same
dilemma when choosing which formats to support in their applications.
We propose to develop a single audio plug-in framework, a
cross-platform standard for audio plug-ins and software synthesizers.
The key design objectives for this standard are:
- Transport neutral: can stream in memory, PCI, Ethernet, WiFi, etc.
- Low-overhead
- Adaptive to hardware
- Platform and programming-language neutral
- Compatible with existing standards, e.g., MIDI (for
parameterization and data); XMF (for serialization);
AAF (for project interchange)
- Easy to "wrap" in existing formats such as DirectX or VST
What is encouraging is that all of these standards differ only in terms of
programming interface. They all provide equivalent levels of functionality.
The goal then is to define a standard core level of functionality that will
support all of the common features among existing plug-in standards, yet be
easily encapsulated by the vendor-specific interfaces. In other words, the
goal would not be to replace TDM, DirectX, VST, etc. with a common interface,
but rather to define a low-level interface to a reusable DSP core that could
then be packaged as TDM, DirectX, VST, etc.
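(To make the "wrapping" idea concrete, here is a purely illustrative C
sketch of what such a low-level DSP-core interface might look like. None
of these names come from the proposal; a TDM, DirectX or VST shim would
hold a table like this and forward the host's callbacks into it.)

/* Purely illustrative: one possible shape for a "low-level interface
 * to a reusable DSP core". Invented names, not part of any spec. */
typedef struct dsp_core dsp_core;   /* opaque per-instance state */

typedef struct {
    dsp_core *(*create)(double sample_rate);
    void      (*destroy)(dsp_core *core);
    void      (*set_param)(dsp_core *core, int param, float value);
    /* in/out are arrays of channel buffers, nframes samples each */
    void      (*process)(dsp_core *core, const float **in, float **out,
                         unsigned nframes);
} dsp_core_api;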
-----------------------------------------------
The contents of this message are Copyright 2002
MIDI Manufacturers Association Incorporated and
not to be reproduced or distributed in any form
without express written permission.
i made a small mistake in forwarding the message about the UAPA
meeting. it was not intended for public dissemination, even though the
meeting is open to the public. my bad, as they say.
--p
> >I browsed the Kernel Source and there is only one mark_inode_dirty in
> >pipe_write (in fs/pipe.c). So we know where it is hanging...
> >
> >And in __mark_inode_dirty (in fs/inode.c) there is one
> > spin_lock(&inode_lock)
> >call, and I guess that is where the whole thing is hanging. So something
> >is holding that lock... how do I find out who is doing that? Apparently
> >the handling of inode_lock is confined to inode.c. I'll keep reading.
[Andrew Morton had suggested that the stack traces did not show problems
with stuck locks in the kernel...]
> >Maybe the pipe in question is one of the pipes that jack uses for ipc?
>
> seems *damn* likely ... sorry to just be chiming in with a useless comment!
One more (small) datapoint. Roger Larsson sent me, off the list, a couple
of small utilities (VERY nice tools!) that monitor the cpu usage of
SCHED_FIFO processes and after a timeout actually downgrade the
persistent hogs to SCHED_OTHER.
So I ran that in a terminal and, after playing around with a bunch of
jack apps, got the machine to lock up... and then, after a little bit,
suddenly, it came back to life! (You could see that the monitor had
changed the priority of the hogs to SCHED_OTHER.)
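(For reference, the downgrade half of such a watchdog is only a few lines.
This is not Roger's code, just a sketch of the sched_setscheduler() call it
presumably boils down to; the monitoring part that decides *when* a
SCHED_FIFO process is hogging the CPU is omitted, and it needs the right
privileges to touch another process.)

/* downgrade.c - force a runaway SCHED_FIFO/SCHED_RR process back to
 * SCHED_OTHER. Build: gcc -o downgrade downgrade.c */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    pid_t pid;
    int policy;
    struct sched_param sp;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid = (pid_t)atoi(argv[1]);

    policy = sched_getscheduler(pid);
    if (policy != SCHED_FIFO && policy != SCHED_RR) {
        fprintf(stderr, "pid %d is not running a realtime policy\n", (int)pid);
        return 0;
    }

    /* SCHED_OTHER requires a static priority of 0 */
    sp.sched_priority = 0;
    if (sched_setscheduler(pid, SCHED_OTHER, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("pid %d downgraded to SCHED_OTHER\n", (int)pid);
    return 0;
}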
So I guess that somehow jack has a hard-to-trigger race condition that
locks up the machine when running SCHED_FIFO.
Now I have to figure out how to trace the thing so as to determine where
the whole thing is locking. Help from the jack gurus appreciated.
-- Fernando
>>It's obvious when you consider that "VVID has no voice" can happen
>>*before* the synth decides to start the voice; not just after a voice has
>>detached from the VVID as a result of voice stealing. At that point, only
>>the value of the control that triggered "voice on" will be present; all
>>other controls have been lost. Unless the host/sender is somehow forced to
>>resend the values, the synth will have to use default values or something.
>OK... I was thinking that the initial mention of the VVID would cause its
>creation (be that implicit or explicit, though I prefer explicit I think);
>thereafter control changes would be applied to the instantiated voice (or
>the NULL voice if you've run out / declined it).
The initial mention of the VVID is the issue here; certain types of voice
events (parameter-set events) are assumed not to allocate a voice. This is
because there is no difference between a tweak on a VVID that has had its
voice stolen and a tweak intended to initialize a voice that arrives before
voice-on. We must conclude that the plugin will discard both of them. There
must be a signal to the plugin that a VVID is targeted for activation. We
have a few options:
---a voice-activation event is sent, then any initializing events, then a
voice-on event
---a voice-on event is sent, with any following events on the same timestamp
assumed to be initializers
---a voice-activation event is sent and there is no notion of voice-on; one
or more of the parameters must be changed to produce sound, but it is a
mystery to the sequencer which those are. (I don't like this because it makes
sequences not portable between different instruments)
---events sent to voiceless VVIDs are attached to a temporary voice by the
plugin, which may later use that to initialize an actual voice. This
negates the assumption that voiceless VVID events are discarded.
#2 is just an abbreviated form of #1, as I argue below. (unless you allow
the activate-to-voice_on cycle to span multiple timestamps, which seems
undesirable)
> > > > When are you supposed to do that sort of stuff? VOICE_ON is
> > > > what triggers it in a normal synth, but with this scheme, you
> > > > have to wait for some vaguely defined "all parameters
> > > > available" point.
We can precisely define initialization parameters to be all the events
sharing the same VVID and timestamp as the VOICE_ON event. This means that
the "all parameters available" point is at the same timestamp as the
VOICE_ON event, but after the last event with that timestamp.
If we want to include a VOICE_ALLOCATE event then the sequence goes:
  timestamp-X:   voice-allocate
  timestamp-X:   voice-parameter-set (considered an initializer if appropriate)
  timestamp-X:   voice-on
  timestamp-X+1: more voice-parameter-sets (same as any other parameter-set)
But this sequence can be shortened by assuming that the voice-on event at
the last position for timestamp-X is implicit:
  timestamp-X:   voice-on (signifying the same thing as voice-allocate above)
  timestamp-X:   voice-parameter-set (considered an initializer if appropriate)
                 (synth actually activates the voice here)
  timestamp-X+1: other events
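To make that ordering concrete, here is a small C sketch using an invented
event layout (XAP has no agreed API yet, so every name below is
hypothetical): the voice-on at timestamp X doubles as the activation
signal, and every parameter-set sharing timestamp X on the same VVID is
treated as an initializer.

#include <stdio.h>

enum ev_type  { EV_VOICE_ON, EV_VOICE_OFF, EV_PARAM_SET };
enum ev_param { PARAM_PITCH, PARAM_VELOCITY };

struct ev {
    unsigned      timestamp;  /* frame offset within the block */
    enum ev_type  type;
    int           vvid;       /* virtual voice id */
    enum ev_param param;      /* meaningful for EV_PARAM_SET only */
    float         value;      /* meaningful for EV_PARAM_SET only */
};

int main(void)
{
    /* Sender side, timestamp X = 100: voice-on first, then the
     * same-timestamp parameter-sets that act as initializers. */
    struct ev note[] = {
        { 100, EV_VOICE_ON,  7, PARAM_PITCH,    0.0f  },
        { 100, EV_PARAM_SET, 7, PARAM_PITCH,    60.0f },  /* initializer */
        { 100, EV_PARAM_SET, 7, PARAM_VELOCITY, 0.8f  },  /* initializer */
        { 480, EV_PARAM_SET, 7, PARAM_PITCH,    62.0f },  /* ordinary tweak */
        { 960, EV_VOICE_OFF, 7, PARAM_PITCH,    0.0f  },
    };
    unsigned i;

    for (i = 0; i < sizeof note / sizeof note[0]; i++)
        printf("t=%u type=%d vvid=%d\n",
               note[i].timestamp, note[i].type, note[i].vvid);
    return 0;
}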
---Jacob Robbins.................
>I made a post a while back defining all the XAP terminology to date. Read
>it if you haven't - it is useful :)
I was hoping something of this sort existed. It would be very helpful if you
could put the list of XAP terminology on the webpage. It would help keep
everybody on the same page when discussing. ;) And it would help people to
join the discussion without spending the 10-15 hours it takes to read
December's posts.
>VVID allocation and voice allocation are still two different issues. VVID
>is about allocating *references*, while voice allocation is about actual
>voices and/or temporary voice control storage.
I agree entirely. If each VVID = a voice then we should just call them Voice
IDs, and let the event-sender make decisions about voice reappropriation.
---jacob robbins.......................
Hi all,
Nice couple of new features with this release, a lot of code cleanups, a
bit of UI work, and general betterness all round :) And, if I'm not
mistaken, this makes jack rack the only application so far with LRDF
support. Nee ner nee ner ;-)
* proper ladcca support (pays attention to events)
* added saving/loading rack configurations. this is a bit of a hack tho as
when you open a file, it only adds the plugins to whatever's in the
current rack. in fact, the whole of the file loading/saving is hairy atm.
* added lrdf support (this is majorly phat.. categorised plugin menus :)
* proper toolbar with stock buttons
* control rows now have no central port label
* added a menu bar
* added a splash screen
* added an about box (using gnome 2)
* nice new icon and logo images, used for the splash screen, the window
icons and also a gnome 2 .desktop
* lots of code separation and cleanups and under-the-hood changes
http://pkl.net/~node/jack-rack.html
Bob
My understanding of VVIDs is that the sequencer puts one complete,
continuous note on a particular VVID. The sequencer only reuses a VVID once
it has ended any previous notes on that VVID. The sequencer can allocate a
large number of VVIDs so that it never has to make a voice stealing decision
on its end (and so we don't have to make roundtrips). This large allocation
means that the plugin should never try to allocate a significantly sized
structure for each VVID. Instead, the plugin should match VVIDs to actual
voices as incoming voice-on messages are received until all actual voices
are used. After all voices are in use, the plugin has to decide whether to
steal voices from ongoing notes or deny voice-on events. Voice stealing
decisions are properly made by the plugin. This is the case even if the
sequencer knows how many actual voices there are because the plugin has much
more intimate knowledge of the nature of the voices: their amplitude, timbre
etc.
My underlying assumptions are:
-a single object resides in each channel; be it a piano, a gong, or
whatever, there is one object in the channel
-HOWEVER, that single object may be polyphonic; the piano may be able to
sound multiple notes concurrently, the gong may be able to sound two quick
strokes in succession which overlap in their duration.
-DEFINITION: We call the facility for making ONE of those sounds a voice.
-DEFINITION: the individual voices produce finite periods of sound which we
call notes. A note is the sound that a voice makes between a Voice-On event
and a Voice-Off event (provided that the voice is not reappropriated in the
middle to make a different note)
-HOWEVER- there is no rule that a note has any pitch or velocity or any
other particular parameter; it is just that the Voice-On tells the voice to
start making sound and the Voice-Off tells the voice to stop making sound.
-ALSO HOWEVER- the entity which sends voice-on and off messages may not
directly refer to the object's voices. Instead, the event sender puts
separate notes on separate Virtual Voice IDs to indicate what it desires the
voices to do. This distinction is in place because sequencers typically
send more concurrent notes than the plugin has actual voices for AND the
plugin is better suited to decide how to allocate those scarce resources. In
other words, it is the role of the plugin to decide whether or not to steal
a voice for a new note and which voice to steal. So the sequencer sends out
notes in fantasy-land VVID notation where they never ever have to overlap,
and the plugin decides how best to play those notes using the limited number
of voices it has.
As I see it, the procedure for using a voice via a particular VVID is as
follows (note that all events mentioned are assumed to have a particular
VVID):
(1) send voice-on event at timestamp X. This indicates a note is to start.
(2) send parameter-set events also at timestamp X; these are guaranteed to
follow the voice-on event even though they have the same timestamp, because
the event ordering specifies it. These parameter-set events are to be
considered voice initializers should the plugin support such a concept;
otherwise they are the first regular events to affect this note.
(3) send parameter-set events at later times to modify the note as it
progresses.
(4) send voice-off event at a later time to end the note and free the voice.
When the plugin reads the voice-on event at timestamp X it decides whether
to allocate a voice or not. If it has an initialization routine for voice-on
events, then the plugin must read through the remaining events with
timestamp X to get initialization arguments. The plugin must delay actually
initializing the voice until it has read the other events at the same
timestamp as the voice-on event. If the plugin doesn't do any special
initialization procedures then it doesn't have to worry about this because
the events concurrent with the voice-on event can just be applied in the
same manner as later param-set events.
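A rough plugin-side sketch of that procedure in C, using the same invented
event layout as the earlier sketch (none of these names are real XAP API):
VVIDs are matched to a fixed voice pool on voice-on, events sharing the
voice-on timestamp are applied as initializers before the voice is
activated, and voice stealing is simplified to just refusing the note.

#include <stddef.h>

/* (same invented event layout as the earlier sketch) */
enum ev_type  { EV_VOICE_ON, EV_VOICE_OFF, EV_PARAM_SET };
enum ev_param { PARAM_PITCH, PARAM_VELOCITY };
struct ev {
    unsigned timestamp;
    enum ev_type type;
    int vvid;
    enum ev_param param;
    float value;
};

#define NUM_VOICES 8

struct voice { int active; int vvid; float pitch, velocity; };
static struct voice voices[NUM_VOICES];

static struct voice *voice_for_vvid(int vvid)
{
    int i;
    for (i = 0; i < NUM_VOICES; i++)
        if (voices[i].active && voices[i].vvid == vvid)
            return &voices[i];
    return NULL; /* this VVID currently has no voice */
}

/* Process one block's worth of already time-ordered events. */
static void process_events(const struct ev *evs, int n)
{
    int i = 0;
    while (i < n) {
        if (evs[i].type == EV_VOICE_ON) {
            /* Gather initializers: events on this VVID that share the
             * voice-on timestamp (assumed contiguous here). The voice
             * is not activated until all of them have been read. */
            struct voice init;
            unsigned t = evs[i].timestamp;
            int v, j = i + 1;

            init.active = 1;
            init.vvid = evs[i].vvid;
            init.pitch = 0.0f;
            init.velocity = 0.0f;
            while (j < n && evs[j].timestamp == t && evs[j].vvid == init.vvid) {
                if (evs[j].type == EV_PARAM_SET) {
                    if (evs[j].param == PARAM_PITCH)    init.pitch    = evs[j].value;
                    if (evs[j].param == PARAM_VELOCITY) init.velocity = evs[j].value;
                }
                j++;
            }
            /* Claim a free voice; a real synth would consider stealing
             * an ongoing note here instead of just giving up. */
            for (v = 0; v < NUM_VOICES; v++)
                if (!voices[v].active) { voices[v] = init; break; }
            i = j;
        } else if (evs[i].type == EV_VOICE_OFF) {
            struct voice *vp = voice_for_vvid(evs[i].vvid);
            if (vp) vp->active = 0;  /* end the note, free the voice */
            i++;
        } else {
            /* EV_PARAM_SET on an ongoing note; if the VVID has no
             * voice (stolen or never granted) the tweak is discarded. */
            struct voice *vp = voice_for_vvid(evs[i].vvid);
            if (vp && evs[i].param == PARAM_PITCH)    vp->pitch    = evs[i].value;
            if (vp && evs[i].param == PARAM_VELOCITY) vp->velocity = evs[i].value;
            i++;
        }
    }
}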
--jacob robbins:..... soundtank..........