i'd like to remind the folks involved in the XAP discussion that it
would be really, really, really useful to get some summary
documentation on where the design process stands and what has been
accomplished thus far. if you can provide this, i will need it by 3pm
EST on thursday at the absolute latest.
also, for folks here who are not on ardour-dev, please critique this:
http://www.op.net/~pbd/brochure.pdf
--p
Hi everyone,
I sent the draft of the complete MWPP implementation guide off
to internet-drafts@ietf.org today. You can download it now from:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt
See the abstract below for details, as well as the I-D change
log. Comments are welcome. I'll turn the document around one more
time before the March 3 San Francisco cutoff date, and can incorporate
your feedback into the revision.
Writing this 77-page (!) document added a few more open issues
to draft-ietf-avt-mwpp-midi-rtp-05.txt. Next, I'll spend a few days
writing the RTP over TCP I-D, and then I'll start working through the
open issue list for draft-ietf-avt-mwpp-midi-rtp-05.txt. I expect to
submit an -06.txt in time for the March 3 deadline.
---
INTERNET-DRAFT John Lazzaro
January 15, 2003 John Wawrzynek
Expires: July 15, 2003 UC Berkeley
An Implementation Guide to the MIDI Wire Protocol Packetization (MWPP)
<draft-lazzaro-avt-mwpp-coding-guidelines-01.txt>
Abstract
This memo offers non-normative implementation guidance for the MIDI
Wire Protocol Packetization (MWPP), an RTP packetization for the
MIDI command language. In the main body of the memo, we discuss one
MWPP application in detail: an interactive, two-party, single-
stream session over unicast UDP transport that uses RTCP. In the
Appendices, we discuss specialized implementation issues: MWPP
without RTCP, MWPP with TCP, multi-stream sessions, multi-party
sessions, and content streaming.
---
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
I'm trying to get this working between two LAN-connected machines
running SuSE 8.1. The client and server applications appear to compile
and run OK on the two machines, but I'm not getting any audio across
the network. My speakers and mic are working OK on both machines, as
I've tested them with 'alsamixer'. I'm not getting any errors, so I
don't know where the problem is. I run the server by running
TCP_talk_srvr, and the client by running TCP_talk_clnt xxx.xxx.xxx.xxx
(where xxx.xxx.xxx.xxx is the IP address of the server machine). I'd
greatly appreciate any help. Alternatively, I'd be glad to try any
other application that will give me two-way audio/voice across a LAN.
R.C.
Be gone foul bugs!
* fixed control output ports segfault
* fixed desktop installation prefix stuff
* fixed bug dealing with duplicate plugin ids
* now quits when you close the window
* added a "New" option to clear the rack
* rack is now automatically cleared when you load a file
http://pkl.net/~node/jack-rack.html
Bob
The main difficulty I see with the VVID system is how to initialize
parameters on a new voice before the voice is activated with a VoiceOn
event.
The MIDI standard deals with this by privileging two particular
parameters, pitch and velocity, as voice-on initializers. These
parameters are carried inside the MIDI Note On message itself, so the
instrument always receives them exactly when they are needed: right
when a voice is being activated.
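For reference, a Note On is only three bytes, with both initializers
baked into the message itself:

    /* MIDI Note On: status byte (0x9n for channel n), then the two
       privileged parameters, pitch and velocity. */
    unsigned char note_on[3] = {
        0x90,  /* Note On, channel 0 */
        60,    /* pitch: middle C    */
        100    /* velocity           */
    };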
It has been the general feeling in LAD discussions that XAP should not
put two particular parameters into the Voice On event but should
instead allow any parameter-set events to be initializers. How can
this be achieved?
In XAP, it is not sufficient to treat all parameter-set events
received before a Voice On message as initializers. Consider that,
before an actual voice is attached to a VVID, all events sent to that
VVID are discarded. For instance, if a voice is activated and then
reappropriated halfway through a note, before the sequencer has
stopped using the VVID, the remainder of the events that the sequencer
sends on the now voiceless VVID are simply discarded.
Furthermore, any system that requires initializers to be timestamped
before the voice-on event forces a delay of at least one timestamp
into voice activation, which is undesirable.
We could instead put the voice-on event at the same timestamp as the
initializers and require that the plugin read ALL events for a particular
time position before discarding ANY events, but this makes it more complex
for plugins to read their events. If the plugin found a voice-on event at
the end of a queue section for a particular timestamp then it would need to
loop back to the first event in the queue matching that timestamp and read
again for any initializers.
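To make the cost concrete, here is a rough sketch of the double pass
that scheme would force on every plugin (the event structure, type
codes, and apply_initializer() are hypothetical, just to show the
control flow):

    enum { EV_PARAM, EV_VOICE_ON };

    struct event {
        unsigned      time;   /* timestamp               */
        int           type;   /* EV_PARAM or EV_VOICE_ON */
        int           vvid;   /* virtual voice id        */
        struct event *next;
    };

    static void apply_initializer(const struct event *e) { (void)e; }

    /* Read every event at timestamp 'now'; when a Voice On turns up,
       loop back to the start of the timestamp and rescan for the
       parameter events that belong to it. */
    void process_timestamp(const struct event *head, unsigned now)
    {
        const struct event *ev, *back;
        for (ev = head; ev && ev->time == now; ev = ev->next) {
            if (ev->type == EV_VOICE_ON) {
                for (back = head; back != ev; back = back->next)
                    if (back->vvid == ev->vvid && back->type == EV_PARAM)
                        apply_initializer(back);  /* re-read as initializer */
            }
            /* ...normal handling of ev... */
        }
    }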
Alternatively, we could require that event ordering follow two criteria:
-first- order on timestamps
-second- put voice-on ahead of all other event types at the same timestamp.
A little ungainly, but effective: it frees plugins to assume they will
get voice-on first, while requiring them to treat all other events on
that timestamp as arguments to the initialization of the voice. Of
course, this only applies to events carrying a particular VVID on a
particular channel of a particular plugin instance. I admit that
introducing another criterion into event ordering is inelegant, but it
is the best approach to voice activation that I can think of. Any sort
of mandatory one- or two-timestamp delay would be far worse.
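A sketch of what that two-key ordering looks like in code; the event
struct and the qsort()-based host are illustrative only, not part of
any XAP draft:

    #include <stdio.h>
    #include <stdlib.h>

    enum { EV_PARAM, EV_VOICE_ON };

    struct event {
        unsigned time;
        int      type;
    };

    /* Order on timestamp first; at equal timestamps, Voice On sorts
       ahead of every other event type. */
    static int ev_cmp(const void *a, const void *b)
    {
        const struct event *x = a, *y = b;
        if (x->time != y->time)
            return x->time < y->time ? -1 : 1;
        return (y->type == EV_VOICE_ON) - (x->type == EV_VOICE_ON);
    }

    int main(void)
    {
        struct event q[] = {
            { 100, EV_PARAM },     /* initializer sent "before"...  */
            { 100, EV_VOICE_ON },  /* ...the Voice On it belongs to */
            { 100, EV_PARAM }
        };
        qsort(q, 3, sizeof q[0], ev_cmp);
        /* q[0] is now the Voice On: the plugin can activate the voice
           first and treat the rest of the timestamp as initializers. */
        for (int i = 0; i < 3; i++)
            printf("%u %s\n", q[i].time,
                   q[i].type == EV_VOICE_ON ? "voice-on" : "param");
        return 0;
    }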
-jacob robbins.... current projects: soundtank
On Fri, Jan 10, 2003 at 11:12:21 +0100, Robert Jonsson wrote:
> >Ahh, there's a misunderstanding about what sidechains do here, they really
> >are just an audio stream.
>
> Hmmm, how do you reckon? A sidechain for a compressor may be an audio
> stream, but it's the "control" behaviour of that stream that is of
> interest, right? That should mean that it would be possible to extract
> just the behaviour and only distribute "that".
Not in practice: each compressor will want to extract different features
from the audio stream.
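A minimal illustration of why (the detector functions are mine, not
from any particular plugin): one compressor may key off instantaneous
peaks while another wants RMS energy over the very same sidechain
samples, so the extraction has to live inside each plugin:

    #include <math.h>
    #include <stddef.h>

    /* One compressor keys off the instantaneous peak... */
    float peak_level(const float *buf, size_t n)
    {
        float peak = 0.0f;
        for (size_t i = 0; i < n; i++)
            if (fabsf(buf[i]) > peak)
                peak = fabsf(buf[i]);
        return peak;
    }

    /* ...while another wants the RMS energy of the same block. */
    float rms_level(const float *buf, size_t n)
    {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += buf[i] * buf[i];
        return sqrtf(sum / (float)n);
    }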
> >There is an auto-phaser (sweeping allpass filter), but no auto wah
> >(sweeping bandpass), or auto panner, true. I tend to build things like
> >that out of SSM or pd. But there should be complete plugins.
>
> Ya, working mostly in MusE I'm somewhat limited in the number of
> solutions I can try. Currently MusE only supports basic features (pretty
> much modeled after Cubase).
OK, I'm not familiar with MusE; I should try it again.
- Steve
is there any application at all like vsound for alsa? i notice that
vsound can't write a buffer from the alsa driver.
i have a half-duplex card on a lappie, so i cannot record and play at
the same time, hence vsound is very useful to me.
cheers,
julian oliver
melbourne, australia
* Really fixes the UID clashes.
* Includes changes to make it build on ia64 and mips, thanks to Anand
Kumria.
I will now be walking around with a paper bag over my head for the next
few days.
- Steve
I've been using Linux for quite some time now and finally got around to
using it for audio and MIDI. I must say I'm very impressed with all the
applications I've been using.
Thanks to all (developers and users) for making Linux audio/music development
a reality.
Levi
"Is this thing on...?"
(Seems like at least one message was lost, and it's been *very*
silent for a while...)
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---