ok, there are 5 hours left till i leave for anaheim. if anyone wants
to make any further changes to tim's recent summary, do it very
soon. otherwise, i'll print out what he sent recently, and use my
knowledge of the ongoing discussion to supplement it.
Hi all, I have a couple of questions regarding MIDI implementation in
Linux.
The app I am currently working on will not use MIDI for sequencing,
but rather as a real-time triggering mechanism (including continuous
controllers) that will intercommunicate with other MIDI-capable apps
on (usually) the same system. A while ago it was suggested to me that
the best path for such things is Open Sound Control. However, upon
quickly glancing at the .h file for OSC, I realized that it is nothing
more than a network protocol, and that it has nothing in it that would
enable it to hook up directly to the /dev/midi port, parse the
incoming data by itself, and route it according to the settings in my
main app. So, I would greatly appreciate any help in figuring out
where I can get code that would bridge this gap between /dev/midi and
OSC, and that would be flexible enough to let me customize routing
(say, based on which controller and which channel the data is coming
from).
Any source code you could point me towards would be greatly appreciated
(preferably something that is not a part of a gargantuan project that
will be hard to "extract"). Also, if I have misstated anything above,
please do correct me. Finally, any alternative suggestions for my
implementation would be greatly appreciated as well. My need is for:
1) ability to route MIDI data on the local machine coming in from an
external physical MIDI controller
2) ability to communicate with as many apps as possible
3) ability to do so in an elegant fashion (i.e. easy to implement)
4) communication needs to be only one-way (MIDI data returned from the
receiving apps does not interest me)
Thank you for your help!
Sincerely,
Ivica Ico Bukvic
Hi all,
Good job I'm getting a new hard disk soon and will be able to install
some other distros to test on :) Just build fixes with this release.
There was also a gtk-2.2-only function in there, which has been
gtk-2.0-ified.
* build fixes for gcc 2.9x from Fernando Pablo Lopez-Lezcano
* compiles with gtk 2.0 now (thanks to Fernando again)
* builds without lrdf now (thanks to Austin Acton)
http://pkl.net/~node/jack-rack.html
Bob
i think i've decided to try to go out to the meeting in anaheim. it
will depend on whether i can get a frequent-flyer ticket at this point
in time.
assuming i can book one, i would appreciate it if tim (hockin) could
send me (sometime during the next week) any working documents on XAP,
especially the conclusions of our discussions about tempo control.
--p
i'd like to remind the folks involved in the XAP discussion that it
would be really, really, really useful to get some summary
documentation on where the design process stands and what has been
accomplished thus far. if you can provide this, i will need it by 3pm
EST on thursday at the absolute latest.
also, for folks here who are not on ardour-dev, please critique this:
http://www.op.net/~pbd/brochure.pdf
--p
Hi everyone,
I sent the draft of the complete MWPP implementation guide off
to internet-drafts(a)ietf.org today. You can download it now from:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt
See the abstract below for details, as well as the I-D change
log. Comments are welcome. I'll turn the document around one more
time before the March 3 San Francisco cutoff date, and can incorporate
your feedback into the revision.
Writing this 77-page (!) document added a few more open issues
to draft-ietf-avt-mwpp-midi-rtp-05.txt. Next, I'll spend a few days
writing the RTP over TCP I-D, and then I'll start working through the
open issue list for draft-ietf-avt-mwpp-midi-rtp-05.txt. I expect to
submit an -06.txt in time for the March 3 deadline.
---
INTERNET-DRAFT John Lazzaro
January 15, 2003 John Wawrzynek
Expires: July 15, 2003 UC Berkeley
An Implementation Guide to the MIDI Wire Protocol Packetization (MWPP)
<draft-lazzaro-avt-mwpp-coding-guidelines-01.txt>
Abstract
This memo offers non-normative implementation guidance for the MIDI
Wire Protocol Packetization (MWPP), an RTP packetization for the
MIDI command language. In the main body of the memo, we discuss one
MWPP application in detail: an interactive, two-party, single-
stream session over unicast UDP transport that uses RTCP. In the
Appendices, we discuss specialized implementation issues: MWPP
without RTCP, MWPP with TCP, multi-stream sessions, multi-party
sessions, and content streaming.
---
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
I'm trying to get this working between two LAN-connected machines
running SuSE 8.1. Both the client and server applications appear to
compile and run OK on the two machines, but I'm not getting any audio
across the network. My speakers and mic are working OK on both
machines, as I've tested them with 'alsamixer'. I'm not getting any
errors, so I don't know where the problem is. I run the server by
running TC__talk_srvr and I run the client by running TCP_talk_clnt
xxx.xxx.xxx.xxx (where xxx.xxx.xxx.xxx is the IP address of the server
machine). I'd greatly appreciate any help. Alternatively, I'd be glad
to try any other application that will give me 2-way audio/voice
across a LAN.
R.C.
Be gone foul bugs!
* fixed control output ports segfault
* fixed desktop installation prefix stuff
* fixed bug dealing with duplicate plugin ids
* now quits when you close the window
* added a "New" option to clear the rack
* rack is now automatically cleared when you load a file
http://pkl.net/~node/jack-rack.html
Bob
The main difficulty I see with the VVID system is how to initialize
parameters on a new voice before the voice is activated with a VoiceOn
event.
The MIDI standard deals with this by allowing 2 particular parameters, pitch
and velocity, to be privileged as voice-on initializers. These parameters
are included inside the MIDI Voice On message so that the instrument always
receives them when they are needed: right when a voice is being activated.
It has been the general feeling in LAD discussions that XAP should not
put two particular parameters into the Voice On event but, instead,
should allow any parameter-set events to be initializers. How can this
be achieved?
In XAP, it is not sufficient to consider all parameter setting events
received before a Voice On message to be initializers. Consider that, before
an actual voice is attached to a VVID, all events sent to that VVID are
discarded. For instance, if a voice is activated and then reappropriated
halfway through a note, before the sequencer has stopped using the VVID, the
remainder of the events which the sequencer sends on the voiceless VVID are
simply discarded.
Furthermore, any system which requires initializers to be timestamped
before the voice-on event forces at least a one-timestamp delay into
voice activation, which is undesirable.
We could instead put the voice-on event at the same timestamp as the
initializers and require that the plugin read ALL events for a particular
time position before discarding ANY events, but this makes it more complex
for plugins to read their events. If the plugin found a voice-on event at
the end of a queue section for a particular timestamp then it would need to
loop back to the first event in the queue matching that timestamp and read
again for any initializers.
Alternately, we could require that event ordering follow two criteria:
-first- order on timestamps
-second- put voice-on ahead of all other event types.
A little ungainly, but effective, as it frees the plugins to assume that
they'll get voice-on first but must consider all other events on that
timestamp as arguments to the initialization of the voice. Of course this
only applies to events having a particular VVID on a particular channel of
a particular plugin instance. I admit that introducing another criterion
into event ordering is inelegant, but it is the best approach to voice
activation that I can think of. I think that introducing any sort of
mandatory one- or two-timestamp delay is far worse.
-jacob robbins.... current projects: soundtank
On Fri, Jan 10, 2003 at 11:12:21 +0100, Robert Jonsson wrote:
> >Ahh, there's a misunderstanding about what sidechains do here, they
> >really are just an audio stream.
>
> Hmmm, how do you reckon? A sidechain for a compressor may be an audio
> stream, but it's the "control" behaviour of that stream that is of
> interest, right? That should mean that it would be possible to extract
> just the behaviour and only distribute "that".
Not in practice, each compressor will want to extract different features
from the audio stream.
> >There is an auto-phaser (sweeping allpass filter), but no auto wah
> >(sweeping bandpass), or auto panner, true. I tend to build things like
> >that out of SSM or pd. But there should be complete plugins.
>
> Ya, working mostly in MusE I'm somewhat limited in the number of
> solutions I can try. Currently MusE only supports basic features (pretty
> much modeled after Cubase).
OK, I'm not familiar with MusE, I should try it again.
- Steve