http://plugin.org.uk/timemachine/ tarball, 100k.
Depends on SDL, SDL_image, jack and libsndfile.
I used to always keep a minidisc recorder in my studio running in a mode
where, when you pressed record, it wrote the last 10 seconds of audio to the
disk and then caught up to realtime and kept recording. The recorder died
and I haven't been able to replace it, so this is a simple jack app to do
the same job. It has the advantage that it never clips and can be wired to
any part of the jack graph.
I've been using it to record occasional bursts of interesting noise from
jack apps feeding back into each other.
Usage: ./configure, make, make install, run jack_timemachine. Connect it
up with a patchbay app. To start recording click in the window. To stop
recording, click in the window.
It writes out 32bit float WAV files called tm-<time>.wav, where <time> is
the time the recording starts from.
The prebuffer time and number of channels are set in a macro; the defaults
are 10s and 2. It works on my machine, and I'll fix major bugs, but I don't
really have time to support another piece of software, so good luck :)
If anyone wants to maintain it, feel free.
May it preserve many interesting sounds for you,
Steve
The whole discussion about VVIDs has become a rather complicated
web of opinions and examples that sometimes are understood, and
sometimes not. This is how I see it.
Why we need explicit VVIDs.
With MIDI, you can have
1. A mono synth. If there is any relation between a new note
and another one, it's always clear which one is meant (the
previous one). This allows things like, for example, not restarting
an ADSR if you play a second note before releasing the previous one.
2. A poly synth. Here normally 'a new note is a new note', and
things like the effect described above are not possible because
the synth does not know the relations between the existing set
of notes and any new ones. Another example: you play a 3-note chord,
and then a second one, and you want notes to slide individually from
the first chord to the second. Once your masterpiece is in MIDI format
it's impossible to find out which notes are related.
Of course, if you try to play this on a keyboard, you can not even
express what you want, but that is only a limitation of the
interface, and should not imply that it can't be done. If you look
beyond the traditional 'pop' music scene, lots of composers are using
other means to enter their scores, such as scripts or even algorithms.
What should be clear from this is that, as a result of the
limitations of MIDI, a poly synth is *not* the same thing as a set of
mono synths.
If you want that (polyphony by a set of mono synths) the only way to
get it is by abusing the channel mechanism. This forces you to work
in a way that is completely different from normal poly mode, which
is extremely impractical. Anyway, channels are not meant for this;
they are meant to multiplex data intended for different devices over
a single cable.
The explicit use of VVIDs would allow us to unify the interface to the
'normal' (in the MIDI sense) polyphonic synth and the 'set of
monophonic synths'.
And it would indeed allow the player to take the normally automatic
voice assignment into his own hands, but it does *not* force him to
do so.
A lot more could be said, but I have to go.
--
Fons Adriaensen
ok, there are 5 hours left till i leave for anaheim. if anyone wants
to make any further changes to tim's recent summary, do it very
soon. otherwise, i'll print out what he sent recently, and use my
knowledge of the ongoing discussion to supplement it.
Hi all, I got a couple of questions regarding MIDI implementation in
Linux.
My app (that I am currently working on) will not use MIDI for
sequencing, but rather as a real-time triggering mechanism (including
continuous controllers) that will intercommunicate with other
MIDI-capable apps on the (usually) same system. A while ago it was
suggested to me that the best path for such stuff is to use Open Sound
Control. However, upon [quickly] glancing at the .h file for OSC, I
realized that it is nothing more than a network protocol for such stuff
and that it has nothing in it that would enable it to "hook-up" directly
to the /dev/midi port and then parse the info by itself and route
accordingly to the settings in my main app. So, I would greatly
appreciate any help in figuring out where I can get the code that would
"bridge" this gap between /dev/midi and the OSC, and that would be
flexible enough for me to be able to customize routing (let's say based
on what controller and what channel the data is coming from).
Any source code you could point me towards would be greatly appreciated
(preferably something that is not a part of a gargantuan project that
will be hard to "extract"). Also, if I have misstated anything above,
please do correct me. Finally, any alternative suggestions for my
implementation would be greatly appreciated as well. My need is for:
1) ability to route MIDI data on a local machine incoming from outer
physical MIDI controller
2) ability to communicate with as many apps as possible
3) ability to do so in an elegant fashion (i.e. easy to implement)
4) communication needs to be only one-way (MIDI data returned from the
apps receiving data from my app does not interest me)
Thank you for your help!
Sincerely,
Ivica Ico Bukvic
Hi all,
Good job I'm getting a new hard disk soon and will be able to install
some other distros to test on :) Just build fixes with this release.
There was also a gtk-2.2-only function in there, which has been
gtk-2.0-ified.
* build fixes for gcc 2.9x from Fernando Pablo Lopez-Lezcano
* compiles with gtk 2.0 now (thanks to Fernando again)
* builds without lrdf now (thanks to Austin Acton)
http://pkl.net/~node/jack-rack.html
Bob
i think i've decided to try to go out to the meeting in anaheim. it
will depend on whether i can get a frequent-flyer ticket at this point in
time.
assuming i can book one, i would appreciate it if tim (hockin) could
send me (sometime during the next week) any working documents on XAP,
especially the conclusions of our discussions about tempo control.
--p
i'd like to remind the folks involved in the XAP discussion that it
would be really, really, really useful to get some summary
documentation on where the design process stands and what has been
accomplished thus far. if you can provide this, i will need it by 3pm
EST on thursday at the absolute latest.
also, for folks here who are not on ardour-dev, please critique this:
http://www.op.net/~pbd/brochure.pdf
--p
Hi everyone,
I sent the draft of the complete MWPP implementation guide off
to internet-drafts(a)ietf.org today. You can download it now from:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt
See the abstract below for details, as well as the I-D change
log. Comments are welcome. I'll turn the document around one more
time before the March 3 San Francisco cutoff date, and can incorporate
your feedback into the revision.
Writing this 77-page (!) document added a few more open issues
to draft-ietf-avt-mwpp-midi-rtp-05.txt. Next, I'll spend a few days
writing the RTP over TCP I-D, and then I'll start working through the
open issue list for draft-ietf-avt-mwpp-midi-rtp-05.txt. I expect to
submit an -06.txt in time for the March 3 deadline.
---
INTERNET-DRAFT John Lazzaro
January 15, 2003 John Wawrzynek
Expires: July 15, 2003 UC Berkeley
An Implementation Guide to the MIDI Wire Protocol Packetization (MWPP)
<draft-lazzaro-avt-mwpp-coding-guidelines-01.txt>
Abstract
This memo offers non-normative implementation guidance for the MIDI
Wire Protocol Packetization (MWPP), an RTP packetization for the
MIDI command language. In the main body of the memo, we discuss one
MWPP application in detail: an interactive, two-party, single-
stream session over unicast UDP transport that uses RTCP. In the
Appendices, we discuss specialized implementation issues: MWPP
without RTCP, MWPP with TCP, multi-stream sessions, multi-party
sessions, and content streaming.
---
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
I'm trying to get this working between two LAN connected machines
running Suse8.1. It appears to compile and run OK, both Client and
Server applications on the two machines, but I'm not getting any audio
across the network. My speakers and mic are working OK on both machines,
as I've tested them with alsamixer. I'm not getting any errors, so
I don't know where the problem is. I run the server by running
TCP_talk_srvr and I run the client by running TCP_talk_clnt
xxx.xxx.xxx.xxx (where xxx.xxx.xxx.xxx is the IP address of the server
machine). I'd greatly appreciate any help. Alternatively, I'd be glad to
try any other application that will give me 2-way audio/voice across a
LAN.
R.C.
Be gone foul bugs!
* fixed control output ports segfault
* fixed desktop installation prefix stuff
* fixed bug dealing with duplicate plugin ids
* now quits when you close the window
* added a "New" option to clear the rack
* rack is now automatically cleared when you load a file
http://pkl.net/~node/jack-rack.html
Bob