> Hans Fugal <hans(a)fugal.net>
>
> I am doing a modest partial MWPP implementation for a networking class
> and I want to demo it for the professor and class, but am not keen on
> carting my midi keyboard around campus if I can help it. [...]
Sfront has a pre-IETF-era version of MWPP, and a bunch of demos
that ship to show it off (including a 2-person interactive session
demo). These are all under sfront/examples/rtime ... mirror, nmp_audio,
nmp_null, and nmp_stream. The demos use -cin ascii for real-time input
via the ASCII keyboard, and nmp_stream streams an SMF out.
The trick here will be that, as written,
the MWPP network drivers look to a SIP server at Berkeley to do the
session setup, which has random non-standard hacks in it to do NAT
breaking and MD5-based authentication -- the code is a few years old,
and pre-dates the IETF standards to do those things. So, you'll either
need to hack sfront networking to match your implementation, or hack
your implementation to work with the Berkeley SIP server (the former
is probably much easier ...). Let me know if you want to go this route
and I can offer advice ...
--jl
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
Has anybody investigated the Gibson MaGIC protocol for use on Linux? It looks like a perfect match: lots of audio channels and control data over CAT5. Just buy a separate network card, port their protocol to Linux, and then you could use their 16-in/16-out box, or get input directly from one of their new guitars with the RJ45 port on it. The only scary bit is that the license is free "for the next 10 years" only.
-Ben
Greetings:
I'm trying to compile the Teknocomposer software but have run into a
problem that Nick doesn't know what to do about (beyond suggesting I
update my compiler). Here's the failure point:
make[2]: Entering directory `/home/dlphilp/teknocomposer/teknocomposer'
c++ -DHAVE_CONFIG_H -I. -I. -I.. -O2 -fno-exceptions -fno-check-new
-fexceptions -c MainWindow.cxx
MainWindow.cxx:405: parse error before `{'
MainWindow.cxx:410: destructors must be member functions
MainWindow.cxx:410: virtual outside class declaration
MainWindow.cxx:418: parse error before `}'
MainWindow.cxx:420: syntax error before `*'
MainWindow.cxx:424: invalid use of undefined type `class
AppSoundDriver'
MainWindow.cxx:404: forward declaration of `class AppSoundDriver'
MainWindow.cxx:435: invalid use of undefined type `class
AppSoundDriver'
MainWindow.cxx:404: forward declaration of `class AppSoundDriver'
MainWindow.cxx: In method `AppSoundDriver::~AppSoundDriver ()':
MainWindow.cxx:443: `sizeof' applied to incomplete type
`AppSoundDriver'
MainWindow.cxx: At top level:
MainWindow.cxx:446: invalid use of undefined type `class
AppSoundDriver'
MainWindow.cxx:404: forward declaration of `class AppSoundDriver'
MainWindow.cxx:464: invalid use of undefined type `class
AppSoundDriver'
MainWindow.cxx:404: forward declaration of `class AppSoundDriver'
MainWindow.cxx:471: invalid use of undefined type `class
AppSoundDriver'
MainWindow.cxx:404: forward declaration of `class AppSoundDriver'
MainWindow.cxx: In function `int main (int, char *)':
MainWindow.cxx:2508: `theSoundDriver' undeclared (first use this
function)
MainWindow.cxx:2508: (Each undeclared identifier is reported only once
for each function it appears in.)
MainWindow.cxx:2508: parse error before `('
make[2]: *** [MainWindow.o] Error 1
make[2]: Leaving directory `/home/dlphilp/teknocomposer/teknocomposer'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/dlphilp/teknocomposer'
make: *** [all-recursive-am] Error 2
The specific code block looks like this:
/////////////////////////////////////////////////////////////////////////////
class AppSoundDriver : public SoundDriver
{
public:
AppSoundDriver(REAL sample_rate, int num_channels);
virtual ~AppSoundDriver();
// this is called to fill buffer with len samples
void Run(REAL * buffer, uint32 len);
void Start();
void Stop();
};
AppSoundDriver * theSoundDriver;
AppSoundDriver::AppSoundDriver(REAL sample_rate, int num_channels)
{
printf("Starting Sound Driver...\n");
#ifdef PORT_AUDIO_DRIVER
openPortAudio(this, sample_rate, num_channels);
#endif
#ifdef ALSA_DRIVER
openALSADriver(this, sample_rate, num_channels);
#endif
}
AppSoundDriver::~AppSoundDriver()
{
printf("Closing sound driver...\n");
#ifdef PORT_AUDIO_DRIVER
closePortAudio();
#endif
#ifdef ALSA_DRIVER
closeALSADriver();
#endif
}
//////////////////////////////////////////////////////////////////////
Nick's advice to upgrade my compiler is timely (I'm using GCC 2.96 from
RH 7.2) but unfortunately I can't make the switch right now. If there's
an obvious (or non-obvious) solution to my dilemma I'd be happy to hear
of it.
Best regards,
== Dave Phillips
The Book Of Linux Music & Sound at http://www.nostarch.com/lms.htm
The Linux Soundapps Site at http://linux-sound.org
Currently listening to: Ravi Shankar, "Raga Bilashkani Todi"
To avoid duplication of effort I'd like to announce that I'm working on a
Linux (ALSA sequencer + JACK) app based on the ZR-3 VSTi code that was
released on SourceForge.
The ZR-3 is described as a three-channel drawbar organ, and you can see
some reviews of the VSTi at http://www.kvr-vst.com/get.php?mode=show&id=213
Right now I have the backend working, so I hear organ sounds, but I need
to do more work on the MIDI input (the VSTi understands raw MIDI bytes,
while the ALSA sequencer sends event structures) and obviously build some
kind of Linux GUI, probably with GTK+ 2.x.
As far as I know this is the only VSTi for which source code has been
made available under a Free license. If you know of other instruments
that are available (and so could be amenable to the same transformation)
then please let the list (and thus me) know.
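The event-to-bytes translation described above can be sketched like this. This is a simplified illustration with a made-up SeqEvent struct; real code would use alsa-lib's snd_seq_event_t together with snd_midi_event_decode() rather than hand-rolling the conversion:

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-ins for the ALSA sequencer event types; the real
// application would use snd_seq_event_t from alsa-lib instead.
enum EvType { EV_NOTEON, EV_NOTEOFF, EV_CONTROLLER };
struct SeqEvent {
    EvType  type;
    uint8_t channel;  // 0-15
    uint8_t param;    // note number or controller number
    uint8_t value;    // velocity or controller value
};

// Convert one sequencer event structure into the raw MIDI bytes a
// VSTi expects: status byte (message type | channel), then data bytes.
std::vector<uint8_t> toRawMidi(const SeqEvent &ev) {
    switch (ev.type) {
    case EV_NOTEON:
        return { static_cast<uint8_t>(0x90 | (ev.channel & 0x0F)),
                 ev.param, ev.value };
    case EV_NOTEOFF:
        return { static_cast<uint8_t>(0x80 | (ev.channel & 0x0F)),
                 ev.param, ev.value };
    case EV_CONTROLLER:
        return { static_cast<uint8_t>(0xB0 | (ev.channel & 0x0F)),
                 ev.param, ev.value };
    }
    return {};
}
```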
Nick.
Hey, list!
I'm a bit of a newbie on this list, so forgive me if I do something wrong
(as usually happens. ;)
Anyway, I am working on a realtime sound application for use on stage in a
band setting. The purpose is general prerecorded sound effects, sequencing,
and real-time signal processing (like from a guitar to an amp).
It appears that in order to do this, I am going to need to use a sound card
with low latency, as well as a low-latency kernel. (I'll most likely strip
the whole operating system down to the very minimum so that my application
is essentially the only application that needs to be scheduled.)
My question is this: where can I find a card that is both Linux
compatible and low latency? Although I've done some serious reading on
writing my own drivers, and I'm about a step away from doing it for some
preexisting card, I'd prefer to use one that already has a driver. The other
problem is that my budget restricts me to about $200 or so. I mean, I could
probably squeeze out $300-400 if I really needed to, but I'm a poor college
student. (What can I say?) Anyone know of a decent card that I could use
in this price range? I would really appreciate your feedback.
Thanks
Craig
What is the contemporary wisdom from the LAD list gurus about the proper way
to implement MTC sync for sequencers and other tools? There is a lengthy
thread in the archives dating from 2000 which didn't seem to get into the
nitty gritty, and much may have changed in three years anyway.
1) Is the sync properly implemented in a sequencer application, or does ALSA
provide a useful framework for some kind of general solution? I believe that
both Ardour and Muse have MTC implementations built into the applications
(which seem to have rotted at this point; they don't work for me).
Why was it done this way, and what are the alternatives?
2) The 2000 thread discussed how bad SMPTE is for audio sync. Unfortunately,
SMPTE/MTC seems to be the standard that is implemented in the hardware devices
we can actually buy. There is probably no point in beating that dead
horse again. It is what we have.
3) Does Jack have a role to play in syncing applications and external hardware
together?
4) Does SMPTE/MTC have a role in software-to-software sync, or just in syncing to
external hardware?
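For reference, the quarter-frame format at the heart of MTC is easy to state: a full SMPTE time is split into eight 4-bit nibbles, each sent as a two-byte 0xF1 message tagged with a 3-bit piece number. A sketch with illustrative helper names (not any existing API):

```cpp
#include <array>
#include <cstdint>

// Eight (0xF1, data) pairs covering one full SMPTE time.
struct QuarterFrames {
    std::array<uint8_t, 16> bytes;
};

// rate: 0 = 24 fps, 1 = 25 fps, 2 = 29.97 drop-frame, 3 = 30 fps
QuarterFrames encodeMTC(uint8_t hours, uint8_t minutes,
                        uint8_t seconds, uint8_t frames, uint8_t rate) {
    const uint8_t nibbles[8] = {
        static_cast<uint8_t>(frames  & 0x0F),         // piece 0: frame, low
        static_cast<uint8_t>((frames  >> 4) & 0x01),  // piece 1: frame, high
        static_cast<uint8_t>(seconds & 0x0F),         // piece 2: seconds, low
        static_cast<uint8_t>((seconds >> 4) & 0x03),  // piece 3: seconds, high
        static_cast<uint8_t>(minutes & 0x0F),         // piece 4: minutes, low
        static_cast<uint8_t>((minutes >> 4) & 0x03),  // piece 5: minutes, high
        static_cast<uint8_t>(hours   & 0x0F),         // piece 6: hours, low
        static_cast<uint8_t>(((rate & 0x03) << 1) |   // piece 7: rate bits +
                             ((hours >> 4) & 0x01))   //          hours high bit
    };
    QuarterFrames qf;
    for (int i = 0; i < 8; ++i) {
        qf.bytes[2 * i]     = 0xF1;  // quarter-frame status byte
        qf.bytes[2 * i + 1] = static_cast<uint8_t>((i << 4) | nibbles[i]);
    }
    return qf;
}
```

The scheduling pain is that these eight messages must go out at four per frame (so a receiver only learns the full time every two frames), which is why tight delivery timing matters so much.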
John
Howdy,
Is there something in the VST SDK licensing agreement that would
prevent someone from porting the API to Linux?
--
Oliver Sampson
olsam(a)quickaudio.com
http://www.oliversampson.com
Hi.
I released ZynAddSubFX 1.4.0, which contains many new
features:
- added instrument's own effect (effects that
are loaded/saved with the instrument)
- FreeMode Envelopes: all Envelopes can have any
shape (not only ADSR)
- Added instrument kits: it is possible to use
more than one instrument in one part (used for
layered synths or drum kits)
- Amplitude envelopes can be linear or
logarithmic
- added interpolation on the Resonance user
interface
- user interface improvements and cleanups of
its code
- initiated a mailing list to allow users to
share patches for ZynAddSubFX. Please share your
ZynAddSubFX patches; look at
http://lists.sourceforge.net/mailman/listinfo/zynaddsubfx-user
for more information about the mailing list.
For those of you who don't know about it, ZynAddSubFX is a
powerful software synthesizer for Linux and Windows.
It is open-source software licensed under the GNU
GPL v2.
The homepage is:
http://zynaddsubfx.sourceforge.net/
Paul.
On Monday 14 April 2003 06:26 am, you wrote:
> the ALSA sequencer does not do this. it could probably be coerced into
> doing so, but it wouldn't work correctly on kernels pre 2.5.6X or
> so. the "scheduling" requirements for delivering MTC are impossible to
> satisfy in earlier kernels without patches (and not the low-latency
> patch, but others).
So I am working on a new composition that is ready for some computer
assistance. The way I choose to work, I need a sequencer application like
Muse or Rosegarden to sync either to my ADAT or to Ardour. Both options
would be best, but one or the other would get me going. Because this is a
priority for me, I am interested enough in making this happen that I will
hack on code for a while instead of composing.
May I have some guidance from the LAD wizards about what is the most realistic
way for this to happen?
1) Which tools, hardware or software, have the cleanest timing designs ready
for a satisfying sync implementation between a sequencer and a recorder?
2) In those codebases, which part(s) need the work, and what is the most
satisfying way to go about it?
3) Are there any new pieces of independent software like a driver module that
would be convenient to have as part of a good, clean, sync solution?
For composing my last record, I used my ancient blackface ADAT with a
Steinberg MTC generator, a Motu MIDI Express XT for getting the MTC to the
computer, and an ancient version of Cakewalk in 'doze that slaved to the
ADAT. Although using 'doze and Cakewalk was extremely painful and was
generally far from what I really wanted, the synchronization seemed to work
OK. My demands were not particularly high, though, because none of the
sequencer/ADAT work was used for the album other than as a scratch track.
(All the released tracks were 100% analog, the way I like it.) I'm hoping
that soon, I can get this basic set of Linux tools that do everything as well
or better without the pain of windoze.
Thanks for any advice,
John