The current top Ethernet standard specifies a maximum transmission speed
of 10 Gbit/s; 1394b tops out at 800 Mbit/s.
You can also run Ethernet over FireWire. IIRC the maximum number of
devices on a 1394 chain is 63, making Ethernet more suitable for large
clusters of interconnected MIDI workstations.
But to an extent, arguing over which PHY layer to use is like a Vi / Emacs
flamewar.
[plug]
For a working example of MIDI over Ethernet (and UDP), have a look at
IEEE P1639 (formerly called DMIDI):
www.plus24.com/ieeep1639
This acts as a bridge between ALSA and the network so all MIDI apps can
bounce MIDI data between remote machines without any code changes.
I'm also working on an embedded Linux for clustering audio workstations;
a Live CD is available (USB mouse support broken just for now, PS/2 OK):
www.plus24.com/m-dist
This is also a call for participation in the final development of the
standard as well as application development.
Regards
Phil
On Sunday, August 15, 2004, at 09:36 am, Steve Harris wrote:
But if you're going to do that, why use Ethernet? You'd need dedicated
NICs and switches, so you may as well use FireWire, which has dedicated
realtime channels, more bandwidth, and doesn't require switching.
400 Mbit FireWire cards are down to about 7 or 8 euros in the UK now.
The only disadvantage is that you can't (right now) cheaply run FireWire
over long distances, but that will change once FireWire-over-CAT5 cards
come down in price, and this is rarely an issue with clusters anyway.
On Aug 16, 2004, at 12:58 AM,
linux-audio-dev-request(a)music.columbia.edu wrote:
> Juan Linietsky <coding(a)reduz.com.ar> writes:
>
> I tried this myself, on a 100mbit ethernet switch.. while for single
> instruments it seems okay, and latency is fine, playing full complex
> midi pieces in realtime had a lot of jittering...
Small playout buffers help a lot ... it doesn't take many milliseconds
of buffering (small single-digit) to make a big difference.
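To illustrate the idea, here is a minimal playout-buffer sketch in Python. The (sender_timestamp, arrival_time) event model and the fixed 5 ms delay are my own assumptions for the example, not anything taken from RTP MIDI itself:

```python
# Minimal playout-buffer sketch: events arrive with network jitter,
# but each is released at sender_timestamp + a fixed playout delay,
# which restores even spacing as long as jitter stays below the delay.

PLAYOUT_DELAY = 0.005  # 5 ms of buffering ("small single-digit" ms)

def playout_times(events):
    """events: list of (sender_timestamp, arrival_time) pairs in seconds.
    Returns the time each event should actually be played out."""
    return [ts + PLAYOUT_DELAY for ts, _arrival in events]

# Events sent every 10 ms, arriving with up to 4 ms of jitter:
events = [(0.000, 0.001), (0.010, 0.014), (0.020, 0.021), (0.030, 0.033)]
out = playout_times(events)

# Every playout time is at or after the arrival time, and the even
# 10 ms spacing of the original performance is restored:
print([round(b - a, 6) for a, b in zip(out, out[1:])])  # [0.01, 0.01, 0.01]
```

The trade-off is the usual one: the buffer adds a fixed few milliseconds of latency in exchange for absorbing the network's timing noise.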
Oh, time for the obligatory RTP MIDI plug:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-rtp-midi.txt
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt
Coming closer to Last Call, Dominique Fober recently reviewed
the documents for AVT, and I'm in the process of revising -05.txt
to take his advice into account. That revision might actually be the
Last Call; we shall see ... subscribe to avt-request(a)ietf.org if you
want to follow along.
Also, our AES presentation got in:
http://www.aes.org/events/117/papers/E.cfm
So we'll be talking in San Francisco in October, if anyone is in
the neighborhood ... AES only comes to San Francisco once
every 5 years, so there are a lot of fun things going on at
the conference --
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
Hi,
I'm trying to convert an existing analog (hardware) synth to an open source
softsynth.
One of the components of the analog synth is a diode wave shaper. The
schematic is included (diode.jpg). Inputs 2, 3, ... n and their resistors
are optional.
I'd like to know if there's a LADSPA plugin (or even better: a DSP IIR
recurrence relation) which emulates the included schematic as close as
possible. Unfortunately, I don't have the EE background required to do this
myself.
Thanks in advance,
Stanley.
From: Jens M Andreasen <jens.andreasen(a)chello.se>
>Reply-To: "The Linux Audio Developers' Mailing
>List"<linux-audio-dev(a)music.columbia.edu>
>To: "The Linux Audio Developers' Mailing
>List"<linux-audio-dev(a)music.columbia.edu>
>Subject: Re: [linux-audio-dev] Diode wave shaper (LADSPA plugin)?
>Date: Mon, 16 Aug 2004 11:26:24 +0200
>
>On mån, 2004-08-16 at 10:54, Stanley Jaddoe wrote:
> > Hi,
> >
> > I'm trying to convert an existing analog (hardware) synth to an open
> > source softsynth.
> > One of the components of the analog synth is a diode wave shaper. The
> > schematic is included (diode.jpg). The Inputs 2, 3, ... n and its
> > resistors are optional.
> >
> > I'd like to know if there's a LADSPA plugin (or even better: a DSP IIR
> > recurrence relation) which emulates the included schematic as close as
> > possible. Unfortunately, I don't have the EE background required to do
> > this myself.
>
>It's a function that converts from linear to S-shaped. It bounds
>-infinity <--> +infinity to be within -1.0 <--> +1.0 with fairly linear
>characteristics around zero.
>
The characteristics of a diode should be described by: y = ln(a*x).
- Stefan
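For what it's worth, the S-shaped bounding function Jens describes can be sketched with tanh. That choice is an assumption on my part: a real diode follows the exponential I/V law, and tanh is just one convenient curve with the stated properties (bounded to +/-1.0, fairly linear around zero):

```python
import math

def diode_shaper(x, drive=1.0):
    """S-shaped waveshaper: maps (-inf, +inf) into (-1.0, +1.0) and is
    fairly linear around zero, since tanh(x) ~ x for small x.
    'drive' is a hypothetical pre-gain knob, not from the schematic."""
    return math.tanh(drive * x)

for x in (0.01, 0.5, 2.0, 50.0):
    print(x, round(diode_shaper(x), 4))
```

A LADSPA plugin would just apply this per-sample to the input port; matching the actual schematic (with its multiple inputs and resistors) would still need someone with the EE background to fit the curve.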
Hi Juan,
Can you send me a rundown on your setup (card type, ifconfig, switch
type)?
I've not seen this problem with my setup and I know that others have run
large sessions between machines. It could be a combination of driver
and app interaction or it could be the switch. Do you have a
cross-over cable you could use?
Did the jitter build up in a linear manner, or was there a sudden jump?
Cheers
Phil
On Sunday, August 15, 2004, at 10:52 pm, Juan Linietsky wrote:
> I tried this myself, on a 100mbit ethernet switch.. while for single
> instruments it seems okay, and latency is fine, playing full complex midi
> pieces in realtime had a lot of jittering.. I did packet monitoring and it
> all seemed ok (all the network traffic was for midi).. I'm suspecting that
> it may be related to the network card or driver doing some sort of buffering..
> but I can't really tell.. any experiences about this?
> Cheers!
> Juan Linietsky
>Hmmm, I think the ALSA API is a bit huge/complicated. I would never
>recommend doing ALSA directly, and I think that was very bad advice,
>actually. Check out PortAudio, sndlib or JACK instead, which provide an
>easier interface to the soundcard than ALSA, and work on top of ALSA (and
>others).
PortAudio doesn't support mixers, so you still need some ALSA or OSS
code.
I bet I'm not asking the right list ...
Does anyone here know of a good "phrase trainer" application?
It should be able to play an audio file at slower speed without changing
the pitch.
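For reference, the technique such an application needs (time-stretching via overlap-add) can be sketched in a few lines of Python. The frame and hop sizes here are arbitrary choices of mine, and a real phrase trainer would use WSOLA or a phase vocoder instead of this naive version:

```python
import math

def stretch(samples, factor, frame=256, hop_out=64):
    """Naive overlap-add (OLA) time stretch: plays back 'factor' times
    slower without resampling, so pitch is roughly preserved.
    (Overlap gain is ignored here; real code would normalize, and use
    WSOLA / phase-vocoder alignment to avoid phasing artifacts.)"""
    hop_in = max(1, int(hop_out / factor))  # read slower than we write
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / frame) for i in range(frame)]
    out = [0.0] * (int(len(samples) * factor) + frame)
    pos_in = pos_out = 0
    while pos_in + frame <= len(samples):
        for i in range(frame):          # windowed frame, overlap-added
            out[pos_out + i] += samples[pos_in + i] * win[i]
        pos_in += hop_in
        pos_out += hop_out
    return out[:pos_out]

# Half-speed playback: output is roughly twice as long as the input,
# but each frame still contains the original waveform, so the pitch stays.
tone = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(4410)]
print(len(stretch(tone, 2.0)) / len(tone))
```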
--
Francois Isabelle <isabellf(a)sympatico.ca>
http://www.notam02.no/arkiv/src/
Mammut will FFT your sound in one single gigantic analysis (no windows).
These spectral data, where the development in time is incorporated in
mysterious ways, may then be transformed by different algorithms prior to
resynthesis. An interesting aspect of Mammut is its completely
non-intuitive sound transformation approach.
This is a very minor update. If you already have Mammut, there's probably
not much point in upgrading. The "big" change in this release is the first
entry in the changelog. I must also warn that Mammut can be a bit hard
to compile now that pygtk1 has been replaced with pygtk2 in all
recent distributions I know about.
0.16 -> 0.17
-Initialize sound at startup, so that Mammut appears in JACK patch bays.
-Removed the included sndlib binary.
-Added a point in the INSTALL file about how to configure Mammut to
 find the pygtk1 files.
-Added a note in the INSTALL file that sndlib for some reason
 does not work with Delta cards when using the ALSA driver. The OSS
 driver (under ALSA emulation) and the JACK driver work just fine.
--
iain duncan wrote:
>> I think that 'groove quantise' does have quite a specific meaning:
>> To take the timing from one midi part, and apply it to another.
>> So, you get the midi timing from a real drummer grooving away and
>> apply it to your beat. Then it pulls your beats to the nearest beat
>> in the groove, rather than to the normal 16th or whatever.
>
>
>That was one sequencer's implementation of the term, but certainly not
>a universally accepted strict definition.
I guess so. It's only Cubase that calls it that. Emagic Logic, Fruity
Loops, Pro Tools, Groove Slicer, Cakewalk Sonar and Digital Performer
all call using the timing from one part on another a 'groove template',
which is much clearer.
If the 'groove' was pulling the notes towards a dotted or triplet feel,
I'd call it 'shuffle' or 'swing' to differentiate it.
'groove quantise' should perhaps be used when a sequencer or drum
machine is applying a groove more complex than a simple triplet feel,
but not using a template based on another recording.
>Iain
Hi all,
There was a discussion on the LAU list about JAMin using FFT-based
filtering. I missed much of the discussion, but that particular
point just jumped out at me.
Has anyone thought of trying linear phase FIR filters instead of
FFT methods? Any filter that can be specified in the frequency
domain can be implemented in the time domain and vice-versa.
Often (but not always), the time domain version is significantly
cheaper in CPU cycles.
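To illustrate: a linear-phase FIR is just a convolution with symmetric taps, and the time-domain version is a few lines. The moving-average coefficients below are purely illustrative, not anything JAMin actually uses:

```python
def fir(signal, taps):
    """Direct-form FIR filter, i.e. plain time-domain convolution.
    If 'taps' is symmetric, the filter has exactly linear phase: the
    output is a filtered copy delayed by (len(taps) - 1) / 2 samples."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n >= k:
                acc += h * signal[n - k]
        out.append(acc)
    return out

# 5-tap symmetric lowpass (a plain moving average, for illustration only):
taps = [0.2] * 5
dc = [1.0] * 20
print(round(fir(dc, taps)[10], 6))  # DC gain = sum of taps = 1.0
```

The cost is O(N * len(taps)) per block versus the FFT's O(N log N), so for short filters the direct form wins, which is exactly the "often cheaper" case above.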
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo nospam(a)mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
"Neither noise nor information is predictable."
-- Ray Kurzweil