On Sep 24, 2005, at 9:02 AM, linux-audio-dev-
request(a)music.columbia.edu wrote:
> Is anyone interested in collaborating on a common sample streaming
> protocol (possibly based on a somewhat simplified version of SDIF or
> the SC3 protocol)?
I'd recommend using RTP as your base protocol, and defining your
SDIF- or SC3-like payload as an RTP payload format. You'll pick up
the entire IETF multimedia protocol suite for free this way, including
RTP MIDI:
http://www.cs.berkeley.edu/~lazzaro/rtpmidi/index.html
I think when it comes to networking, the writing is on the wall:
packet loss is part of the environment you need to live in. Most new
computer purchases are laptops, most of those users want to use WiFi
as their network, and the Internet layer sees 1-2% packet loss on WiFi.
Also, we live in an era where people want to run LAN apps on the WAN
and WAN apps on the LAN, and packet loss is an unavoidable part of the
WAN Internet experience.
Also, modern applications want to use link-local Internet multicast.
RTP was built to let payload formats handle packet loss in a way that
makes sense for the media type -- RTP MIDI is an extreme example of
this, but the audio and video payload formats are loss-tolerant in more
subtle ways. RTP is also multicast-compatible.
Finally, with RTP there's a standardized way to use RTSP and SIP
for your session management if you wish, or if you prefer, you can
just build RTP into whatever session manager you have committed
to (like jack).
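For concreteness, here is a minimal sketch (not from any of the specs above, just the fixed header of RFC 3550) of the 12-byte RTP header that any such payload format would ride on. The payload type 96 is an arbitrary value from the dynamic range; a real SDIF/SC3 payload format would negotiate its own.

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0):
    """Pack the fixed 12-byte RTP header (RFC 3550).

    payload_type 96 is a placeholder from the dynamic range (96-127);
    a real payload format would define or negotiate its own value.
    """
    byte0 = 2 << 6                        # version=2, no padding, no extension, CC=0
    byte1 = (marker << 7) | payload_type  # marker bit + payload type
    return struct.pack("!BBHII", byte0, byte1,
                       seq & 0xFFFF,           # sequence number, wraps at 16 bits
                       timestamp & 0xFFFFFFFF, # media-clock timestamp
                       ssrc & 0xFFFFFFFF)      # synchronization source id

hdr = rtp_header(seq=1, timestamp=48000, ssrc=0xDEADBEEF)
```

The sequence number and timestamp are what let a loss-tolerant payload format detect and recover from dropped packets.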
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
Paul Davis
>On Tue, 2005-09-20 at 08:29 +0300, Aaron wrote:
>> please save me from another lisp/scheme scriptable application....
>> > The scripting should be in a language easy enough for a non-programmer to
>> > use.
>>
>> Is xsl a possibility or is there a scripting language that is easier
>> than lisp/scheme?
>
>breathe deeply. think of snakes. say "python".
Are you serious? Do you know python? I hope not...
I don't want to start a flame-war over programming languages,
but I know both scheme and python very well, and would
never consider python as an extension language again.
--
Slat 0.3 is now finished.
Windows size can be set with -s
Notes are linearly spaced (to prevent the headf**k)
Flat/sharp notes are darkened
http://blog.dis-dot-dat.net/2005/09/slat-03.html
--
"I'd crawl over an acre of 'Visual This++' and 'Integrated Development
That' to get to gcc, Emacs, and gdb. Thank you."
(By Vance Petree, Virginia Power)
Hi!
I am currently working on digesting the USB-MIDI Class Definition ...
http://www.usb.org/developers/devclass_docs/midi10.pdf
As I understand it, you can have up to 16 USB MidiStreams (MS), each
equal to a physical MIDI cable (and each "cable" having 16 virtual
MIDI channels). There is a bandwidth limit of one 3-byte MIDI event/ms,
which makes sense given the bandwidth of a physical MIDI cable.
The USB-MIDI device also has a control channel without any endpoints
(without any physical MIDI jacks). Again, only as far as I have
understood; the control channel is not a MidiStream and should therefore
be able to accept a significantly higher transfer rate than the physical
MidiStreams.
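If I read the class definition right, each MIDI event travels as a 4-byte packet: the cable number in the high nibble of byte 0, the Code Index Number (CIN) in the low nibble, then the three MIDI bytes. A sketch (the helper name is mine; the CIN shortcut below only covers the 3-byte channel voice messages, where the CIN equals the status nibble):

```python
def usb_midi_packet(cable, midi_bytes):
    """Pack one 32-bit USB-MIDI event packet.

    Byte 0: cable number (0-15) in the high nibble, Code Index Number
    (CIN) in the low nibble. For 3-byte channel voice messages the CIN
    equals the MIDI status nibble (e.g. 0x9 for note-on); SysEx and
    1/2-byte messages need their own CIN values and are not handled here.
    """
    assert 0 <= cable <= 15 and len(midi_bytes) == 3
    cin = midi_bytes[0] >> 4
    return bytes([(cable << 4) | cin]) + bytes(midi_bytes)

# Note-on, channel 1, middle C, velocity 100, sent on cable 3:
pkt = usb_midi_packet(3, [0x90, 60, 100])
```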
Question: How do I determine the max transfer rate for the control
channel (bInterfaceSubClass == 1) as opposed to the physical MIDI-outs
(bInterfaceSubClass == 3)?
This is for setting LEDs partially lit for more than one parameter, by
pulse-width modulation over USB-MIDI.
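The PWM idea could be sketched like this (a hypothetical helper, assuming a budget of one event per 1 ms slot): brightness becomes a duty cycle over a fixed frame period, with the on-slots spread Bresenham-style so the LED flickers less than a single burst would:

```python
def pwm_schedule(level, period=8):
    """Spread `level` on-slots as evenly as possible over `period`
    1 ms MIDI slots (Bresenham-style error accumulation).

    Returns a list of booleans: True means "send the LED-on event in
    this slot", False means send LED-off (or nothing).
    """
    frames, acc = [], 0
    for _ in range(period):
        acc += level
        if acc >= period:
            acc -= period
            frames.append(True)
        else:
            frames.append(False)
    return frames

# 3/8 duty cycle: three on-slots spread over an 8 ms frame.
sched = pwm_schedule(3, 8)
```

With an 8-slot frame that gives 9 brightness levels per LED at a 125 Hz refresh, which is the kind of trade-off the per-millisecond event budget forces.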
mvh // Jens M Andreasen
When trying to compile rosegarden, I am getting the following error
during ./configure:
checking if UIC has KDE plugins available... no
configure: error: you need to install kdelibs first.
I am running FC4 x86-64 with gcc4. Any idea? Has anyone encountered
the same problem?
I think that it may be that redhat compiled QT without the -threads
option.
Hello, I am trying to figure out what my Live 512 & alsa are capable of.
I have been trying to compare what I am seeing in the system with what I
have been able to find in literature and posts. I was wondering if
anyone might be able to offer any clarification. First of all, I am
trying to figure out what the different I/O I see are. In
/proc/asound/devices I see:
4: [0- 0]: hardware dependent
8: [0- 0]: raw midi
19: [0- 3]: digital audio playback This is digital mixer output?
18: [0- 2]: digital audio playback This is Synth or FX output?
26: [0- 2]: digital audio capture FX capture?
25: [0- 1]: digital audio capture Device 1?
16: [0- 0]: digital audio playback Is this the codec playback?
24: [0- 0]: digital audio capture Is this the codec capture?
0: [0- 0]: ctl
1: : sequencer
6: [0- 2]: hardware dependent
9: [0- 1]: raw midi
10: [0- 2]: raw midi Which midi devices are which?
What do the different numbers represent? subdevice: [card-device]: ?
This seems a bit reasonable but kind of contradicts the output I get
from aplay -l:
**** List of PLAYBACK Hardware Devices ****
card 0: Live [SB Live [Unknown]], device 0: emu10k1 [ADC
Capture/Standard PCM Playback]
Subdevices: 32/32
Subdevice #0: subdevice #0
...
Subdevice #31: subdevice #31
card 0: Live [SB Live [Unknown]], device 2: emu10k1 efx [Multichannel
Capture/PT Playback]
Subdevices: 8/8
Subdevice #0: subdevice #0
...
Subdevice #7: subdevice #7
card 0: Live [SB Live [Unknown]], device 3: emu10k1 [Multichannel Playback]
Subdevices: 1/1
Subdevice #0: subdevice #0
Here it would seem that device 2 does not have a subdevice 26 as
suggested by my (perhaps wrong) interpretation of /proc/asound/devices.
Thanks. -Garett
Hi
I am writing a grant for a project I am doing. The potential funder
requires the archived audio be in AES 31 format.
Is there any application/lib etc on lin that outputs AES31?
Thanks
Aaron
My Turtle Beach Santa Cruz (cs46xx) card has a strange problem:
mic recording in JACK is really distorted.
Records fine (as fine as my cheap mic can) in Audacity.
arecord sounds fine as well.
When I record with TimeMachine in JACK, it sounds terribly distorted
(maybe saturated is the word).
Same mixer settings for all of the above. Has anyone else had luck
recording with this card in JACK?
--
Hans Fugal | If more of us valued food and cheer and
http://hans.fugal.net/ | song above hoarded gold, it would be a
http://gdmxml.fugal.net/ | merrier world.
| -- J.R.R. Tolkien
---------------------------------------------------------------------
GnuPG Fingerprint: 6940 87C5 6610 567F 1E95 CB5E FC98 E8CD E0AA D460
Flo, thanks for the jack suggestion. I definitely do need to spend some
time working with jack on my system. Also, thanks for the great low
latency documentation on tapas (ugh!). After getting 2.6 set up with
Preemptible Kernel (Low-Latency Desktop), fiddling with my hardware to
get the soundcard IRQs right, and some tweaking with chrt, all I can say
is "wow. WOW!".
I don't think there is anything you can do on a 2.4 kernel to get close
to this level of performance and control. Sweet! I almost giggle as the
entire system temporarily hangs while linuxsampler loads gig files at
outrageous speed. Also, I wanted to let you know that after a bit more
hacking I was able to get dshare working with the ice1712. It's awesome.
I can send linuxsampler out of channels 1-4 and have fluidsynth playing
out of channels 5 & 6, with 2 channels to spare for ecasound or
something... all simultaneously, at very low latency, & hardly any
processor load. I paid for the hardware... might as well take advantage
of it. Besides, this way I can give my processor a break, under-clock
it, & keep fan volume very low. Here is a snippet from asound.conf. -Garett
pcm_slave.66_slave {
    pcm "hw:1,0"
    channels 8
    rate 44100
    buffer_size 256
    period_size 128
}
pcm.66ch1234_dshare {
    type dshare
    ipc_key 18273645
    slave 66_slave
    bindings.0 0
    bindings.1 1
    bindings.2 2
    bindings.3 3
}
pcm.66ch1234 {
    type plug
    slave.pcm "66ch1234_dshare"
}
pcm.66ch56_dshare {
    type dshare
    ipc_key 18273645
    slave 66_slave
    bindings.0 4
    bindings.1 5
}
pcm.66ch56 {
    type plug
    slave.pcm "66ch56_dshare"
}
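The two spare channels should follow the same pattern (an untested sketch; the device names here are made up, and note that every dshare instance sharing 66_slave must use the same ipc_key):

```
pcm.66ch78_dshare {
    type dshare
    ipc_key 18273645
    slave 66_slave
    bindings.0 6
    bindings.1 7
}
pcm.66ch78 {
    type plug
    slave.pcm "66ch78_dshare"
}
```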
Florian Schmidt wrote:
> On Thu, 15 Sep 2005 22:40:32 -0600
> Garett Shulman <shulmang(a)colorado.edu> wrote:
>
>> Hello, I have been fooling around with my alsa asound.conf in an attempt
>> to take advantage of the hardware mixer in my ice1712 and have multiple
>> apps output to it.
>
>
> The hw mixer on the ice1712 is not doing what one normally refers to as
> hardware mixing with consumer grade cards (i.e. allowing several apps
> concurrent access at the same time). It mixes and routes channels from
> its single 10 in/12 out channel device.
>
> The way to get concurrent access with an ice1712 based card is _software
> mixing_. That is, ALSA needs to do it (or whatever sounddriver you use).
>
> I would urge you to take a very long look at JACK though. LinuxSampler
> has excellent jack support and most apps made for people creating music
> usually have jack support, too. You'll save yourself lots of hassles
> (setting up jackd isn't so tough when you read some docs). It is very
> simple to route specific LS patches to specific output channels on your
> ice based card when using jack.
>
>> In order to accomplish this I need linuxsampler to be
>> able to access devices I set up in asound.conf instead of just hw:x,x
>> devices. This was easy enough to accomplish by nuking all of the '"hw:"
>> +' code from AudioOutputDeviceAlsa.cpp. The next issue relates to the
>> fact that the alsa dshare plugin requires that all of the "virtual
>> devices" that I create from the ice1712 card share the same buffer_size,
>> period_size, periods, & period_time settings. So, linuxsampler crashes
>> when it tries to set buffersize and periods. I figured I would just find
>> out what linuxsampler was trying to use for those values, set up the same
>> values in the asound.conf and then comment out the code that sets them in
>> linuxsampler. This seems to work except that when linuxsampler connects
>> to the device I notice clicks and pops from the device & the output from
>> linuxsampler is rather distorted. I suspect that I'm just not setting
>> the values quite right in asound.conf. However, I guess I also may be
>> just mangling linuxsampler beyond proper function. :) I was wondering
>> if anyone has any suggestions. When linuxsampler tries to set the
>> buffersize it is using FragmentSize=128 and Fragments=2. So I am
>> interpreting this as period_size=128, periods=2, & buffer_size=256. The
>> slave device in asound.conf looks like this:
>> pcm_slave.66_slave {
>>     pcm "hw:0,0"
>>     channels 8
>>     rate 48000
>>     buffer_size 256
>>     period_size 128
>>     periods 2 #I also tried 1 here... counting from 0...
>>     period_time 0
>> }
>> Then I create some dshare devices from this and some plug devices for
>> each dshare device and connect linuxsampler to one of the plug devices.
>> Any suggestions would be greatly appreciated. -Garett
>
>
> dshare still won't allow you to have multiple apps use your ice1712
> card. You'll need dmix for that. Have a look at alsa.opensrc.org if you
> want to go this route. dmix will probably kill latency though.
>
> But better take a look at jackit.sf.net.
>
> Regards,
> Flo
>