Paul Davis
>On Tue, 2005-09-20 at 08:29 +0300, Aaron wrote:
>> please save me from another lisp/scheme scriptable application....
>> > The scripting should be in a language easy enough for a non programer to
>> use.
>>
>> Is xsl a possibility or is there a scripting language that is easier
>> than lisp/scheme?
>
>breathe deeply. think of snakes. say "python".
Are you serious? Do you know python? I hope not...
I don't want to start a flame war over programming languages,
but I know both scheme and python very well, and would
never consider python as an extension language again.
--
Slat 0.3 is now finished.
Windows size can be set with -s
Notes are linearly spaced (to prevent the headf**k)
Flat/sharp notes are darkened
http://blog.dis-dot-dat.net/2005/09/slat-03.html
--
"I'd crawl over an acre of 'Visual This++' and 'Integrated Development
That' to get to gcc, Emacs, and gdb. Thank you."
(By Vance Petree, Virginia Power)
Hi!
I am currently working on digesting the USB-MIDI Class Definition ...
http://www.usb.org/developers/devclass_docs/midi10.pdf
As I understand it, you can have up to 16 USB MidiStreams (MS), each equivalent
to a physical MIDI cable (and each "cable" having 16 virtual
MIDI channels). There is a bandwidth limit of one 3-byte MIDI event/ms,
which makes sense given the bandwidth of a physical MIDI cable.
The MIDI-USB device also has a control channel without any endpoints
(without any physical MIDI jacks). Again, only as far as I have
understood; the control channel is not a MidiStream and should therefore
be able to accept a significantly higher transfer rate than the physical
MidiStreams.
Question: How do I determine the max transfer rate for the control
channel (bInterfaceSubClass == 1) as opposed to the physical MIDI-outs
(bInterfaceSubClass == 3)?
This is for setting LEDs partially lit for more than one parameter, by
pulse-width modulation over USB-MIDI.
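For what it's worth, the one-3-byte-event-per-millisecond figure can be sanity-checked against the DIN MIDI line rate. This is just a back-of-the-envelope sketch (the constant names are mine, not from the spec):

```python
# Back-of-the-envelope check of the "one 3-byte MIDI event per ms" figure.
# A physical DIN MIDI cable runs at 31250 baud; each byte on the wire is
# framed as 1 start bit + 8 data bits + 1 stop bit = 10 bits.
BAUD = 31250        # bits per second on a physical MIDI cable
BITS_PER_BYTE = 10  # start + 8 data + stop

def ms_per_message(n_bytes):
    """Wire time, in milliseconds, for an n-byte MIDI message."""
    return n_bytes * BITS_PER_BYTE / BAUD * 1000.0

# A Note On (status + key + velocity) is 3 bytes:
print(f"{ms_per_message(3):.2f} ms")  # 0.96 ms, i.e. roughly 1 event/ms
```

So the per-cable limit on the USB side simply mirrors what a real cable could carry anyway.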
mvh // Jens M Andreasen
When trying to compile rosegarden, I am getting the following error
during the ./configure:
checking if UIC has KDE plugins available... no
configure: error: you need to install kdelibs first.
I am running FC4 x86-64 with gcc4. Any idea? Has anyone encountered
the same problem?
I think it may be that Red Hat compiled Qt without the -threads
option.
Hello, I am trying to figure out what my Live 512 & alsa are capable of.
I have been trying to compare what I am seeing in the system with what I
have been able to find in literature and posts. I was wondering if
anyone might be able to offer any clarification. First of all, I am
trying to figure out what the different I/O I see are. In
/proc/asound/devices I see:
4: [0- 0]: hardware dependent
8: [0- 0]: raw midi
19: [0- 3]: digital audio playback This is digital mixer output?
18: [0- 2]: digital audio playback This is Synth or FX output?
26: [0- 2]: digital audio capture FX capture?
25: [0- 1]: digital audio capture Device 1?
16: [0- 0]: digital audio playback Is this the codec playback?
24: [0- 0]: digital audio capture Is this the codec capture?
0: [0- 0]: ctl
1: : sequencer
6: [0- 2]: hardware dependent
9: [0- 1]: raw midi
10: [0- 2]: raw midi Which midi devices are which?
What do the different numbers represent? subdevice: [card- device]: ?
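Reading those lines mechanically, my interpretation (an assumption, not something confirmed in the ALSA docs quoted here) is that the leading number is the character-device minor, and the bracket holds [card-device]. A small parser sketch along those lines:

```python
import re

# Hypothetical parser for /proc/asound/devices lines such as
#   "19: [0- 3]: digital audio playback"
# Assumed layout: minor: [card-device]: function.
LINE_RE = re.compile(r"\s*(\d+):\s*\[\s*(\d+)-\s*(\d+)\]:\s*(.+)")

def parse_device_line(line):
    m = LINE_RE.match(line)
    if not m:
        return None  # e.g. the "1: : sequencer" line has no [card-device]
    minor, card, device, kind = m.groups()
    return {"minor": int(minor), "card": int(card),
            "device": int(device), "type": kind.strip()}

print(parse_device_line("19: [0- 3]: digital audio playback"))
```

Under that reading, "19: [0- 3]" would be card 0, device 3, which at least lines up with the aplay -l device numbers rather than subdevices.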
This seems a bit reasonable but kind of contradicts the output I get
from aplay -l:
**** List of PLAYBACK Hardware Devices ****
card 0: Live [SB Live [Unknown]], device 0: emu10k1 [ADC
Capture/Standard PCM Playback]
Subdevices: 32/32
Subdevice #0: subdevice #0
...
Subdevice #31: subdevice #31
card 0: Live [SB Live [Unknown]], device 2: emu10k1 efx [Multichannel
Capture/PT Playback]
Subdevices: 8/8
Subdevice #0: subdevice #0
...
Subdevice #7: subdevice #7
card 0: Live [SB Live [Unknown]], device 3: emu10k1 [Multichannel Playback]
Subdevices: 1/1
Subdevice #0: subdevice #0
Here it would seem that device 2 does not have a subdevice 26 as
suggested by my (perhaps wrong) interpretation of /proc/asound/devices
Thanks. -Garett
Hi
I am writing a grant for a project I am doing. The potential funder
requires the archived audio be in AES 31 format.
Is there any application/lib etc. on Linux that outputs AES31?
Thanks
Aaron
My Turtle Beach Santa Cruz (cs46xx) card has a strange problem:
mic recording in JACK is really distorted.
Records fine (as fine as my cheap mic can) in Audacity.
arecord sounds fine as well.
When I record with TimeMachine in JACK, it sounds terribly distorted
(maybe saturated is the word).
Same mixer settings for all of the above. Has anyone else had luck
recording with this card in JACK?
--
Hans Fugal | If more of us valued food and cheer and
http://hans.fugal.net/ | song above hoarded gold, it would be a
http://gdmxml.fugal.net/ | merrier world.
| -- J.R.R. Tolkien
---------------------------------------------------------------------
GnuPG Fingerprint: 6940 87C5 6610 567F 1E95 CB5E FC98 E8CD E0AA D460
Flo, thanks for the jack suggestion. I definitely do need to spend some
time working with jack on my system. Also, thanks for the great low
latency documentation on tapas. After getting 2.6 set up with
Preemptible Kernel (Low-Latency Desktop), fiddling with my hardware to
get the soundcard IRQs right, and some tweaking with chrt, all I can say
is "wow. WOW!".
I don't think there is anything you can do on a 2.4 kernel to get close
to this level of performance and control. Sweet! I almost giggle as the
entire system temporarily hangs while linuxsampler loads gig files at
outrageous speed. Also, I wanted to let you know that after a bit more
hacking I was able to get dshare working with the ice1712. It's awesome.
I can send linuxsampler out of channels 1-4 and have fluidsynth playing
out of channels 5 & 6, with 2 channels to spare for ecasound or
something... all simultaneously, at very low latency, & hardly any
processor load. I paid for the hardware... might as well take advantage
of it. Besides, this way I can give my processor a break, under-clock
it, & keep fan volume very low. Here is a snippet from asound.conf. -Garett
pcm_slave.66_slave {
    pcm "hw:1,0"
    channels 8
    rate 44100
    buffer_size 256
    period_size 128
}
pcm.66ch1234_dshare {
    type dshare
    ipc_key 18273645
    slave 66_slave
    bindings.0 0
    bindings.1 1
    bindings.2 2
    bindings.3 3
}
pcm.66ch1234 {
    type plug
    slave.pcm "66ch1234_dshare"
}
pcm.66ch56_dshare {
    type dshare
    ipc_key 18273645
    slave 66_slave
    bindings.0 4
    bindings.1 5
}
pcm.66ch56 {
    type plug
    slave.pcm "66ch56_dshare"
}
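As a quick sanity check on the 66_slave timing values, ALSA's usual relation buffer_size = periods × period_size holds here, and the buffer depth gives the worst-case latency. A sketch with the constants copied from the snippet:

```python
# Values from the 66_slave definition above.
rate = 44100         # frames per second
buffer_size = 256    # frames in the ring buffer
period_size = 128    # frames per period (one wakeup's worth)

periods = buffer_size // period_size      # 2 periods
latency_ms = buffer_size / rate * 1000.0  # worst-case buffering delay
print(f"{periods} periods, ~{latency_ms:.1f} ms of buffering")
```

At 44100 Hz that works out to roughly 5.8 ms of buffering, which matches the "very low latency" impression above.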
Florian Schmidt wrote:
> On Thu, 15 Sep 2005 22:40:32 -0600
> Garett Shulman <shulmang(a)colorado.edu> wrote:
>
>> Hello, I have been fooling around with my alsa asound.conf in an attempt
>> to take advantage of the hardware mixer in my ice1712 and have multiple
>> apps output to it.
>
>
> The hw mixer on the ice1712 is not doing what one normally refers to as
> hardware mixing with consumer grade cards (i.e. allowing several apps
> concurrent access at the same time). It mixes and routes channels from
> its single 10 in/12 out channel device.
>
> The way to get concurrent access with an ice1712 based card is _software
> mixing_. That is, ALSA needs to do it (or whatever sounddriver you use).
>
> I would urge you to take a very long look at JACK though. LinuxSampler
> has excellent jack support and most apps made for people creating music
> usually have jack support, too. You'll save yourself lots of hassles
> (setting up jackd isn't so tough when you read some docs). It is very
> simple to route specific LS patches to specific output channels on your
> ice based card when using jack.
>
>> In order to accomplish this I need linuxsampler to be
>> able to access devices I set up in asound.conf instead of just hw:x,x
>> devices. This was easy enough to accomplish by nuking all of the '"hw:"
>> +' code from AudioOutputDeviceAlsa.cpp. The next issue relates to the
>> fact that the alsa dshare plugin requires that all of the "virtual
>> devices" that I create from the ice1712 card share the same buffer_size,
>> period_size, periods, & period_time settings. So, linuxsampler crashes
>> when it tries to set buffersize and periods. I figured I would just find
>> out what linuxsampler was trying to use for those values, setup the same
>> values in the asound.conf and then comment the code that sets them in
>> linuxsampler. This seems to work except that when linuxsampler connects
>> to the device I notice click and pops from the device & the output from
>> linuxsampler is rather distorted. I suspect that I'm just not setting
>> the values quite right in asound.conf. However, I guess I also may be
>> just mangling linuxsampler beyond proper function. :) I was wondering
>> if anyone has any suggestions. When linuxsampler tries to set the
>> buffersize it is using FragmentSize=128 and Fragments=2. So I am
>> interpreting this as period_size=128, periods=2, & buffer_size=256. The
>> slave device in asound.conf looks like this:
>> pcm_slave.66_slave {
>> pcm "hw:0,0"
>> channels 8
>> rate 48000
>> buffer_size 256
>> period_size 128
>> periods 2 #I also tried 1 here... counting from 0...
>> period_time 0
>> }
>> Then I create some dshare devices from this and some plug devices for
>> each dshare device and connect linuxsampler to one of the plug devices.
>> Any suggestions would be greatly appreciated. -Garett
>
>
> dshare still won't allow you to have multiple apps use your ice1712
> card. You'll need dmix for that. Have a look at alsa.opensrc.org if you
> want to go this route. dmix will probably kill latency though.
>
> But better take a look at jackit.sf.net.
>
> Regards,
> Flo
>
On Sat, 17 Sep 2005 at 11:20 +0200, Adrian Prantl wrote:
> i'm afraid this is slightly offtopic, but does anyone know of an ogg/
> vorbis plugin for the new iTunes running on 10.4?
XMMS through darwin ports works, and that is what I currently use. I've
heard good things about VLC, too.
While we're on the subject, I've done a little bit of research on the
problem with quicktime. The qtcomponents project that worked before qt7
did things as a component, when they arguably should have made a codec
from the start. Apple hadn't (and hasn't) solidified the API, and so
things stopped working in Tiger and also with QT7 on Panther.
I'm not sure, but if it had been done as a codec to begin with it might
still be working. In any case it looks like doing it as a codec is the
way to go at this stage. I think this will require basically taking
Apple's AudioCodec example and wiring it up to libvorbisfile. I and at
least one other person intend to do this "when I get time." If anyone
out there is good with Apple codecs or good with libvorbisfile, help
would be appreciated and would speed up the process.
See this discussion for more information: http://tinyurl.com/8z6rb
Also this bug reporter got some good info from Apple:
http://tinyurl.com/cy35h
--
Hans Fugal | If more of us valued food and cheer and
http://hans.fugal.net/ | song above hoarded gold, it would be a
http://gdmxml.fugal.net/ | merrier world.
| -- J.R.R. Tolkien
---------------------------------------------------------------------
GnuPG Fingerprint: 6940 87C5 6610 567F 1E95 CB5E FC98 E8CD E0AA D460