if anyone is interested, here is a really trivial patch to terminatorX's
tX_mouse.cc file: it lets you trigger the turntables with the numeric
keypad (or you can just tweak the switch cases :) without having to
focus the specific turntable first during "grab mode".
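roughly, the idea is just a keyval switch in the key-press handler,
something like this (purely illustrative, not the actual tX_mouse.cc
code; trigger_turntable() is a made-up name):

#include <gtk/gtk.h>
#include <gdk/gdkkeysyms.h>

extern void trigger_turntable(int n);   /* hypothetical: whatever triggers turntable n */

static gboolean on_key_press(GtkWidget *widget, GdkEventKey *event, gpointer data)
{
    switch (event->keyval) {
    case GDK_KP_1: trigger_turntable(0); break;
    case GDK_KP_2: trigger_turntable(1); break;
    case GDK_KP_3: trigger_turntable(2); break;
    /* ... and so on for the rest of the keypad ... */
    default: return FALSE;              /* not a keypad key, let others handle it */
    }
    return TRUE;
}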
i'm attaching it here, it's shorter than some signatures i've seen on
other mailing lists :)))
bye everyone :)
Willy / BeHappy_
All this (depressing) talk of the various encumbered formats competing
to be the next CD standard reminded me of HDCD.
What's the deal with it? AFAICT it's another closed thing that we'll
never be able to support properly. The only media player that detects
my HDCDs is WMP of course.
Lee
>From: Alfons Adriaensen <fons.adriaensen(a)alcatel.be>
>
>The only limitation of not using MLP would be that you can't have six
>channels at 24/96.
Why not? FLAC plus an open DVD-Audio format for N channels.
Juhana
Hi there,
I think I am not the only one here to have heard about the so-called '3D
ray tracing CPU' (e.g.
http://graphics.stanford.edu/papers/rtongfx/rtongfx.pdf, and a recent
news item on /. about some demos at CeBIT). There were also some emails here
recently about using GPUs for sound synthesis, mainly about the
shortcomings of those techniques (memory bandwidth problems, differences in
rate scales, if I understood correctly).
Unfortunately, I know nothing about computer graphics, and I don't know
exactly what this ray tracing CPU technology is (maybe vaporware?), but
I wondered whether it could be used for audio technology, e.g. RT reverbs,
etc. Does anyone have an idea about the possible use of this kind of
chip for audio processing?
Cheers,
David
Hi,
This may be a general IPC question, but I feel lost, so I thought you might help me
as you did in the past with some other issues.
In Jackbeat (written in C), I have the gui running in one thread (the gtk main
loop) and jack in its own thread. Currently, when tracks get updated, the main loop
sends a message to the jack thread and then waits for an acknowledgement. I
currently use syscalls (message queues) for these messages, but that's not my
point (I'll later turn that into syscall-free fifos).
Waiting for an ack from the jack thread has a consequence: it locks the gui for
a brief moment, which seems to drive my spin buttons mad: when clicked they
will sometimes step the value by 1, sometimes by 2, etc.
Additionally, I think the gui should never lock, for comfort. So I thought about
a "shadow" data structure. It would act as an intermediary area between my two
threads.
The jack thread works on what I call a "sequence". When processing sound it
relies on a structure called sequence_t. I'm about to add a new member to this
structure, called shadow, which would itself be of the sequence_t type.
This nested shadow structure would never be accessed by the jack thread. The
public sequence functions could safely update this shadow, so that the gui can
read from it at any time without any latency.
Message queues would still be used to let the jack thread itself update the
"real" sequence. This is where some latency would remain: waiting for the jack
thread to get in sync. But the gui would never lock.
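A rough sketch of what I have in mind (field names and the queue call are
made up, not the actual Jackbeat code; note the shadow has to be a pointer,
since a struct cannot contain itself by value):

#define MAX_TRACKS 16               /* illustrative sizes */
#define MAX_BEATS  32

typedef struct sequence sequence_t;

struct sequence {
    int  ntracks, nbeats;
    char active[MAX_TRACKS][MAX_BEATS];
    sequence_t *shadow;             /* copy the gui may read at any time;
                                       the jack thread never touches it */
};

/* stands in for whatever message queue / fifo reaches the jack thread */
static void post_to_jack_thread(sequence_t *seq, int track, int beat, char on)
{
    /* stub: in the real program this would enqueue a message, and the
       jack thread would later apply it to the "real" sequence */
    (void) seq; (void) track; (void) beat; (void) on;
}

/* gui-side setter: update the shadow right away (no waiting), then ask
   the jack thread to apply the same change to the real data */
void sequence_set_beat(sequence_t *seq, int track, int beat, char on)
{
    seq->shadow->active[track][beat] = on;
    post_to_jack_thread(seq, track, beat, on);
}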
I'm not sure this is a very clear description, but do you have any advice about
this kind of issue? Is my idea a good one?
Cheers
--
og
>The firmware wasn't loaded.
>> ok. this is the output of lsusb after I modprobed emi26:
>> Bus 004 Device 002: ID 086a:0102 Emagic Soft-und Hardware GmbH
>The emi26 module expects the device to have product ID 0100. It seems
>you have a new hardware revision.
>I'll see if I can update the emi26 driver.
Mine comes up as 102; I've had to edit the driver source in the
past to get it to work. Could you make sure 102 will work too?
Thanks!
athlon:aab > lsusb|grep -i emagic
Bus 001 Device 005: ID 086a:0102 Emagic Soft-und Hardware GmbH
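For reference, this is roughly the hunk I've been carrying locally: just
adding the new product ID to the driver's USB ID table (the table and
symbol names in the real emi26.c may differ, this is from memory):

/* in the emi26 driver's device ID table */
static const struct usb_device_id emi26_ids[] = {
	{ USB_DEVICE(0x086a, 0x0100) },	/* ID the driver expects today */
	{ USB_DEVICE(0x086a, 0x0102) },	/* newer hardware revision, as in the lsusb output above */
	{ }				/* terminating entry */
};
MODULE_DEVICE_TABLE(usb, emi26_ids);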
Hi LAD,
If ever you need a high precision A-weighting filter (as used for
sound level metering), you can find one in the usual place:
<http://users.skynet.be/solaris/linuxaudio>
The tarball contains a C++ class implementing the filter (easily
converted to C if you want that), and both a JACK in-process client
and a LADSPA plugin using it.
--
FA
I am Paul, the author of an open-source (GPL) software
synthesizer for Linux and Windows (it's at:
http://zynaddsubfx.sourceforge.net).
I am writing this mail to you because I have seen your
program (Mammuth) and the way it processes the
sound, by using long-term FFTs.
I made a synthesis technique that uses long-term FFTs
(no windowing) and is very intuitive, even for a
musician. I implemented this idea in my softsynth
(as the "PADsynth" module); it produces very good
results, and the idea itself is very simple.
To understand and use this intuitively, I very much
recommend reading about what I call the "bandwidth
of each harmonic" at
http://www.kvraudio.com/forum/viewtopic.php?t=74129 .
The basic idea of the bandwidth of each harmonic is
that the harmonics of a sound have larger
bandwidths at higher frequencies. This happens in
choirs, in detuned instruments, in vibrato, etc.; some
ways to obtain the bandwidth are well known (but there
are others ;-) ).
Now, if I take the bandwidth of each harmonic into account
when doing long-term FFT synthesis, I get a
very beautiful sound and I can easily control the
ensemble effect of the instrument.
So the algorithm is very simple (a small C sketch follows the steps):
1) generate a long real array that contains the
amplitudes of the frequencies (its graph looks like
this: http://zynaddsubfx.sourceforge.net/doc/paul1.png )
2) convert this array to complex, by choosing the
phases at random (this is the key: the phases are
random)
3) do a long-term IFFT of the complex array
4) the result is a perfectly looped sound that can be
played at several pitches
5) enjoy the beautiful sound you obtained ;)
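Here is a minimal sketch of those steps in C, using FFTW. It is only an
illustration of the idea, not the ZynAddSubFX code; the Gaussian profile,
the 1/h amplitudes and all the constants are invented for the example.

#include <math.h>
#include <stdlib.h>
#include <fftw3.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N (1 << 18)                 /* length of the looped sample, in frames */

int main(void)
{
    double *amp = calloc(N / 2 + 1, sizeof *amp);              /* step 1 */
    fftw_complex *spec = fftw_malloc(sizeof(fftw_complex) * (N / 2 + 1));
    double *out = fftw_malloc(sizeof(double) * N);

    double f1 = 261.0 / 44100.0;    /* fundamental as a fraction of the sample rate */
    for (int h = 1; h <= 64; h++) {                            /* 64 harmonics */
        double center = f1 * h * N;                            /* bin of this harmonic */
        double bw = 0.01 * h * f1 * N;                         /* bandwidth grows with the harmonic */
        for (int i = 0; i < N / 2 + 1; i++) {
            double x = (i - center) / bw;
            amp[i] += exp(-x * x) / h;                         /* Gaussian bump, 1/h amplitude */
        }
    }

    for (int i = 0; i < N / 2 + 1; i++) {                      /* step 2: random phases */
        double phase = 2.0 * M_PI * rand() / RAND_MAX;
        spec[i][0] = amp[i] * cos(phase);
        spec[i][1] = amp[i] * sin(phase);
    }

    fftw_plan p = fftw_plan_dft_c2r_1d(N, spec, out, FFTW_ESTIMATE);
    fftw_execute(p);                /* steps 3 and 4: 'out' now loops seamlessly */

    /* ... normalize 'out', write it to a wav file, loop it at any pitch ... */
    fftw_destroy_plan(p);
    fftw_free(spec); fftw_free(out); free(amp);
    return 0;
}

(build with something like: gcc padsketch.c -lfftw3 -lm)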
As you can see, this is very intuitive (even from a
musical perspective). Of course I made some variations
of how I generate the array, and I can even make
non-musical sounds like noises or metallic sounds.
This is implemented in ZynAddSubFX and, because it's
open source software, it can be studied (look at
src/Params/PADnoteParameters.C in the source
tree).
Paul
Hi all!
I managed to get half of the work done with a custom .asoundrc file:
"pcm.prear {
type plug
slave.pcm "rear"
}
pcm.dsnopp {
ipc_key 1027
ipc_key_add_uid true
type dsnoop
slave.pcm "hw:0,0"
}
pcm.skype {
type asym
playback.pcm "prear"
capture.pcm "dsnoop"
}
pcm.dsp1 {
type plug
slave.pcm "skype"
}
ctl.mixer1 {
type hw
card 0
}"
Then I got an ALSA pcm device that uses the Wave Surround as an output, mapped
to /dev/dsp1 when running under aoss.
I tried to record from dsp1 with sound-record (an OSS recorder) and that
worked fine!
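FWIW, the ALSA side can also be exercised directly, without aoss, with
something like:
  aplay   -D dsp1 some.wav          (should come out of the rear output)
  arecord -D dsp1 -f cd test.wav    (should capture from hw:0,0 via dsnoop)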
BUT: skype doesn't support aoss... :-/
I thought about launching it with artsd, but I need TWO devices: /dev/dsp(0) as
the ring device and /dev/dsp1 as the phone device, and artsd cannot handle two
devices, as far as I know...
How can I manage to use the skype device as an OSS device under skype?
Then the second trick would be to set the main volume control for skype (and
dsp1) to be the Wave Surround control.
I couldn't find anything on this, except for an OSS device, BUT I don't have
any /proc/asound/oss card for dsp1 since it emerges from aoss... :-/
What params can the ctl section use? (I couldn't find any.)
Thx!
Romain
> Hi,
>
> I was wondering, is it possible to assign /dev/dspX devices to the
> secondary and tertiary PCM devices on an emu10k1?
>
> I have a SB Live! Platinum with LiveDrive IR. The stereo out is connected
> to my regular set of speakers. The surround output is connected to an
> earphone headset, and its mic is connected to the mic input on my sound card.
>
> Also, another microphone is connected to the mic/line input on the LiveDrive.
>
> Now, I use skype, which is closed source. It uses OSS devices and aoss will
> not work with it. I would like to have a /dev/dspX device that records from
> the mic input and plays back to the surround output, so that skype, and
> skype only, will use the headset.
>
> The headset works fine in alsa mode and alsa apps can use it perfectly well.
>
> I tried all /dev/dspX and /dev/adspX devices, to no avail. I tried aoss
> with .asoundrc modifications, no luck. I even read the driver source, but
> I'm not really conversant with the structure of the driver and couldn't
> find anything useful. It would take me forever to figure it out from the
> source code.
>
> Is it possible, maybe with module parameters, to make alsa do this? Would
> it need a patch, if yes, does someone have one? If no, what would have to
> be done where to make that work?
>
> Melanie