On Sunday 13 March 2005 at 18:04, linux-audio-dev-request(a)music.columbia.edu
wrote:
> Subject: Re: [linux-audio-dev] ALSA OSS Emulation and emu10k1
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <1110655551.6865.0.camel@mindpipe>
> Content-Type: text/plain
>
> On Sat, 2005-03-12 at 19:51 +0100, Romain Beauxis wrote:
> > How can I manage to use the skype device as an OSS device under skype?
>
> AIUI skype is closed source. Has anyone asked them when they plan to
> add proper ALSA support?
I think they surely will:
http://www.skype.com/help/guides/soundsetup_linux.html
Furthermore:
http://195.38.3.142:6502/skype/
seems very interesting!
Romain
Hi LAD,
If ever you need a high precision A-weighting filter (as used for
sound level metering), you can find one in the usual place:
<http://users.skynet.be/solaris/linuxaudio>
The tarball contains a C++ class implementing the filter (easily
converted to C if you want that), and both a JACK in-process client
and a LADSPA plugin using it.
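For reference, the analog A-weighting magnitude curve itself (as defined in IEC 61672) can be evaluated directly. This is just the standard target response that such a filter approximates, not the implementation from the tarball; the function name and constants layout below are my own:

```python
# The standard A-weighting magnitude curve (IEC 61672), evaluated
# analytically. This is the target response an A-weighting filter
# approximates, not the code from the tarball above.
import math

def a_weight_db(f):
    """Return the A-weighting gain in dB at frequency f (Hz)."""
    f2 = f * f
    # Pole frequencies of the analog weighting network, in Hz
    p1, p2, p3, p4 = 20.598997, 107.65265, 737.86223, 12194.217
    num = (p4 ** 2) * f2 * f2
    den = ((f2 + p1 ** 2)
           * math.sqrt((f2 + p2 ** 2) * (f2 + p3 ** 2))
           * (f2 + p4 ** 2))
    # The +2.00 dB offset normalizes the curve to 0 dB at 1 kHz
    return 20.0 * math.log10(num / den) + 2.00
```

A quick sanity check: the curve is 0 dB at 1 kHz by construction, and about -19 dB at 100 Hz, matching the published tables.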
--
FA
I am Paul, the author of an open source (GPL) software
synthesizer for Linux and Windows (it's at:
http://zynaddsubfx.sourceforge.net).
I am writing this mail to you because I have seen your
program (Mammuth) and the way it processes sound,
using long-term FFTs.
I developed a synthesis technique that uses long-term FFTs
(no windowing) and is very intuitive, even for a
musician. I implemented this idea in my softsynth
(as the "PADsynth" module); it produces very good
results, and the idea itself is very simple.
To understand and use this intuitively, I highly
recommend reading about what I call the "bandwidth
of each harmonic" at
http://www.kvraudio.com/forum/viewtopic.php?t=74129 .
The basic idea of the bandwidth of each harmonic is
that the harmonics of a sound have larger
bandwidths at higher frequencies. This happens in
choirs, in detuned instruments, with vibrato, etc.; some
ways to obtain this bandwidth are well known (but there
are others ;-) ).
Now, if I take the bandwidth of each harmonic into
account when doing long-term FFT synthesis, I get a
very beautiful sound and can easily control the
ensemble effect of the instrument.
So the algorithm is very simple:
1) generate a long real array that contains the
amplitudes of the frequencies (its graph looks like
this: http://zynaddsubfx.sourceforge.net/doc/paul1.png )
2) convert this array to complex, choosing the
phases at random (this is the key: the phases are
random)
3) do a long-term IFFT of the complex array
4) the result is a perfectly looped sound that can be
played at several pitches
5) enjoy the beautiful sound you obtain ;)
As you see, this is very intuitive (even from a
musical perspective). Of course, I made some variations
on how I generate the array, and I can even make
non-musical sounds like noises or metallic sounds.
This is implemented in ZynAddSubFX and, because it's
open source software, it can be studied (look at
src/Params/PADnoteParameters.C in the source code
tree).
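The steps above can be sketched in a few lines. This is a minimal illustration of the idea, not the actual ZynAddSubFX code; the Gaussian profile for each harmonic's bandwidth and all parameter names are my own assumptions:

```python
# A minimal sketch of the PADsynth idea: build a magnitude spectrum
# with a "bandwidth" around each harmonic, randomize the phases, then
# take one long IFFT to get a perfectly loopable wavetable.
import numpy as np

def padsynth(n=262144, samplerate=44100.0, f0=440.0,
             amplitudes=(1.0, 0.5, 0.25), bandwidth_hz=40.0, seed=0):
    """Return a seamlessly loopable waveform of n samples."""
    f = np.arange(n // 2 + 1) * samplerate / n   # bin frequencies in Hz
    spectrum = np.zeros(n // 2 + 1)
    for k, a in enumerate(amplitudes, start=1):
        freq = f0 * k                  # k-th harmonic frequency
        bw = bandwidth_hz * k          # wider bands for higher harmonics
        # Gaussian amplitude profile centred on the harmonic
        spectrum += a * np.exp(-((f - freq) / bw) ** 2)
    # The key step: a random phase for every bin
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, n // 2 + 1)
    complex_spectrum = spectrum * np.exp(1j * phases)
    # Long-term IFFT; the result loops with no discontinuity
    wave = np.fft.irfft(complex_spectrum, n)
    return wave / np.max(np.abs(wave))  # normalize to [-1, 1]
```

Because the spectrum is defined only on exact FFT bins, the output repeats perfectly when played as a loop, which is what makes the result usable as a wavetable at several pitches.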
Paul
__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around
http://mail.yahoo.com
Hi all!
I managed to get half of the work with a custom .asoundrc file:
pcm.prear {
    type plug
    slave.pcm "rear"
}
pcm.dsnoop {
    ipc_key 1027
    ipc_key_add_uid true
    type dsnoop
    slave.pcm "hw:0,0"
}
pcm.skype {
    type asym
    playback.pcm "prear"
    capture.pcm "dsnoop"
}
pcm.dsp1 {
    type plug
    slave.pcm "skype"
}
ctl.mixer1 {
    type hw
    card 0
}
Then I got an ALSA PCM device that uses the Wave Surround as an output, mapped
to /dev/dsp1 when running under aoss.
I tried to record from the dsp1 with sound-record (oss recorder) and that
worked fine!
BUT: skype doesn't support aoss... :-/
I thought about launching it with artsd, but I need TWO devices: /dev/dsp(0) as
the ring device and /dev/dsp1 as the phone device, and artsd cannot handle two
devices, as far as I know...
How can I manage to use the skype device as an OSS device under skype?
The second trick would then be to set the Main volume control for skype (and
dsp1) to the Wave Surround control.
I couldn't find anything on this, except for an OSS device, BUT I don't have
any /proc/asound/oss entry for the dsp1 card since it emerges from aoss... :-/
What params can the ctl section use? (I couldn't find any.)
Thx!
Romain
> Hi,
>
> I was wondering, is it possible to assign /dev/dspX devices to the
> secondary and tertiary PCM devices on an emu10k1?
>
> I have a SB Live! Platinum with LiveDrive IR. The stereo out is connected
> to my regular set of speakers. The surround output is connected to an
> earphone headset; its mic is connected to the mic input on my sound card.
>
> Also, another microphone is connected to the mic/line on the LiveDrive.
>
> Now, I use skype, which is closed source. It uses OSS devices and aoss will
> not work with it. I would like to have a /dev/dspX device that records from
> the mic input and plays back to the surround output, so that skype, and
> skype only, will use the headset.
>
> The headset works fine in alsa mode and alsa apps can use it perfectly well.
>
> I tried all /dev/dspX and /dev/adspX devices, to no avail. I tried aoss
> with .asoundrc modifications, no luck. I even read the driver source, but
> I'm not really conversant with the structure of the driver and couldn't
> find anything useful. It would take me forever to figure it out from the
> source code.
>
> Is it possible, maybe with module parameters, to make ALSA do this? Would
> it need a patch? If yes, does someone have one? If not, what would have to
> be done, and where, to make that work?
>
> Melanie
Hello,
I'm starting a student radio station at IUPUI in Indianapolis, Indiana
and I want our entire audio infrastructure to be based on Linux. I've
got a rough sense of all the apps we need and what apps to setup on
which computers, but I thought I'd run the blueprints by you guys to see
if you could give me any feedback.
Streaming/Web Server: Runs apache and icecast or the icecast mod for Apache.
Automation Computer: Runs some sort of playback program, I've been
keeping my eyes on LiveSupport http://www.campware.org/ to schedule and
automate the station when DJs aren't present.
Audio Archive: File Server for our digital library, probably all FLAC
files, maybe Ogg, but I think we want FLAC in case we want to burn CDs.
And this is the part that I need help on...
Production Computer... so I've been tooling around with JACK and Ardour
and MusE (not to be confused with MuSE) and other JACK apps, and it's all
really cool and exciting. I never got sound input to really work in
Linux until a couple of weeks ago. Yay for the 2.6.8+ kernels. So
here are my thoughts on setting up a workstation, and I don't even know
if this is possible, but that's why I'm mailing you guys. One department
has kindly donated a brand new Dell Poweredge Dual Xeon 2.4 ghz somethin
or other. The rest of our computers are from the university junkyard of
midgrade PowerPC G4s and Pentium 3s. So the Poweredge is our gem
computer out of all the other crappy computers. Is there any way for me
to set up the speedy new poweredge as some kind of audio production
renderfarm, and get the PPCs and the Pentium 3s to connect to it as
production terminals? Cause, although multi-tracking on the G4s and
Pentium 3s is possible, doing extensive work with FX plugins is probably
out of the question.
See what I'm getting at? Also, the Poweredge has about a 500gb raid
system with it, which would be nice to use for storing our audio on and
maybe even using as our digital archive as well, but that might be
pushing it if we are doing audio production work on it as well? I'd
imagine this might be the case, but I don't see why ftping flac files on
a local network would be too much of a burden on the raid drive or dual
processors. Another reason why it would be nice to be able to connect to
the Poweredge remotely to do audio work is that the Poweredge makes
about as much noise as a 747. So... it's not exactly an audio production
friendly unit.
So these are my thoughts. Am I crazy... or is there some magical way to
make this happen?
- Ben Racher
bracher(a)iupui.edu
--- james(a)dis-dot-dat.net wrote:
> On Thu, 10 Mar, 2005 at 07:42PM -0500, Dave
> Robillard spake thus:
> > On Wed, 2005-09-03 at 23:13 -0800, fred doh wrote:
> > > Hi,
> > >
> > > I need to develop an audio driver (OSS on kernel 2.4)
> > > for a new hardware. I didn't find any resource
> > > explaining how to do that, besides looking at the
> > > sources of other drivers. Could someone direct me to
> > > an appropriate resource?
> No offence meant, but from your questions, you have less of a clue
> than I do, and I wouldn't fancy writing a driver from scratch.
>
> Are you sure you're not biting off a little too much here?
>
> On the other hand, even if you give up halfway through, you'll
> probably have learned a hell of a lot.
>
> But, as Dave says below, why? OSS? 2.4? Are there good reasons for
> this?
> > Why on earth would you write an OSS driver for 2.4
> at this point in
> > time?
> >
Writing for OSS is a constraint, not a choice. I'm not
starting from scratch; there is an older driver that
I have to port, and it was written for OSS. Later,
if there is time for it, I will do the ALSA
conversion.
I've read docs about writing kernel drivers, so my
questions are not about drivers in general, but really
directed at audio drivers. I didn't find any resource
about that. The OSS docs seem to be directed at users
of the API, not at driver developers.
For example, which functions from the file_operations
structure have to be implemented, and for what purpose,
in the case of audio?
-fred
Hello. I wonder if anybody has any ideas on this one:
I'm running a Gentoo 2.4 kernel with a Delta 1010 sound card, and wrote an
asound.conf to route audio to various channels for multiple instances of
mplayer - very cool - but this fluke happened this morning that it'd be
awesome to understand...
I got in and it had been looping three mplayer videos (mpgs) all night.
They had gotten janky, i.e. they weren't really playing anymore, so I did a
killall mplayer - nice. Then I tested some playback, one at a time, of the
same movies. They routed fine, everything was great, except, on analog
outputs 3, 5 and 7 (not 1), the audio was a little garbled. Like a digital
garble type of thing almost sounding like a sample rate mismatch or
something. I tested with an .mp3 and the same happened, and I tested an mp3
with alsaplayer and it happened, and then I did the tests again, and the
second time, the .mp3 and the alsaplayer instance didn't do the weird thing,
but the .mpgs still did it.
So I rebooted it and it went away. *shrug*
Anybody have any ideas on that?
-----------------
Aaron Trumm
www.nquit.com
-----------------
Hi,
I need to develop an audio driver (OSS on kernel 2.4)
for new hardware. I didn't find any resource
explaining how to do that, besides looking at the
sources of other drivers. Could someone direct me to
an appropriate resource?
Also, I need to build the driver for a target kernel
that is not the one I'm currently running. I'd like to
have the driver build against the target kernel and
have it included in the config options when I do a
"make menuconfig". Is there an article explaining how
to do that?
Thanks,
-fred
Hi!
At http://freepats.opensrc.org there is a Mellotron sample in the FLAC
format. I'm very interested in this sound. I'd like to see it in a soundfont,
so it can be used with fluidsynth. Unfortunately, I can only convert the
sample to the .wav or .raw format and split the different samples from one
another. The actual soundfont creation (with Swami) I can't do. Would anyone
be interested in this kind of project?
As said, I'd convert and split the FLAC file and do what else I can, but
for the final Swami touch I'd need some help, because I'm blind.
I'm looking forward to hearing from someone!
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net - the Linux TextBased Studio guide
hi list,
I have problems installing the emi26 on my box running FC3 & CCRMA. I
rebuilt the kernel with the necessary module and reinstalled ALSA
afterwards. It looks fine, actually; even when I start ALSA it loads
the module for the emi26. Unfortunately, it does not work yet. What did
I miss?
my settings in modprobe.conf look like this:
alias snd-card-0 emi26
alias sound-slot-0 snd-card-0
options snd-card-0 index=0
alias snd-card-1 snd-usb-audio
options snd-card-1 index=1
alias snd-card-2 snd-intel8x0
alias sound-slot-2 snd-intel8x0
options snd-card-2 index=2
alias snd-card-3 snd-virmidi
options snd-card-3 index=3
I also did 'depmod -a' afterwards and restarted ALSA. The messages
seemed to show emi26 being loaded, and also when I do this:
[karlos@posthuman ~]$ lsmod
Module Size Used by
snd_virmidi 8384 0
snd_seq_virmidi 12288 1 snd_virmidi
snd_seq_midi_event 12032 1 snd_seq_virmidi
snd_seq 59536 2 snd_seq_virmidi,snd_seq_midi_event
snd_intel8x0 36800 1
snd_ac97_codec 76792 1 snd_intel8x0
snd_usb_audio 70336 0
snd_pcm_oss 57632 0
snd_mixer_oss 23552 2 snd_pcm_oss
snd_pcm 99204 4 snd_intel8x0,snd_ac97_codec,snd_usb_audio,snd_pcm_oss
snd_timer 30340 2 snd_seq,snd_pcm
snd_usb_lib 17408 1 snd_usb_audio
snd_rawmidi 29472 2 snd_seq_virmidi,snd_usb_lib
snd_seq_device 13196 2 snd_seq,snd_rawmidi
snd 62052 13 snd_virmidi,snd_seq_virmidi,snd_seq,snd_intel8x0,snd_ac97_codec,snd_usb_audio,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer,snd_usb_lib,snd_rawmidi,snd_seq_device
soundcore 14304 2 snd
emi26 168704 0
but it is not claimed by any device. I have it connected and the blue
LED is lit, so I wonder what the heck I am doing wrong. Should I see more
lights, green ones? I have not used the emi26 with anything else, so I
don't know.
any ideas?
thanks,
Karsten