I don't think I understand this jackplug concept.
I'm trying to get an ALSA client, such as aplay, to play through jackd.
I've set up jackplug in /etc/asound.conf:
pcm.jackplug {
    type plug
    slave {
        pcm "jack"
    }
}

pcm.jack {
    type jack
    playback_ports {
        0 alsa_pcm:playback_1
        1 alsa_pcm:playback_2
    }
    capture_ports {
        0 alsa_pcm:capture_1
        1 alsa_pcm:capture_2
    }
}
I then tell aplay to use it:
#aplay -d jackplug foobar.wav
But aplay plays the file even if jackd is not started; why? Isn't
jackplug supposed to show up in my JACK connections while aplay is
playing? I seem to be missing something vital.
Is there some way to make a "fake audio device", like hw:3,0, that ALSA
applications can connect to, so that I can get the audio out, send it
through JACK, process it, and then send it to the real audio device?
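(One thing worth checking: aplay selects its device with capital -D,
while lowercase -d sets the playback duration in seconds, so
"aplay -d jackplug foobar.wav" never applies the device name and falls
back to the default PCM — which would explain playback working without
jackd.) To route every ALSA application through JACK without naming a
device each time, here is a minimal, untested sketch of an
/etc/asound.conf fragment, assuming the ALSA "jack" PCM plugin from
alsa-plugins is installed:

```
# Untested sketch: make the plug->jack chain the ALSA default,
# so plain ALSA apps get format conversion and are handed to JACK
# without an explicit -D option.
pcm.!default {
    type plug
    slave {
        pcm "jack"
    }
}
```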
--
Esben Stien is b0ef(a)esben-stien.name
http://www.esben-stien.name
irc://irc.esben-stien.name/%23contact
[sip|iax]:b0ef@esben-stien.name
On Wednesday 19 January 2005 19:02,
linux-audio-user-request(a)music.columbia.edu wrote:
> > So how would Raton-Conductor work? Again, not as simple as it sounds.
> > Minimally, one would move the mouse in an elliptical (or circular) motion
> > (as suggested by MagicBaton's instructions for beginners). The size of
> > the vertical diameter (or average diameter) would set the
> > volume/expression, with tempo or time-codes set by the change in vertical
> > direction from down to up. Real conducting patterns are more complex, but
> > these two principles would more or less remain.
>
> I started tinkering after I read your message, and I created a little
> gadget that traces mouse motions and recognizes fairly general conducting
> patterns. One can extract timing and intensity information for the purpose
> of generating MIDI clock events as well as MIDI controller values. I'm
> tentatively calling it Boa Conductor.
Cool!
>
> I've got a few questions:
> 1. Is anyone interested in a tool like Boa Conductor? What I've done so
> far was just for kicks; I now have to decide how much time and effort
> to put into polishing it.
I, of course, would be interested.
> 2. Are there any MIDI sequencers/players for Linux that can be driven
> by external clock messages? My understanding is that Rosegarden does
> not currently work as a slave but that may change in the future.
> MusE can work as a slave, but I never used it before (has anyone
> tried driving MusE with clock messages?). I don't think timidity
> expects to be driven by a MIDI clock. How about other MIDI players?
> 3. Would it make sense to have a feature that uses JACK Transport
> rather than MIDI clock?
There are a few alternatives. Not much of what I have on Windows or Linux
supports clock messages. MagicBaton was a MIDI player that added events based
on the mouse conducting.
Alternatives:
1. Run Boa in parallel with whatever sequencer or player is going through
JACK or such. Boa would then simply put out omni/overall level and tempo
events.
2. Run Boa as a plug-in to Rosegarden, MusE or another such program. In this
case, it would work on one track/channel and insert expression (and tempos),
or in an omni mode as above. For rehearsing (MagicBaton's parlance) one track,
one would probably want to disable tempo changes and conduct only expression.
Omni/overall would do level and tempo.
3. Control other software via MIDI clock and some volume control device.
This assumes, naturally, that such software is available :-)
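The timing arithmetic behind generating MIDI clock from conducting is
straightforward: MIDI clock runs at 24 pulses per quarter note, so the
beat period picked up from the down-to-up turns fixes the pulse
interval. A minimal sketch (the function names are mine, not from Boa
Conductor or MagicBaton):

```python
def bpm_from_beat_period(seconds_per_beat):
    """Tempo implied by the time between two successive downbeats."""
    return 60.0 / seconds_per_beat

def clock_interval(bpm):
    """Seconds between MIDI clock events: 24 pulses per quarter note."""
    return 60.0 / (bpm * 24.0)

# A downbeat every 0.5 s means 120 BPM, i.e. a clock pulse every ~20.8 ms.
print(bpm_from_beat_period(0.5), round(clock_interval(120.0) * 1000, 1))
```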
Greetings:
I've added another recording to my "music made with Ardour" page, a
guitar duet this time. It's a performance of an old Jimmy Dorsey tune
called Maria Elena; you can check it out here:
http://linux-sound.org/ardour-songs.html
Best,
dp
I posted this to alsa-devel, but since my previous post on this list
generated a lot of interest, I am reposting it here.
As promised, here's an updated patch to add real multichannel playback
support (and improved multichannel capture) to the emu10k1 driver.
http://www.alsa-project.org/~rlrevell/emu10k1-multichannel-v001.patch
Please test it and report any problems. I am especially interested in
any regressions that impact regular PCM playback (the hw:0,0 device).
QuickStart:
$ jackd -R -v -d alsa -P hw:0,3 -C hw:0,2 -S
I tested this and it works well with 16in/16out at 128, 256, 512 frames.
32 and 64 should work too, but I can't test them as I'm running a stock 2.6.10
kernel for now ;-). You can check that the routing is correct by
connecting a JACK client to the playback ports corresponding to the FX
buses described in Documentation/Audigy-mixer.txt and
Documentation/SB-Live-mixer.txt, and verifying that the output appears
on that channel (the FX buses are numbered from 0, but JACK numbers
the ports from 1). For example (from SB-Live-mixer.txt):
name='Music Playback Volume',index=0
This control is used to attenuate samples for left and right MIDI FX-bus
accumulators. ALSA uses accumulators 4 and 5 for left and right MIDI samples.
The result samples are forwarded to the front DAC PCM slots of the AC97 codec.
So "alsaplayer -o jack -d alsa_pcm:playback_5,alsa_pcm:playback_6"
should output to FX buses 4 and 5, which you can test by lowering the
'Music' control in alsamixer. With an SBLive, use ports 1 and 2 for the
front channels, 3 and 4 for the rear channels. The Audigy uses
different channels, see the above docs for more info.
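The off-by-one between FX-bus numbers and JACK port names trips people
up, so here it is spelled out (a trivial sketch; the port-name scheme
is the alsa_pcm one used above):

```python
def fx_bus_to_jack_port(bus):
    """emu10k1 FX buses are numbered from 0; the JACK ports from 1."""
    return "alsa_pcm:playback_%d" % (bus + 1)

# MIDI FX accumulators 4 and 5 come out on playback_5 and playback_6.
print(fx_bus_to_jack_port(4), fx_bus_to_jack_port(5))
```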
In addition to multichannel recording applications, this should also be
useful for OpenAL implementations, which are currently restricted to
using 21 sources due to the use of an extra voice per stereo PCM. This
should allow up to 63 sources.
This also adds some new register info, including a per-channel half-loop
interrupt that I discovered by reverse engineering the Windows
drivers.
Improvements over previous versions:
- Routes the 16 channels to the 16 FX buses by default.
- Enables the first 16 FX capture outputs by default, required for
full duplex operation at latencies lower than 512 frames.
- Rewrote the voice allocator to use a more efficient round
robin algorithm, eliminating the need to reserve the
first 16 voices for the multichannel device. The next free voice
is maintained in the card record and the search starts from there.
- Uses an extra voice for playback timing rather than the EFX capture
interrupt; I was only ever able to get the latter to work at 64 frames. Also,
there are definite advantages to being able to use the capture and
playback devices independently.
- Uses the newly discovered per-channel half-loop interrupt source for
the extra voice rather than the channel loop interrupts. For unknown
reasons this works better for multichannel playback, and it does not seem
to affect regular PCM playback at all.
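The round-robin allocation idea described above can be sketched in a
few lines (illustrative Python, not the driver's actual C; the
function name and the dict layout are mine):

```python
NUM_VOICES = 64  # the emu10k1 has 64 hardware voices

def alloc_voices(card, count):
    """Find `count` contiguous free voices, searching round-robin from
    card['next_free'], so no voices need to be reserved up front."""
    voices = card["voices"]  # booleans: True = voice in use
    n = len(voices)
    for offset in range(n):
        start = (card["next_free"] + offset) % n
        if start + count > n:
            continue  # a contiguous block must not wrap around
        if not any(voices[start:start + count]):
            for v in range(start, start + count):
                voices[v] = True
            card["next_free"] = (start + count) % n
            return start
    return None  # no contiguous free block of that size

card = {"voices": [False] * NUM_VOICES, "next_free": 0}
# A 16-channel device gets voices 0-15; the next request starts at 16.
print(alloc_voices(card, 16), alloc_voices(card, 16))
```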
TODO:
- Fix the send routing and volume controls for the multichannel device.
The current (copy-and-paste) solution assumes either one or two voices
per PCM, so the default settings work fine, but changing them with the
mixer is likely to have unpredictable effects.
- EFX capture should capture output channels 16-32 (mostly unused now)
by default, so that we only capture the sources the user has connected
to the multichannel recording inputs in the DSP manager. Typically, FX
buses 0-15 would be connected directly to FX outputs 16-32, so the
capture channels would correspond directly to the playback channels. In
order for this to work, the default DSP configuration has to be changed
slightly.
Lee
Hi all,
what is the best and easiest way to convert a W64 file (from timemachine) into
a WAV file? I am trying "sndfile <w64-file> <wav-filename>", but apart from
100% CPU usage for a _really_ long time nothing happens: no new file, no
changes, no stopping after a while.
I know that one of [audacity|sweep] opens W64 files, but it loads them into
memory completely, and the file to convert is 2.5 GB.
Can anybody give me some hints?
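For what it's worth, libsndfile ships a dedicated command-line tool,
sndfile-convert, which should handle W64-to-WAV; a bare `sndfile`
command may not be it. Whatever tool you use, the key for a 2.5 GB
file is block-wise streaming rather than loading the whole file. The
pattern, illustrated here as a plain byte-for-byte copy in Python (not
an actual format conversion; purely a sketch of the streaming idea):

```python
def copy_in_blocks(src_path, dst_path, block_size=1 << 20):
    """Stream src to dst one 1 MB block at a time, so memory use
    stays constant no matter how large the file is."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(block)
```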
Arnold
--
There is a theory which states that if ever anyone discovers exactly what the
Universe is for and why it is here, it will instantly disappear and be
replaced by something even more bizarre and inexplicable.
There is another theory which states that this has already happened.
-- Douglas Adams, The Restaurant at the End of the Universe
Hi all,
This question is not really a problem with hardware or software per
se; I would just like to know what people's favorite reverbs out
there are. I used to do all of my pro audio work on Mac and Windoze,
and I'm having great success getting things working with Linux, but
I've never found any really decent reverbs, and this is
unfortunately keeping me from doing more final mixes with Linux. What
I'm looking for is a LADSPA or other plugin, or a
program that will process sound non-realtime (or even destructively,
like Audacity) with some decent-sounding reverbs.
The TAP Reverberator plugin was okay, I guess, but really the only
other LADSPA reverb I've come across is Freeverb, which is not good,
at least for intimate vocal tracks and guitars. I mainly use
Ardour for tracking and mixing, Audacity for destructive editing and
cleaning up tracks, and Csound, which I've been known to use for a bit
of DSP in the past.
Maybe I just have not stumbled upon the right plugins or programs.
Any suggestions of favorites?
One more related question: does anyone know of any soundcards with
DSP in hardware that work with Linux? I'm thinking of something along the
lines of the Creamware cards (the ones with the integrated DSPs), but
I'm assuming that most of these also need a software component to make
them work, and that is most definitely closed source. I can dream,
though, right?
Jon M.
Hi,
I'm writing this email because I'm interested in what plans the
different Linux audio developers have for the year 2005. Any new
revolutionary applications planned? Major changes to some of the
existing apps? Let us know: what are your roadmaps for 2005? What are
you guys up to? Where is help needed?
Anyway, I'll start off with my own stuff:
amidimon - a terminal MIDI monitoring app which is very incomplete but
works well for me. I will finally add the autoconnection feature
sometime in the spring of this year. I have seen some alternative MIDI
monitoring apps on the mailing lists in the last year, so maybe I'll just
dump the project completely instead.
rtc_mtc_gen - a small MTC generator app for alsa_seq. I will teach it
drop modes and FPS rates other than 30.
Session - a small GTK2 app to organize collections of programs which
make up a "session". I plan to add LASH support sometime this year. I
think LASH needs a major revisit, though, as IMHO its adoption is
rather slow. Maybe it's already too intrusive for applications (I will
start another thread on LASH specifically, I think)? Session is still
alpha (planning stage), and if anyone wants to get involved I will be all
ears for suggestions.
The 2.6.x Linux audio wiki - I definitely need to update it with newer
information. Sadly, spammers have discovered my wiki and I had to disable
public editing access. This is the first thing I'll do after I have
finished some university stuff.
Find my stuff here:
http://www.affenbande.org/~tapas/wiki/index.php?Ware
and here:
http://www.affenbande.org/~tapas/linux-2.6.x-ll.html
Regards,
Florian Schmidt
--
Palimm Palimm!
http://affenbande.org/~tapas/
About a month ago I was introduced to AGNULA and Linux in general, and to make
things short, I'm attempting to switch to Linux for audio work. So I've sold
all my commercial VSTs and MAX/MSP, and will move to Csound, Pd and Audacity.
The only things left to do are to buy a combo DVD-RW/CD-RW drive, switch my
CPU, RAM and HDs over to a Shuttle lunchbox-sized PC (for a less noisy
system), and find a replacement for my MOTU 828mkII. I am gearing towards an
RME Hammerfall card and a Behringer ADA8000 ADAT interface. Or would I be
better off getting a PCMCIA adapter and going for the CardBus interface and
Multiface? This is in case I get a laptop in the future.
Too bad MOTU products aren't supported.
Thanks
Josh
Hi LAU,
This is my first posting here. Hope to do lots more...
I have an M-Audio Delta 1010LT multitrack sound card in my computer and am
having lots of trouble with it.
I'm running Linux 2.4 (Fedora Core 1) with the "Planet CCRMA" audio
environment on a Pentium 4 PC.
I use Audacity 1.2.3 for recording, etc.
Everything worked fine with the built-in (stereo) sound on my motherboard.
I installed the 1010LT and rebooted. The computer found the card and seemed
to be happy with it.
But many things seem to be wrong.
I'm using envy24control to get the mixer, patchbay, sliders, etc.
My main (first) question is: how do I get more than 2 channels recording at
the same time?
When I pick 4 channels in Audacity, it does indeed record 4 tracks, but the
second two are copies of the first two.
The little drop-down menu in Audacity for selecting the source (line-in,
mic, etc.) is empty, and the input and output sliders seem to have no effect.
Any helpful tips or pointers would sure be appreciated. I'm very new to
Linux so you might need to t-a-l-k r--e--a--l s--l--o--w--l--y
8^)
Thanks in advance,
Mike Jewell
From: Phillip Blevins
<phillip.blevins@email-addr-hidden>
Date: Sat Jan 29 2005 - 22:18:25 EET
<snip>
>> Also, I've got about 4,000 cassette tapes I would
>> like to digitize. I can do about 1.5 a day with it
>> playing at regular speeds.
Prioritize before you digitize! Do you really care if
ALL 4,000 make it into CD/ogg/whatever? Tag your
favorites, or the rarest, or the ones that are most
urgent condition-wise, and go from there.
Just my $0.02 :-)
-Mark