Hello,
I would like to create a virtual device (pcm) that outputs sound to two pcms
(hw:0,0 and hw:0,4)
How do I write the .asoundrc file?
This should be a common example, but I could not find one...
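The closest I have pieced together so far from the ALSA plugin
documentation is something like this (untested; the channel counts are
a guess, and the hw names are just the two devices from my question):

pcm.multi {
    type multi
    slaves.a.pcm "hw:0,0"
    slaves.a.channels 2
    slaves.b.pcm "hw:0,4"
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}

pcm.both {
    type route
    slave.pcm "multi"
    slave.channels 4
    ttable.0.0 1
    ttable.1.1 1
    ttable.0.2 1
    ttable.1.3 1
}

If that is roughly right, "aplay -D both test.wav" should play on both
devices at once (I assume both devices have to run at the same sample
rate for this to work). Corrections welcome.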
--
Yours, Mikhail Ramendik
Hi, maybe someone can help me,
I can find no way to route (loop back) the Line-in (where I have my
TV tuner plugged in) to the headphone jack. Maybe someone knows how
to do it, or knows of any links to documentation on how to do it.
The sound card is onboard on an Intel D945GTP; the audio codec is a
SigmaTel STAC9220 (HDA-Intel compatible).
I am running ALSA 1.0.11 as a module, not built into the kernel, on
kernel 2.6.16.5.
The only way I have found to get sound from Line-in to the headphones
right now is "cat /dev/dsp > /dev/audio", but the quality is really
bad (with this command I at least know that the configuration of the
input and output is OK).
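The least bad software workaround I have come up with is to do the
loopback through ALSA instead of the OSS emulation, roughly like this
(the plughw:0,0 device name and the CD format are guesses for my card,
and there is still noticeable latency):

arecord -D plughw:0,0 -f cd | aplay -D plughw:0,0 -f cd

But I would much rather find a proper mixer switch for a hardware
loopback, if the STAC9220 has one.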
Thank You for your time.
Yvan
Hi guys,
I'm not sure whether this is off topic or not so if it is let me know.
I'm working on remixing a live, spontaneous intercession track at the
moment, whose BPM increases from 70 to 103+ during the piece and wavers
around there. I'm putting it to drum and bass, using various breaks, and
the tempo needs to be kept consistent. I've chosen to make it 170 at the
moment, but the issue is that I simply can't make the track samples fit
and sound good -- they either need slowing down until the vocalist
sounds like she's hammered, or speeding up until she sounds like she's
on something.
I understand (from Google) there is a method by which one samples the
track minus the vocals and then adds the inversion of that to the track
to kill the instruments. I have not got that to work well yet (maybe I
need to try again, and it's nontrivial due to the major tempo changes),
but also I'm not sure how to get round the tempo issue. I am in the
process of trying to take the vocals apart and then sequence them (Leon
Switch and Kryptic Minds style) but with the instruments present the
results are unsatisfactory. Does anyone have any advice?
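For reference, the cancellation trick I mean is roughly this (SoX
syntax; the filenames are just placeholders, and it assumes an
instrumental-only version that is sample-aligned with the full mix,
which the tempo drift makes hard):

# invert the polarity of the instrumental-only version
sox -v -1 instrumental.wav inverted.wav
# mix it with the full track so the common material cancels,
# leaving (in theory) only the vocal
soxmix full_mix.wav inverted.wav vocals_only.wav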
I'm using rosegarden, zynaddsubfx for bass (oh how beautifully perfect
is zyn!), hydrogen for sequencing extra drum stuff and I will be using
fluidsynth for any piano.
--
Jonty Needham <jmn20(a)bath.ac.uk>
My problems with CPU hogging are unfortunately not solved. They are most
probably denormal problems. I have a Pentium 4 Mobile CPU.
I am using SWH 4.14, TAP 0.7 and also CAPS 0.7.
Following Dave's advice, I compiled SWH and TAP myself, used the
--enable-sse switch on SWH, and added -mfpmath=sse -msse2 to the TAP
makefile. This had no effect.
Is anyone aware of a user-space solution to this problem?
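The only user-space approach I have read about is setting the SSE
"flush to zero" and "denormals are zero" bits in every thread that does
DSP, which would have to go into the plugins or the host rather than be
switched on from outside, so I am not sure it helps my stock packages.
A rough sketch of what I mean (not tested here):

/* Disable denormals for the calling thread via the SSE control
 * register: bit 15 = FTZ (flush to zero), bit 6 = DAZ (denormals are
 * zero, needs SSE2, which a Pentium 4 has). Compile with -msse. */
#include <xmmintrin.h>   /* _mm_getcsr / _mm_setcsr */

static void disable_denormals(void)
{
    _mm_setcsr(_mm_getcsr() | 0x8040);
}

If someone has tried something along those lines against the stock
Dapper packages, I would love to hear how it went.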
Carlo
Hi,
I am using Ardour as my main production tool. Quite a lot is at stake,
since I intend to be earning 20'000 to 30'000 dollars per month in the
business my production work is part of, and funnel quite a lot of that
into all those wonderful open source projects I'm using right now.
The problem, as far as I can track it down, is that CPU usage goes
through the roof as soon as the stop button is pressed in Ardour after
loading a certain number of LADSPA plugins (though it doesn't appear to
matter which ones). It's not just a few dropouts either: either there
are no XRUNs, or you can't even move the mouse.
Sorry I can't be more specific, I'm puzzled myself. Maybe this can be
worked out right here.
I use Xubuntu Dapper, out of the box.
Carlo
Hello,
I have a problem with the stability of the OpenSoundControl (OSC)
support used in Pd.
Currently I am running a Debian 2.6 kernel and Pd with the
pd-osc_0.1.1_i386.deb package. Pd receives real-time OSC data from an
image-analysis program (EyesWeb), and the OSC input is frequently
crashing (segmentation fault).
I tried out several methods of buffering the data and so on, but over
time it kept crashing again.
It seems that I have to replace the pd-osc package with a newer one,
but I couldn't find a good source for Debian.
I just found a newer version for Ubuntu, pd-osc_00.20031105-5_i386.deb:
http://packages.ubuntulinux.org/warty/sound/pd-osc
Any ideas or links? Please let me know.
thank you.
best
m
Well I don't know if this term actually exists or if I've just invented
it!
This is an idea I've thought about for quite some time, years in fact,
but don't have the programming ability to try to put it into practice.
As I think it should really be part of, or a plugin to, a sequencer,
I've posted to the LAU & Rosegarden lists. I hope nobody minds. I'd be very
interested in other people's thoughts on it.
Preamble over :)
All the quantisation systems I've seen so far only work if the music
has reasonably constant timing, and even then produce much too rigid a
structure for my tastes.
When playing without a metronome (which always inhibits me) I find that
in a very long piece, I sometimes gradually speed up or slow down. This
is often only noticeable if you go back to the start of a piece and
replay it immediately after it has finished. If 'standard' quantisation is
applied to this then the results can be quite grotesque as notes fall
outside the quantisation capture range and get placed into the wrong
positions.
What I would like to see is a quantisation algorithm that detects trends
rather than absolute values, then progressively applies small
corrections to keep the overall timing correct. (It would of course have
to operate over all tracks simultaneously.)
For example: the musician could put markers on notes in, say, an
accompaniment section, that ought to fall on the first beat of a bar.
The quantisation would then stretch or shrink the time positions so that
most of these fit, and the intervening notes of ALL tracks are adjusted
by a proportionate amount. Later bars can then be interpolated, and
occasional bars that don't actually have a note on the first beat will
still be adjusted based on averaging. Deliberate note delays,
syncopation etc. would then be perfectly preserved and the music would
retain its liveliness.
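I don't have the programming ability to implement this, but to make
the idea concrete, the core of it might look something like this rough
C sketch (all the names are invented, it isn't based on any real
sequencer, and I may well have details wrong):

/* Each marker pairs the time where a marked note was actually played
 * with the time where it ought to fall (e.g. the bar line). Every
 * other event, on every track, is remapped by interpolating between
 * the surrounding markers, so relative offsets such as swing and
 * deliberate delays are preserved. */
typedef struct {
    double played;   /* where the marked note really fell (seconds) */
    double target;   /* where it ought to fall (seconds) */
} Marker;

/* m[] is sorted by .played and holds at least two markers */
double warp_time(double t, const Marker *m, int n)
{
    int i = 0;
    while (i + 2 < n && t > m[i + 1].played)
        i++;          /* pick the surrounding pair, clamped at the ends */
    double scale = (m[i + 1].target - m[i].target)
                 / (m[i + 1].played - m[i].played);
    return m[i].target + (t - m[i].played) * scale;
}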
Having the musician place these markers, rather than some automatic
system, means that not only are the correct notes used as a reference,
but the music can be brought into line even if it initially has
absolutely no relation to the bar lines in the sequencer (this happens
to me a lot when I try to record live). Overall timing can then of
course be set by altering the beat rate.
This whole idea could then be turned on its head. I find it VERY hard
to get several tracks to slow down at the end of a piece and stay
'together'. This quantisation system could do just that by having
'target' time/beat rates at the start and end of the section that is to
be slowed down (or speeded up).
--
Will J G
Has anyone tried the 2.6.17 series kernel with Ingo's patches on 64 bit?
I've tried 2.6.16 and couldn't get it to boot; it locked up after the
"loading schedulers" message. I'm hoping 2.6.17 might actually work for
me.
Loki
CLAM 0.91.0 Release Announcement
'Spectral transformations, annotator, packaging, and
desktop integration release'
We are glad to announce the 0.91.0 CLAM release, which
comes hand in hand with Music Annotator 0.3.2, Network
Editor 0.3.1 and SMSTools 0.4.1. They are available
for download as source tarballs and also as binary
packages for Windows, Ubuntu Dapper, Debian sid and
Fedora Core 5. Mac OS X binaries are not available for
this release, but we promise they will be back soon.
This release is the first official one to incorporate
the new CLAM Music Annotator, featuring
chord extraction.
Almost 30 new spectral transformations have been
incorporated into the processing repository. Some of
them are already available from the NetworkEditor.
Application usability has received some extra attention in this
release. Applications are better integrated into the Windows
and Linux desktops. Step-by-step application tutorials
are available on the CLAM wiki for the Music Annotator [1],
SMSTools [2], and the Network Editor and Prototyper [3], and
all of them provide examples to start with.
Please read about these and other improvements in the
changelog [4], or visit our website [5] for further
details. We would like as much feedback as possible from
all our users. Besides the mailing list, you can usually
find us in the #clam channel on FreeNode (the IRC network).
The CLAM team
[1] http://iua-share.upf.es/wikis/clam/index.php/Music_Annotator_tutorial
[2] http://iua-share.upf.es/wikis/clam/index.php/SMSTools_tutorial
[3] http://iua-share.upf.es/wikis/clam/index.php/Network_Editor_tutorial
[4] http://clam.iua.upf.edu/ChangeLog.html
[5] http://clam.iua.upf.edu