Hi,
it has been reported that the 2.6.8 kernel has fixed the handling of
USB resets, so now it's possible to use a Linux firmware loader to get
these devices to work.
You can get the loader at <http://usb-midi-fw.sf.net/>.
Best regards,
Clemens
Hi Erik,
As you can see, we made it back after chasing Ivan around. Luckily we
saw nothing but a few drops of rain. I'm sure others here or close to
those here must have had bad luck instead of good, and my sympathies
go out to them.
Thanks for your response. I applaud your enthusiasm, but it appears
to me that you are laboring under some misconceptions and
misunderstanding of what I have posted. Among the misconceptions is a
serious one that should cause concern amongst those who use
libsamplerate.
-------------------------
Minor problems:
A good portion of your recent post is aimed at something I never
claimed. It appears to me that you are failing to distinguish
between two cases: 1) Series in general; 2) Series which have been
previously prepared, specifically frequency spectra which are band-
limited and lowpass filtered.
From my earlier post on the current subject:
"The reason is that I assume that the input is band-limited, and this
is usually true for my own work. Not only is it band-limited, but
usually also tapered in the frequency domain, i.e. already effectively
lowpass filtered."
From your recent post:
"Now I will admit that if the signal is already bandlimited filter
many not be necessary, but that is a different matter all together."
No, it is certainly not a "different matter altogether." That quote
covers nearly everything I was saying, as you can easily see, except
that in addition the signals I use (as well as most that others use)
are also normally effectively lowpass filtered. This is more
restrictive than your assumption, so filtering is even less necessary
than you admit it to be.
Your elementary examples from undergrad engineering courses, and your
attempt to rephrase what you claim I said, don't apply to the situation
I've described. They have nothing to do with what I've said; they are
true but irrelevant statements, which makes me wonder what it is that
you are trying to prove. You may want to give that some thought: what
is it that you are actually trying to prove or demonstrate to everyone?
As a very minor note, you are also attempting to answer an ontological
question (whether something which has no observable effect on the
universe can nonetheless be said to exist) which has plagued
philosophers for millennia, by merely declaring your opinion as fact.
Here is what I actually wrote:
"If nothing was removed or even altered, then no filtering has
actually occurred."
Note that I am not so reckless as to claim anything regarding
existence or nonexistence here.
-------------------------
** Serious misconception **:
Two paragraphs from your recent post:
"If you work out the mathematical expression for your frequency domain
converter, you will find that there is a time domain expression that
is mathematically identical and that the time domain expression is in
part a FIR lowpass filter very much like my converter." (You mean
your implementation of Smith's converter, don't you?)
"The difference between the two would be that my version uses linear
interpolation into a very large table to obtain the filter
coefficients while yours are more exact. However mine provides a
*measured* SNR of at least 97dB."
This second paragraph is considerably flawed and reveals a serious
misconception on your part. Although the *starting point for the
derivation* of both methods is the same, the method you are using as
it is actually implemented is a very localized method in which only a
few samples are used to construct the new values. This is due to the
*severe* truncation of the series. The horrible effects of this
severe truncation are mitigated through the application of a Kaiser
window which effectively localizes the estimation even further as it
improves the result substantially. The method I'm using remains a
very much more global method in which each new value is constructed
from hundreds of thousands to millions of samples. Given that the
audio I process tapers off to zero at the beginning and end --- in addition
to being band-limited and effectively lowpass filtered --- I could
have written a global method wherein each sample is the result of
calculations involving ALL other samples rather than one which
contains windows with hundreds of thousands to millions of samples.
This is a far cry from what you are doing. And all of this is BEFORE
the linear interpolation step which you claim is the only difference
between our methods. That step is, by the way, totally unnecessary for
most fixed-sample-rate conversions (for example: 96,000 to 44,100,
48,000 to 44,100, and 44,100 to either of the others), and it further
degrades the sample-rate conversion unnecessarily in the most common
use. (This was done by Smith to
permit variable-sample-rate conversions, a useful goal to say the
least.)
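To make the fixed-ratio point concrete, here is a rough Python sketch
(my own throwaway illustration, not anything taken from Erik's code or
from Smith's): for a conversion with a rational ratio out/in = L/M in
lowest terms, output sample n lands at input position n*M/L, so its
fractional offset is (n*M mod L)/L and only L distinct offsets ever
occur. The exact sinc coefficients for those L phases can be
precomputed once, and no interpolation into a coefficient table is
needed.

from math import gcd

def distinct_phases(in_rate, out_rate):
    # L in the reduced ratio out/in = L/M; each output sample reuses
    # one of exactly L fractional offsets against the input grid.
    return out_rate // gcd(in_rate, out_rate)

for in_rate, out_rate in [(96000, 44100), (48000, 44100),
                          (44100, 48000), (44100, 96000)]:
    print(in_rate, "->", out_rate, ":",
          distinct_phases(in_rate, out_rate), "filter phases")

(For 96,000 or 48,000 to 44,100 that is just 147 phases, which is why I
say the extra linear interpolation buys nothing for the common
fixed-rate cases.)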
Now as I've already said, the sinc-based sample-rate converter you've
constructed from Smith's work seems to work well in the sample I've
listened to. But what I know about sinc-based methods from my own
implementations and from Smith's published work, the lack of necessity
for such a method for fixed-rate conversions, and the fact that linear
interpolation is being used where it really isn't necessary, all lead
me to reject libsamplerate and anything related to it unless it's
forced on me by someone else via a dependency. I rarely
do variable-rate conversions, so I have very little need for sinc-
based sample-rate converters.
-------------------------
For "ordinary" users:
For those of you who have read this far --- hopefully not many: PLEASE
don't be alarmed. As I have repeatedly said, Erik's implementation of
Smith's work seems to work well. The inaccuracies in that method for
fixed rate conversions, including the additional inaccuracy of the
unnecessary linear interpolation, seem to be virtually inaudible.
The main advantage is that a single method can be used for both fixed-
and variable-rate conversions, and this is indeed a *considerable*
advantage for a lib-based implementation. We are all indebted to Erik
for creating libsamplerate, and this includes me.
-------------------------
For developers:
Developers, however, should at least be aware of what is going on
there. I would advise anyone who has a *critical* dependence on
sample-rate conversions to carefully read Smith's notes and to make
sure that they understand what is going on. They should also
experiment with Smith's technique, including implementation of different
windows to see how bad it can be if things go wrong. Erik's
"measurements" that I've seen so far aren't very useful for these
worst-case situations. Smith may have had something more useful such
as a bound of some sort, but frankly I don't recall at this moment and
I'm not motivated to check it out at this time (sorry!). I have too
much to do following this trip, and this post has taken all my spare
time.
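If anyone wants a quick starting point for that kind of experiment,
here is a rough sketch (it assumes only NumPy, and the tap count,
cutoff, and beta are purely illustrative, not Erik's or Smith's actual
settings). It builds a short truncated-sinc lowpass twice, once with
plain rectangular truncation and once with a Kaiser window, and reports
the worst stopband leakage of each, i.e. roughly "how bad it can be if
things go wrong":

import numpy as np

taps = 127                    # deliberately short so the truncation hurts
cutoff = 0.40                 # normalized cutoff (Nyquist = 0.5)
n = np.arange(taps) - (taps - 1) / 2
ideal = 2 * cutoff * np.sinc(2 * cutoff * n)    # truncated ideal lowpass

for name, window in [("rectangular", np.ones(taps)),
                     ("Kaiser beta=10", np.kaiser(taps, 10.0))]:
    h = ideal * window
    h /= h.sum()                                # unity gain at DC
    H = np.abs(np.fft.rfft(h, 16384))
    freqs = np.fft.rfftfreq(16384)
    worst = H[freqs > 0.45].max()               # well into the stopband
    print(name, ": worst stopband leakage",
          round(20 * np.log10(worst), 1), "dB")

The rectangular truncation typically leaves leakage in the -20 to
-30 dB range here, while the Kaiser version should be down around
-100 dB with the same number of taps, which is essentially the
Kaiser-window mitigation I described earlier.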
Sorry for the length, but I wanted to clear up some misconceptions about
what I've posted as well as raise a flag (without causing some sort of
fiasco) for those who may not have had time to study sinc-based methods.
Regards to all,
Dave.
I want to run a sound recorder like Audacity to record
what is coming through my sound card when playing
streaming audio through XMMS.
However, Audacity can't open the audio device /dev/dsp
while streaming audio is playing. This is on Knoppix
3.4 using the OSS driver.
Would using ALSA make this possible?
hi everyone!
in case you were wondering how to get ices-jack to stream your jack
graphs out on the net, here's a quick howto:
browse svn.xiph.org, get the following modules from /trunk:
ao, vorbis, ogg, ogg2, theora, speex, vorbis-tools, ogg-tools
(do this even if you have ogg packages from your distro installed,
it won't do any harm and makes sure you've got the latest'n'greatest)
there's nothing interesting to configure afaik, so you can compile
them (in that order) without interaction:
for i in ao vorbis ogg ogg2 theora speex vorbis-tools ogg-tools; do
  svn co http://svn.xiph.org/trunk/$i; cd $i; ./configure && make install
  cd ..
done
from icecast/branches/kh, check out
libshout, icecast, ices
again, not really anything to configure, so the for-loop can do the
grunt work...
now fire up icecast, fire up ices, connect it to your jack graph,
and the fun starts.
the default config files are extensively commented, but here's my
config, in case you need some more inspiration:
http://spunk.dnsalias.org/download/ices.xml
http://spunk.dnsalias.org/download/icecast.xml
(the source and server run on different hosts, and icecast runs
chrooted and as user icecast)
btw, a graph with an ogg edge between ices-jack and xmms-jack
vertices makes a nice delay effect :) if you use feedback, there's
interesting sound deterioration due to repeated
ogg-encoding/decoding and noise buildup. here's me toying around
with my bass and such a setup:
http://spunk.dnsalias.org/download/netjam.ogg
have fun
jörn
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
- Brian W. Kernighan
Jörn Nettingsmeier
Lortzingstr. 11, 45128 Essen, Germany
http://spunk.dnsalias.org (my server)
http://www.linuxaudiodev.org (Linux Audio Developers)
Chris Cannam posted:
> Because sfArk and sfpack compress soundfonts much better than
> zip/gzip/bzip2 can.
This is absolutely correct. The reason is that text files repeat character
sequences exactly whenever words and other character combinations are
used over and over again. The "normal" compression methods build tables
and exploit this precisely repetitive nature effectively. Audio files
repeat byte sequences only approximately, so nearly identical passages
are not recognized as such even though they sound the same, and are
instead treated as new sequences. True binary files (e.g. stripped
executables) also look pretty much random to one of these compression
algorithms. The random appearance of audio files is one reason why MP3,
sfArk, etc. were developed.
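If you want to see this effect for yourself, here is a small throwaway
Python sketch (my own illustration; it has nothing to do with sfArk's
actual algorithm). It compresses, with plain zlib, a block of exactly
repetitive text, an exactly repeating 441 Hz tone, and the same tone
with a tiny amount of added noise, the last being "almost the same and
sounding the same" yet looking nearly random to a dictionary coder:

import zlib
import numpy as np

text = b"the quick brown fox jumps over the lazy dog. " * 2000

period = np.sin(2 * np.pi * np.arange(100) / 100)   # one cycle of 441 Hz at 44.1 kHz
tone = np.tile(period, 882) * 20000                  # two seconds, exactly periodic
clean = tone.astype(np.int16).tobytes()
noisy = (tone + np.random.normal(0, 30, tone.size)).astype(np.int16).tobytes()

for name, data in [("repetitive text", text),
                   ("exactly repeating tone", clean),
                   ("same tone, slight noise", noisy)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(name, ": compressed to about", round(100 * ratio), "% of original size")

The first two shrink dramatically; the slight added noise, although
essentially inaudible, makes the third compress far less well.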
To distinguish between these latter techniques: As most people know by now,
MP3 is a lossy technique which means that information is lost never to be
seen again. sfArk is not. These soundfont compression techniques are a
compromise between loss of information and effective compression. Some
out there may be using MP3s in place of compression schemes such as
sfArk's, believing that MP3s are as good or better because of the
compression ratios obtainable, but this comes at a fairly heavy cost. I would
advise against it generally, which is the reason I'm spending time posting
this. MP3 is fine for distribution over limited-resource channels, but
not so fine for soundfonts/samples.
Hope this helps someone out there. Now I really am going to try to get
caught up....
Regards to everyone,
Dave.
Hi:
I've tried to use the ALSA driver for my nForce2... when I try to load the
module snd-intel8x0, the system tells me:
/lib/modules/2.4.26-1-386/alsa/snd-intel8x0.o: init_module: No such
device
Hint: insmod errors can be caused by incorrect module parameters,
including invalid IO or IRQ parameters.
You may find more information in syslog or the output from dmesg
/lib/modules/2.4.26-1-386/alsa/snd-intel8x0.o: insmod
/lib/modules/2.4.26-1-386/alsa/snd-intel8x0.o failed
/lib/modules/2.4.26-1-386/alsa/snd-intel8x0.o: insmod snd-intel8x0
failed
My /proc/pci shows:
Bus 0, device 8, function 0:
  Multimedia audio controller: nVidia Corporation nForce2 AC97 Audio
  Controller (MCP) (rev 161).
    IRQ 5.
    Master Capable. No bursts. Min Gnt=2. Max Lat=5.
    I/O at 0xd800 [0xd8ff].
    I/O at 0xdc00 [0xdc7f].
    Non-prefetchable 32 bit memory at 0xec002000 [0xec002fff].
I use Debian sarge, and installed these packages:
alsa-base, alsa-modules, alsa-utils, alsa-lib.
Any idea how to fix this problem?
--
Leito Monk
--------------------
Miembro de CaFeLUG
www.cafelug.org.ar
-------------------
On Wednesday 22 September 2004, Dave Phillips wrote:
> I'm trying to send a MMC message to Ardour to set it up to receive MIDI
> time code. Apparently Ardour will follow MTC but it has to receive an
> MMC Start message first. So far I'm still unsuccessful, even with your
> suggestions, but I'll keep at it.
You can use "amidi", from "alsa-utils":
$ amidi -S 'F0 43 10 4C 00 00 7E 00 F7'
sends an XG Reset to the default port; `man amidi` for details.
You can use "echo", too:
$ echo -ne '\xf0\x7f\x01\x06\x02\xf7' > /dev/midi01
BTW, you can try a MIDI realtime message "start" like this:
$ echo -ne '\xfa' > /dev/midi
The message "f0 7f f7 06 02 f7" is not a valid MMC command, because the
device-id 0xf7 is also the EOX status byte. It should be a number between 0x00
and 0x7F.
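The same idea as a tiny Python sketch, in case it helps (the device
path is just an example; use whichever raw MIDI device you have):

def mmc(device_id, command):
    # F0 7F <device-id> 06 <command> F7; the device-id must stay in the
    # data-byte range 0x00-0x7F (0x7F means "all devices"); 0x02 is Play.
    assert 0x00 <= device_id <= 0x7F
    return bytes([0xF0, 0x7F, device_id, 0x06, command, 0xF7])

with open("/dev/midi", "wb") as port:
    port.write(mmc(0x7F, 0x02))       # MMC Play to all devices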
Regards,
Pedro
How do I get my TB SantaCruz (cs46xx) to make sound when it gets midi
events? I assume this would involve a wavetable, but I'd be happy with
FM synth or any terrible sounding piano sound as well.
My cpu is not really up to the task of sequencing and softsynthing at
the same time.
--
De gustibus non disputandum est.
On Tue, 21 Sep 2004 08:52:17 -0700, Russell Hanaghan
<hanaghan(a)starband.net> wrote:
> On Tuesday 21 September 2004 08:34 am, Hans Fugal wrote:
> > The card works great for PCM.
> >
> > Which software I use to patch the midi depends on my mood, but usually
> > aconnectgui or aconnect.
> >
> > Haven't been fiddling with jack or Qjackctl yet.
> >
> > I'm trying to get my midi controller keyboard to make noise out the
> > soundcard, is all.
>
> Hmmm. Ok...so maybe the wrong midi device? How many midi devices show
> in /dev/snd ? If more than one, try patching around.
$ ls /dev/snd
controlC0 midiC0D0 pcmC0D0p pcmC0D2p pcmC1D0c pcmC1D1c seq
controlC1 pcmC0D0c pcmC0D1p pcmC0D3p pcmC1D0p pcmC1D1p timer
The TB is card 0.
> TB also has several outs for pcm (Surround stuff) but sound like your already
> using the card for normal stereo audio.
I will fiddle with the mixer more this evening. Perhaps the fm synth
is on master1 or some other pcm...
fugalh@falcon:~$ aconnect -oil
client 0: 'System' [type=kernel]
0 'Timer '
1 'Announce '
Connecting To: 63:0
client 64: 'CS46XX - Rawmidi 0' [type=kernel]
0 'CS46XX '
(the keyboard is turned off and I'm not at home to turn it on)
--
De gustibus non disputandum est.