Clemens Ladisch wrote:
> What I tried to explain was that the priority is not determined by the
> interrupt number but by the interrupt vector number.
Ah, right - vector number, not interrupt number.
<SNIP>
> When the I/O-APIC is enabled, interrupt priority is determined by the
> interrupt vector number (lower vector numbers have higher priority).
So, I was backwards, as usual! ;-) Thanks!
> (My patch to change vector numbers doesn't work with current kernels.)
Bummer, but I must admit I wasn't aware that there was ever a patch to
effect this. Maybe one day it will work again?
On the issue where I thought you and I never really finished before: in
the older style of interrupt processing there seems to be a measurable
difference in the number of xruns when my cards are placed on IRQ9 (best
user-level interrupt) vs. IRQ7 (worst user-level interrupt). This is not
Linux specific at all. I was sort of surprised when I came to Linux that
more people weren't aware of this sort of optimization. It was very
common for those of us optimizing Windows machines for audio to mess
with this sort of thing.
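
To make that ordering concrete, here's a quick C sketch of the textbook
dual-8259 priority order as I understand it (slave cascaded on IRQ2, the
standard PC wiring); the ranks are just the classic PC/AT design,
nothing Linux specific:

/* Sketch of the classic dual-8259 priority ordering, slave on IRQ2. */
#include <stdio.h>

static const int pic_priority_order[] = {
    0, 1,                          /* master: timer, keyboard */
    8, 9, 10, 11, 12, 13, 14, 15,  /* slave lines, via the IRQ2 cascade */
    3, 4, 5, 6, 7                  /* remaining master lines, lowest last */
};

/* Priority rank of an IRQ line: 0 is highest, 14 is lowest. */
static int pic_rank(int irq)
{
    for (int i = 0; i < 15; i++)
        if (pic_priority_order[i] == irq)
            return i;
    return -1;  /* IRQ2 itself is consumed by the cascade */
}

int main(void)
{
    printf("IRQ9 rank: %d\n", pic_rank(9));  /* 3: best user-level IRQ */
    printf("IRQ7 rank: %d\n", pic_rank(7));  /* 14: the lowest priority */
    return 0;
}

Since IRQ8 is taken by the RTC, IRQ9 is the highest-priority line you
can usually put a card on, which is why it beats IRQ7 in my tests.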
However, on IO-APIC machines my impression was that you were saying it
didn't matter which interrupt vector the hardware was placed on. Was I
incorrect about that? Or were you just saying that it didn't matter
which interrupt number it was placed on, since you had a patch to make
any interrupt number use the highest priority vector?
In my mind there is no change in the IRQ service routines no matter
which sort of machine I work on. If I place a sound card on the worst
interrupt vector (highest number) and, let's say, I put a bothersome
Ethernet NIC on the highest priority vector (lowest number), then
logically I would guess that the Ethernet interrupts will get in the way
of the sound card doing its work and we'll get more xruns.
Do I have this right in your mind, or am I still off base on IO-APIC
machine operation?
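
To illustrate the kind of interference I'm picturing, here's a toy C
model with completely made-up costs and rates (it doesn't model any real
driver or card): the NIC always outranks the sound card, so the card's
handler only runs after all Ethernet work in its period, and whenever
the combined work doesn't fit in an audio period I count an xrun:

/* Toy model: high-priority NIC interrupts starving a low-priority
 * audio interrupt. All numbers are invented for illustration. */
#include <stdio.h>

#define PERIOD_US     1333  /* audio period, e.g. 64 frames at 48 kHz */
#define AUDIO_COST_US  100  /* time to service one audio interrupt */
#define NIC_EVERY_US   900  /* NIC interrupt interval under load */
#define NIC_COST_US    700  /* time to service one NIC interrupt */
#define RUN_US      100000  /* simulate 100 ms */

int main(void)
{
    int periods = 0, xruns = 0;

    for (long t = 0; t < RUN_US; t += PERIOD_US, periods++) {
        /* Count NIC interrupts landing in this audio period. */
        long nics = 0;
        long first = ((t + NIC_EVERY_US - 1) / NIC_EVERY_US) * NIC_EVERY_US;
        for (long n = first; n < t + PERIOD_US; n += NIC_EVERY_US)
            nics++;

        /* Higher-priority NIC work runs first; if it plus our own
         * handler doesn't fit in the period, the buffer wraps. */
        if (nics * NIC_COST_US + AUDIO_COST_US > PERIOD_US)
            xruns++;
    }

    printf("%d xruns in %d periods over %d ms\n",
           xruns, periods, RUN_US / 1000);
    return 0;
}

With those numbers, any period that catches two NIC interrupts blows its
deadline, which is the shape of the behavior I see in practice.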
Thanks,
Mark