On Mon, 2004-07-19 at 06:48, Ingo Molnar wrote:
> * Lee Revell <rlrevell@joe-job.com> wrote:
>
> > Just as a reference point, what do you think is the longest delay I
> > *should* be seeing?  I recall hearing that BeOS guaranteed that
> > interrupts are never disabled for more than 50 usecs.  This seems
> > achievable, as the average delay I am seeing is 15 usecs.
>
> ATA hardirq latency can be as high as 700 usecs under load even on
> modern hw, when big DMA requests are created with long scatter-gather
> lists.  We also moved some of the page IO completion code into irq
> context, which further increased hardirq latencies.  Since these all
> touch cold cachelines, it all adds up quite quickly.
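
(Aside, for anyone wanting to reproduce numbers like the 15/700 usec
figures above: a rough userspace sketch of the kind of gap-detection
loop that can catch them.  This is only an illustration, not the tool
that produced the measurements in this thread; run it as a SCHED_FIFO
task on an otherwise idle box, and link with -lrt.)

/* Illustrative gap detector: spin reading the clock and report any
 * iteration-to-iteration gap above the running maximum.  For an idle,
 * highest-priority task, big gaps are a rough proxy for time spent in
 * hardirqs or with interrupts disabled. */
#include <stdio.h>
#include <time.h>

static long long now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}

int main(void)
{
	long long prev = now_us(), max_gap = 0;

	for (;;) {
		long long t = now_us();

		if (t - prev > max_gap) {
			max_gap = t - prev;
			fprintf(stderr, "new max gap: %lld usecs\n", max_gap);
		}
		prev = t;
	}
	return 0;
}
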
Does this scale linearly with CPU speed?  You mentioned you were testing
on a 2GHz machine; does 700 usecs there translate to 2800 usecs on a
500MHz box?

On a 2GHz machine, 700 usecs is roughly 1.4 million CPU cycles.  In
other words, the highest priority process can become runnable, then
have to wait *over a million cycles* to get the CPU.
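
(The back-of-the-envelope arithmetic as a trivial snippet, just to make
the numbers explicit; the 500MHz figure is only the naive linear
extrapolation I am asking about, not something measured:)

/* Back-of-the-envelope only: cycles spent during a hardirq of a given
 * length, plus the extrapolation to a slower clock, assuming the cycle
 * count (not the wall-clock time) is what stays constant. */
#include <stdio.h>

int main(void)
{
	double latency_us = 700.0;		/* worst case quoted above */
	double fast_mhz = 2000.0, slow_mhz = 500.0;
	double cycles = latency_us * fast_mhz;	/* usecs * MHz = cycles */

	printf("%.0f usecs at %.0f MHz = %.1f million cycles\n",
	       latency_us, fast_mhz, cycles / 1e6);
	printf("same cycle count at %.0f MHz = %.0f usecs\n",
	       slow_mhz, cycles / slow_mhz);
	return 0;
}
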
How much I/O do you allow to be in flight at once?  It seems that
decreasing the maximum amount of I/O handled in a single interrupt
could improve this quite a bit.  Disk throughput is already good
enough; anyone in the real world who would feel a 10% hit would just
throw hardware at the problem.
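
(To make the "cap the I/O handled per interrupt" idea concrete, here is
a sketch of the knobs the 2.6 block layer already gives a driver at
queue-setup time.  The limits are made-up examples, and whether the
resulting throughput hit really stays near 10% is exactly the question:)

/* Illustration only: limiting request size and scatter-gather length so
 * that one completion interrupt has less work to do.  Values are
 * arbitrary examples, not recommendations. */
#include <linux/blkdev.h>

static void example_limit_queue(request_queue_t *q)
{
	blk_queue_max_sectors(q, 128);		/* <= 64KB per request */
	blk_queue_max_phys_segments(q, 32);	/* <= 32 s/g entries */
	blk_queue_max_hw_segments(q, 32);
}
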
Lee