Dan Muresan wrote:
> > non-st_blksize sized blocks will be absolutely swamped by disk
> > latencies, cache latencies, scheduling latencies and file
> > decoding overhead. Your measurements would be so swamped with
> > noise from other factors that any differences would be
> > statistically irrelevant.
> If you could explain how any CPU load measurement ever devised by
> mankind could be "swamped" (or in fact influenced at all) by a latency
> factor, then yes, I might have missed your point... Load and latency
> are orthogonal issues, aren't they?
Regardless of whether you choose to measure latency or CPU load,
if you vary the st_blksize as specified in the previous email,
you will not be able to distinguish between the two values of
st_blksize due to the influence of other factors.
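For reference, the st_blksize under discussion is the preferred I/O
block size the filesystem reports per file via fstat(). A minimal
sketch (the function name and the 4096-byte fallback are my
assumptions, not anything from the code under discussion):

```c
#include <sys/stat.h>

/* Return the filesystem's preferred I/O block size for fd, or a
** fallback of 4096 bytes (an assumption) if fstat() fails. */
static long preferred_block_size (int fd)
{   struct stat sb ;

    if (fstat (fd, &sb) != 0)
        return 4096 ;
    return (long) sb.st_blksize ;
}
```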
> So, when in doubt, I think one should heed the advice of
> standards and common practice
Don't you think it would be better to measure rather than rely
on standards and common practice, both of which may be wrong
when it comes to things like performance issues?
> (like using fread and fwrite, not read and write), or
The behavior of fread() can be dubious for files accessed
via NFS with respect to incomplete reads (i.e. EAGAIN). The
only solution to this is to use read().
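A sketch of the usual read() wrapper for this, retrying short reads
and EINTR/EAGAIN by hand (the function name is hypothetical; treat
the unconditional EAGAIN retry as an assumption — on a non-blocking
descriptor it would busy-wait):

```c
#include <errno.h>
#include <unistd.h>

/* Read exactly `count` bytes unless EOF or a hard error intervenes.
** Retries on EINTR/EAGAIN and on short reads, which callers going
** through fread() on NFS may not see handled consistently.
** Returns the number of bytes read, or -1 on a hard error. */
static ssize_t read_fully (int fd, void *buf, size_t count)
{   size_t total = 0 ;

    while (total < count)
    {   ssize_t n = read (fd, (char *) buf + total, count - total) ;

        if (n == 0)
            break ; /* EOF */
        if (n < 0)
        {   if (errno == EINTR || errno == EAGAIN)
                continue ; /* interrupted or transient: retry */
            return -1 ;
            } ;
        total += (size_t) n ;
        } ;
    return (ssize_t) total ;
}
```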
> else address the issues that arise from breaking the
> rules (i.e.
Breaking what rules?
> provide a VIO layer, like you did, or cache
> non-block-sized reads in
> userspace).
You provide concrete proof that doing block-sized reads makes
any positive performance improvement and I'll implement block-sized
reads and buffering.
Until you can show concrete proof, I consider this issue closed.
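For the record, the userspace buffering being debated might look
something like the following sketch, which serves arbitrary-sized
reads from block-sized read() calls. The fixed 4096-byte block and
all names here are hypothetical illustrations, not libsndfile code;
real code would size the buffer from st_blksize.

```c
#include <string.h>
#include <unistd.h>

#define BLK 4096 /* assumed block size; real code would use st_blksize */

typedef struct
{   int fd ;
    char buf [BLK] ;
    size_t pos, len ; /* consumed / filled extent of buf */
} blk_reader ;

/* Satisfy a read of `count` bytes from the buffer, refilling it with
** block-sized read() calls as needed. Returns bytes delivered, 0 at
** EOF, or -1 on error with nothing delivered. */
static ssize_t blk_read (blk_reader *r, void *out, size_t count)
{   size_t total = 0 ;

    while (total < count)
    {   if (r->pos == r->len)
        {   ssize_t n = read (r->fd, r->buf, BLK) ; /* always block-sized */

            if (n <= 0)
                return total > 0 ? (ssize_t) total : n ;
            r->pos = 0 ;
            r->len = (size_t) n ;
            } ;

        {   size_t avail = r->len - r->pos ;
            size_t want = count - total ;
            size_t take = avail < want ? avail : want ;

            memcpy ((char *) out + total, r->buf + r->pos, take) ;
            r->pos += take ;
            total += take ;
            } ;
        } ;
    return (ssize_t) total ;
}
```

Whether the extra memcpy() pays for itself is exactly the measurement
question argued above.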
Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/