On Monday 06 June 2005 10:37, Mario Lang wrote:
Heh, that's a Redmond argument I'd say :-).
There is nothing wrong (ok, not that much) with accidentally
wasting CPU time, but if you are aware of where you are
wasting it, I don't buy the argument that it is OK to leave it like that :-).
Actually, it's an *engineering* argument. Technology design is full of
situations where getting the last 5% of achievable performance can end
up costing 500% more than getting the first 95% did. This is called the
'law of diminishing returns'. The principle applies much, *much* more widely
than just computer application design.
And even start-up time counts; I find programs that need a long
time to start annoying, and LONG is a very subjective number :-).
I would too, although I personally don't know that I'd call 3/4 sec a LONG
time to initialize a GUI application. The point I was trying to make is that
tradeoffs are part of the very warp and woof of the design process, and it's
impossible to develop anything efficiently without taking due cognizance of
that fact. Given the choice between spending a day adding a significant new
feature to an application or spending the same amount of time reducing that
application's start-up delay from 3/4 to 1/4 sec, I'll go for the first
option every time. Remember, *coding time* is your ultimate resource as a
programmer -- you want to invest it where it will get you the biggest bang
for the buck.
I just have to respond to this. I have been writing code for 27
years and every time I get a neophyte programmer in they want to cut
corners to save programming time. Here's the bottom line - if it saves
you a day in coding but costs the user 3/4 of a second of start-up
time, would you consider that a good tradeoff? Not if you have over 100
users and they're having to deal with that 3/4 of a second 20 or so
times a day, every day for a year. Remember, it's only hard for you to
program it correctly once - it's a PITA for the user many times a day.
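To put rough numbers on that (a back-of-the-envelope sketch; the user count,
launches per day, and 3/4 second figure come from the paragraph above, and it
assumes the whole 3/4 second could have been saved):

    # Back-of-the-envelope: total user time consumed per year by a
    # 3/4 second start-up delay, using the figures above.
    users = 100
    launches_per_day = 20
    days_per_year = 365
    delay_seconds = 0.75

    wasted_hours = users * launches_per_day * days_per_year * delay_seconds / 3600
    print(round(wasted_hours))  # ~152 hours of user time per year

That's roughly 152 hours of accumulated user time a year, set against the
single day of programmer time it would have taken to fix.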
Jan