At Tue, 07 Aug 2007 20:31:55 -1000,
david wrote:
Takashi Iwai wrote:
At Mon, 06 Aug 2007 23:10:00 -1000,
david wrote:
Arnold Krille wrote:
On Monday, 6 August 2007, Fons Adriaensen wrote:
> I assume most drivers are using the same interfaces to the
> kernel, and the same services, and that these are relatively
> stable.
> But I could be completely wrong...
Well, the kernel devs seem to change some interfaces rather often in binary
incompatible ways. And sometimes even on purpose (to drive away blob drivers
like nvidia)...
So it can be that one of these changes introduced a bug that is hard to find
and affects only very few drivers. And as the developers will probably all have
the latest kernels, they don't want to waste time debugging a problem
fixed two kernel versions ago just because the user has 2.6.4 installed and
doesn't use a half-decent distro...
Note: a decent distro (I've used several) doesn't necessarily have the
"latest" kernel - because the latest may still be in very unstable territory.
That's no longer true. Distros nowadays try to pick up the latest one as much
as possible. Take a look at recent openSUSE, Ubuntu, etc.
Of course, it's adventurous to switch to an early -rc kernel. But a
released kernel is supposed to be stable. This reduces the
maintenance burden a lot.
However, distros stick with the older kernel version for their
"business" products, mainly for keeping the 100% binary and source
compatibility, which many ISVs prefer.
IOW, it's just a matter of money :)
Well, I don't run any business distros.
Then the situation gets better (unless you're using Debian stable :)
I know I've switched to newer kernels in the past
and had whole bunches
of devices quit working - for instance, I had USB quit working completely.
On one machine, networking quit working entirely, too. So when some developer
tells me to "test again using the latest kernel," perhaps you understand
why I'm not exactly eager to go do that?
Yeah, I can understand it, of course. I have a bunch of machines with
older kernels, too. But you understand that if there is no report back from
the tester, the bug will simply be left broken? Testing is part of the
development cycle, and testing on the same environment is an
important factor, as I mentioned.
It affects the developer's ability to duplicate the bug.
You seem to forget the damn primary thing: the developers don't have
_your_ hardware. The bug, at least one related to a kernel
driver, cannot be reproduced without the exact hardware.
I think it
behooves the developer to test on the environment the person reports the
bug on.
Could you explain the reason more logically, other than laziness?
Debugging consists of the following:
1. reproduce the bug
2. spot the bug
3. fix the bug
The developer cannot do 1 in many cases. So all he can do is 2
and 3, as guesswork. Usually, the debugging session goes through the
loop 1 -> 2 -> 3 -> 1 -> 2 -> 3. Note that here, 1 is not done by the
developer but solely by you!
As the bug is often related to other components, you may have a good
chance to skip to 3 by updating to the newer version, even before
identifying the culprit. If the bug happens to be fixed, with luck,
then it becomes much easier to spot the bug by bisecting. So the step
goes from 3 to 2. Even if it's still not fixed, the update
often narrows the area to focus on.
Now you resist testing the newer version and instead ask the developer to
downgrade. How can that help?
Takashi