On Wed, 2013-04-03 at 04:00 +1300, Chris Bannister wrote:
On Tue, Apr 02, 2013 at 02:35:29PM +0200, Peder Hedlund wrote:
I saw a test where a bunch of professional
musicians and engineers
listened to a guitar player playing an old $5000 Les Paul and a $500
copy and were asked to tell which was the expensive one. About half
of them failed, including the guitar player in the group.
The same was true for a Stradivarius and a cheap beginner's violin,
though IIRC the violinist identified the Stradivarius correctly.
Isn't there a thing called listening fatigue? Where you may not be able
to accurately tell the difference in a blind test (hell, even bad gear
can sound better to certain people playing certain music), but where,
after a longish period of time, the listener may tire and get fatigued
with the cheaper stuff (e.g. a beginner's violin), compared to a much
longer listening period on, say, a Stradivarius?
Habit and physical and psychological circumstances, e.g. getting
stressed by a particular kind of sound, IOW fatigue too, are important
factors.
The price of a Stradivarius is so high that nobody should play one,
but that's another issue.
IMO 3000,- EUR for an electric guitar is OK; I just can't spend that
much money, but IMO those 3000,- EUR guitars usually are better than
500,- EUR guitars.
How do they compile the statistics? I would set up an ABX test like
this: run ABX with box 1 and run ABX with box 2, and ask only whether
there is a difference between X and A or B for each box. I wouldn't ask
about the sound quality, and I wouldn't ask which sounds better or hint
that they should notice a difference. I bet that, run this way, some
people would hear a cowbell in A of box 1 when there is no cowbell; IOW
some people wouldn't listen and compare the quality, they would
"search" for something. Some people would perhaps notice an extra bar
in B of box 2, or hear subliminal messages.
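Scoring such an ABX run comes down to counting correct identifications
of X against chance. Here's a minimal sketch in Python; the trial
counts at the bottom are made-up illustration values, not data from the
tests discussed above:

```python
# Minimal ABX scoring sketch, assuming a fair-coin null hypothesis
# (a guessing listener is right half the time). Trial numbers below
# are hypothetical.
from math import comb
from random import randrange

def run_abx_trial(respond):
    """One ABX trial: X is secretly A (0) or B (1); 'respond' is a
    callable taking the hidden assignment's stimulus index and
    returning the listener's guess (0 or 1)."""
    x = randrange(2)           # randomly assign X to A or B
    return respond(x) == x     # True if the listener identified X

def p_value(correct, trials):
    """One-sided binomial p-value: probability of getting at least
    'correct' answers out of 'trials' by pure guessing (p = 1/2)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct out of 16 trials -- better than chance?
p = p_value(12, 16)
print(f"p = {p:.4f}")  # roughly 0.038; below 0.05 suggests a real audible difference
```

This only tells you whether listeners can distinguish X from A or B at
all, which matches the point above: it asks nothing about which box
sounds better.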
So-called blind and double-blind tests are already manipulated by the
question. If people think most of us won't hear a difference, we just
guess that we hear one, and then the question of the test manipulates
us too. Tests are good for getting a rough impression, but they likely
say less about real usage, and it's similar for statistics. They are
helpful, but you must be able to understand that a test is a test and
that a statistic is a statistic.