On Tue, 02 Nov 2010 20:12:26 +0100
Benjamin Freitag <benjamin(a)die-guten-partei.de> wrote:
> Hey, linux machines don't crash as long as your proggies are compiled and
> programmed correctly.
BWAHAHAHAHAHAAAAAAAAAAAAAA!!!!!
They're computers. They're going to crash. Mine do so very infrequently,
but sometimes they do.
> And why is the drummer sitting next to the UPS, and you didn't tell him he
> IS THERE FOR DRUMMING, not manipulating your systems?
The drummer was wandering around on a break. I have no idea what made him
push that button. When I asked him why the holy hell he'd done that, he
shrugged and said "I dunno, I wanted to see what it did." My response was that
it just added $25 to his band's session because I had to reboot the systems
and fsck the drives. (I didn't *need* to, but 1) better safe than sorry and
2) consider it a stupidity tax.) He even had to reach down and under the
desk to push it.
>> So can bad RAM and/or bad hard drives. In my experience SSDs are now no
>> more or less prone to problems than any other part of the computer. Backups
>> should be approached like voting in Chicago - early and often!
And "in your experience" is how many years?
Let's see, 23 years as a UNIX (and later Linux) admin in high-availability
environments, and now a year as a QA engineer at a storage company.
> The 1,000,000 hrs MTBF is calculated by running 1000 drives for 1000 hrs
> until one starts to fail, so for every 1000 customers it may happen. Yes,
> wear-leveling algorithms became better, and yes, it all becomes better, but
> the testing time was still too low for final statements.
What I'm seeing is a significant reduction in SSD failures compared to 2-3 years
ago, when an EMC storage system that had been fitted with 32 SSDs (in addition
to 100 or so magnetic drives) would lose 2-3 of them each month. By the time I
left, the drives had all been replaced with newer-generation SSDs and the
failure rate dropped significantly - I only had to have one replaced in the
last 4 months I worked there.
Throw around as many MTBF numbers as you want; the first-gen SSDs did not
come close to those manufacturer estimates.
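Just to spell out the arithmetic behind that kind of MTBF claim, here's a
back-of-the-envelope sketch in Python. The constant-failure-rate assumption is
mine (it's the usual simplification), not any vendor's published methodology:

# How a population MTBF figure turns into a yearly failure rate.
drives = 1000           # drives in the test population
hours_per_drive = 1000  # hours each drive ran during the test
failures = 1            # failures observed in that window

mtbf_hours = (drives * hours_per_drive) / failures   # 1,000,000 hrs
hours_per_year = 24 * 365
afr = hours_per_year / mtbf_hours                    # annualized failure rate

print(f"MTBF: {mtbf_hours:,.0f} hours")
print(f"Annualized failure rate: {afr:.2%} per drive")
print(f"Expected failures per year across 1000 drives: {afr * drives:.1f}")

By that math a drive should fail roughly 0.9% of the time per year, so losing
2-3 out of 32 SSDs every month works out to something like 75-110% per year -
nowhere near the spec sheet.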
>>> For the money of two cheap SSDs (needed for solid performance)
>>> ...
>> I do agree that having 2 mirrored drives is more secure, though.
> Not secure, I mean speed: SSDs still go down in read/write speeds when
> you have fragmented disks.
Fragmentation can be an issue, but I guess I don't see how 2 drives alleviate
that problem. If they're mirrored, you're still writing all data to the drives
in the order it comes in. You need at least 3 drives for parity
RAID like RAID5 (which is evil anyway), so I'm missing how having a pair of
SSDs decreases fragmentation. Even if I'm writing to multiple files at a
time (such as recording 16 tracks at once with Ardour), the written blocks are
not so spread out that fragmentation would be a major issue for playback, and
overdubs would tend to be even less fragmented since there are fewer tracks
written.
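To put it another way, here's a toy allocation model in Python - completely
hypothetical, not how any real filesystem lays out blocks - showing that a
RAID1 mirror ends up with exactly the same block layout as a single disk; the
mirror buys redundancy, not less interleaving:

# Blocks from 16 tracks arrive round-robin (one buffer per track per period,
# roughly what a multitrack recorder does) and are appended in arrival order.
# A RAID1 mirror writes the identical sequence to both members.
TRACKS = 16
BLOCKS_PER_TRACK = 4

writes = [(track, blk) for blk in range(BLOCKS_PER_TRACK)
                       for track in range(TRACKS)]

single_disk = list(writes)      # one drive: blocks land in arrival order
mirror_member_a = list(writes)  # RAID1: each member gets the same sequence
mirror_member_b = list(writes)

assert single_disk == mirror_member_a == mirror_member_b

# Spacing between consecutive blocks of track 0 -- identical either way.
positions = [i for i, (trk, _) in enumerate(single_disk) if trk == 0]
print([b - a for a, b in zip(positions, positions[1:])])   # [16, 16, 16]

Striping would change the layout, but then you've traded away the redundancy,
and for sequential playback a 16-block interleave is trivial for a drive to
follow anyway.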
> Security only comes from backups.
Indeed.
> I love SSDs when recording in live venues, because heavy low subs WILL
> kill HDs sooner or later.
Interesting, but it makes sense.
> I hate my G.Skill with JMF602b because it had a short circuit via USB
> (faulty port on the disk) and now all e-mails, bills etc. are lost.
Sorry to hear that - I'm particularly careful with my data. I back up each of
my machines nightly to a central server, and the backup partition (which is on
a mirrored pair of drives) is then backed up to an external drive. I've never
lost any data, which would be extremely embarrassing, since I'm always preaching
about backups to my clients and employers! I assume all hardware is crap and
will break sooner rather than later! Or, plan for the worst and hope for the best.
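For anyone who wants to copy that scheme, the shape of it is simple - something
like this little Python wrapper around rsync. The host name and paths here are
invented for the example, not my actual layout:

#!/usr/bin/env python3
# Two-tier backup sketch: each machine pushes to a central server nightly,
# then the server's backup partition is copied to an external drive.
import subprocess

MACHINES_TO_SERVER = [
    ("/home/", "backup@server.example:/backup/thishost/home/"),
    ("/etc/",  "backup@server.example:/backup/thishost/etc/"),
]
SERVER_TO_EXTERNAL = ("/backup/", "/mnt/external/backup/")

def sync(src, dst):
    # -a preserves permissions and times; --delete keeps the copy an exact mirror
    subprocess.run(["rsync", "-a", "--delete", src, dst], check=True)

def tier1():
    for src, dst in MACHINES_TO_SERVER:   # runs from cron on each client
        sync(src, dst)

def tier2():
    src, dst = SERVER_TO_EXTERNAL         # runs on the server afterwards
    sync(src, dst)

if __name__ == "__main__":
    tier1()
    tier2()

The mirrored pair protects the backup partition from a single dead drive; the
copy to the external drive protects against losing the whole server.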
--
======================================================================
Joe Hartley - UNIX/network Consultant - jh(a)brainiac.com
Without deviation from the norm, "progress" is not possible. - FZappa