as I am getting closer to actually buying hard drives which are supposed to last me at least 5 years, if not longer - what is a good way to test whether a drive is DOA or prone to fail soon? just a complete read test like dd to /dev/null? or is there some fancy software for that?
-
@MagicLike smartmontools:
use `smartctl -i <device>` to see if it supports SMART (if it's not ancient, it should)
`smartctl -a <device>` (or `-x` if you like seeing more info, this one does): check that reallocated sectors and uncorrectable errors are both 0; if they aren't, that's a pretty bad sign that the drive is gone soon
`smartctl -t long <device>` will do the complete read test you mentioned; use the previous command afterwards to see the results
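A minimal sketch of that sequence, assuming a Linux box with smartmontools installed and a drive that shows up as /dev/sda (hypothetical device path, adjust per drive):

```
# does the drive support SMART? (prints model, serial, SMART availability)
smartctl -i /dev/sda

# full report; Reallocated_Sector_Ct and Offline_Uncorrectable should be 0
smartctl -x /dev/sda

# start the long self-test: a complete surface read, run by the drive itself
smartctl -t long /dev/sda

# once the estimated runtime has passed, check the self-test log and attributes
smartctl -a /dev/sda
```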
-
@0x7700e6 mhm, noted - and yes, I like to see more info, -x ftw x3
@privateger also suggested doing a badblocks test, so I will probably just combine both: 1) smartctl, 2) badblocks, 3) long SMART test, 4) smartctl (sketch below).
Only problem is time, but I have 14 days to return a dead drive, and that is plenty for 4x 8 TB drives to be tested, so that won't be a problem... many thanks
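A sketch of that four-step burn-in for one drive, assuming brand-new, empty disks (badblocks -w is destructive) at a hypothetical /dev/sda; -b 4096 works around badblocks' 32-bit block-count limit, which an 8 TB drive exceeds at the default 1 KiB block size:

```
#!/bin/sh
# destructive burn-in for one brand-new drive -- wipes everything on it!
dev=/dev/sda                      # hypothetical path, one per drive

smartctl -x "$dev"                # 1) baseline SMART report
badblocks -wsv -b 4096 "$dev"     # 2) write+read test, four patterns by default
smartctl -t long "$dev"           # 3) long SMART self-test (runs on the drive)
# wait until `smartctl -a "$dev"` shows the self-test as completed, then:
smartctl -x "$dev"                # 4) compare with the baseline: reallocated,
                                  #    pending and uncorrectable counts still 0?
```

One instance per drive can run in parallel (four shells, or a loop with &); the drives themselves are the bottleneck, not the test.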
-
@0x7700e6 @privateger getting back to this: I went through the Arch Wiki page on badblocks and found a recommendation for a faster alternative, as I need to have a NAS up and running within roughly the next two weeks. I have no idea how long a badblocks test will take for 4x 8 TB drives - an estimate formula I found/made for one 8 TB drive:
(8,000,000 / (255/2) * 2) / 60 / 60 / 24 ≈ 1.5 days
No idea if that is even remotely accurate, and I don't know the bandwidth of the controller when running this on 4 drives at once. (Formula explanation: (disk size in MB / (max disk transfer speed / 2, for a somewhat average transfer speed) * 2, for a full write and read) / 60 [conversion into minutes] / 60 [conversion into hours] / 24 [conversion into days].)
Also, I have no idea how long it will take to build the pool, and after that I still need time to transfer all the files to the NAS over a 1G network. Can you give me an idea whether that estimate is somewhat accurate? Should I just try the presumably faster method the Arch Wiki lists?
Did I miss anything? -
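For a single write+read pass the arithmetic checks out; a quick way to recompute it (the 8,000,000 MB size and 255 MB/s peak speed are the assumptions from the post above):

```
# one 8 TB drive at an assumed ~127.5 MB/s average, one full write + one full read
awk 'BEGIN { mb = 8000000; avg = 255/2; printf "%.1f days\n", (mb/avg)*2 / 86400 }'
# -> 1.5 days
```

One caveat: badblocks -w defaults to four patterns (0xaa, 0x55, 0xff, 0x00), each written and then read back, so a full default run is closer to four times that figure per drive; testing all four drives in parallel should not multiply the wall-clock time as long as the controller can feed them all.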