Discussion: HDD problem
m***@care2.com
2015-03-29 22:02:12 UTC
I have a 2TB HDD that's reported to be overheating; it's sometimes fully accessible, sometimes not.

Boot Linux, start GParted: the HDD's partitions are shown correctly in GParted. Tell it to unmount one of the partitions and it then tries to reread the disc and always fails.
Boot the gparted22 live CD, and it doesn't read the HDD at all.

I'd like to wipe the disc, put a single new partition on it and format. But something somehow is preventing this from happening. Any ideas?

Even if it's only accessible some of the time I have a use for it, as another backup layer.


NT
Mike Tomlinson
2015-03-30 11:05:26 UTC
Post by m***@care2.com
Even if it's only accessible some of the time I have a use for it, as another backup layer.
You would be crazy to use an intermittently-working disc for backup.

Take it out of the external housing - it's probably overheating as
external enclosures don't provide enough cooling - and mount it
internally in a PC.

Test it with Linux and 'badblocks -swv /dev/sdX' where X is the drive
letter. Warning, this will overwrite all data on the disk. If it finds
any bad blocks, even just one, bin it.
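A sketch of that run, with a small helper for reading the resulting log afterwards. `/dev/sdX` is a placeholder, not a real device node; confirm which device the drive is (e.g. with `lsblk`) before running anything.

```shell
# DESTRUCTIVE: the -w write-mode test overwrites the entire drive.
# /dev/sdX is a placeholder for the actual device node.
#
#   badblocks -swv -o bb.log /dev/sdX
#
# badblocks writes one block number per line to the log file for every
# bad block it finds, so a non-empty log means the drive failed.
verdict() {
    if [ -s "$1" ]; then
        echo "FAIL: $(wc -l < "$1" | tr -d ' ') bad blocks found"
    else
        echo "PASS: no bad blocks found"
    fi
}
```

Run it overnight; a full write-and-verify pass over 2TB typically takes many hours.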
--
:: je suis Charlie :: yo soy Charlie :: ik ben Charlie ::
The Natural Philosopher
2015-03-30 12:32:36 UTC
Post by Mike Tomlinson
Post by m***@care2.com
Even if it's only accessible some of the time I have a use for it, as another
backup layer.
You would be crazy to use an intermittently-working disc for backup.
+100
Post by Mike Tomlinson
Take it out of the external housing - it's probably overheating as
external enclosures don't provide enough cooling - and mount it
internally in a PC.
+1
Post by Mike Tomlinson
Test it with Linux and 'badblocks -swv /dev/sdX' where X is the drive
letter. Warning, this will overwrite all data on the disk. If it finds
any bad blocks, even just one, bin it.
+1

Also consider getting any stats out of it with SMART:

http://blog.shadypixel.com/monitoring-hard-drive-health-on-linux-with-smartmontools/
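A minimal sketch of pulling those stats with smartmontools, assuming the drive appears as `/dev/sdX` (a placeholder), plus a small filter for the attributes most telling on a flaky drive:

```shell
# Read-only queries; none of these touch the data on the drive.
#
#   smartctl -H /dev/sdX    # overall health verdict
#   smartctl -A /dev/sdX    # vendor attribute table
#
# Filter for the sector-loss attributes (IDs 5 Reallocated_Sector_Ct,
# 197 Current_Pending_Sector, 198 Offline_Uncorrectable): pipe
# `smartctl -A /dev/sdX` through this, and any output at all means the
# drive has already started losing sectors.
flag_bad_attrs() {
    awk '$1 == 5 || $1 == 197 || $1 == 198 { if ($NF + 0 > 0) print $2 "=" $NF }'
}
```

Attribute 194 (temperature) is also worth a look given the overheating report.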
--
Everything you read in newspapers is absolutely true, except for the
rare story of which you happen to have first-hand knowledge. – Erwin Knoll
m***@care2.com
2015-03-30 21:54:21 UTC
Post by The Natural Philosopher
Post by Mike Tomlinson
Post by m***@care2.com
Even if it's only accessible some of the time I have a use for it, as another
backup layer.
You would be crazy to use an intermittently-working disc for backup.
+100
An extra layer of backup with less than 100% certainty is an extra bit of insurance, not a downside.
Post by The Natural Philosopher
Post by Mike Tomlinson
Take it out of the external housing - it's probably overheating as
external enclosures don't provide enough cooling - and mount it
internally in a PC.
+1
It's always been in a PC. I don't have enough detail to know the full story on why it overheated. With a bit more think time I suspect maybe it didn't; it does seem unlikely when bolted into a case with lots of airflow.
Post by The Natural Philosopher
Post by Mike Tomlinson
Test it with Linux and 'badblocks -swv /dev/sdX' where X is the drive
letter. Warning, this will overwrite all data on the disk. If it finds
any bad blocks, even just one, bin it.
+1
I'll find & try the relevant util. Cheers.


NT
Post by The Natural Philosopher
also consider getting any stats out of it with SMART
http://blog.shadypixel.com/monitoring-hard-drive-health-on-linux-with-smartmontools/
Rod Speed
2015-03-30 22:38:40 UTC
Post by m***@care2.com
Post by Mike Tomlinson
Post by m***@care2.com
Even if it's only accessible some of the time I have a use for it, as another
backup layer.
You would be crazy to use an intermittently-working disc for backup.
+100
An extra layer of backup with less than 100% certainty is an extra bit of
insurance, not a downside.
Post by Mike Tomlinson
Take it out of the external housing - it's probably overheating as
external enclosures don't provide enough cooling - and mount it
internally in a PC.
+1
It's always been in a PC. I don't have enough detail to know the full
story on why it overheated. With a bit more think time I suspect maybe
it didn't; it does seem unlikely when bolted into a case with lots of
airflow.
Especially with a Samsung; they don't even need lots of airflow to stay cool
enough.
Post by m***@care2.com
Post by Mike Tomlinson
Test it with Linux and 'badblocks -swv /dev/sdX' where X is the drive
letter. Warning, this will overwrite all data on the disk. If it finds
any bad blocks, even just one, bin it.
+1
I'll find & try the relevant util. Cheers.
Mike Tomlinson
2015-03-31 12:21:28 UTC
Post by m***@care2.com
An extra layer of backup with less than 100% certainty is an extra bit of
insurance, not a downside.
It's your data. Only you know how much it is worth to you. If you want
to play Russian roulette with it, that's your lookout.
--
:: je suis Charlie :: yo soy Charlie :: ik ben Charlie ::
Johny B Good
2015-03-30 16:44:08 UTC
On Mon, 30 Mar 2015 12:05:26 +0100, Mike Tomlinson wrote:
Post by Mike Tomlinson
Post by m***@care2.com
Even if it's only accessible some of the time I have a use for it, as another backup layer.
You would be crazy to use an intermittently-working disc for backup.
Seconded! I wouldn't waste the time and effort, not even on a drive
that merely runs into write errors due to excessive MZER events after
clocking over a million head unload cycles.

It wasn't a Western Digital HDD as one might have supposed - it was a
2TB Samsung SpinPoint that had suffered the misfortune of being
subjected to its maximum power-saving Power Management option in a
FreeNAS (aka NAS4Free) setup for more than a year before the
staggeringly high head unload count was noticed (its twin, which had
also been subjected to the same settings, had clocked a mere 168,000
cycles in the same period - go figure! as the yanks are wont to say).
Post by Mike Tomlinson
Take it out of the external housing - it's probably overheating as
external enclosures don't provide enough cooling - and mount it
internally in a PC.
I see you're thinking it's one of those infamous Seagate external
drives. You may be right, but I thought he was referring to an
internally fitted drive (possibly a Maxtor, or a Maxtor-like Seagate)
shoehorned into the space above another such drive in the two-drive
bay of one of the older, less well ventilated mid-tower cases.
Post by Mike Tomlinson
Test it with Linux and 'badblocks -swv /dev/sdX' where X is the drive
letter. Warning, this will overwrite all data on the disk. If it finds
any bad blocks, even just one, bin it.
Alternatively, reboot from a UBCD (CD or pen drive) and run the
Seatools HDD diagnostic (or the appropriate manufacturer's diagnostic
utility if it's not a Seagate or Maxtor drive) or, failing any of
those diagnostic options, Vivard. You may have to set the drive
interface to IDE compatibility mode in the BIOS/UEFI setup menu to
make it visible to the diagnostic software.

If you find more than a dozen or so bad blocks, I'd be inclined to
retire it if they're scattered across the LBA range. If any such
blocks are in a fairly tightly defined area, you can get Vivard to
concentrate its testing over the small region that encompasses the bad
ones to verify whether they were simply the result of a one-off event
or the sign of an ongoing or worsening problem.

Don't use the 'remap bad blocks' option without at least verifying
that it's most likely been the result of a one-off event (the HDD
makers provide pitifully few 'spare' sectors for such remapping - a
mere thousand or so out of the hundreds of millions of sectors that
make up the usable capacity of a modern disk).

Remapping sectors takes considerably longer than retesting and
there's no point in using the remapping option if retesting indicates
a worsening problem that's likely to 'burn up' all the spare sectors
in short order anyway. You'll have just wasted several hours of
'remapping' time for no useful gain in that event.

A few bad sectors (or even a few hundred) is not always an indicator
of impending disk failure, so if money's tight and you can spare some
time retesting and exercising the drive to prove that it's not a
worsening condition, then it can sometimes be worth taking a chance;
after all, even using a brand-new drive involves some element of
'taking a chance' with your precious data.

However, contemplating the use of a drive that totally fails to
respond due to a suspected overheating problem is a different 'kettle
of fish' altogether. Overheating usually results in permanent damage
rather than just problems that disappear when the overheating issue is
resolved.

You need to be able to examine the SMART logs before coming to any
such conclusion. The problem you're seeing may have nothing to do with
overheating other than it being a factor in how a more serious fault
in the controller begins to manifest itself.
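One way to examine those logs from a Linux boot, sketched with smartmontools (again, `/dev/sdX` is a placeholder for the real device node). `smartctl` reports the total as "ATA Error Count: N" when errors are logged and "No Errors Logged" when the log is clean, so a one-line awk filter can reduce it to a number:

```shell
#   smartctl -l error /dev/sdX      # drive's internal error log
#   smartctl -l selftest /dev/sdX   # history of past self-tests
#
# Reduce the error-log output to a single count: "ATA Error Count: N"
# when errors exist, nothing matched (so 0) when the log is clean.
ata_error_count() {
    awk '/^ATA Error Count:/ { n = $4 + 0 } END { print n + 0 }'
}
# usage: smartctl -l error /dev/sdX | ata_error_count
```

A nonzero count here, with timestamps clustered around the warm periods, would support the overheating-as-trigger theory.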
--
J B Good
m***@care2.com
2015-04-01 15:08:49 UTC
Post by m***@care2.com
I have a 2TB HDD that's reported to be overheating; it's sometimes fully accessible, sometimes not.
Boot Linux, start GParted: the HDD's partitions are shown correctly in GParted. Tell it to unmount one of the partitions and it then tries to reread the disc and always fails.
Boot the gparted22 live CD, and it doesn't read the HDD at all.
I'd like to wipe the disc, put a single new partition on it and format. But something somehow is preventing this from happening. Any ideas?
Even if it's only accessible some of the time I have a use for it, as another backup layer.
NT
News....

Seatools didn't see any HDD; I waggled the power connector and then it did, so wherever the fault lies it looks likely to be fixable. Seatools has found no fault with the drive, and the SMART stats all passed.

It turns out the overheating was caused by a PSU fan problem, along with a high-dissipation CPU.

So... all looks positive. I just need to track down the bad connection, and hopefully I'll have a good drive fit for use in a PC.

Thanks to everyone for the suggestions!


NT
