Using an HDD with bad sectors

Hello

Has anyone tried using a disk with a few bad sectors for Storj? I think it shouldn't be a problem if there are only 10-20 bad sectors, but I haven't tested it. Please share your experience or thoughts on this topic.

Thank you in advance!

BR,
Niki

I ran a storagenode on a drive with bad sectors and it didn't go very well. That was a pretty unhealthy drive, though; your chances may vary depending on how bad it is and how full the drive gets.

For example, take a 2TB drive with 10-20 bad sectors. The bad sectors actually occupy no more than 10MB on the disk. If they get written over by some file/chunk from Storj, there is a good chance those files/chunks will not be readable afterwards, right? But that should be ignorable, because those would be 20 files out of more than 2 million (when you have a full 2TB drive). Right? Am I thinking in the right direction? I guess it is not that big of a problem to have audits at 99.5% instead of 100%, right?
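The ratio in that estimate can be sketched with a quick calculation. These are assumed figures (4 KiB physical sectors, exactly 20 bad ones, a flat 2 TB), not measurements from a real drive:

```shell
# Back-of-envelope: how much of a 2 TB drive do 20 bad 4 KiB sectors cover?
bad_bytes=$((20 * 4096))
drive_bytes=2000000000000
# express the affected fraction in parts per billion (integer math)
ppb=$((bad_bytes * 1000000000 / drive_bytes))
echo "bad area: ${bad_bytes} bytes (~$((bad_bytes / 1024)) KiB), about ${ppb} ppb of the drive"
```

So the unreadable area is on the order of tens of kilobytes out of two terabytes; the risk is less the existing sectors than the trend they indicate.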

If it's a new node, there is a high chance of getting disqualified early if it happens to write to those bad sectors, because when audit time comes it could fail. If you're going to use the drive, you could start much smaller and slowly add more space to see how it handles it. You won't really know how bad the drive is until you start getting I/O and read errors. My drive was 1TB and it started giving issues around 300GB. I tried to partition the bad sectors out, but the drive was too far gone. You will learn quickly if the drive is simply failing; you may have far more bad sectors than SMART reports.

I am using a lot of tools to diagnose HDDs, and I have several drives with a few bad sectors that have been in use for years. They haven't developed any new bad sectors. I was using them to mine BURST. However, when I used them with BURST I had them repartitioned to isolate the bad sectors, as you mentioned. But that eats a lot of space, because I cannot tell with 100% certainty where the bad sectors are. So I leave "buffers", say 10GB before and after each bad sector, and as you can see this wastes a lot of space, while the bad sector itself takes up a tiny fraction of the HDD.
The bad thing with Storj is that you cannot give the node several paths/folders to use, as you can with BURST, where it is easy: you give it, for example, 4 folders from an HDD containing your so-called plot files and that is it. If each folder contains 2 plot files of 100GB each, you are good to go with 800GB of storage provided to the BURST network. Here, however, the node wants its own drive, an isolated filesystem let's say, and it gets difficult to use such disks. That is why I opened this topic: to find out if anyone has tried anything like that.
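The guard-band waste described above can be put in numbers. These are hypothetical figures (5 bad regions, a 10 GB buffer on each side, ~4 KiB actually unreadable per region), just to show the imbalance:

```shell
# Hypothetical: 5 bad regions, each fenced off with a 10 GB guard band
# before and after, versus ~4 KiB actually unreadable per region.
regions=5
guard_gb=$((regions * 2 * 10))   # 10 GB on each side of every region
bad_kib=$((regions * 4))         # the bad sectors themselves, in KiB
echo "guard bands waste ${guard_gb} GB to fence off ~${bad_kib} KiB of bad sectors"
```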

I took a similar approach: I scanned the drive, marked which sectors were bad, then created partitions around them, and somehow I still got errors after the drive filled up. But your drives may not be in as bad a shape, so it could be fine. If they're normal consumer drives your mileage may vary, though, since they don't handle bad sectors as well as, say, an enterprise drive does. But even those can fail just as badly.

Yes, they are normal disks.
Well I always have the option to test.
So, YES, if no one here shares their experience, I will create a new node on one of those drives and see what happens. If it gets disqualified... what the hell, not that big of a problem.

If you make sure that the drive does not retry failed blocks ad infinitum, e.g. by enabling a setting like TLER, then it might be good enough for storage nodes. I used to keep .ccache on a drive like that; it was decent.
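As a sketch of what enabling such a setting looks like on Linux, via smartctl's SCT Error Recovery Control interface (the device name is a placeholder, and not every consumer drive supports SCT ERC):

```shell
# Query and cap the error-recovery timeouts; values are in tenths of a
# second, so 70 = 7 seconds. /dev/sdX is a placeholder device.
smartctl -l scterc /dev/sdX          # show current read/write timeouts
smartctl -l scterc,70,70 /dev/sdX    # cap both read and write recovery at 7 s
# Note: on many drives this setting resets on power cycle, so it has to
# be re-applied at every boot.
```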

Though, I wonder whether the storage node's error handling is robust enough not to crash in such a situation. I'd hope so.

Run mke2fs -cc; it will help you isolate the existing bad sectors a priori.
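For reference, a sketch of that invocation (the partition name is a placeholder, and creating the filesystem erases whatever is on it):

```shell
# -c makes mke2fs scan for bad blocks before creating the filesystem;
# giving it twice (-cc) runs a slower read-write test instead of the fast
# read-only one. /dev/sdX1 is a placeholder partition.
mke2fs -cc -t ext4 /dev/sdX1
```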

Hi
I am using Windows, and I forgot to mention that I would prefer to keep using it, so suggestions for Windows are preferred. In that case, can you please explain in a bit more detail how to enable TLER on the drives, if that is possible at all?

That depends on the specific drive model. The Wikipedia page I linked has a list of tools for doing so.

Ah, sorry then, I’m rather ignorant regarding Windows. Though I think it’s likely that there are similar tools for NTFS.

Just run chkdsk /r on the drive; that should lock out any bad sectors. The thing to worry about is not really the bad sectors the drive already knows about, but the ones it doesn't, or the ones that are about to fail. Bad sectors are often a sign of an HDD about to give up, but I've seen HDDs with some bad sectors survive for many more years as well. I say just do the chkdsk /r and yolo it. I'm pretty sure your node will be perfectly fine.
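For reference, the command looks like this (run from an elevated Command Prompt; the drive letter is a placeholder):

```shell
:: /r implies /f: chkdsk locates bad sectors and recovers readable data.
:: On NTFS, clusters that fail the scan are recorded in the $BadClus
:: metadata file, so the filesystem should not hand them out again.
chkdsk D: /r
```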

Are you sure /r will mark the bad sectors so they will no longer be used? I think it just tries to recover the data on those sectors, nothing more. That is the bad thing: you cannot simply mark the bad or weak sectors. If you could, that would be great and you could save a lot of space. And if something occupied those sectors, a file for example, nothing else would ever try to access them, so everything else would remain usable.

There are some reports of people saying it should block out bad sectors; the same goes for a full format. The truth is, if bad sectors are visible at all, it means the drive has already run out of spare space to remap them. You don't have just 10 or 20; you have 10 or 20 more than the HDD could replace by itself. So yeah, usually a bad sign on modern drives.

But Storj is redundant. You can just give it a go. One of my nodes runs on an HDD that has been kicked out of a Drobo array because it was supposedly bad. It’s been running for over a year now without a single failed audit.

If you're a Windows user, HD Sentinel is hands down the best utility for keeping track of the health of your drives. It scans, does destructive re-initializations, sends notifications, and does pretty much everything you could ask for. It also tracks detailed information and remaining life for SSDs and spinners alike.

https://www.hdsentinel.com/

Cheap and hands down the best…

Keep in mind you have to re-run most of the Windows solutions after each reboot. At least that's what I remember from some years ago...

Hi
Sentinel is well known.
But it does not isolate the bad sectors. I have tested the full re-initialization and NO, it does not prevent the bad sectors from being written to again later. That is the bad thing.

Not a Windows solution, but you can use badblocks and mkfs to tell the filesystem which sectors to skip. If you have a low-power device such as a Pi or Compute Stick that you could dedicate to a node, you may find it more economical than running a larger system for only a few nodes.

https://wiki.archlinux.org/title/badblocks#Have_filesystem_incorporate_bad_sectors
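A sketch of the linked workflow, assuming an ext4 target (the device is a placeholder, and the -w write test destroys all data on it; use -n for a non-destructive read-write test):

```shell
# Scan the partition and save the bad-block list. The block size given to
# badblocks (-b 4096) must match the filesystem block size, or the block
# numbers in the list will not line up.
badblocks -wsv -b 4096 -o /tmp/bad-blocks.txt /dev/sdX1
# Hand the list to mke2fs so those blocks are never allocated.
mke2fs -t ext4 -b 4096 -l /tmp/bad-blocks.txt /dev/sdX1
```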

Just for sh*ts and giggles I'm going to give this a go, as I pulled a 3TB drive with errors out of my NAS today and have set it up in a spare PC.
Should be a bunch of fun to see how long this will actually last.

15 years ago I had a hard drive with a bunch of bad blocks. It was a great drive to store torrents and web browser’s cache. Lasted more than the PC it was connected to.

I’ve been using a 2TB drive with some bad sectors for 14 months now. No problems so far.
