Remove disk space hard minimum

I have some 500GB hard drives but I can’t run storagenode on them because, after being partitioned, they are slightly less than 500GB and the node refuses to start. I don’t think it’s necessary to have such a limit in software, and I’d think anyone could compile the software themselves with a lower limit if they really wanted to.

1 Like

The minimum storage setting is in config.yaml; it is commented out (#). Remove the comment and edit it, that may help. I think the minimum is set because holding so little space is absolutely not profitable. I also have 4-5 such HDDs, but I don't think there's a use case for them; they take 6-7 W each.
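If I remember correctly, the relevant line looks something like this (option name from my own config.yaml; the value here is just an illustration):

```bash
# In config.yaml, uncomment the minimum-disk-space line and lower it:
#
#   storage2.monitor.minimum-disk-space: 450.00 GB
#
# then restart the node so it picks up the change:
docker restart storagenode
```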

1 Like

The problem with 500 GB hard drives is that they're not truly 500 GB in the first place, so you would need a drive that is slightly bigger to be able to run a node. But if you have more than one drive, put two 500 GB drives in a RAID and you're OK.

Then you would risk losing all the data if a drive fails, as I assume you would run RAID-1.
Also, if you've read the docs, you should really NOT allocate 100%; you should leave 10% headroom on the drive.

If the drive fills up before reaching the space you allocated to the node, the whole node could become corrupted, hurting your reputation; you might be kicked out of the network, having wasted time and money for nothing.

I’ll try this, thanks

EDIT: Yep, setting the minimum lower in the config allowed me to use the hard drive.

2 Likes

RAID-1, as you wrote, is a mirror and does not lose data: a failure of either drive leaves all data intact. If you merge all disks into RAID-0, then yes, the entire node will be lost when one disk fails.

I’m wondering why there’s a limit in the first place?

I believe it is still the case today?
If I had 2 disks of 320GB and 500GB, why could I not use them for STORJ?

I get that it may not be profitable depending on how much power they consume, but that’s my problem in a way :slight_smile:

Is there a technical reason why StorjLabs wouldn’t allow allocating less than 500GB to a Node?

1 Like

Simple: it's just not worth it. The second reason is to protect some storagenodes from starting (Pi and Windows/Mac docker nodes) when the mount has failed for some reason. Those systems don't have enough free space on the root filesystem to store data in the mount point directory instead of on the disk, so the node fails to start because it does not meet the minimum required available space.
The operator then has time to fix the problem, and the node will not be disqualified later for losing data (otherwise, data written to the mount point directory would be hidden after a successful mount; on Windows or Mac that data would be lost).
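Roughly, the failure mode and a manual guard against it look like this (Linux sketch; paths and container name are illustrative):

```bash
#!/bin/sh
# If the mount failed, /mnt/storj is just an empty directory on the root
# filesystem, and a node pointed at it would write there instead of to the
# disk. The built-in minimum-space check catches this on small root
# filesystems; on larger ones you can guard manually before starting:
DATA=/mnt/storj
if ! mountpoint -q "$DATA"; then
    echo "$DATA is not mounted, refusing to start storagenode" >&2
    exit 1
fi
docker start storagenode
```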

1 Like

But when the mount fails, the drive is simply not available, so why would it fall back anywhere?

Besides, although it might be pretty rare for RPis to have hundreds of GB on the root/system storage device, Windows/Mac/Linux desktop PCs surely do. So I'm not sure I understand what you're getting at… :confused:

It's not really rare, it happens very often: if the disk fails for some reason, the mount breaks, and that ends in DQ very fast if the node doesn't go down. When the node goes down, you have a much better chance of finding out about it.

1 Like

If I'm not mistaken, I believe that isn't the case anymore? Aren't nodes suspended if the storage directory isn't available, as of recently?

No, suspension is only for "unknown" audit errors, i.e. things other than missing, inaccessible, or corrupt files. The recent database lock issues would be an example of that (fixed in v1.6.3). Missing data would still lead to disqualification.

The thing is: if the mount fails, you don't lose only the files, you lose access to the databases too, right? (unless you put them elsewhere)
Hopefully the node won't start in such conditions?

It won't start if the path is not available. But if you didn't use a subfolder within that mount point, it will simply start as if it were a new node, in the folder where the mount point used to be. For the satellite, there would be no way to distinguish between a mount not being there and a node losing all its data.

This is why SNOs are often advised to use a subfolder of an HDD instead of its root.
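A minimal sketch of that layout, assuming a Linux host with the disk mounted at /mnt/storj (paths and container name are illustrative; most docker run flags omitted):

```bash
# Mount the disk, then point the node at a subfolder of it. If the mount
# ever fails, /mnt/storj/storagenode simply doesn't exist, and the node
# refuses to start instead of rebuilding itself in an empty directory.
sudo mount /dev/sdb1 /mnt/storj
mkdir -p /mnt/storj/storagenode

docker run -d --name storagenode \
  --mount type=bind,source=/mnt/storj/storagenode,destination=/app/config \
  storjlabs/storagenode:latest   # identity, ports, env flags omitted
```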

2 Likes

Damn! I see the problem now :scream:
Thx for the explanation. I should fix my configuration asap… :no_mouth:

Just check the available space in the docker container on those systems :wink:
They use a Linux VM to run Docker.
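For instance (container name illustrative), this shows the space the container actually sees, which on Windows/Mac is the VM's filesystem rather than the host's:

```bash
docker exec storagenode df -h /app/config
```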

That will not protect you. The only way is to have a subfolder for the storagenode, as explained by @BrightSilence above. I would suggest moving the identity there too; that is even more robust than the subfolder alone, since with a missing identity the node will not start either.
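Something along these lines, assuming the usual Linux default identity location (paths are illustrative; check where your identity actually lives first):

```bash
# Move the identity onto the data disk, so a failed mount also makes the
# identity unavailable and the node refuses to start:
mv ~/.local/share/storj/identity/storagenode /mnt/storj/identity
# ...then bind-mount it from there in the docker run command:
#   --mount type=bind,source=/mnt/storj/identity,destination=/app/identity
```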

1 Like

That should be written in bold in the docs. I think this is the cause of almost all disqualifications.

1 Like

I did that this weekend :+1:

I was wondering: would this trick of the subfolder prevent disqualification if the mount were to fail while the storagenode is running?

I don’t think so, no. It’s not perfect, but it helps in some scenarios at least.

Alright, okay. Better than nothing.