Disk Space is less than requested - Bug?

I’ve been running a 500GB experimental co-lo node, but the latest update has broken it:

WARN piecestore:monitor Disk space is less than requested. Allocated space is {"bytes": 497553056384}
ERROR piecestore:monitor Total disk space is less than required minimum {"bytes": 500000000000}

It’s a 550GB iSCSI slice, but it looks like there’s a Base2 conversion issue that has shut down my node…
What gives and why did this happen?


Maybe the log files are now eating up too much space. The storage node checks the free space on the disk; yesterday that might have been above the minimum, and today something else consumes a bit of space, so you can’t reach the minimum anymore.

Storj data lives on a separate drive/iSCSI slice and is not the main drive on the computer…

Everything has 50% or more free space, and the log files are not an issue for now.

Like I said before, it looks like a potential Base2 numbering conversion issue. I raised the slice size to 650GB to see what would happen, and now Storj is running again.

Note that it ran fine for 2 months at 550GB before it got clobbered by the latest update.

I doubt it’s a base2 conversion issue, since Storj will use whatever units you specify (GB or GiB for example).

Prior to v1.11.1, allocated storage space was rounded to the next 100 GB, so when you specified 550 GB, the node was allowing for 500 GB or 600 GB. I am unsure which direction it would have rounded. In v1.11.1 and later, the 550 GB would have been respected and your node would have possibly been using more than allocated for a while until pieces were removed. Perhaps this is part of what you are seeing, but without knowing the exact sequence of events, it is hard to say.
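
Just to illustrate what that rounding would do to a 550 GB allocation (this is plain arithmetic showing both directions, not the actual storagenode code):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const hundredGB = 100e9 // 100 GB in SI bytes
	allocated := 550e9      // 550 GB, as configured

	// Rounding the allocation to a 100 GB step in either direction:
	// 550 GB becomes 500 GB (down) or 600 GB (up).
	down := math.Floor(allocated/hundredGB) * hundredGB
	up := math.Ceil(allocated/hundredGB) * hundredGB
	fmt.Printf("rounded down: %.0f GB\n", down/1e9)
	fmt.Printf("rounded up:   %.0f GB\n", up/1e9)
}
```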

The iSCSI slice only has 250GB used out of the 550GB (now 650GB).

My Storj app failed on October 3rd. I rebuilt the node today just in case it was a weird Windows thing, but that’s when I found the <500GB error, because somehow 550GB = 497GB to Storj.

The Storj software uses SI measurement units by default, so 550GB in config.yaml (or the -e STORAGE environment variable) is 550,000,000,000 bytes.
OSes use binary measurement units, so 550 GiB is about 590.56 GB and, vice versa, 550 GB is about 512.23 GiB.
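
Here is a quick Go sketch of that arithmetic, in case it helps to see the exact byte counts (just the unit conversions, nothing Storj-specific):

```go
package main

import "fmt"

func main() {
	const GB = 1e9      // SI gigabyte
	const GiB = 1 << 30 // binary gibibyte, 1,073,741,824 bytes

	// 550 GB (what you put in config.yaml) versus 550 GiB
	// (what a binary-units tool would report as "550 GB").
	fmt.Printf("550 GB  = %d bytes = %.2f GiB\n", int64(550*GB), 550*GB/float64(GiB))
	fmt.Printf("550 GiB = %d bytes = %.2f GB\n", int64(550*GiB), 550*float64(GiB)/GB)
}
```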

To me it looks like the space has been used by something else.

There could have been an issue with the databases where the node didn’t count some or all of the stored data. In that case it checks how much room is available on the disk and subtracts too little for the data already in use by the node. This is hard to verify now, since the node recalculates that used space on start. So if this was the problem, it’s likely fixed now that your node is up and running again.

The iSCSI slice only has the Storj Database on it, nothing else

The reason it is concerning is that if my other 5TB node dies for whatever reason, I’m going to be stuck if Storj doesn’t reconnect the drive correctly and sees its own database as foreign-used space.

It really shouldn’t get stuck. In my experience this usually only happens if there is corruption in the storage_usage database. If that happens and the node sees less than 500 GB of free space, the temporary workaround to get it to start (which will make it recalculate the space used) is to stop the node, set the storage2.monitor.minimum-disk-space: 500.00 GB parameter in config.yaml to a lower value (and make sure to uncomment the line), and restart, as in the example below.
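
For example, the relevant line in config.yaml could look like this (the 400.00 GB value is only an illustration, pick whatever lower value fits your situation):

```yaml
# temporary workaround only -- revert or remove once the node has started
# and recalculated its used space (400.00 GB is an illustrative value)
storage2.monitor.minimum-disk-space: 400.00 GB
```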

(And as a side note, make sure not to confuse the “storj database” and used storage space, as these mean different things. I am assuming your iSCSI slice has both the databases and the storage.)
