Error: disk space requirement not met on USB SSD

I have been running 3 nodes on my Synology NAS under Docker for a couple of years now, but all on external USB hard drives. The system is generally stable, but sometimes I have to restart the nodes…
I recently added a new USB SSD drive and set up the storagenode as usual,
but after a day or two I get the message: Error: piecestore monitor: disk space requirement not met

I have read that this can be a problem with mounting the disk. I restarted the Synology… no success.
I reinstalled the node… no success.
I deleted everything and started the node from the beginning (I didn't mind because it's a new node with just 100 GB).
But after a couple of days the node showed the same problem again. I tried restarting and re-installing, but no luck.
I don't want to start the node again from scratch.
The SSD works fine, and my Synology reports the volume size and disk capacity correctly.

Add this to your config, if you are really convinced that it's really not a space problem.
And file a bug report, or not @Alexey ?
Sometimes problems come in pairs…

Please check the filesystem type and the cluster size.
I do not know how to do that on Synology.
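If SSH is enabled on the NAS, something along these lines could show it. This is only a sketch: the volume path is taken from the run commands posted later in this thread, and /dev/sdq1 is a placeholder for the real USB device:

# filesystem type and free space of the USB volume
df -T /volumeUSB4/usbshare
# mount options currently in use for that volume
mount | grep volumeUSB4
# for an ext4 volume, the block ("cluster") size; replace sdq1 with your device
sudo tune2fs -l /dev/sdq1 | grep "Block size"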

Thank you, my friends.
I tried to add this to my settings:
storage2.monitor.minimum-disk-space: 1MiB

Now I don't get the same error message anymore.
Now it says it doesn't find identity.cert.

How do I upload my log file here?
Copy-paste, or is there any other way?

Did you recreate an identity, sign it and reinstall the node before restarting it all after the previous failure?
Are you really convinced the mounts are correct?

No, I used the same identity… why should I recreate it? Won't I lose everything if I recreate the identity?
Yes, I am certain the mounts are correct.
Just one thing…
My storagenode has the name storagenodessd.

Should I write the
"storage2.monitor.minimum-disk-space: 1MiB"
as
storagenodessd.monitor.minimum-disk-space: 1MiB ?

If you start over with new data, or a new node, you need a new identity, otherwise disqualification is inevitable (because of the missing data, which gets audited).

The identity belongs to a specific node. It's not universal…

NO

It is a docker run command or the config.yaml you edit, for the corresponding node.
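To illustrate, a sketch only (the paths and the container name are taken from the run commands posted later in this thread): the option name itself never changes; you just apply it to the node in question, either in that node's config.yaml or as an argument after the image name in that node's run command.

# in the config.yaml of this node, i.e. the host side of its config mount
# (/volumeUSB4/usbshare/storjssd/config/config.yaml):
storage2.monitor.minimum-disk-space: 1MiB

# or appended after the image name in the run command of the container storagenodessd:
sudo docker run -d ... --name storagenodessd storjlabs/storagenode:latest --storage2.monitor.minimum-disk-space=1MiB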

Not recreate: generate and sign a new one, with the same email, for the new node.

At worst, you lose everything if you use the same identity on more than one node.

Thank you… your help is precious,
but I didn't make myself clear.
Of course I have a new node identity. The node has been up for about 3 days… until now.
I don't mind starting it over, but this is the second time I have started the node again from scratch. The first time it lasted 1 day, and now 3 days.

The tip with "storage2.monitor.minimum-disk-space: 1MiB" really works,
but now I am stuck on "failed to load identity".

OK…
I will start my node over again…
Thank you.

I already have 3 nodes, for over 3 years, but all hosted on USB HDDs.
This one is the first on a USB NVMe SSD…
I think something is going wrong with --mount for NVMe SSDs on USB ports.

Posting logs and run commands could help here too.
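If copy-paste gets unwieldy, the log can also be dumped into a file first and attached or quoted from there (container name assumed from this thread):

sudo docker logs storagenodessd > storagenodessd.log 2>&1
# or only the most recent lines
sudo docker logs --tail 500 storagenodessd > storagenodessd.log 2>&1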

good luck

What I did, and it worked:

  1. I deleted the contents of the folder: config
  2. I left the folder identity intact
  3. I ran the docker setup again
  4. I added storage2.monitor.minimum-disk-space: 1MiB to the node and ran the node again

Of course I lost all the data and started again.
Let's see how it goes this time (3rd try).

It's not. This is a precautionary setting, as I explained elsewhere, to not allow a node with a broken mount point to run.

because

If you deleted the data, you must delete the identity too. If you want to start over, you must generate a new identity (not copy the old one!) and sign it with a new authorization token.

Since the node cannot find its disk, it refuses to start. You need to fix the mounting problem and mount the disk statically, see How do I setup static mount via /etc/fstab for Linux? - Storj Docs
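On a plain Linux host that looks roughly like the sketch below. The device, UUID and mount point are placeholders, and Synology DSM manages its own USB shares, so treat this only as an illustration of what the linked guide sets up:

# find the UUID of the data partition
sudo blkid /dev/sdq1
# add a line like this to /etc/fstab (ext4 assumed)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/storj ext4 defaults 0 2
# mount everything from fstab and verify
sudo mount -a
df -h /mnt/storj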

You must delete the identity too, and generate a new one (identity create storagenode), not copy it.
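Roughly, per the identity guide, that looks like this (placeholders only; the authorization token must be a newly requested one):

identity create storagenode
identity authorize storagenode <email>:<authorization-token>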

It is, because at the times it comes into effect it mostly gives only trouble and no help, like in this case.

If the mount point isn't there, the node will fail anyway. Probably even the config file won't be there, and storage-dir-verification will not be there. The only way to get it running anyway is if you're unwise enough to rerun the installation.

Furthermore, there are more effective ways, like chattr +i /mountpoint.
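A sketch of that approach, with a placeholder mount point: while the disk is unmounted, the empty mount-point directory is made immutable, so a node with a missing mount cannot silently write into the system volume:

# run once while the disk is NOT mounted
sudo chattr +i /mnt/storj
# once the disk is mounted on top of it, writes land on the disk as usual;
# if the mount is missing, any attempt to write into /mnt/storj fails immediately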

Thank you all for your help…
The mount is normal for the system in Synology. Everything is there as it should be.
The problem was the "disk space requirement not met" error.
Then, with the storage2.monitor.minimum-disk-space: 1MiB instruction, the error about the .cert appeared.
I say it again… I checked with the Synology CLI and the web interface too… the system is OK, but Storj didn't find the identity.

And I didn't delete the previous identity… it's now 3 days on and seems fine.
It was a node in vetting, that's why… not an old node but a new one… so the system didn't disqualify my node even with the same identity and certificates.

Anyway, it runs fine now… until the next… problem… I left the storage2.monitor.minimum-disk-space: 1MiB in place, which seems to somehow help the correct usage of my USB SSD.

But you never explained to me why I got "disk space requirement not met" twice, and only on this node, which is an SSD. Why have I never seen it on my other 3 nodes, which are HDDs? And how did it disappear with the "storage2.monitor.minimum-disk-space: 1MiB" setting?

I will let you know… thank you all again.

sudo docker run --rm -e SETUP="true" \
--mount type=bind,source="/volumeUSB4/usbshare/storjssd/identity",destination=/app/identity \
--mount type=bind,source="/volumeUSB4/usbshare/storjssd/config",destination=/app/config \
--name storagenodessd storjlabs/storagenode:latest

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28970:28967/tcp \
-p 28970:28967/udp \
-p 14005:14002 \
-e WALLET="xxxxxxxxx" \
-e EMAIL="jimpapi@yahoo.com" \
-e ADDRESS="xxxxxxxxxxxxx:28970" \
-e STORAGE="1.0TB" \
-e storage2.monitor.minimum-disk-space=1MiB \
--mount type=bind,source="/volumeUSB4/usbshare/storjssd/identity",destination=/app/identity \
--mount type=bind,source="/volumeUSB4/usbshare/storjssd/config",destination=/app/config \
--name storagenodessd storjlabs/storagenode:latest

The simplest explanation is this:
Storj isn't implemented without flaws; one of the best-known flaws is the need for a filewalker to get correct stats on the usage of your storage.
As soon as your stats are incorrect or lost, the binary assumes none of the files on the disk are related to Storj, until the filewalker has finished again…
Meaning that if your node is already filled to the rim, there's a risk the node will be filled beyond the space you actually allocated for it, because the node is assumed empty and therefore accepts uploads until the filewalker finishes.
So you might end up in a situation where the remaining storage is less than storage2.monitor.minimum-disk-space… (which is 500GB by default!)
In that case this option (apparently meant for another reason) keeps your node from starting again, which puts you at risk of losing the whole node.

Actually, an SSD should make this unfortunate situation less likely, because the filewalker usually finishes in a shorter amount of time.
But if the SSD node is already very tiny, you might end up in this situation as well.

In the end, the implementation of this option is just bad, because the assumptions behind it are inevitably wrong and the reason for its existence is already taken care of by other means. But apparently I have an opponent in @Alexey

This is a consequence, not the root cause. The root cause is that the filewalker didn't finish the scan of used space, or that the databases are locked or corrupted.
If the databases are locked or corrupted, some orders might not be sent and will not be paid.
You need to fix the root cause, not cover the fire alarm with chewing gum (reducing the threshold for free space), as @JWvdV suggests :smiley:

Please search for errors related to databases

docker logs storagenode 2>&1 | grep database | grep -iE "error|failed"

and filewalkers

docker logs storagenode 2>&1 | grep walk | grep -iE "error|failed"

and FATAL ones

docker logs storagenode 2>&1 | grep FATAL

@JWvdV the monitoring threshold for minimum free space must remain, to prevent various issues with the storage or the code (we could introduce a bug).
So, I'll disagree with reducing this parameter without a reason. Each case should be investigated and the root cause fixed, instead of killing the messenger :wink:

OK… I stopped the node and started it again without storage2.monitor.minimum-disk-space=1MiB.

Sorry… the node gives me the error about "space requirement not met" again.

Alexey, I searched for errors as you said… in the docker logs… there are no errors as you described for this particular node… only warnings!

Look at today… as the node has been re-run without the "…disk-space=1MiB":
Docker logs for today:

Information 2024/01/25 16:59:41 jimpapi Delete container storagenodessd.
Information 2024/01/27 09:16:44 jimpapi Stop container storagenodessd.
No sign of error!

Without this parameter your node wouldn't even start if you have less than 500GB of free space.
The node should work as long as the filewalker doesn't produce any error; it can run for several days if the disk is slow, you just need to wait.
After the databases have been updated, you can remove or comment out this parameter.
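To make that concrete, here is a rough sketch of the check as described above; this is an assumed reconstruction, not the actual storagenode code, and the path is the host side of the config mount from this thread:

# free space reported for the storage location, in bytes
FREE_BYTES=$(df --output=avail -B1 /volumeUSB4/usbshare/storjssd/config | tail -1)
# default for storage2.monitor.minimum-disk-space is 500 GB
MIN_BYTES=$((500 * 1000 * 1000 * 1000))
if [ "$FREE_BYTES" -lt "$MIN_BYTES" ]; then
  echo "Error: piecestore monitor: disk space requirement not met"
fi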

Please note, each restart will restart a filewalker from zero.


I suppose that's the problem with my SSD… it's 500GB.
I am waiting for a 4TB SSD next week, so I thought I could start the node on this 500GB SSD and, when the bigger one arrives, copy the node there.
This way I could get a head start on the vetting period by a few days…
Thank you Alexey!

Should be…

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28970:28967/tcp \
-p 28970:28967/udp \
-p 14005:14002 \
-e WALLET="xxxxxxxxx" \
-e EMAIL="jimpapi@yahoo.com" \
-e ADDRESS="xxxxxxxxxxxxx:28970" \
-e STORAGE="1.0TB" \
--mount type=bind,source="/volumeUSB4/usbshare/storjssd/identity",destination=/app/identity \
--mount type=bind,source="/volumeUSB4/usbshare/storjssd/config",destination=/app/config \
--name storagenodessd storjlabs/storagenode:latest \
--storage2.monitor.minimum-disk-space=1MiB

In my opinion.
I never came across the option to put it in environment vars.


Yeah, so the default value is quite weird to start with. And second, the action taken as soon as the remaining space is less than the threshold is odd: it shouldn't exit the node, it should just stop ingress and let the filewalker do its job.

If the mount point is the problem, there will be other checks that won't be fulfilled, making the node fail anyway.

This assumption is a real problem.

However, nodes I have been running with storage2.monitor.minimum-disk-space=1MiB for over 6 months now have interestingly never given me problems. Even on SMR drives, without finished filewalks, they always leave about 3-5% of the drives untouched. So there seems to be another safety measure implemented.

But it's probably wise to rethink when to kill the node and when to just stop accepting ingress. Because simply cleaning up orders often already resolves the problem (so this setting is checked too early in the process), and again: the mount-point thing is really nonsense, already taken care of by other means.

The root causes, in my opinion, are
flawed assumptions made during the implementation of the binary, just like you can see above:

  • Assuming filewalking will go fast
  • Assuming that, as long as filewalking hasn't finished, you can advertise the last known free space, or the whole allocated space in case the previous data is missing at all.