The read-only attribute has been ignored by Windows for years; disregard it. That attribute has basically been supplanted by security rights (ACLs).
thanks for the confirmation
Is there anything in the logs indicating that it's a startup process? I was under the impression all of the walkers are re-run eventually?
"lazyfilewalker.used-space-filewalker" indicates the startup process. All other filewalkers run periodically, like these:
lazyfilewalker.gc-filewalker
lazyfilewalker.trash-cleanup-filewalker
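A quick way to see which walkers have run is to grep the node log for `filewalker`. This is just a sketch: the log path and the sample lines below are made up for illustration (on a Windows GUI install the log usually lives next to the binary; on Docker you'd pipe `docker logs` instead).

```shell
# Create a tiny sample log so the example is self-contained.
# These lines are illustrative, not copied from a real node.
cat > /tmp/storagenode.log <<'EOF'
2024-05-29T23:01:00+03:00 INFO lazyfilewalker.used-space-filewalker starting subprocess
2024-05-29T23:02:00+03:00 INFO piecestore upload started
2024-05-29T23:03:00+03:00 INFO lazyfilewalker.gc-filewalker subprocess finished successfully
EOF

# Show only filewalker activity.
grep "filewalker" /tmp/storagenode.log
```

Only the first and third sample lines match, so you can tell at a glance which walker ran and when.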
I learn a bit more every day here, thanks!
I'll leave it tonight; if lazy doesn't finish I'll probably turn off lazy and restart the node to see if it finishes that way.
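For reference, turning lazy off is a config change rather than anything fancy. In the YAML config it would look roughly like this (flag name as I've seen it in recent releases; verify it against your node version):

```yaml
# config.yaml snippet (sketch): run filewalkers at normal IO priority
# instead of as a low-priority "lazy" subprocess.
pieces.enable-lazy-filewalker: false
```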
For specs I am:
Win10 (I know, I know, but I can't change this anymore)
32GB DDR3
3 nodes (1 per hdd)
1 ssd for OS + all the databases (3 nodes worth)
CPU: AMD A10-7860K (fake hyperthreaded dual-core @ 4.1GHz)
So specs-wise it shouldn't be taking this long, but I'll see.
Just try to stop restarting the node. Let it finish its job.
You can always change.
A Raspberry Pi is a cheap computer and running a node "forced" me to learn Linux.
It was daunting for me at first (I hadn't used a command line since DOS back in the '90s!) but totally worth it. It's been great fun (but still SO much to learn!)
For running services like Storj it just seems so much more stable! Most of the time, when it went wrong it was because I fiddled with it.
FFS. LET. IT. FINISH.
well, I have ~15TB of NTFS data and no spare hard drives to migrate that to ext4. I was also under the impression that Linux doesn't handle NTFS as well as Windows does, so I'm stuck here for the time being.
Alright, I heard you guys, I'll let it keep running for a while longer…
Hahahaha! Someone needs his morning coffee!
So you guys were 100% on the money, I just wasn't letting the lazy filewalker finish. I guess it takes a long time since there is so much activity that it doesn't get the priority it needs.
I'll mark it as closed now - thanks everyone for your suggestions/feedback!
That's great news!
I still think you should spend about £100 and get a Raspberry Pi and an SSD and start playing with it.
Maybe even spin up a new node with a small external HDD.
You won't regret it and we'll be here to help you out!
All the best!
If Storj fills up all my current disks (and I have room to add at least 1 more), then my future will be playing around with Linux servers (I really hope I get an excuse to do that). But as is, I'd rather just buy another HDD and throw it in the current case.
PSA: used-space needs to be re-run with the next update (1.105.x), even after running it when updating to 1.104.5. After cleaning up most of the expired trash, nodes are starting to report 0B trash even though there are files in the trash folders.
This is the error:
2024-05-29T23:03:22+03:00 ERROR blobscache trashTotal < 0 {"Process": "storagenode", "trashTotal": -421774055460}
2024-05-29T23:06:19+03:00 ERROR blobscache trashTotal < 0 {"Process": "storagenode", "trashTotal": -25964643328}
Looking forward to another week of used-space running.
It took 7 minutes on a node with ~1TB data including trash on ext4. Another node with ~2TB on xfs took about 14 hours.
I'm moving a node from xfs to ext4 right now. I will run the used-space filewalker twice on xfs before the last rsync, and then run it again on ext4 afterwards. This way I can compare the speed more accurately, as it will be the same node with approximately the same data.
2TB nodes arenāt the same as 20TB nodes though.
If it's linear, it should take 20 × 7 minutes. And who has 20TB nodes these days?
It's not linear, and let me explain why: metadata can't be cached when you grow a node that big. If it's a 1TB node, most (if not all) of the metadata would be in RAM (cached). A 20TB node has to go out to the disk to get the metadata, and if you have lazy walkers enabled (I do), that means the used-space FW needs to wait for free IO (since it runs at a lower priority) to get the data off the disk.
Can't say I have 20TB nodes, thanks to all the deletions.
This usually means that the database is not updated. Could you please restart the node with the used-space-filewalker enabled on start?
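In the YAML config, the startup scan is controlled by a flag along these lines (name as I've seen it in current releases; check it against your node version, and note it defaults to enabled unless someone turned it off):

```yaml
# config.yaml snippet (sketch): run the used-space filewalker on node start
# so the databases get corrected against what's actually on disk.
storage2.piece-scan-on-startup: true
```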