Two weeks working for free in the waste storage business :-(

I thought such a cache or file database should already exist somewhere in the node's .db files?
Oh, it doesn't? So the node always needs to iterate over all the files.
I mean, doesn't the satellite already know this? (which files the node has, their sizes, and when each was sent and accepted by the node) That way the payments are always accurate regardless of what the node's data says. All that's needed are audits, to make sure those files are still on the nodes, instead of constantly recomputing all of this from scratch on the I/O-limited node side.

Why then force the poor HDDs to iterate over all those files for days or a week, robbing the disk of its I/O? It just blocks the node from receiving new files.

I mean, why should such a cache exist on the node side at all, if the satellite already knows all of this?
Why force the node to collect all that data over and over again? Even if I run a full used-space filewalker, after a week the composition of files has changed (even drastically, with the new 30-day TTL pattern), so the node has to run another full filewalker to update the cache. Isn't that insane?

If you need the cache locally, a database of files, why not just download it from the satellite?
It would list all the files the node SHOULD have, the size of each file, and the timestamp from when the node accepted it. The node would then process that list, and IF it couldn't process some entry, say the file is missing, that would indicate the node lost the file, so you get a free audit at the same time! How about that?
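To make the idea concrete, here is a minimal sketch of what node-side processing of such a satellite-provided manifest could look like. This is not the actual storagenode code or satellite API; the manifest format (plain text lines of "&lt;pieceID&gt; &lt;expectedSize&gt;"), the file names, and the flat blobs directory are all assumptions for illustration only:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func main() {
	manifestPath := "manifest.txt" // hypothetical list downloaded from the satellite
	blobsDir := "storage/blobs"    // hypothetical local piece directory

	f, err := os.Open(manifestPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot open manifest:", err)
		os.Exit(1)
	}
	defer f.Close()

	var totalBytes int64
	var missing []string

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Each line: "<pieceID> <expectedSize>" (assumed format).
		fields := strings.Fields(scanner.Text())
		if len(fields) != 2 {
			continue // skip malformed lines
		}
		pieceID := fields[0]
		expectedSize, err := strconv.ParseInt(fields[1], 10, 64)
		if err != nil {
			continue
		}

		// One cheap stat per expected piece instead of walking the whole tree.
		info, statErr := os.Stat(filepath.Join(blobsDir, pieceID))
		if statErr != nil || info.Size() != expectedSize {
			// Missing or wrong size: report it back, which doubles as an audit result.
			missing = append(missing, pieceID)
			continue
		}
		totalBytes += info.Size()
	}

	fmt.Printf("verified %d bytes on disk, %d pieces missing\n", totalBytes, len(missing))
}
```

The point of the sketch is the shape of the work: one stat per piece the satellite expects, instead of days of walking the whole directory tree, and the "missing" list is exactly the free audit mentioned above.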

I'm starting to understand arrogantrabbit here.

Maybe not so far that the dashboard has to go, but some simplification is needed, maybe like this freshly posted idea:
