Sorry, I have to disappoint you…
I have explored the logs of many of these nodes that claim to have been up for only a few hours. But for some unknown reason, if I look back at the time they claim to have been up for, there is invariably a version check at that moment. Sometimes the version check apparently can’t be fulfilled / the server isn’t reachable, and the uptime gets reset one way or another. I don’t even see a real restart of the storagenode.
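For anyone who wants to repeat this check, here is a minimal sketch of how I scan a captured log, assuming the node output has been redirected to a plain text file (e.g. `docker logs storagenode > node.log`); the file name, keywords and start-up marker are placeholders, not an official tool:

```python
# Minimal log scan: list version-check problems and apparent process starts,
# so you can see whether an uptime reset coincides with a failed check.
# Assumes logs were captured to a plain text file; file name, keywords and
# the start-up marker string are assumptions, adjust them to your setup.
from pathlib import Path

LOG_FILE = Path("node.log")  # placeholder, adjust to your setup

for line in LOG_FILE.read_text(errors="replace").splitlines():
    lowered = line.lower()
    if "version" in lowered and ("fail" in lowered or "error" in lowered):
        print("version check issue:", line)
    elif "configuration loaded" in lowered:  # assumed start-up marker
        print("possible (re)start:  ", line)
```

If a failed version check shows up right where the dashboard uptime restarts, with no start-up marker in between, that would match what I’m describing.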
Can you elaborate on this?
Ok got it. It will be a bit short notice. I expect just a few days between signing the contracts and the uploads hitting our nodes. So with the surge nodes we can buy time for the network to adapt to the new situation.
Yes. I am a node operator myself and fully agree on that. I have communicated that point to the stakeholders already. If we are lucky, they might make a commitment that takes the risk of no deal being signed off the table: if the deals get signed we get a higher payout, and if they don’t get signed we also get a higher payout. That would also allow us to start growing in advance. That commitment isn’t on me, so let’s wait and see what it will look like.
Could you please open a new thread for that and ask @elek how he did it? To my knowledge he managed to get a storage node’s file walker down to a few minutes using an inode cache. I don’t know the details, but he can share his knowledge.
I have this exact same problem on some nodes only:
My trash is empty, but the trash figure isn’t updating on the node, so at the moment it thinks I have 6.5 TB in trash. I’m monitoring the logs now, let’s see what it says. But nodes on the same storage don’t have this issue, and the filewalkers finish successfully.
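In case it helps anyone compare, this is roughly how I check what is actually sitting in the trash folder on disk versus what the dashboard claims; the path below assumes the default storage layout and needs to be adjusted to your node:

```python
# Rough comparison of actual on-disk trash size vs. what the dashboard claims.
# The path assumes the default layout (<storage dir>/trash); adjust as needed.
import os

TRASH_DIR = "/mnt/storj/storage/trash"  # placeholder path

total = 0
for root, _dirs, files in os.walk(TRASH_DIR):
    for name in files:
        try:
            total += os.path.getsize(os.path.join(root, name))
        except OSError:
            pass  # a file may vanish while we walk the tree

print(f"trash on disk: {total / 1e12:.2f} TB")  # compare with the dashboard figure
```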
Thank you. I have a few old 6TB drives that I’ve spun up and got ready to start new nodes when you pull the trigger. If they fill up I’ll buy some decent SAS ones for more permanent use.
Will there be any changes to vetting in order to speed up the process of bringing new nodes online? I think you alluded to that earlier on, but I’m not sure if that is actually “a thing”.
The used-space filewalker? Are you sure?
No you don’t, that’s a different problem. I mean you might have the same problem as well in addition to that, but the dashboard won’t show you if you do.
Yeah, I deleted all the databases now and am letting it run through from scratch; it should then show the correct info in the dashboard.
I assume you have considered having the surge nodes also only be chosen for downloads if there aren’t enough regular nodes? Wherever you pick to host the surge nodes, this would reduce your egress while maybe making SNOs a little bit happier (at least those who would not complain about their egress).
That’s just life…
It sounds like a lot of SNOs need to put their money where their mouth is. Lots of talk of “when I fill… I’ll expand”, so we’ll see if that’s true. Because that’s over 10% of online nodes: if they stop accepting ingress, it’s definitely significant.
Many of us are wary of investing in resources for test data…
So far my nodes haven’t even gotten back to the pre-deletion level. Storj doesn’t want my nodes now
Not full yet, so not expanding yet
As a reminder, if the uploaded test data has a 30-day TTL, it’s almost time for it to be deleted. We’ll see how that works out as well, and go from there.
We can also see it from the flip side: after a month of pushing everything to the limits (and sometimes beyond), 10% of the nodes became full.
Do you mean the Uptime on the dashboard? If it’s reset, then the node was probably restarted. Please check your logs (both for storagenode and storagenode-updater) to see why it restarted. The last message before a restart should have some explanation (unless you had a momentary power cut… but in that case `dmesg` or `journalctl` should show something).
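If the log is long, a quick sketch like this can pull out the lines just before each apparent restart; the file name and the start-up marker string are assumptions, adjust them to whatever your storagenode actually prints when it starts:

```python
# Print the last few log lines preceding each apparent restart, to help spot
# why the node restarted. File name and start-up marker are assumptions.
from collections import deque

LOG_FILE = "node.log"                   # placeholder
START_MARKER = "Configuration loaded"   # assumed start-up line
CONTEXT = 5                             # lines of context to show

window = deque(maxlen=CONTEXT)
with open(LOG_FILE, errors="replace") as fh:
    for line in fh:
        if START_MARKER in line:
            print("--- restart detected, preceding lines: ---")
            print("".join(window), end="")
        window.append(line)
```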
Referring back to this: I have now started seeing a large amount of the test data get garbage collected:
This might give insights into the issue you were diagnosing with the dashboard. To my untrained eye, it looks like the uploaded TTL data is not being correctly registered as “active” data, and is therefore being removed by GC before the TTL kicks in.
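A very rough way to sanity-check whether the node recorded any TTL for the uploaded pieces at all would be to peek into the piece-expiration database; the path, table and column names below are assumptions based on the default layout (not verified against the current code), and you should query a copy rather than the live file:

```python
# Rough check: does the node have TTL (expiration) records at all?
# Path, table and column names are assumptions; query a COPY of the DB,
# not the live file, while the node is running.
import sqlite3

DB_PATH = "/mnt/storj/storage/piece_expiration.db"  # placeholder path

con = sqlite3.connect(DB_PATH)
try:
    # Assumed schema: piece_expirations(piece_expiration, ...)
    count, earliest, latest = con.execute(
        "SELECT COUNT(*), MIN(piece_expiration), MAX(piece_expiration) "
        "FROM piece_expirations"
    ).fetchone()
    print("expiration records:", count, "earliest:", earliest, "latest:", latest)
finally:
    con.close()
```

If that table is basically empty while TTL data is still being uploaded, it would at least support the theory that the TTL isn’t being registered on the node side.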
One of my nodes just moved about half of its data to trash. It seems to be data from the SLC satellite.
The dashboard makes me think this data was never correctly registered by the satellite and will not be paid.
I will probably expand by adding 20-30 nodes in the near future. Nothing close to 1000 of course but I think other people will add capacity as well. It takes a little bit of time though.
Right now I’m just trimming away some free space on existing nodes.
Since a lot of data on my node has been deleted again, I don’t need to create a new node for now. I have plenty of space again.
Data deletion will start on Sunday
I still don’t understand why the dashboard cannot provide reliable data. As a customer it is good to have correct data. And we as node operators are, in my opinion, the same as paying customers.
I really don’t want to complain, but the stats from the satellite are wrong almost every month. The public status page is inconsistent, and so on. Why is that? And how do we know the satellite has the right data? And why is it sending wrong or no data to the storage nodes if it knows the real values?
I really like the project and will continue to support it, but I wasn’t able to find clarification on that. All I could find was “The satellite knows the right data and you will get paid the right amount”.