For Storage Nodes
Upload Canceled in Logfile
A few releases ago, we changed the uplink behavior at the end of an upload. Instead of spending time sending a final success message to the storage node, the uplink now simply closes the connection. On the storage node side, that created a lot of false upload canceled log messages. We fixed the storage node behavior. The storage node now keeps track of whether it has submitted the signed piece hash back to the uplink at the end of the upload. The uplink needs to submit all signed piece hashes to the satellite, and the satellite will reject any invalid or missing signed piece hash. This makes the signed piece hash a reliable checkpoint for distinguishing a successful upload from a canceled one.
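The fixed logging decision can be sketched roughly like this. This is an illustrative sketch, not Storj's actual code; the type and field names are our own assumptions:

```go
package main

import "fmt"

// uploadStatus is a hypothetical sketch of the fixed storage node logic:
// a closed connection only counts as a canceled upload if the storage
// node had not yet sent the signed piece hash back to the uplink.
type uploadStatus struct {
	sentSignedPieceHash bool
}

// Outcome returns the log label to use when the connection closes.
func (u uploadStatus) Outcome() string {
	if u.sentSignedPieceHash {
		// The uplink already holds the signed piece hash it must
		// submit to the satellite, so the upload succeeded.
		return "upload success"
	}
	return "upload canceled"
}

func main() {
	fmt.Println(uploadStatus{sentSignedPieceHash: true}.Outcome())
	fmt.Println(uploadStatus{sentSignedPieceHash: false}.Outcome())
}
```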
Used and Free Space on Dashboard
The CLI and web UI dashboards now show the same values for used space and free space.
Free Space Fix
Storage nodes execute the used space calculation only once, at startup. From then on, the storage node keeps and updates the used and free space values in memory. The storage node is supposed to notify the satellite when its free space drops below 500MB, but the in-memory value can get outdated. For that reason, we added a second free space check a few releases ago. This second check runs on every upload, but it didn't notify the satellite. With this release, we are combining both free space checks to make sure the satellite stops selecting full storage nodes.
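The combined check boils down to one threshold decision. The function name below is our own; this is only a sketch of the rule described above, under the assumption that the 500MB threshold uses decimal megabytes:

```go
package main

import "fmt"

// notifyThreshold is the 500MB free space limit from the release notes.
const notifyThreshold = 500 * 1000 * 1000

// shouldNotifySatellite sketches the combined check: the per-upload free
// space check now also notifies the satellite when the node is nearly
// full, instead of only the startup check doing so.
func shouldNotifySatellite(freeSpaceBytes int64) bool {
	return freeSpaceBytes < notifyThreshold
}

func main() {
	fmt.Println(shouldNotifySatellite(2 * 1000 * 1000 * 1000)) // plenty of space: false
	fmt.Println(shouldNotifySatellite(100 * 1000 * 1000))      // nearly full: true
}
```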
TBm on Payment Dashboard
The storage node payment dashboard now shows used space in TBm (terabyte-months) instead of TBh (terabyte-hours). That should be easier to read and understand.
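The relationship between the two units is a simple division, assuming a 720-hour (30-day) month. The dashboard change is display-only; this sketch just illustrates the conversion:

```go
package main

import "fmt"

// tbhToTBm converts terabyte-hours to terabyte-months, assuming a
// 720-hour (30-day) month.
func tbhToTBm(tbh float64) float64 {
	return tbh / 720
}

func main() {
	// Storing 1 TB for a full 30-day month is 720 TBh, i.e. 1 TBm.
	fmt.Println(tbhToTBm(720)) // 1
}
```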
Held Amount History
We are adding a held amount history to the storage node payment dashboard. You will find it at the bottom of the page.
Suspension Score
Finally, the suspension score is visible on the storage node dashboard. It is calculated the same way as the audit score: a score close to 100% is good. You will get suspended if your suspension score drops below 60% and disqualified if your audit score drops below 60%. Successful audits increase both scores. Most audit failures decrease the suspension score, except for missing or corrupted pieces; these two failures impact the audit score directly instead. A storage node can recover from suspension mode, but disqualification is permanent.
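The thresholds above can be summarized in a small decision function. This is our own sketch of the rules as described, not Storj's implementation:

```go
package main

import "fmt"

// nodeStatus sketches the thresholds from the release notes: an audit
// score below 60% disqualifies a node permanently, a suspension score
// below 60% suspends it (recoverable), and anything else is healthy.
func nodeStatus(auditScore, suspensionScore float64) string {
	switch {
	case auditScore < 0.60:
		return "disqualified" // permanent
	case suspensionScore < 0.60:
		return "suspended" // recoverable
	default:
		return "ok"
	}
}

func main() {
	fmt.Println(nodeStatus(0.99, 0.99)) // ok
	fmt.Println(nodeStatus(0.95, 0.55)) // suspended
	fmt.Println(nodeStatus(0.50, 0.95)) // disqualified
}
```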
Graceful Exit Initiation
The storage node graceful exit command now asks each selected satellite whether graceful exit is possible. If the storage node is not old enough on one satellite, the command errors out without starting graceful exit on any of the selected satellites. The storage node operator can then choose to stay, or to run graceful exit again without the satellite that wouldn't allow it.
Graceful Exit Cleanup
The repair job uploads a few bonus pieces to compensate for expected upload errors (disk full, storage node overloaded, etc.). If the repair job is lucky, it might end up storing more than 80 pieces. Graceful exit notices that and simply skips these pieces. At the end of a successful graceful exit, the storage node gets paid without having to transfer them. There are a few other edge cases like this: if the storage node never received an order to transfer a piece, it would leave that piece on disk even after a successful graceful exit. Now a final cleanup removes all remaining pieces at the end. If graceful exit fails, the cleanup is not triggered, because we want to be able to investigate why graceful exit failed and maybe even restart the process.
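The skip decision for bonus pieces can be sketched as follows. The function name and parameters are our own assumptions; this only illustrates the rule that pieces beyond the required 80 need not be transferred:

```go
package main

import "fmt"

// shouldTransfer is a rough sketch of the bonus piece edge case: when a
// segment already stores more pieces than required (80 in the notes
// above), graceful exit can skip transferring this node's extra piece.
func shouldTransfer(piecesStored, requiredPieces int) bool {
	return piecesStored <= requiredPieces
}

func main() {
	fmt.Println(shouldTransfer(82, 80)) // bonus pieces present: skip (false)
	fmt.Println(shouldTransfer(60, 80)) // transfer needed (true)
}
```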
Storage Node Update Notification
We fixed and enabled the storage node update notification. This time, the notification should not get triggered as soon as we start the rollout process. Instead, it gets triggered 4 days after the rollout finishes; these 4 days include the docker rollout.
Dashboard Loading Screen
The storage node dashboard has to query a lot of data, which might take a few seconds. To avoid possible confusion about empty values, we are adding a loading screen.
For Customers
Revoke Access Key
If one of your shared access keys gets compromised, you can revoke it with the uplink revoke command. This will revoke the access key and any sub access key that was derived from it.
Bucket Limit
We introduced a limit on the number of buckets in a project. It is currently set to 100 buckets per project. Please contact support if you need a higher limit.