This storage node update is optional; storage nodes will keep functioning without it. For Satellites, the update is required.
Drop certificate table - The storage node keeps track of the uplink IDs it communicates with. The implementation has changed so that this table is no longer needed, and we expect a performance improvement on uploads and downloads. Dropping the table will take about an hour; during this time, your node will be unresponsive and the dashboard won't open.
Remove database locking - We’ve changed how database locking works. For write operations, the database still needs to be locked, but only for a short time. Read operations are now possible even while the database is locked.
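A minimal sketch of this locking pattern, using an illustrative readers-writer lock (the class and names here are hypothetical, not Storj's actual implementation): any number of reads may proceed concurrently, while a write takes brief exclusive access.

```python
import threading

class ReadWriteLock:
    """Allow many concurrent readers; a writer gets brief exclusive access."""

    def __init__(self):
        self._readers = 0
        self._lock = threading.Lock()        # guards the reader count
        self._write_lock = threading.Lock()  # held only for the short write

    def acquire_read(self):
        with self._lock:
            self._readers += 1
            if self._readers == 1:
                self._write_lock.acquire()   # first reader blocks writers

    def release_read(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._write_lock.release()   # last reader unblocks writers

    def acquire_write(self):
        self._write_lock.acquire()

    def release_write(self):
        self._write_lock.release()
```

With this scheme, read queries never wait on each other, and only the short write sections serialize access.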
Limit concurrent uploads - Slower nodes such as a Raspberry Pi 3 were having a hard time getting any data: they accepted too many concurrent uploads and couldn't finish them in time. This release adds a config option, storage2.max-concurrent-requests: 10. It lets slow nodes focus on a smaller number of uploads and finish them as fast as possible, while refusing the uploads they couldn't have processed anyway.
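The behavior behind that option can be sketched with a non-blocking semaphore: new uploads beyond the limit are refused outright rather than queued. The function names below are illustrative, not the storage node's actual API.

```python
import threading

MAX_CONCURRENT_REQUESTS = 10  # mirrors storage2.max-concurrent-requests

# Each in-flight upload holds one slot; when all slots are taken,
# further uploads are refused instead of being queued up.
_slots = threading.BoundedSemaphore(MAX_CONCURRENT_REQUESTS)

def try_accept_upload():
    """Return True if the upload may proceed, False if it must be refused."""
    return _slots.acquire(blocking=False)

def finish_upload():
    """Free the slot once an upload completes (or fails)."""
    _slots.release()
```

Refusing early means the uplink can immediately retry against a faster node instead of waiting on an upload that would have timed out.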
In memory used space/bandwidth tracking - The storage node needs to know how much bandwidth and disk space is free. Instead of querying the database every time, we calculate bandwidth and free disk space once at startup and then keep them in memory.
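The idea can be sketched as a small in-memory cache: the expensive scans run exactly once at startup, and every later upload or delete just adjusts the counters. Names and structure here are assumptions for illustration.

```python
class UsageCache:
    """Track used space and bandwidth in memory instead of querying the DB."""

    def __init__(self, scan_disk, scan_bandwidth):
        # The expensive full scans run exactly once, at startup.
        self.used_space = scan_disk()
        self.used_bandwidth = scan_bandwidth()

    def add_piece(self, size):
        # An accepted upload consumes disk space and bandwidth.
        self.used_space += size
        self.used_bandwidth += size

    def delete_piece(self, size):
        # A deleted piece frees disk space; bandwidth already used stays used.
        self.used_space -= size
```

Every admission decision then becomes a cheap in-memory comparison rather than a database round trip.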
Change voucher log message - A storage node can get a signed voucher from the Satellite only if it's already vetted and not disqualified. We corrected the log message to eliminate the previous confusing message, which led some SNOs to believe their node had been disqualified when it was really only in the vetting stage.
Repair checker use reliability cache - Instead of overwhelming the database with too many requests, the repair checker now caches storage node reputation. This speeds up the repair checker and reduces the performance impact on the database.
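A reliability cache of this kind can be sketched as a TTL-based lookup table in front of the database query; the class name, TTL value, and `fetch` callback below are assumptions, not the actual repair checker code.

```python
import time

class ReliabilityCache:
    """Cache reputation lookups so the repair checker doesn't hammer the DB."""

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch      # the expensive per-node database query
        self._ttl = ttl_seconds
        self._entries = {}       # node_id -> (value, expiry timestamp)

    def reliable(self, node_id):
        now = time.monotonic()
        hit = self._entries.get(node_id)
        if hit is not None and hit[1] > now:
            return hit[0]        # fresh cache entry: no DB round trip
        value = self._fetch(node_id)
        self._entries[node_id] = (value, now + self._ttl)
        return value
```

Since the repair checker evaluates the same nodes for many segments in one pass, even a short TTL collapses thousands of queries into one per node.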
Faster Uptime checks - The discovery service was pinging all storage nodes and then requesting additional data such as the wallet address. We removed the ping, because the data request on its own already tells us whether the node is online. This allows us to check the uptime of all storage nodes more frequently.
Fix repair trigger - The repair checker was too aggressive and added too many segments to the repair queue even if a storage node was offline for only a minute. The Reed-Solomon numbers allow us to be more tolerant, so we now trigger repair only if a node has been offline for more than one hour.