Changelog v1.6.4

For Storage Nodes

In-Memory Used Serials
The storage node now keeps the used-serials list in memory instead of writing it to a SQLite database. Serial numbers are used for double-spend protection. To minimize memory usage, we decreased the expiry time for uplinks from 24 hours to 1 hour. The uplink needs to start the transfer within 1 hour; otherwise, the storage node will reject the order. The transfer itself can still take longer than 1 hour, and orders also remain valid for 48 hours. After one hour, the storage node can delete the used serials without any consequences. By default, storage2.max-used-serials-size is 1 MB.

If the maximum size is reached, the node will start dropping random serial numbers. If you want to optimize your storage node, take a look at the monkit stats for delete_random_serial.

On a restart, the storage node drops all serial numbers. For a short time, a malicious uplink could submit an old order twice while the storage node gets paid only once. However, the uplink still needs 29 pieces to reconstruct the file, and the other storage nodes will keep rejecting the malicious uplink's requests.

500 MB Node Selection Threshold
With the previous release, we enabled the node selection cache on all satellites. The cache is refreshed every 3 minutes, so nodes can still be selected for uploads for up to 3 minutes after a node signals that it is nearly full. Nodes previously sent this signal when there was less than 100 MB of space left, but this was insufficient: some nodes filled up that 100 MB within the 3-minute delay. We raised the threshold to 500 MB to compensate.

Storage Node Payout History Fixed
We fixed the payout data on the satellite side. The storage node payment dashboard should show correct payout data from day 1 for all satellites.
There was also a problem with the download traffic accounting on Saltlake. The accounting was too slow compared to the number of orders the satellite had to process. At first, Saltlake fell behind with the accounting only for a few especially large storage nodes; for the majority of the smaller storage nodes, it was still able to process all orders. Over time, the number of affected storage nodes slowly increased. We fixed this in the first week of June and are currently moving the data into the May payout.

Storage Node Dashboard
We added total earnings to the storage node dashboard, along with a diagram that shows used space, free space, and garbage. We are aware that the values shown in the storage node CLI are inconsistent with the WebUI dashboard: the CLI dashboard shows piece size + metadata, while the WebUI dashboard shows piece size only. This will be fixed in the next release.
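The CLI/WebUI discrepancy boils down to two different sums over the same pieces. A minimal sketch, assuming a per-piece header holds the metadata (the type names and sizes are illustrative, not the actual storagenode code):

```go
package main

import "fmt"

// piece is an illustrative stand-in for a stored piece.
type piece struct {
	contentSize int64 // payload bytes: what the WebUI dashboard sums
	headerSize  int64 // piece metadata stored alongside: also counted by the CLI
}

// usedSpaceCLI counts piece size plus metadata, as the CLI dashboard does.
func usedSpaceCLI(pieces []piece) int64 {
	var total int64
	for _, p := range pieces {
		total += p.contentSize + p.headerSize
	}
	return total
}

// usedSpaceWebUI counts the piece payload only, as the WebUI dashboard does.
func usedSpaceWebUI(pieces []piece) int64 {
	var total int64
	for _, p := range pieces {
		total += p.contentSize
	}
	return total
}

func main() {
	pieces := []piece{
		{contentSize: 2048, headerSize: 512},
		{contentSize: 4096, headerSize: 512},
	}
	fmt.Println(usedSpaceCLI(pieces))   // 7168: payload + metadata
	fmt.Println(usedSpaceWebUI(pieces)) // 6144: payload only
}
```

The gap grows with the number of pieces, which is why the two dashboards drift apart on nodes holding many small pieces.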

Suspension Mode Audit Score
Last release we added unknownAlpha and unknownBeta values to the storage node dashboard API. Now unknownScore is available as well.

For Customers

Bucket Value Attribution
For open-source projects, we offer an open-source partner program. Open-source partners receive a special partner ID and a short name from us. The short name can be used for a feature we call bucket value attribution. More information on how to enable this feature can be found here:


Storj Team, I really like how the new version looks.
It shows me much more information.


The web dashboard is still wrong regarding past payouts

I think @stefanbenten still needs to update.

1 Like

didn’t littleskunk say there would be stuff moved from

most likely what you are seeing… because yeah the numbers look odd.

One of my nodes is still on 1.4.2… Maybe I should check the updater service…

Update: I restarted the update service on my node, and it still has not updated… Should I just wait? Is 1.5.2 still being released?

1.5.2 is out across all platforms

With the new update rolling out I’m guessing the cursor has been reset and your node will receive 1.6.3 whenever it’s its turn. There is no need to rush it, 1.4.2 still works fine.


one will get warning emails from Storj saying that using older node versions can hurt reputation though…

Yes, but the minimum version is currently 1.3.0, so it’ll work just fine and there won’t be a reputation impact.

1 Like

Yeah, I kinda think it’s FAKENEWS too… but it says so in the emails…
I really get to use so many new words under all this censorship xD

I love the new hollow pie chart for disk space; finally we can see total used space! Also, how about making the bandwidth chart show the previous month, making the days since the start of the month a different colour, and having it say “Traffic used this month”?
The disk space graph doesn’t switch with the selected satellite in the per-satellite view though, I just noticed.

Can we have a dashboard feature to capture failed audits or critical errors and show it on the dashboard?


Please use the ideas portal for that.


Does this really mitigate the DB locks?

Dunno what changed… but my node just suddenly stopped doing the DB locked thing, and I wasn’t able to get it back… kinda makes it seem like it might also be related to node age… but I dunno.

OK, my main node stopped having DB locks, especially on this one, but now under heavy traffic they’re back. Can’t wait for the Linux Docker image update to see the difference. But keeping the serials in memory as a buffer can only be better performance-wise.

I’ll need to check if it’s back… there was a long, long time with nearly no traffic, which kinda disrupted my testing… and I kinda forgot about it… also haven’t rebooted for a week, and it’s usually after reboots that it’s an issue for me… or during cleanup… I’ll have to inspect my logs more closely.

A post was split to a new topic: FATAL Unrecoverable error {“error”: “CreateFile

Is this update stuck somewhere?
My Docker nodes still haven’t gotten it :expressionless: even though it’s been more than a week since it was released.