Changelog v1.11.1

For Storagenodes

Estimated Payout
The storage node payout dashboard shows an estimated payout for the current month. It now subtracts the held amount.
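The estimate can be sketched using the published held-amount schedule (75% withheld in months 1–3, 50% in months 4–6, 25% in months 7–9, 0% afterwards). The Go below is an illustration of the arithmetic, not the dashboard's actual code, and the function names are made up:

```go
package main

import "fmt"

// heldRate returns the fraction of earnings withheld, based on the
// node's age in months. The 75%/50%/25%/0% schedule is Storj's
// published held-amount schedule; the function name is illustrative.
func heldRate(ageMonths int) float64 {
	switch {
	case ageMonths <= 3:
		return 0.75
	case ageMonths <= 6:
		return 0.50
	case ageMonths <= 9:
		return 0.25
	default:
		return 0.0
	}
}

// estimatedPayout subtracts the held amount from gross earnings,
// which is the change this release makes to the dashboard estimate.
func estimatedPayout(gross float64, ageMonths int) (payout, held float64) {
	held = gross * heldRate(ageMonths)
	return gross - held, held
}

func main() {
	// A 2-month-old node that earned $10 gross this month.
	payout, held := estimatedPayout(10.0, 2)
	fmt.Printf("payout: $%.2f, held: $%.2f\n", payout, held) // payout: $2.50, held: $7.50
}
```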

Payout History
We added a new payout history table. It shows a payout overview for a selected month. This overview contains information about held amount returned after GE, held amount returned in month 16, and the payout transaction ID for each satellite. The transaction ID is available starting with the 2020-02 payout; we don’t have the transaction IDs for older payouts.
Note: By default, the payout history table selects the earliest instead of the latest month. We are going to fix that in a future release.

Held Amount History
The held amount history was changed as well. It now shows the total held amount and the total held amount returned per satellite.

Periodic Filesystem Check
On startup, the storage node writes a file, and once per minute it tries to read this file back. If the hard drive gets disconnected from the storage node, this check fails and the storage node errors out instead of getting disqualified for failing audits.

Order DB
Some storage nodes had problems with a locked order DB. To avoid this problem we replaced the order DB with flat files. The storage node creates one file per hour per satellite and appends all incoming orders to it. One hour later the storage node submits these orders to the satellite and moves the file into an archive folder. By creating a new file every hour we avoid fragmentation, and with that also DB locks.
Note: The storage node checks the unsent orders every 5 minutes. For 55 minutes of each hour it will notice that there is nothing to submit. That is intentional and not a bug.

Auth Token Service
We are replacing the old auth token service with a new one. The new service should work in all browsers, even with an adblocker enabled. There is a captcha on the new page; if you don’t see it, you might have to enable the corresponding JavaScript.
The auth token is displayed directly on the page. No waiting for an email that might or might not arrive. You can go ahead and sign your identity immediately.


We also made some progress on graceful exit. I have posted a list of graceful exit bugs here: Known issues we are working on

Only one bug from that list is fixed in this version. I don’t think it will change much, and for that reason I haven’t mentioned it. I don’t want to open the floor to false expectations.


When will you start to roll out version 1.11.1? It is on GitHub, but nothing has changed yet.

Graceful release…?

That. Is. Awesome.

Every minute is quite frequent though, especially if the disk is struggling. Hopefully the system is not going to try to check the file again if the previous check has not completed yet?
But I guess you’d argue that if my disk takes more than 1 minute to check whether a file exists or not, I’ve got serious issues with my disk and you would be right :laughing:

Serious issues, like an SMR drive for instance :wink:

When you say “the storage node will error out”, what does it mean exactly? Is the node software going to exit, or just switch to an error state visible on the dashboard?


SMR can read data just fine; it has problems with random writes, not reads.


In general yes, you’re right @Pentium100, as long as the disk is not stalling.
When it does though (because of writes indeed), even reading from it becomes really difficult. At least that’s the case with the little 2.5" consumer-grade HDD I have, which is absolutely not designed to run 24/7 with this amount of data read/written to it.

I’ve seen this SMR drive take several tens of seconds for a simple ls command to complete on the root of the Storj folder where there isn’t much to list, and stopping a node running on this very SMR disk (when completely overwhelmed by writes and reads) could take 5, 10, 15 minutes…

It will exit and refuse to start until the operator fixes the issue.


Okay thanks @littleskunk, so a service like UptimeRobot will detect the node is down. Cool :slight_smile:

This node software really is getting better and better, all the developers should get congratulated.

So this means we won’t have to run cronjobs anymore to detect whether our disks are online or not! :partying_face:


If you have a disk that behaves that badly, this new file written every minute is not a game changer. Plus, the new orders DB mechanism will offload the disks. Very nice changelog, good job :slight_smile:

There can be different problems, like a USB-connected disk losing power, or a bad SATA cable…
So it’s better if the node goes offline than gets DQed.

The check interval appears to be configurable with “VerifyDirInterval”.
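If that option is exposed through the node's config file, a fragment might look like the sketch below. The exact YAML key is an assumption derived from the Go option name `VerifyDirInterval` mentioned above; check the comments in your `config.yaml` or the `storagenode setup --help` output for the authoritative name before relying on it:

```yaml
# config.yaml — key name assumed from the Go option VerifyDirInterval
storage2.monitor.verify-dir-interval: 1m0s
```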

Happy to see this change. I can stop relying as much on my own cronjob. I do a few other things with it, like monitoring AC power status on the UPS, and it will shut down the node gracefully at certain limits too. I need to publish that.

@littleskunk it is not rolled out? It still shows 1.10.1, which means the rollout has not even started.

That part is not under my control. I don’t know the answer.

Will these archive files be cleaned up like the current Order DB is?

After 7 days (unless you change the config value for that) it will get deleted.


Not with small files. Robocopying my node from an SMR drive took nearly 2 days for each TB.

Wow, I discovered that in 1.11.1 the last month’s earnings are working, hurray!


@Vadim has your GUI node finished updating yet?

Yes, everything is updated, but I use my own updater.