Changelog v1.4.2

For Storage Nodes

Payment Dashboard Improvements
The payment dashboard should now display the correct payout for previous months, including surge pricing. Please note that the historical data for anything before March is currently wrong on the satellite side. We are working on fixing it.

Windows Updater
A few users reported issues with the Windows updater. This version should fix them. Please contact us if you notice any problems with the updater on the following release (not the current release!).

Disqualification after Suspension
We have added a feature flag for the final disqualification after suspension mode. Once we enable this flag (it is currently disabled on all satellites), any storage node that stays in suspension mode for more than 7 days will get disqualified. Please contact us if you haven't been able to get out of suspension mode. We are aware that some nodes are getting suspended because of DB locking errors, but hopefully not for 7 days straight. We are thinking about enabling disqualification with the next release.
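The rule above can be sketched in a few lines. This is an illustration only (names and dates are hypothetical; the real logic lives in the satellite's Go code):

```python
from datetime import datetime, timedelta

SUSPENSION_GRACE = timedelta(days=7)  # grace period from the changelog

def should_disqualify(suspended_since, now, feature_flag_enabled):
    """Disqualify only if the feature flag is on and the node has been
    in suspension mode for more than 7 days."""
    if not feature_flag_enabled or suspended_since is None:
        return False
    return now - suspended_since > SUSPENSION_GRACE

# The flag is currently disabled on all satellites, so nothing happens:
now = datetime(2020, 5, 10)
print(should_disqualify(datetime(2020, 5, 1), now, False))  # False
print(should_disqualify(datetime(2020, 5, 1), now, True))   # True (9 days)
print(should_disqualify(datetime(2020, 5, 5), now, True))   # False (5 days)
```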

Delete Queue
To speed up file deletion for the customer, the satellite sends delete messages to the storage nodes as quickly as possible. The storage node acknowledges each delete message immediately without executing it; the messages are stored in a queue and processed afterward. The queue has a size limit. If you are concerned that your storage node might be dropping delete messages, take a look at the monkit stat piecedeleter-queue-full. Here is how to get the monkit stats: Guide to debug my storage node
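The acknowledge-then-queue behavior described above looks roughly like this (a Python sketch with hypothetical names; the real queue is in the storagenode's Go code, and piecedeleter-queue-full counts the drops):

```python
from collections import deque

class DeleteQueue:
    """Bounded queue: delete messages are acknowledged immediately and
    executed later; when the queue is full, the message is dropped and
    a counter (comparable to piecedeleter-queue-full) is incremented."""
    def __init__(self, limit):
        self.limit = limit
        self.queue = deque()
        self.dropped = 0  # what piecedeleter-queue-full would report

    def enqueue(self, piece_id):
        if len(self.queue) >= self.limit:
            self.dropped += 1
            return False  # acknowledged to the satellite, but dropped
        self.queue.append(piece_id)
        return True

q = DeleteQueue(limit=2)
for piece in ["a", "b", "c"]:
    q.enqueue(piece)
print(q.dropped)  # 1
```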

Node Selection Cache
To speed up uploads for the customer, the satellite will keep all storage nodes in a cache for 3 minutes. This means that if you update your IP, port, or allocated space, it can take up to 3 minutes before the satellite starts or stops sending uploads to your storage node.
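The staleness window this creates can be sketched with a simple TTL cache (illustrative Python with hypothetical names; the satellite's actual cache is implemented in Go):

```python
import time

NODE_CACHE_TTL = 180  # 3 minutes, per the changelog

class NodeSelectionCache:
    """Satellite-side sketch: the node list is refreshed at most every
    `ttl` seconds, so node config changes can lag by up to that long."""
    def __init__(self, fetch_nodes, ttl=NODE_CACHE_TTL, clock=time.monotonic):
        self.fetch_nodes = fetch_nodes
        self.ttl = ttl
        self.clock = clock
        self.nodes = None
        self.fetched_at = None

    def get(self):
        now = self.clock()
        if self.nodes is None or now - self.fetched_at >= self.ttl:
            self.nodes = self.fetch_nodes()
            self.fetched_at = now
        return self.nodes

# With a fake clock we can see the staleness window:
t = [0.0]
live_nodes = ["node-1"]
cache = NodeSelectionCache(lambda: list(live_nodes), clock=lambda: t[0])
print(cache.get())        # ['node-1']
live_nodes.append("node-2")
t[0] = 60                 # 1 minute later: still the cached list
print(cache.get())        # ['node-1']
t[0] = 200                # past the TTL: refreshed
print(cache.get())        # ['node-1', 'node-2']
```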

For Customers

New Signup Workflow
We have reworked the signup workflow. After signup, a wizard will guide the customer through the first steps. We are working on increasing the default project limit with the next release.


@littleskunk Can I ask how we can configure this path with Docker?

storagenode: allow configuring database path independently


29 posts were split to a new topic: Database is locked. What the reason?

It should be Storage2.Database-Dir but please be careful with that.
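Assuming the option maps to a CLI flag the usual way (lowercased, dot-separated), a Docker setup might look like the sketch below. The host paths, mount points, and image tag are examples, not defaults; only the option name Storage2.Database-Dir comes from the post above:

```shell
# Mount a separate location for the databases (example paths), then point
# the node at it; the flag name is derived from Storage2.Database-Dir:
docker run -d --name storagenode \
  --mount type=bind,source=/mnt/storj/storage,destination=/app/config \
  --mount type=bind,source=/mnt/ssd/storj-db,destination=/app/dbs \
  storjlabs/storagenode:latest \
  --storage2.database-dir=/app/dbs
```

As the reply above warns, be careful: moving the databases to a path the container can't see (or forgetting the mount) will break the node.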


Do we need to set up the piece delete queue size or the number of delete workers? It seems the default is 0.
Thanks in advance.

DeleteQueueSize "size of the piece delete queue" default:"0"

Hi, was the docker image updated so we can pull it?

No, it probably won’t be for a few days at least, usually they roll out Windows in stages first.

Yeah, I noticed that. For the moment I would say please don't. I will fix the default values. It is annoying that later in the code the default of 0 gets replaced with a different value that we should have used as the default in the first place.

Hopefully the default is fine. Let's just keep an eye on our storage nodes, and if the default is too low, we should increase the default instead of overriding it.
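The annoyance described above is the common "zero means use the internal default" pattern, roughly like this (the actual internal value is hypothetical here):

```python
INTERNAL_DEFAULT_QUEUE_SIZE = 10000  # hypothetical; not the real value

def effective_queue_size(configured: int) -> int:
    """A configured value of 0 is replaced later in the code by an
    internal default, which is why a flag default of 0 is misleading."""
    return configured if configured > 0 else INTERNAL_DEFAULT_QUEUE_SIZE

print(effective_queue_size(0))     # 10000 (the internal default kicks in)
print(effective_queue_size(5000))  # 5000
```

This is why overriding the flag with a literal 0 from the config would not give you what the default appears to promise.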

Thanks for the update @littleskunk,
I’ll wait for Watchtower to propagate the update to my node.

It will be interesting to monitor how performance improves with the 3-minute cache on the satellite end :+1:

That is also a feature flag and currently disabled on all satellites. We are going to activate it on SLC today or tomorrow. I am concerned that customers might see a higher error rate or slower uploads. I would like to verify that before we enable it on all satellites with the next release.

If any storage node is getting out-of-space errors in its logs, that would be a good sign that we need to change a few numbers in the code.

Does that mean the dashboard should now show the right data for April? Because it doesn’t show me the right data.

There is some slack of 100 MB built in to prevent out-of-space issues, but you might want to increase that a little now that the node selection cache is in place.
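The role of that slack margin can be sketched like this (an illustration only; function and variable names are hypothetical, and only the ~100 MB figure comes from the post above):

```python
SLACK_BYTES = 100 * 1024 * 1024  # the ~100 MB margin mentioned above

def accepts_upload(allocated, used, piece_size, slack=SLACK_BYTES):
    """Stop accepting pieces once usage comes within `slack` of the
    allocation, so in-flight uploads can't push the node out of space."""
    return used + piece_size + slack <= allocated

allocated = 1_000_000_000  # 1 GB allocation (example)
print(accepts_upload(allocated, 890_000_000, 1_000_000))  # True
print(accepts_upload(allocated, 900_000_000, 1_000_000))  # False
```

With the 3-minute cache, the satellite can keep sending uploads for up to 3 minutes after a node fills up, which is why a larger margin may now be needed.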

True, but the tradeoff would be that you need to run a database server alongside the node and I don’t think that complexity is warranted. There are other ways to solve this issue.


Somehow the mood in the comments is not the best.

I have to say that I am very happy about this update and I would like to thank especially the developers for their efforts.

I’m very happy to finally change the location of the database files. Is there already a documentation about this in combination with docker?

I’m also looking forward to any optimization that reduces the number of IOPS needed. I am curious how the memory write buffer will affect this positively.

If disqualification for downtime is activated, I would prefer that it be suspended for a certain period whenever an update is released. If problems occur after an update, it would be a pity if nodes were disqualified just because they received the update first.


Well, we are all sort of in the same boat here… without the Storj developers, I wouldn’t be able to provide the hardware for the backbone of a hyperscale global cloud service, and I really like the convenience of it. I doubt I could stitch that together myself, so for that I’m very thankful for them working to maybe give me a part-time online job, if this thing goes grand.

So a big thank you to the developers, and sorry for all the horrible things we may inflict upon you… lol

In regard to the update to 1.4.2: at least the last few times while I’ve been around, it was Windows first, then after a week or close to two weeks it comes to Linux/Docker and so on…

You can check the Docker image here if you want to do so manually…
then you can see when it’s out and ready.

A post was merged into an existing topic: Database is locked. What the reason?

Since we get paid monthly for the data served, downtime/uptime and audits should also reset monthly.

I have noticed that when satellites were down, SNOs were affected with downtime, which isn’t accurate due to that bug, but I can still see 98% on one of the satellites and this number hasn’t changed in the months since the bug occurred.

FYI @littleskunk

That, of course, would mean double the allowed downtime/audit failures at the end of the month.

The dashboard shows the number incorrectly.

A post was split to a new topic: HDD slowing down for some reason

This is impossible. A satellite that is down can’t perform uptime checks, and even if it somehow did, your node would still respond to them. So your lower stats are caused by something else. Either way, it doesn’t matter; a new uptime check system will be implemented anyway, with new scores that will be the only ones that matter.

It’s not impossible; it was proven last year when one of the satellites went down and SNO uptime also decreased.