Announcement: major storage node release (potential config changes needed!)

@peem Yes. You need to wait for the startup piece scan to finish. It used to write an entry to the log when done, but that is broken for some nodes and doesn’t appear anymore.
The other way to check whether it has finished is to watch the HDD usage. While it’s at 100%, the piece scan is still running. When it drops to 20-30% and stays there for more than 15 minutes, the scan has finished. After the scan finishes, the database is updated with the correct values and the dashboard displays reality.
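If you prefer watching numbers instead of a graph, here is a minimal sketch (assuming Linux with the sysstat package installed; the 60-second interval is just my own choice):

```sh
# Print extended disk stats every 60 seconds; watch the %util column
# of the node's data drive. Near 100% the piece scan is still running;
# once it settles around 20-30% for a while, the scan is most likely done.
iostat -dx 60
```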

The warning was for anyone using inappropriate language in this thread.

We are all adults here.
It is reasonable to expect adults to use appropriate language in the Forum.

One would think it is obvious that cussing or crude references to human biology are not appropriate. If you did not know this, consider this your one-time tutorial.

6 Likes

As I mentioned: for running multiple docker nodes I would centralize the configuration in one shared env file and use docker compose to apply it to all storage nodes. That way it is one line to change and one docker compose command to restart all nodes (see the sketch below).
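Purely as an illustration of that layout (the paths, addresses, and ports below are my own placeholders, not an official template):

```yaml
# .env  (one shared file - change a value here once for all nodes)
#   WALLET=0x0000000000000000000000000000000000000000
#   EMAIL=you@example.com
#   STORAGE=8TB

# docker-compose.yml
services:
  storagenode1:
    image: storjlabs/storagenode:latest
    env_file: .env                           # shared settings live in one place
    environment:
      - ADDRESS=external.example.com:28967   # per-node overrides stay here
    volumes:
      - /mnt/disk1/identity:/app/identity
      - /mnt/disk1/storagenode:/app/config
    ports:
      - "28967:28967/tcp"
      - "28967:28967/udp"
    restart: unless-stopped

  storagenode2:
    image: storjlabs/storagenode:latest
    env_file: .env
    environment:
      - ADDRESS=external.example.com:28968
    volumes:
      - /mnt/disk2/identity:/app/identity
      - /mnt/disk2/storagenode:/app/config
    ports:
      - "28968:28967/tcp"
      - "28968:28967/udp"
    restart: unless-stopped

# One command to restart everything after a config change:
#   docker compose up -d --force-recreate
```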

No, the new default is fsync disabled. Yes, it could be less safe on unstable systems. If your system doesn’t randomly restart and you don’t have random power outages, you do not need to do anything.
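If someone does want the previous, safer behaviour back, it should be a single line in the node config. I’m quoting the option name from memory, so treat it as an assumption and verify it against `storagenode setup --help` or the comments in your own config.yaml:

```yaml
# config.yaml - re-enable fsync for piece writes
# (option name quoted from memory; please verify before relying on it)
filestore.force-sync-disabled: false
```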

1 Like

I have 2 nodes per machine, and each machine is in a different location, so the centralisation wouldn’t help much.

I don’t mind cursing and free speech as long as it’s not addressed at members of this forum directly. We are all grown men/women, we are not kids. But of course, within limits.

1 Like

Enough.

If you need to express yourself in these ways please do it elsewhere and return when you can comport yourself in a way commensurate with the Code of Conduct.

You have escalated now from Reminder status to Warning status.
The next status level is Forfeiture under the “flagrant” category.
The choice is yours.

3 Likes

But it’s not a question of what individuals mind.
It is a clear violation of the rules of participation.

We must maintain the standards for everyone equally.
I really hope we can all let this be the end of this issue.
It is a disrespectful waste of everyone’s time.

5 Likes


No, that is not possible.
The Forum has rules.
The rules apply to everyone equally.
The rules are clearly written:
Code of Conduct

By participating in the Forum, you implicitly agree to abide by the rules, the same as everyone else does.

You have wasted enough of everyone’s time on this issue.
You know how to act like an adult and I hope you choose to do so here forward.

You have received your last and final warning.
I accept your apology and am glad it will not happen again.

4 Likes

Is the current position saved when running the filewalker (storage2.piece-scan-on-startup: TRUE), or will scanning start from 0 after the node is restarted?

The feature still seems to be under review:

https://review.dev.storj.io/c/storj/storj/+/12806

1 Like

This is bad. I’m afraid to even think how long the filewalker process can take for 70 million files. Surely something will happen, the node will restart, and everything will start from 0 again.
In this topic they write that, in order not to show terabytes of garbage in the dashboard, you need to run the filewalker to update the node databases. But I can’t imagine how this can be done on nodes with 70 million files without a function to save the scan progress.

2 Likes

Correct.
I have turned filewalker off on several nodes because of that.

I don’t know what is holding up finishing the feature and deploying it. I have asked for this to get done several times:

2 Likes

It shows pending code review and :point_down:

3 Likes

Yes, but no additional information since then. So I don’t know what’s going on with this behind the scenes.

3 Likes

There are differences in Ingress between 2 nodes running in different locations depending on the version. I think this is because fsync is disabled.


There’s no extra ingress yet for 1.104 to take advantage of, is there? SLC hasn’t been pushing data since the start of the month… so this must be normal customer traffic… which could coincidentally be closer to your 1.104 node?

I’m patiently waiting for the torrent of ones and zeros to start falling from the heavens! :cloud_with_rain:

I thought it was because of fsync :slight_smile:
I guess there is another reason.