Hashstore rollout commencing!

Nothing left for rollout here. :wink:

I don’t see it on my nodes; at least, the files have not been modified and the folders are too small.

My hesitance to add it to the config.yaml: previous versions will not recognize the variable and will not start. Different nodes run different versions at any given time prior to updating to the actual implementation, ergo it’s just more efficient to add an environment variable.
The rationale for the prefix escapes me, but thanks for the correction & verification.

2 cents,
Julio

Will I need to do anything with my nodes? - Or will this migration be done automatically at some point?

It will happen automatically. And we don’t have to do anything. And we probably won’t even notice.

We hope :face_savoring_food:

5 Likes

Why are .migrate, .restore and .bloomfilter files stored under $logsPath$/meta, and not under $tablePath$/meta? Is this intended? Assuming I’m going to move the table files to fast storage, do I only change $tablePath$ and not $logsPath$, meaning these files are intended to stay on bulk storage?

See storj/storagenode/peer.go at 52ae1a8e37ddd58e8f132ae6028600255fb16e15 · storj/storj · GitHub

1 Like

Yes, change only the table path. The .migrate/.restore/.bloomfilter files are not important; they are not used frequently. The logs directories store the data (which can be huge).
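To make the split concrete, here is a minimal sketch (my own illustration, not the actual storagenode code; the mount points are made up for the example): the metadata files discussed above stay under the logs path on bulk storage, while only the hash tables move with the table path to fast storage.

```go
// Minimal sketch, not the actual storagenode code: it only illustrates the
// directory split discussed above. The example mount points are assumptions.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	logsPath := "/mnt/bulk/hashstore" // bulk storage: log files plus .migrate/.restore/.bloomfilter
	tablePath := "/mnt/ssd/hashstore" // fast storage: the hashtbl files

	fmt.Println(filepath.Join(logsPath, "meta"))  // /mnt/bulk/hashstore/meta
	fmt.Println(filepath.Join(tablePath, "meta")) // /mnt/ssd/hashstore/meta
}
```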

2 Likes

Rollout is managed by the satellite (in the check-in response, there are parameters to configure the hashstore). I prefer to start the migration after the full 1.136 rollout.

No, it’s not started by the satellite. But it can be started manually.

3 Likes

Well, that makes it clear.

Environment variables are global. So if we named it, let’s say, migration=true/false, it might get triggered by an environment variable you set for a totally different application. Adding the prefix makes sure there is only one application that is interested in this set of environment variables and that they are not used by other applications.
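A quick illustrative sketch of the idea (the variable names here are invented for the example; they are not the actual storagenode settings): an application that only reads namespaced variables cannot be switched on by accident by some other program’s generic MIGRATION variable.

```go
// Illustrative only: the environment variable names are made up for this example.
package main

import (
	"fmt"
	"os"
)

func main() {
	// A generic name like MIGRATION could have been set by any other
	// application on the host, so acting on it would be risky:
	_, ambiguous := os.LookupEnv("MIGRATION")

	// A prefixed name is effectively namespaced to this one application:
	prefixed, ok := os.LookupEnv("STORJ_EXAMPLE_MIGRATION")

	fmt.Println("generic variable present:", ambiguous)
	if ok {
		fmt.Println("prefixed variable:", prefixed)
	}
}
```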

2 Likes

Thanks, LittleSkunk, for the additional clarification, not unlike the environment variable :wink: Awesome, I am reassured now - good stuff.

5 cents,
Julio

I guess what @elek is trying to say is:

Yes, satellites have the capability and will be managing the migration process once it starts, but it has not commenced yet :wink:

I think it’s more a problem with the logged message, which may be interpreted as the process already having started: all enqueued for migration; will sleep before next pooling

Just out of interest, does the migration run in a specific order?
So are the files in the blobs folder iterated through alphabetically and transferred to the hash store, or is it a random order?

It seems the satellite picked for active migration is chosen at random (for me it tends to change when the node is restarted), but the piece folders are iterated in order, from a-z and then 2-7.

2 Likes

Ahhh… yes, that could be confusing if you are not familiar with how it looks when an active migration is running.

Sorry for the misunderstanding :slight_smile:

2 Likes

I want to test restoration/recovery of the hashtbl. What’s the difference between this tool and the write-hashtbl command in the main Storj repo?

I am also assuming that to restore we’d just drop the hashtbl into the meta folder? I noticed the hashtbl filename will sometimes also have a long number attached to it, for example hashtbl-0000000000000003.

This should be linked:

I just saw this! Sorry. Is there anything I must do as an operator, or can I just let the version upgrade take care of this?

Sit back and let it happen in due course.

That’s what I’m doing :slight_smile:

3 Likes