Nothing left for rollout here.
I don’t see it on my nodes; at least the files have not been modified and the folders are too small.
My hesitance to add it to the config.yaml: previous versions will not recognize the variable and will not start. Different nodes are on different versions at any given time prior to updating to the actual implementation, ergo it’s just more efficient to add an environment variable.
The rationale for the prefix escapes me, but thanks for the correction & verification.
2 cents,
Julio
Will I need to do anything with my nodes? Or will this migration be done automatically at some point?
It will happen automatically. And we don’t have to do anything. And we probably won’t even notice.
We hope
Why are .migrate, .restore and .bloomfilter files stored under $logsPath$/meta, and not under $tablePath$/meta? Is this intended? Assuming I’m going to move the table files to fast storage, do I only change $tablePath$, and not $logsPath$, and is it therefore intended for these files to stay on bulk storage?
See storj/storagenode/peer.go at 52ae1a8e37ddd58e8f132ae6028600255fb16e15 · storj/storj · GitHub
Yes, change only the table path. The .migrate/.restore/.bloomfilter files are not important; they are not used frequently. The logs directories store the data (which can be huge).
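To make that split concrete, here is a minimal sketch (not the actual storagenode code; the example locations and exact subdirectory layout are assumptions) of which files follow which of the two paths from the question:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Assumed example locations; only the split between the two paths matters here.
	logsPath := "/mnt/bulk/hashstore" // log files plus the small marker files stay on bulk storage
	tablePath := "/mnt/ssd/hashstore" // only this path needs to move to fast storage

	// Small, rarely used marker files live next to the logs:
	for _, name := range []string{".migrate", ".restore", ".bloomfilter"} {
		fmt.Println(filepath.Join(logsPath, "meta", name))
	}

	// The hash table itself follows the table path:
	fmt.Println(filepath.Join(tablePath, "meta", "hashtbl"))
}
```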
Rollout is managed by the satellite (in the check-in response there are parameters to configure the hashstore). I prefer to start the migration after the full 1.136 rollout.
No, it’s not started by the satellite. But it can be started manually.
Well, that makes it clear.
Environment variables are global. So if we named it, let’s say, migration=true/false, it might get triggered by an environment variable you set for a totally different application. Adding the prefix makes sure there is only one application that is interested in this set of environment variables and that they are not used by other applications.
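For illustration, a tiny sketch of the difference (the STORJ_-prefixed variable name below is hypothetical, not a documented flag):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Risky: a generic name like MIGRATION could have been set by any other software on the host.
	fmt.Println("MIGRATION =", os.Getenv("MIGRATION"))

	// Safer: a prefixed name is effectively namespaced to one application, so only the
	// storagenode will ever read it. (The variable name here is hypothetical.)
	fmt.Println("STORJ_HASHSTORE_MIGRATION =", os.Getenv("STORJ_HASHSTORE_MIGRATION"))
}
```

If I remember correctly, storagenode config keys map to environment variables by adding the STORJ_ prefix and upper-casing the key (dots and dashes become underscores), which is exactly why the prefix keeps them out of other applications’ way.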
Thanks LittleSkunk for the additional clarification on the environment variable prefix. Awesome, I am assured now - good stuff.
5 cents,
Julio
I guess what @elek is trying to say is:
Yes, satellites have the capability and will be managing the migration process once it starts, but it has not commenced yet

Yes, satellites have the capability and will be managing the migration process once it starts, but it has not commenced yet
I think it’s more a problem with the logged message, which may be interpreted as the process already having started: all enqueued for migration; will sleep before next pooling
Just out of interest, does the migration run in a specific order?
So are the files in the blobs folder iterated through alphabetically and transferred to the hash store, or is it a random order?

So are the files in the blobs folder iterated through alphabetically
It seems the satellite picked for active migration is random (for me it tends to change when a node is restarted), but the piece folders are iterated in order, from a-z and then 2-7.
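For illustration, a small sketch of that ordering (assuming the usual two-character prefix folders; this only reproduces the order observed above, it is not the migrator’s actual code):

```go
package main

import "fmt"

func main() {
	// The piece folders appear to be visited letters-first, then the digits 2-7,
	// i.e. in the order of this base32-style alphabet rather than plain ASCII order.
	const alphabet = "abcdefghijklmnopqrstuvwxyz234567"

	// Print the first and last few two-character prefix folder names in that order.
	var names []string
	for _, a := range alphabet {
		for _, b := range alphabet {
			names = append(names, fmt.Sprintf("%c%c", a, b))
		}
	}
	fmt.Println(names[:3], "...", names[len(names)-3:]) // [aa ab ac] ... [75 76 77]
}
```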

I think it’s more a problem with the logged message, which may be interpreted as the process already having started:
all enqueued for migration; will sleep before next pooling
Ahhh… yes, that could be confusing if you are not familiar with how it looks when an active migration is running.
Sorry for the misunderstanding
I want to test restoration/recovery of the hashtbl. What’s the difference between this tool and the write-hashtbl command in the main Storj repo? I am also assuming that to restore we’d just drop hashtbl into the meta folder? I noticed sometimes the hashtbl filename will also have a long number attached to it, for example hashtbl-0000000000000003.
This should be linked:
Since the conversion to hashstore, success rates on repair uploads have plummeted on all hashstore nodes. Nodes still on piece store still have 97% repair upload. All other upload/download stats have remained consistent. Before Conversion: ========== REPAIR UPLOAD ====== Failed: 0 Fail Rate: 0.000% Canceled: 199 Cancel Rate: 2.604% Successful: 7443 Success Rate: 97.396% After Conversion ========== REPAIR UPLOAD ====== Fai…
I just saw this! Sorry, is there anything I must do as an operator, or can I just let the version upgrade take care of this?
Sit back and let it happen in due course.
That’s what I’m doing