Hashstore rollout commencing!

It’s normal; you’ll see them at the end of each satellite’s migration.


Did you enable it as described there?

Because this one looks totally incorrect.

It should come after the image name, since it’s a node argument.

You may also place it before the image name, but then as an environment variable, e.g. with a -e option prefix.
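
For illustration, a minimal sketch of both placements (--some.option and STORJ_SOME_OPTION are placeholders, not real flags):

Argument form, after the image name:

docker run -d \
  ... \
  storjlabs/storagenode:latest \
  --some.option=value

Environment-variable form, before the image name (dots and dashes become underscores, with a STORJ_ prefix):

docker run -d \
  -e STORJ_SOME_OPTION=value \
  ... \
  storjlabs/storagenode:latest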


So this is where I can specify a completely different drive/folder path to migrate the main data to?

./storagenode setup --help | sls hash

      --hashstore.logs-path string      path to store log files in (by default, it's relative to the storage directory) (default "hashstore")
      --hashstore.table-path string     path to store tables in. Can be same as LogsPath, as subdirectories are used (by default, it's relative to the storage directory) (default "hashstore")

So, yes.
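
For example, a sketch only (the /mnt/otherdrive source and the /app/hashstore container path are made-up examples; per the help output above, logs and tables may share the same path):

docker run -d \
  --mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
  --mount type=bind,source="/volume1/Storj/",destination=/app/config \
  --mount type=bind,source="/mnt/otherdrive/hashstore/",destination=/app/hashstore \
  --name storagenode storjlabs/storagenode:latest \
  --hashstore.logs-path=/app/hashstore \
  --hashstore.table-path=/app/hashstore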

Thanks A.I.exey, you are a wealth of knowledge, as always!

5 cents,

Julio


I’m a human, not A.I. So, N.I.


So to opt out of the migration to hashstore, you should use one of these two variants in the docker run command:

Var.1:

--mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
--mount type=bind,source="/volume1/Storj/",destination=/app/config \
--name storagenode storjlabs/storagenode:latest \
--storage2migration.suppress-central-migration=true \

Var.2:

-e STORJ_STORAGE2MIGRATION_SUPPRESS_CENTRAL_MIGRATION=true \
--mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
--mount type=bind,source="/volume1/Storj/",destination=/app/config \
--name storagenode storjlabs/storagenode:latest \

It works from v1.135.5.
P.S. I corrected the flags. Now it works.
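
To double-check which form actually reached the container, a quick sketch with docker inspect (the node argument shows up under Args for Var.1, the environment variable under Config.Env for Var.2):

docker inspect storagenode --format '{{json .Args}}'
docker inspect storagenode --format '{{json .Config.Env}}'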

Question on hashstore reads and data location. As I understand it, with hashstore, pieces are no longer individual files but data within “log” files?

I read that this is supposed to help writes.

But what about reads? As the pieces are no longer file system objects, do frequently accessed pieces get cached? Because there are frequently accessed so-called lucky pieces, causing massive amounts of downloads:

So 2 questions regarding pieces in log files:

  1. How well are reads cached with hashstore now that the big log files are in place (in my opinion such massive downloads should be served entirely from RAM)?
  2. Is there still a way to tell where a specific piece is stored, which of course was easy with piecestore?

From which version can we use the parameter to opt out?

I suppose the whole log file would be cached if it’s requested frequently enough.

I think the next one. It’s a preparation to get early feedback before we implement anything on the public network.
We will do partial updates, exactly like we roll out a new version, with the possibility to stop and start over.

With ZFS, blocks are cached, not files. So there is no difference for caching the lucky pieces.
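
If you want to see that in practice on a ZFS host, a rough sketch with the standard ZFS-on-Linux reporting tools (availability and output differ per platform):

arcstat 5                   # per-interval ARC reads and miss percentage
arc_summary | head -n 40    # overall ARC size and hit statistics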


If this is true, then there should be no difference between backends. Perhaps some tuning will become obsolete, but I guess that’s not a problem.

Is there a way to tell in which log file/path a piece is stored like it was possible with piecestore?

v1.135.5 (currently being rolled out)

I see it every hour and it causes CPU spikes and log floods.

Nope. Doesn’t work on that either.

Yeah, it’s the A.I.’s fault :confused: I trusted the tools and they got worse and worse. But at the same time, I should have thought about it a little more…

Yes, I did this [Tech Preview] Hashstore backend for storage nodes - #567 by jammerdan

and then this [Tech Preview] Hashstore backend for storage nodes - #569 by Alexey.

I think it is working because I have log files in hashstore in s0 and s1, more than 20 GB so far.

But I’m not moving them to the SSD for now.