V137.5 - high load?

I have the hashstore not on the HDD, but on a separate SSD like the DB files… have I misunderstood what this is for?

Is hashstore replacing the piecestore? So “hashstore” and “piecestore” should all be on the HDD?

Yes, it should all be on the same HDD; hashstore will replace piecestore.
Hashstore is not just DBs, it holds the piece files themselves; only the metadata can go on an SSD.

1 Like

Ooooo god… I definitely messed up. I will have to migrate it and change my docker script.

Where is the metadata stored? I can put that on the SSD, right?

I do not see a big benefit in that, only one more point of failure. If you have enough RAM you can turn on memtbl; then it will be faster even on an HDD.

1 Like

Looks like the egress party is over.

4 Likes

Same here, it dropped yesterday evening around 21:00 (CET).
There is one benefit: the migration to hashstore is going a little bit faster :slight_smile:

3 Likes

I wonder if old pieces are compatible with the new settings, or is a replacement of all segment pieces needed for the migration?

As long as you’re not modifying the first (smallest) number, they are compatible. You then just add/delete pieces as necessary.

1 Like

One more question: is Storj just waiting until the new threshold is reached by normal fluctuation, or are they deleting excess pieces?

Nodes are good at losing pieces or going offline on their own, so there is no need to force-delete; the repair threshold is enough.

Does this imply that when a node goes offline, every piece it holds has about a 25% chance of dropping its segment under 46 healthy pieces and triggering a repair? That node would then lose a lot of pieces. Better keep those nodes online :face_with_monocle:

I like this new setting, tbh. Short term I am losing part of my data, but in the long run the 46/49 will cause a lot of repair traffic. :money_mouth_face:

Maybe they pushed it a bit too far… :thinking:

I agree. Projected payout has gone up; that said, I don’t know how much stored data was actually removed, so it might have gone up anyway.

I have a potato node that needs a nudge every week or so because of lack of memory. I haven’t noticed any significant storage loss on it, even though it is sometimes down for half a day.

That was done on purpose. The old RS setting was a bit too expensive in terms of storage expansion factor. The current RS setting is a bit too expensive in terms of repair costs. Now we can run the numbers to find the sweet spot in the middle.

On US1 we are now changing the RS numbers to 29/46/54/70 (maybe we are going to increase the long tail to 75). On EU1 we keep the current RS setting a bit longer in order to collect more data. We have some ideas on how to decrease the repair base rate, which could lead to additional cost savings, but there is no need to hold back US1 waiting for that result.

5 Likes
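For anyone following along with the numbers, here is a minimal back-of-the-envelope sketch in Go. It uses only the 29/46/54/70 figures quoted above, read in the usual minimum/repair/success/total order (treat that reading as an assumption if your satellite reports them differently). Since the first number stays at 29, existing pieces remain decodable; the other thresholds only change how many pieces the satellite keeps around.

```go
package main

import "fmt"

func main() {
	// RS numbers quoted above for US1: minimum / repair / success / total.
	const (
		minimum = 29 // pieces needed to reconstruct a segment
		repair  = 46 // repair is triggered when healthy pieces drop to this
		success = 54 // an upload is considered successful at this many pieces
		total   = 70 // long-tail cutoff (possibly raised to 75 later)
	)

	// Expansion factor: raw storage consumed relative to the logical segment size.
	fmt.Printf("expansion at success threshold: %.2fx\n", float64(success)/float64(minimum)) // ~1.86x
	fmt.Printf("expansion at long-tail cutoff:  %.2fx\n", float64(total)/float64(minimum))   // ~2.41x

	// Margins: pieces that can be lost before the next repair is triggered,
	// and the safety buffer between the repair trigger and unrecoverability.
	fmt.Printf("losses tolerated before repair: %d\n", success-repair) // 8
	fmt.Printf("buffer below repair threshold:  %d\n", repair-minimum) // 17
}
```

The narrow gap between the success and repair thresholds is presumably what shifts the cost from storage expansion towards repair, matching the trade-off described above.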

This is actually a clever change: repair costs are something Storj only maybe pays, while expansion costs are something they always pay for sure. It’s cool that they can tune in the best ratios!

The largest factor in how much the overhead (storage or bandwidth/compute) is going to cost boils down to node reliability.

Right now it feels like around 0.8% of all stored data is downloaded for repair every day. That just feels like a lot to me.
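To put that estimate in perspective, a small illustrative calculation: it assumes the ~0.8%/day figure above is roughly right and constant, and the 10 PB of stored data used below is a purely hypothetical number, not an official figure.

```go
package main

import "fmt"

func main() {
	// Extrapolation of the "~0.8% of stored data repaired per day" estimate above.
	const dailyRepairFraction = 0.008

	// Over a month this adds up to roughly a quarter of all stored data
	// being re-downloaded for repair.
	fmt.Printf("per month: ~%.0f%% of stored data\n", dailyRepairFraction*30*100) // ~24%

	// Hypothetical example: with 10 PB stored on the satellite, that would mean
	// on the order of 80 TB/day of repair downloads from nodes.
	const storedPB = 10.0
	fmt.Printf("at %.0f PB stored: ~%.0f TB/day of repair downloads\n",
		storedPB, dailyRepairFraction*storedPB*1000)
}
```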

The healthy-pieces stats for US1 are still going down. I wonder if/when the network will start to recover? :thinking:

I believe the fix for the data reporting service has not been released yet.

Reporting seems to be fixed now, but I see a big increase in storage_remote_bytes compared to the last working state. Is this real or another bug?

There was a bug in the reporting of used space: Grafana
We haven’t decided yet what to do with the different RS numbers (some data has different RS settings), so the minimum-healthy-pieces metric is now some kind of median.

1 Like