And if you use NTFS with 64 KB clusters (to reduce fragmentation), one bad sector inside the MFT can make 64 files disappear: MFT file records are typically 1 KB, so a single 64 KB cluster holds the records of up to 64 files.
Sooo… I think I will stay with the current format and the badger cache. I just activated it on all nodes.
It's safer, has the same speeds as hashstore and doesn't waste 25% of my space.
Curious thing: when I updated to version 1.119, a new directory appeared in the storage dir: hashstore.
It has some dirs inside, one of them is meta, and some files. I didn't switch to hashstore; they just popped up after the update.
Actually, I have an answer:
Yes, it's expected, see the initial post. After the supported version is released, these folders and files are created automatically.
And if you want to enable it, you know how to do it.
So there are 2 things to worry about then:
- The code to reconstruct the hashtable from the log files still has to be written and implemented.
- Losing more than 4% of the log files.
Well, Storj can take care of point 1.
Point 2 is the SNO's problem. Got it.
There could be a variable for the hashfile directory, like we have for the databases, so SNOs can use a folder with redundancy for those files. In a low-RAM setup it also might be a benefit to have the hashfiles on an SSD.
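For comparison, the databases can already be relocated with the existing storage2.database-dir option in config.yaml; a hypothetical equivalent for the hashtables might look like this (the second option name is made up, it does not exist today):

storage2.database-dir: D:\ssd\node1\databases    # existing option
hashstore.table-dir: D:\ssd\node1\hashtables     # hypothetical, for illustration only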
If the chance is high enough that this can happen, why doesn't the node have the ability to request the needed information from the satellite? (Or does it?) Wouldn't it be a good thing that the node can say "Hey, this file got corrupted and this piece (or these pieces) got destroyed", so the satellite can keep track of healthy pieces too?
And my second question is: how does a satellite know a piece is corrupt? Is there a checksum that gets checked? If someone requests a file and a piece is corrupt, how does anyone know that this exact piece was wrong?
The logic is: if you lost a piece, that means you can't be trusted with that piece anymore. Why should they give you that piece again for keeping?
In the end, this is your main job: to keep pieces healthy and available. If you don't do the job you are paid to do, why should they give you the same job again?
And repair is costly for the network. Why should they pay to repair lost pieces back into the same spot where they were lost the first time, after they already paid to have them kept safe there?
A middle ground would be to have an option (opt in/out) to pay yourself for repairing the lost/corrupted pieces on your node. But this can be disputed, and many SNOs would start arguing that Storj is taking their money for no reason.
You, as a SNO, can't really verify the health of pieces; you have to trust the software and the network.
I believe there is a checksum/hash in the header, and the satellite compares it with its own, or something like that; I didn't dive into this, but I recall reading something about how it works.
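Conceptually it's just comparing a hash recorded when the piece was stored against a hash of what's on disk now; roughly this kind of check (hypothetical path and hash value, not the actual audit protocol):

$piecePath    = "D:\storagenode\blobs\example.sj1"    # hypothetical piece file
$expectedHash = "9F86D081884C7D65..."                  # hash recorded at upload time (placeholder)
$actualHash   = (Get-FileHash -Path $piecePath -Algorithm SHA256).Hash
if ($actualHash -ne $expectedHash) {
    Write-Output "Piece no longer matches its recorded hash, so it counts as corrupted."
}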
And a lost piece means 1 of 3 things: corrupted, unavailable for more than 4 hours (node offline), or deleted by the operator.
It works with symlinks, we checked. However, I wouldn't recommend doing so, especially splitting hashtables and logs. Especially if the SSD is reused by several nodes - you may lose ALL nodes at once in case of SSD failure.
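For reference, the symlink approach looks roughly like this on Windows (paths and the folder name are placeholders only, and the same warning applies: if that SSD dies, every node pointing at it dies too):

# Run from an elevated PowerShell prompt while the node is stopped.
# <hashtable-folder> stands for whichever hashstore subfolder actually holds the hashtables.
Stop-Service storagenode
Move-Item "D:\storagenode\storage\hashstore\<hashtable-folder>" "C:\ssd\node1\<hashtable-folder>"
New-Item -ItemType SymbolicLink -Path "D:\storagenode\storage\hashstore\<hashtable-folder>" -Target "C:\ssd\node1\<hashtable-folder>"
Start-Service storagenode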
Because it's not worth it. See many discussions like
Why not? I think it's a good idea to move the hashtables to an SSD for less fragmentation, especially if they can be reconstructed from the logs in the future.
It still sounds like a good idea to me… I used to be so clever!
It's a bad idea because if the SSD dies, your node will die too.
Please note - not your storage!
If I start a new node and I want to go with hashstore from the start, is there a different way/command/parameter to use?
Or do I start as usual and then do the steps described in the first post?
These folders and files are created when the node checks in on the satellite. So you need to run it online at least once.
The only thing I can think of is to try to pre-create these folders and files after the SETUP step.
I think you can run the script from the first topic, just specify your location and replace false with true.
Please tell me what will happen if "false" is specified and what will happen if everything is replaced with "true".
What's the logic?
I started it like this (Windows):
# Enable passive migration (requires version v1.119)
Set-Content -Path "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6.migrate" -Value '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}'
Set-Content -Path "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S.migrate" -Value '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}'
Set-Content -Path "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs.migrate" -Value '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}'
Set-Content -Path "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE.migrate" -Value '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}'
# Enable active migration (requires version v1.120)
Set-Content -Path "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6.migrate_chore" -Value 'true'
Set-Content -Path "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S.migrate_chore" -Value 'true'
Set-Content -Path "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs.migrate_chore" -Value 'true'
Set-Content -Path "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE.migrate_chore" -Value 'true'
I want to figure out how to do it correctly, please help me.
After a node restart on v1.119 all new pieces will go to hashstore; on v1.120 the old pieces will also be migrated to hashstore.
Expected behavior. See my post above this one.
Where have you seen that as of v1.120 we will be migrated to hashstore? I didn't see that in the changelog.