Hello,
If I set the flag for dedicated disk use, will the disk be 100% used? Or will there be enough space left for the DBs? Or should I move the DBs to another disk for safety?
You would also provide a parameter how much free space should be reserved:
What do we add where? For the dedicated disk flag and retained space.
To config.yaml the following:
storage2.monitor.dedicated-disk: true
storage2.monitor.ReservedBytes: 100 GB
storage2.piece-scan-on-startup: false
Edit:
Both storage2.monitor.ReservedBytes and storage2.monitor.reserved-bytes appear to work the same, but to keep the usual format we should use storage2.monitor.reserved-bytes.
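As a sanity check for size values like the `100 GB` above, here is a small Python sketch (not part of the node software) that converts such a string into bytes and estimates how much of the drive would remain fillable after the reservation. The decimal reading of `GB` (10^9 bytes) is my assumption; verify how your node version parses units.

```python
import shutil

# Decimal unit multipliers; whether the node reads "GB" as 10^9 or
# 2^30 bytes is an assumption here -- check against your version.
UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def parse_size(text: str) -> int:
    """Parse a string like '100 GB' into a byte count."""
    number, unit = text.split()
    return int(float(number) * UNITS[unit.upper()])

def usable_after_reservation(path: str, reserved: str) -> int:
    """Bytes still fillable on the filesystem at `path` after reserving."""
    free = shutil.disk_usage(path).free
    return max(0, free - parse_size(reserved))

print(parse_size("100 GB"))  # -> 100000000000
```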
That doesn’t look right. The translation should be reserved-bytes.
I enabled them and don't see any mention in the logs. It just keeps the last allocated space it saw, which isn't the full disk. Is this in any documentation? I have 1 node on 1.114.6.
It probably is, as @littleskunk has mentioned:
storage2.monitor.reserved-bytes: 100 GB
I was under the impression it does work: a full node with available disk space started accepting uploads after a restart.
In the log line mentioning "upload started" I checked the "Available Space" value in bytes, and it matched what I had free on the disk minus the reserved space, but that might have been just a coincidence.
Now I'm waiting until the node stops spamming the log with collector errors about being unable to delete pieces before experimenting further.
And to add, I do not think this will actually show correct values (such as free space of the actual drive) on the dashboard, that most likely is yet to come in the next versions.
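The log check described above can be sketched like this; the sample line is modeled on a typical storagenode "upload started" entry, and the exact field names (e.g. "Available Space") may differ between versions, so treat it as illustrative:

```python
import json
import re

# Sample line modeled on a storagenode "upload started" log entry
# (field names are assumptions; check your own node's output).
LINE = ('2024-09-20T10:15:02.123Z INFO piecestore upload started '
        '{"Piece ID": "ABC123", "Action": "PUT", "Available Space": 1234567890}')

def available_space(line):
    """Extract the "Available Space" value from an upload-started line."""
    match = re.search(r'\{.*\}', line)
    if match is None:
        return None
    return json.loads(match.group()).get("Available Space")

print(available_space(LINE))  # -> 1234567890
```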
I had the default 2TB entry in the config file, so after I nixed that it was just sitting at 2 TB. So it's not doing anything for me. I'll mess around with it.
Update: after removing the 2TB entry in the config, when I comment out my manual allocation, it still goes back to 2 TB.
Would this affect the metrics in the API? I have a Grafana dashboard.
Update: seems like it. I get uploads with it enabled. Quite annoying for anyone who wants an Allocated Space metric.
My assumption is that the API, the Prometheus exporter, and the dashboard will currently show what is configured as available space in the node config, and used space based on the filewalker, so all the old values.
I believe, however, they are working to rework this, so even with the option to use the whole drive the metrics will be correct in some future version.
So the only way to see if the dedicated disk config is working is to see uploads while the allocated space is lower than the used space. But then you could set the allocated space to what it should be, and the dedicated option would do its thing anyway? We need a log entry saying it's working.
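Lacking a log entry, one rough self-check is to compare what the node reports against the drive itself. The sketch below assumes the local dashboard API at http://localhost:14002/api/sno/ with a diskSpace object containing used and available fields; those names may vary by version, so verify them on your node.

```python
import json
from urllib.request import urlopen

def dedicated_mode_suspected(disk_space):
    """Heuristic: if reported used space exceeds the reported allocation
    while uploads still succeed, the dedicated-disk override is likely
    what keeps the node accepting data."""
    return disk_space["used"] > disk_space["available"]

# Live check against the local dashboard API (commented out here;
# endpoint and field names are assumptions):
# with urlopen("http://localhost:14002/api/sno/") as resp:
#     print(dedicated_mode_suspected(json.load(resp)["diskSpace"]))

# Offline illustration with made-up numbers:
print(dedicated_mode_suspected(
    {"used": 2_500_000_000_000, "available": 2_000_000_000_000}))  # -> True
```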
If this is Docker… there is an environment variable STORAGE, which defaults to 2TB.
Gotcha. That's what I have set manually. Makes sense it has a default. Annoying that it's still applied with the dedicated disk, but fixes are incoming, I'm sure.
You may also set it to the empty value -e STORAGE="", then the allocation will be taken from the config file.
However, if you don't specify it in the config file either, it will take the default value of 1 TB:
--storage.allocated-disk-space memory.Size total allocated disk space in bytes (default 1.00 TB)
Doesn’t make any sense, please see my message above.
Likely you need to set it high enough instead (either with -e STORAGE="" and the amount specified in the config.yaml file, or in -e STORAGE= only) until this feature is fully supported in all places in the code.
Have been using this for a day or so and I have to thank the developers for implementing this feature—one of the best ones out there.
Much easier and more robust than trying to calculate the usable space, which at some point won’t match the reality anyways.
So thank you!
It's really that good? No cons?
I mean, I occasionally have some other files on these disks "dedicated" to Storj, and they may fluctuate in size; that's why I doubt whether I can call it a "whole disk" and enable that feature.
The performance improvement isn't what you would expect. Garbage collection and co. still track the used space. So to answer your question: it is not that good yet, but it will get better with future commits. There is no downside to enabling it and getting at least the small performance improvement the current version already provides.
Hmmm, because I'm still testing whether just the badger cache is doing the job, which requires the used-space filewalker ON:
storage2.piece-scan-on-startup: true
and this whole-disk feature needs it to be OFF ("false").
So far I have 2 nodes with that, and after a month they seem to be doing OK.
Sooo I will keep watching how it's going, and maybe not complicate things by adding this whole-disk feature, tempting as it is.