I have read so many threads about this.
I just want to make sure I got it right.
First, I have a battery in front of my Synology.
I understood that the problem arises when a customer upload can no longer be taken over by my disk, because of load or other problems.
But there is the buffer size in the config.
I'm guessing that the write acknowledgement is sent as soon as the file has arrived in RAM, right?
That would mean a problem only occurs if my disk is unable to take over the files from RAM, AND the RAM is completely full.
Is that the case?
I have a 4 TB SMR WD Blue with 256 MB cache and a 2 GB RAM Synology DS220+ for two hosts.
I'm guessing two SMR disks are better than one, as they split the uploads.
The buffer is set to 1 MiB, do you think this will work?
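Just so we're talking about the same setting, this is the one I mean; I'm assuming the config.yaml key simply matches the command-line flag mentioned further down in this thread, so please correct me if that's wrong:

    # config.yaml - per-piece write buffer, allocated in RAM for each incoming upload
    filestore.write-buffer-size: 1.0 MiB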
That’s my guess too, because when an SMR drive is stalling the RAM usage usually climbs dangerously.
Well, at least that’s what I experienced when my SMR drive could not keep up with the load. That happened during heavy tests in the past, and that particular situation has never happened since. Which doesn’t mean it will never happen again, though.
The buffer size will not help much. Even though in theory it’s better to have it set to 1 or 2 MiB, I tried many values and none would prevent the disk from stalling, the RAM from filling up and eventually the node process from being killed by the OOM killer.
Some improvements have been made to the storage node software since then though, such as one of the databases being moved to RAM among other enhancements, so things are probably better now.
Also, there are some minor things that can be improved, like moving databases and logs to another disk like you mentioned.
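In config.yaml that would look roughly like this; the key names are what I remember for the --storage2.database-dir and --log.output options, so double-check them against your own config, and the paths are just examples:

    # keep the sqlite databases and the log file off the SMR data disk
    storage2.database-dir: /mnt/other-disk/storj/dbs
    log.output: "/mnt/other-disk/storj/node.log"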
But ultimately, if the disk cannot keep up with the incoming ingress load, then there’s not much that can be done apart from throttling down the number of accepted pieces in parallel (or switching to a CMR disk, obviously ^^'). Which is far from ideal, but AFAIK there’s no other solution, yet.
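For what it’s worth, the knob I have in mind for that throttling is (as far as I remember) storage2.max-concurrent-requests; treat the name and the value below as a sketch rather than a recommendation:

    # config.yaml - limit how many uploads the node accepts in parallel
    # (0, the default, means unlimited; refused uploads are simply served by other nodes)
    storage2.max-concurrent-requests: 5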
Sure, logs can be redirected to anywhere you’d like, so it’s doable if you set up a RAMdisk, but it would reduce the amount of RAM available to the system.
For your information, right now my nodes hold roughly 5 TB of data and their logs for the month of January take 735 MiB in total, and this was a quiet month. So the RAMdisk couldn’t be too small, and depending on how much RAM you have available for running Storj-related stuff this might not be ideal. It would be better to find some space on a spare disk somewhere ^^’
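If you do want to try the RAMdisk route anyway, a tmpfs mount is probably the simplest way; the mount point and the 1 GiB size below are just examples sized above the ~735 MiB/month figure, and remember that everything in it disappears on reboot:

    # create a 1 GiB RAM-backed filesystem for the logs (contents are lost on reboot)
    sudo mkdir -p /mnt/storj-logs
    sudo mount -t tmpfs -o size=1g tmpfs /mnt/storj-logs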
For testing I have set a buffer size of 4 MiB.
What I just saw is that the HDD usage sits at 3-25% while the RAM grows up to about 300 MB.
Then the disk goes to 100% and in parallel the RAM drops back to about 150 MB within seconds.
I had guessed that every 4 MiB block would be written as soon as it is full?
That should mean that the usage of RAM and HDD would be more balanced than it is.
Not sure why it ends up working like this, but it does it all the time.
Is there maybe a maximum RAM buffer that can be configured?
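Or maybe this is just the Linux page cache doing its periodic writeback rather than anything Storj-specific? At least I can look at the kernel’s dirty-page thresholds on the Synology; these are standard Linux sysctls, I just don’t know what values DSM ships with:

    # when background writeback starts, when writers get blocked,
    # and how old dirty data may get before it is flushed
    sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_expire_centisecs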
I’m not seeing such patterns on my end, so I cannot tell whether it’s normal/okay or not.
My RPi 4B running my nodes has been steadily at 366MiB of RAM usage, for the past few minutes.
What’s the time window of this behavior?
I don’t think that’s how it works. When you mentioned the “buffer size in the settings” in your first post, you were referring to --filestore.write-buffer-size right?
This setting defines how much space will be allocated in memory for writing each piece.
If you receive 10 pieces in a very narrow time window, a total of 40 MiB (10 × 4 MiB, with your current setting) will be allocated for writing these 10 pieces to disk.
I don’t think there’s any option for setting a max RAM usage the node software shouldn’t go above.
Which in my opinion is a shame, because it would be an efficient way of providing a safety net against stalling disks like SMR ones.
With such an option, the node software could receive as much data as is sent by customers for as long as the storage device can handle it. But if RAM usage were to reach a threshold because the disk is starting to be overwhelmed, then it could start refusing new pieces automatically, to give the disk time to handle what’s on its plate.
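The closest thing I’m aware of today is capping the container itself, which is not real backpressure: when the cap is hit the node process inside the container still gets killed, it just protects the rest of the host. A sketch, assuming the node runs in Docker under the name storagenode and that 1 GiB is a sensible cap for your setup:

    # hard-limit the container's RAM and disable extra swap on top of it
    # (this does NOT make the node refuse pieces; it only contains the damage)
    docker update --memory=1g --memory-swap=1g storagenode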
Do you know that if the PC crashes or reboots, all that data will be lost? And since the node has confirmed that it got those pieces - it could be disqualified for losing them.
The only known solution for SMR disks is to run more than one node in the same /24 subnet. You will reduce the load by a factor of two or more, down to a level which the SMR disk can handle. If the second node is not enough - run a third… Obviously they should each point to their own disks.
Why does the node confirm that it has the piece without making sure that it is on the drive first?
Shouldn’t piece writes and some writes to the database be done in “sync”, so that if the server crashes or reboots at any time, no piece gets lost because it was confirmed to the satellite but not yet written to the disk?
They should, but then the RAM could not be used as much - pieces would have to be flushed to the disk before being confirmed. And we would again have a stalled SMR drive.
This is the main problem - you cannot have enough RAM to keep receiving pieces while the disk is unable to write them. If the node waited for the disk, it would get cancellations from the customers.
So, the buffer would not help too much. A big buffer will not help either, because of the long-tail cut and the slow disk. Alternatively, the node could confirm the piece as soon as it is received, but then there is a high risk of losing it if the storagenode were killed by the OOM killer or the system got rebooted.
But better data integrity for nodes with CMR drives.
Or nodes with SSD write cache (zfs SLOG or similar)
Or nodes with RAID controllers that have battery-backed cache.
I would rather have less data than no data after an unexpected reboot.
After an unexpected reboot you typically lose something like 10 seconds of data. On a reasonably sized node that will be completely irrelevant and your audit score will remain good.
Of course, with a stalling SMR drive this might ramp up to (idk) 5 minutes, which at an ingress of 10 Mbps is ~375 MB, or about 250 pieces at an average piece size of 1.5 MB. Still irrelevant if you have a 1 TB node hosting >600k pieces. In this case you would have lost 0.04% of your files. That’s OK if it doesn’t happen too often.
With zfs you can choose to use sync=always on the node dataset which will make sure all your files end up on the HDD.
Not sure if other systems/filesystems have similar options.
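For reference, on zfs it’s just the following; tank/storagenode is an example dataset name:

    # force synchronous semantics for every write on the node's dataset
    zfs set sync=always tank/storagenode
    # verify the current value
    zfs get sync tank/storagenode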
They do, however then you need to move the database somewhere else. The reason is that there are lots of writes to the database that probably are not that important (maybe something very short-lived etc.; if it were important it would already be written in sync), so mounting the filesystem with the sync option results in a lot of IO.
I have tried this some time ago, maybe it would be different now because orders were moved to RAM and such.
I don’t think so, at least this was not true in the past. If db writes were always sync, then, since sqlite does support journaling, there should be no database corruption after an unexpected reboot.
I remounted the data partition as sync and will see how it goes. In the past it used to increase iops by a lot.
Unless Storj uses this async sqlite module, all writes to the db are sync. That’s how every database works; everything else would just be a recipe for disaster.
As to why DBs tended to get corrupted by unexpected reboots: I’m not sure… most filesystems just aren’t that stable, and an sqlite db itself might not be the most stable because of all the performance optimizations used by Storj, I guess.
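If someone wants to see what their node’s databases are actually set to, sqlite3 can report the journal mode and run a consistency check; the path below is just an example and the node should be stopped first:

    # inspect the journal mode (WAL vs rollback journal) and check consistency
    sqlite3 /mnt/storagenode/storage/bandwidth.db "PRAGMA journal_mode;"
    sqlite3 /mnt/storagenode/storage/bandwidth.db "PRAGMA integrity_check;"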
Yes, because now it can’t just flush every 10 seconds but needs to write every small piece of a buffered file to the disk immediately, and without a SLOG it will have to do that twice. So it’s a horrible option for SMR drives.
But if you’re using a SLOG, the performance should still be very good, since it basically still flushes from RAM every 10 seconds; the sync only writes the file to the SLOG immediately. So with a SLOG it could still be fine for an SMR drive and not make much of a difference.
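For completeness, attaching a SLOG to a pool is a one-liner, assuming you have a suitable SSD (ideally with power-loss protection); the device path below is a placeholder:

    # attach a dedicated log device (SLOG) to the pool
    zpool add tank log /dev/disk/by-id/<your-slog-ssd>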
However, on average it should be the same MB/s to disk, whether data is written immediately or flushed every few seconds. IIRC, it increased. So I assume some writes to the database were very short-lived, where something got written and then changed before it could even be flushed to disk.
Seems right. As soon as I set my Storj DB dataset to sync=always, I immediately got more reads and writes on my SSD. So I guess they might have tampered with async writes… risky for a db…
(I was surprised about the increase in reads, but it looks like the changes don’t immediately end up in the ARC when using sync=always; they need to be written first and will then be read into the ARC again? Note: my SSD doesn’t use a SLOG, that would be kind of pointless…)
Since the db is constantly written to, its content changes often within the 10 seconds between each flush. With async writes (and thanks to the ARC caching) most operations are done in RAM and you only get the end result on disk after 10 seconds. But if every change has to be written to the disk, you have a lot more data over the same period.
Async writes can take a long time before they are written to disk; it’s my understanding that these delays are what is shown when doing zpool iostat -w.
But the numbers don’t make sense to me. The SLOG flushes every 5 sec, though that is adjustable… the default is 5. Then how can my pool, with all my datasets running sync=always (maybe I should check whether that really is true for all datasets), still show higher than 5 sec in this iostat -w thing…
And it’s certainly not because a write takes 137 sec or more… even the 10 sec mark seems rather extreme…
I get intermittent writes of maybe 50-80 MB/s, but it will do peaks of like 1.2-1.3 GB… sure, we need to look at IOPS… but in sync IOPS I should be able to do like 2000, maybe even 3000… raw HDD IOPS…
Dual raidz1, so maybe 2×1000 sequential write IOPS.
The reason I say 2k is because I’ve seen it do that before… I even think I’ve seen it do nearly 3k… but that was when I was running a 3x raidz1 pool… though using SATA SLOGs.
Yeah, I checked: Bitlake has every dataset, and the pool itself, set to sync=always.
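For anyone wanting to double-check the same things on their own pool, this is roughly what I looked at; Bitlake is my pool name and the txg timeout path assumes OpenZFS on Linux:

    # confirm sync=always on the pool and every dataset
    zfs get -r sync Bitlake
    # current transaction group timeout in seconds (the "flushes every 5 sec" value)
    cat /sys/module/zfs/parameters/zfs_txg_timeout
    # per-vdev latency histograms, including sync write latencies
    zpool iostat -w Bitlake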