On the impact of fsync in the storage node upload code path on ext4

I agree with most of your points.

Indeed, I can’t claim that the way the OS/file system behaves when running without pauses matches how it behaves in my test. I do think, though, that if the system carries any decent I/O load in addition to the storage node (which may well happen when reusing spare resources on existing hardware, as Storj recommends), my test should offer a reasonable approximation. Besides, at this point I can also infer from my test that this hardware could perhaps cope with around 30× the current traffic (which seems unlikely short-term, but who knows what will happen if Storj becomes more popular?). My experiment shows that a small software change would let the same hardware handle roughly 80× the current traffic.

The kernel can influence the order in which writes and reads are performed. This matters a lot here: the more blocks are pending for writing, the better the chances that there are clusters of nearby blocks that can be written out without excessive seeking. Real-time tracking is not necessary; the kernel only needs to know which parts of the disk have to be visited at some point in the future.
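
To make the trade-off concrete, here is a minimal Go sketch (not the actual storage node code; the file names and sizes are made up for illustration). The synced variant forces the block to stable storage before returning, so the disk has to service it right away; the deferred variant leaves the data in the page cache, giving the kernel the freedom to merge it with neighbouring dirty blocks and write them out in disk order.

```go
// Minimal illustration of fsync-per-file vs. deferring writeback to the kernel.
// This is a sketch, not Storj code; paths and sizes are arbitrary.
package main

import (
	"log"
	"os"
)

// writeWithFsync returns only after the data is on stable storage, so the
// drive must seek to this file's blocks now, regardless of other pending I/O.
func writeWithFsync(path string, data []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.Write(data); err != nil {
		return err
	}
	return f.Sync() // fsync: block until the data hits the disk
}

// writeDeferred returns as soon as the data sits in the page cache; the kernel
// can later cluster it with nearby dirty blocks and write them in disk order.
func writeDeferred(path string, data []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.Write(data)
	return err
}

func main() {
	data := make([]byte, 4096)
	if err := writeWithFsync("synced.bin", data); err != nil {
		log.Fatal(err)
	}
	if err := writeDeferred("deferred.bin", data); err != nil {
		log.Fatal(err)
	}
}
```

With many concurrent uploads on a loaded spinning disk, the first pattern tends to cost a seek per file, while the second lets the I/O scheduler batch nearby blocks.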

I wonder myself! Given that even the cheapest consumer external drives are often basically NAS models with minor firmware tweaks, this would be very useful to know.

We still observe problems like this one or this one, showing that unless the operator moves the databases to a different drive or uses some kind of cache, decent performance is not guaranteed.

Not all of them; my Seagate Backup doesn’t. I wish it did.
