Possible problem with v1.60.3 and docker in Synology. Anyone else?

I recall in the btrfs thread I linked that initially it was also fine, up to some point. In my case, though, I noticed it earlier, because I was annoyed by nodes not shutting down quickly. I don’t remember the memory usage at that point. If I had to guess, I’d say it was because of database updates: each upload and download triggers an insert or two, and the frequency of downloads grows with the size of a node. As such, I might have just passed some saturation point at which the file system started lagging.
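As a rough back-of-envelope for that saturation argument, here is a tiny sketch of the idea. Every number in it is an assumption made up for the illustration (inserts per transfer, sustainable write IOPS), not a measurement from my node:

```python
# Back-of-envelope sketch of the saturation argument.
# Every constant here is an illustrative assumption, not a measurement.

INSERTS_PER_OP = 2          # assumed: each upload/download triggers ~2 DB inserts
SUSTAINED_WRITE_IOPS = 120  # assumed: small random writes/s the volume can absorb

def unabsorbed_inserts(ops_per_second: float) -> float:
    """Inserts per second beyond what the disk can absorb; anything > 0 queues up."""
    return max(0.0, ops_per_second * INSERTS_PER_OP - SUSTAINED_WRITE_IOPS)

for ops in (10, 30, 50, 70, 90):
    backlog = unabsorbed_inserts(ops)
    state = "keeps up" if backlog == 0 else f"backlog grows by {backlog:.0f} inserts/s"
    print(f"{ops:3d} transfers/s -> {state}")
```

The point being: below the threshold everything looks perfectly healthy, and past it the backlog (and with it memory usage) grows without bound, which would match the “fine up to some point” behaviour.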

Note that the databases are not critical for storage nodes. They only collect runtime statistics and temporary information whose loss does not prevent a node from operating. They can also be regenerated at any time. Hence storing them on a less reliable medium is not necessarily a bad idea.
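For what it’s worth, if you want to check whether your databases are currently healthy before moving them anywhere, a quick read-only pass with SQLite’s integrity check is enough. A minimal sketch is below; it is best run while the node is stopped, and the path is just a guess at a typical Synology mount, so adjust it to wherever your databases actually live.

```python
# Minimal sketch: run SQLite's integrity_check over a node's databases.
# DB_DIR is a hypothetical Synology path; point it at your own database directory.
import sqlite3
from pathlib import Path

DB_DIR = Path("/volume1/storj/storage")  # assumption, adjust to your setup

for db_path in sorted(DB_DIR.glob("*.db")):
    try:
        # Open read-only so the check can never modify the files.
        conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
        try:
            result = conn.execute("PRAGMA integrity_check;").fetchone()[0]
        finally:
            conn.close()
    except sqlite3.DatabaseError as exc:
        result = f"error: {exc}"
    print(f"{db_path.name}: {result}")
```

And if I remember correctly, the node also has a storage2.database-dir option precisely for keeping the databases on a separate path from the stored pieces.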

I doubt it, though I don’t have any specific arguments. If the memory usage grows due to lagging writes, it will also easily grow beyond any specific memory reservation. It will likely be better to use the storage2.max-concurrent-connections setting, add some block-level caching (like bcache or lvmcache; in theory even offloading read IO with a writethrough-style cache should help), or maybe, though it is still hypothetical whether it would help, wait for someone to implement this change.
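To illustrate why the connection cap attacks the memory problem while a plain memory reservation does not: the cap bounds how many transfers can hold a buffer at the same time, whereas a reservation only decides when the already-bloated process gets killed. Below is a rough sketch of the idea; it is not the node’s code, and as far as I remember the real setting rejects excess transfers rather than queueing them, but either way the number of in-flight buffers is bounded.

```python
# Rough sketch of why a concurrency cap bounds memory while writes lag.
# Not the storage node's actual code; numbers and names are illustrative.
import asyncio

MAX_CONCURRENT = 20             # plays the role of storage2.max-concurrent-connections
PIECE_BUFFER = 2 * 1024 * 1024  # assumed in-memory buffer per in-flight transfer

async def handle_upload(slots: asyncio.Semaphore, disk_latency: float) -> None:
    # Without the cap, every incoming upload holds its buffer while waiting
    # for the lagging disk; with it, at most MAX_CONCURRENT buffers exist
    # and the remaining requests wait without allocating anything.
    async with slots:
        buffer = bytearray(PIECE_BUFFER)   # memory held only while inside the slot
        await asyncio.sleep(disk_latency)  # stands in for the slow write
        del buffer

async def main() -> None:
    slots = asyncio.Semaphore(MAX_CONCURRENT)
    # 200 uploads arriving at once against a disk taking ~0.1 s per write:
    # peak memory stays near MAX_CONCURRENT * PIECE_BUFFER, not 200 * PIECE_BUFFER.
    await asyncio.gather(*(handle_upload(slots, 0.1) for _ in range(200)))

if __name__ == "__main__":
    asyncio.run(main())
```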
