How many node DBs can a single SSD handle? Does anyone have experience with this?
I have up to 17 on one SATA SSD. Today I'm seeing some problems, because some of those nodes are also on the motherboard controller and the IO causes trouble. On another server I moved the DBs to NVMe, and there 17 also work OK.
I’ve got two primary systems with many nodes.
One is a Synology box that has 8x 20TB disks, 4 nodes per disk, for a total of 32 nodes.
All the disks share a 400GB slice of a RAID10 array of 4x 2TB SATA SSDs. The SSDs are configured as a read/write cache for the disks and thus host both the databases and the general caching for the disks. They show around 60% activity, currently with around 4k write IOPS and 2k read.
The other system is a virtualized VMware setup hosting 10 nodes. All databases are on a 100GB NVMe SSD, which is not breaking a sweat.
10 DBs on a 100GB drive? 10GB per DB seems a bit low, doesn't it?
How large are your nodes?
It depends on his drives, but yeah, with this TTL the piece_expiration.db got insanely huge. For 21TB of data I have a 28GB database. @Ottetal, you may want to take a look at those databases and maybe upgrade the NVMe and SSD.
Oh wow, 28GB is huge! I currently run a 14TB node, all data is TTL data, and my piece_expiration.db is 2.38GB now.
There is around 75GB stored across those ten nodes. I'll go check the utilization later and report back if it's dangerous.
I believe version 107 did something to piece_expiration.db and shrank it… maybe.
So, I have 8x 22TB drives that got filled up with SL TTL data. All have piece_expiration.db files of 25-32GB.
One single node got updated to 107; the rest are on version 105.
As maintenance, I ran a pragma check and VACUUM on all the DBs. The version 105 ones shrank to 20GB, but the version 107 one shrank to 12GB. Strange.
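For anyone who wants to reproduce that maintenance pass, here is a minimal sketch using Python's built-in sqlite3 module (the sqlite3 CLI works just as well). The file name `example.db` and the table layout are made up for illustration; they are not the real storagenode schema. The point is just to show why VACUUM shrinks the files: deleting expired rows leaves free pages inside the file, and VACUUM rewrites the file and returns that space to the filesystem.

```python
import os
import sqlite3

db_path = "example.db"

# Build a throwaway database with some bulk rows, then delete them all,
# leaving free pages behind (this mimics expired TTL entries).
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE IF NOT EXISTS pieces (id INTEGER PRIMARY KEY, data BLOB)")
con.executemany("INSERT INTO pieces (data) VALUES (?)",
                [(b"x" * 4096,) for _ in range(1000)])
con.commit()
size_before = os.path.getsize(db_path)

con.execute("DELETE FROM pieces")
con.commit()

# The "pragma check": verify the database is not corrupted before compacting.
result = con.execute("PRAGMA integrity_check").fetchone()[0]
assert result == "ok", f"integrity check failed: {result}"

# VACUUM rewrites the whole file, releasing the freed pages back to the OS.
con.execute("VACUUM")
con.close()

size_after = os.path.getsize(db_path)
print(f"{size_before} -> {size_after} bytes")
```

Stop the node (or at least make sure nothing is writing) before running this against real databases, since VACUUM needs exclusive access and temporarily doubles the disk usage of the file it rewrites.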