Moving databases to SSD?

My node's databases are in the storage folder on an HDD. Is it important for node performance to move the databases to an SSD? Or does it only matter for my stats…

Only if your node’s performance is affected by slow I/O (example symptom: your node uses more than 1GB of RAM). If not, don’t bother.
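
For reference, if you do decide to move them, the usual procedure is: stop the node, copy the SQLite databases to the SSD, and then point the node at the new folder (via the storage2.database-dir setting, or an extra bind mount on Docker; check the current docs for your setup). Below is a minimal Python sketch of the copy step only; both paths are assumptions and need to be adjusted.

```python
# Minimal sketch of the copy step. Assumes the node is already stopped and
# that both paths (assumptions) are adjusted to your setup.
import shutil
from pathlib import Path

SRC = Path("/mnt/hdd/storagenode/storage")  # current database folder (assumption)
DST = Path("/mnt/ssd/storagenode/dbs")      # new database folder on the SSD (assumption)

DST.mkdir(parents=True, exist_ok=True)
for db_file in sorted(SRC.glob("*.db")):
    shutil.copy2(db_file, DST / db_file.name)  # copies data and timestamps
    print(f"copied {db_file.name}")
# Afterwards, point the node at DST and restart it.
```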

2 Likes

I would add that you should probably pre-emptively move them if you use an SMR HDD. Those will slow down eventually. Even if they seem fast now.

4 Likes

I would like to add that over time, especially on NTFS with no defrag, the DBs become fragmented and slow down the loading of the dashboard.
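
A VACUUM won't fix NTFS-level fragmentation by itself, but it does rewrite each SQLite database compactly and drop free pages, which keeps the files small and makes a later defrag cheaper. A minimal sketch, assuming the node is stopped and the folder below (an assumption) is adjusted to your setup:

```python
# Rewrite each node database compactly with SQLite's VACUUM.
# Run only while the node is stopped; DB_DIR is an assumption.
import sqlite3
from pathlib import Path

DB_DIR = Path("/mnt/hdd/storagenode/storage")  # folder containing the .db files (assumption)

for db_file in sorted(DB_DIR.glob("*.db")):
    conn = sqlite3.connect(db_file)
    try:
        conn.execute("VACUUM")  # rebuilds the database, dropping internal fragmentation and free pages
    finally:
        conn.close()
    print(f"vacuumed {db_file.name}")
```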

Ok, I really don't understand all this. I moved my DBs to SSDs a while ago based on what everyone has said about performance, but in doing so I realized there's nothing to it. I notice absolutely no difference. I mean, it only took half a second to copy the DBs. My average is 12 MB/node. It's literally nothing. If you have slow drives or high I/O / RAM usage, it isn't because of the DBs.

Edit: I also played around with larger record sizes, which actually crippled I/O (TrueNAS Core, ZFS, brand-new 18 TB CMRs). Currently running 256K. I don't have all the exact numbers, but a 1M record size brought average rsync rates down to 6-15 MB/s with a single node running on it, vs ~50-60 MB/s doing the same with 128K. Just throwing that out there.
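
For anyone who wants to repeat that comparison: the dataset's recordsize can be checked and changed with the standard zfs CLI (a new value only applies to blocks written afterwards). A small Python wrapper as a sketch, where the dataset name is an assumption:

```python
# Inspect and change the ZFS recordsize of the dataset holding a node.
# Requires the standard `zfs` CLI; the dataset name is an assumption.
import subprocess

DATASET = "tank/storj"  # assumption: adjust to your pool/dataset

def get_recordsize(dataset: str) -> str:
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "recordsize", dataset],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

def set_recordsize(dataset: str, size: str) -> None:
    subprocess.run(["zfs", "set", f"recordsize={size}", dataset], check=True)

if __name__ == "__main__":
    print("current recordsize:", get_recordsize(DATASET))
    # set_recordsize(DATASET, "128K")  # uncomment to change it
```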

Not all nodes profit equally from moving the DBs.
I noticed the drive "runs smoother/more quietly" while under high load (defrag/filewalker).
The dashboard is much more responsive.
A little-filled, fast node: no difference.
A slow, nearly full drive profits more.

I did the experiment and moved the DBs to a Samsung USB stick. 5-year warranty, 16 €. Reformatted to NTFS/4K. Good, and not the “cheapest” solution.

Samsung USB-Stick Typ-A BAR Plus (MUF-128BE4/APC), 128 GB, 400 MB/s 60 MB/s
Waiting to see how long it will last, just to trigger @arrogantrabbit.

It's a mini PC with no free connection ports other than USB 3.

I've seen a small "silent" home server at a friend's place with 3× "decent flash drives" in RAID1. He had to replace a drive every two to three months. I'm curious about your experiment!

It's about the DBs, not the node data.
The node data is on a WD Elements 12 TB external drive, but it turned out to be slow.
So it's only the DBs that will have to be recreated in case the "5y warranty" isn't enough.

It's likely (to avoid saying most definitely) write amplification, because your friend did not override the sector size when assembling the array. (Also, RAID1 of three drives for a home server?! Something does not add up here. It's definitely not the full story.)
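
For a rough idea of why that matters: if the array issues small, misaligned writes while the stick internally has to rewrite a much larger flash page, every small write turns into a full-page rewrite. The numbers below are illustrative assumptions, not measurements from that server:

```python
# Back-of-the-envelope write amplification from misaligned small writes.
# Both sizes below are illustrative assumptions.
logical_write = 512          # bytes written by the filesystem/metadata update
nand_page = 16 * 1024        # bytes the flash drive rewrites internally

amplification = nand_page / logical_write
print(f"worst-case write amplification: {amplification:.0f}x")  # -> 32x
```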

It was a few years ago, so I don't know all the details. But he's a hardened sysadmin; I find it likely that he set things up correctly.

And yes, it was RAID1 exactly because of the risk of drive failure.

But three drives? Well, they are cheap ones…
I don't often come across such a configuration, though. I guess he used some checksumming FS?
To be honest, I had thought of such a config myself, because my "server" was in the bedroom (which was also the living room… we lived in a one-big-space apartment at that time…)

Likely ext4 on mdraid; knowing him, he's the old bearded Unix admin type. Yeah, this was kind of an experiment. He had access to a supply of flash drives from the company he worked for and an old terminal-style mini PC, so it was all free. He hosted some small services like gitolite.

1 Like

In my case it was a media/game server, and I started my first node (v2) there, so I prefer to run it 24/7. There were also some side projects… so it had to be online. And when I found Storj (back in 2017…), I was just very enthusiastic to run it, especially since I had been searching for exactly something like this. It was such an excitement! I was already using BitTorrent Sync at the time, but it required using my own servers, and I wanted something like that, just not on my servers.
Storj became exactly what I needed!
Now it's a serious, trustworthy project, not some startup with an unknown future, so I'm still excited to be part of it!

4 Likes