PSA: Beware of HDD manufacturers submarining SMR technology in HDDs without any public mention

Well, that's the method Storj recommends… so it should be fine…
It's also really the best performance for the amount of hardware used. You might want to run something like ZFS so you can keep an eye on data integrity; disk health monitoring will only tell you so much…

ZFS keeps checksums on everything, so you will be informed when errors pop up for whatever reason… like bad cables and whatnot…
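
If it helps, here is a minimal sketch of what that looks like in practice (the pool, dataset, and device names are just placeholders for whatever your setup uses):

```bash
# Create a single-disk pool for the node data. With no redundancy ZFS can
# detect corruption via checksums, but cannot repair it on its own.
# The device path and the pool/dataset names are placeholders.
zpool create -o ashift=12 storagepool /dev/disk/by-id/ata-EXAMPLE
zfs create -o atime=off storagepool/storj

# Periodically re-read all data and verify every checksum.
zpool scrub storagepool

# Report pool health plus read/write/checksum error counters per device.
zpool status -v storagepool
```

Running the scrub from a weekly cron job and glancing at zpool status is usually enough to catch bad cables or a dying disk early.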


Thanks a lot! ZFS on Ubuntu Server, I will look into it!


I migrated from a 4 TB SMR 2.5-inch drive with only 400 GB stored. It took me 6 hours to copy to my Synology.

Hi guys,

Can you tell me which external 3.5-inch HDD with USB 3.0 is safe to buy (no SMR)?

WD easystores over 6 TB are CMR and really popular. They go on sale quite often too. Check the price history for your size of choice.


Thanks, LrrrAc, I will buy a WD easystore.

Then I got 4 nodes with 18 TB each. Ramping it up!

There are no safe choices; the manufacturers may change their minds at any time. However, so far there have been no reported SMR drives among WD Elements/easystore/My Book drives ≥ 8 TB and Seagate Expansions ≥ 10 TB.


The WDBWLG0180HBK should be CMR, according to this site.

There are also tests you can run once you have the drive, to check for yourself. I don't have any on hand, but you can test and return them if they turn out to be SMR.
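
One rough way to check (this is my own assumption about how to test, not an official procedure) is to run a sustained random-write benchmark with fio and watch the throughput: a drive-managed SMR disk usually starts fast, then collapses to a few MB/s once its CMR cache region fills, while a CMR drive stays fairly flat. Only do this on an empty, freshly formatted drive, since it writes a large test file:

```bash
# Sustained random-write test with fio. This writes a 50 GiB test file on the
# mounted drive; do NOT point --filename at a raw device unless you are fine
# with destroying its contents. File size and runtime are arbitrary choices.
fio --name=smr-check \
    --filename=/mnt/newdrive/fio-testfile \
    --rw=randwrite --bs=1M --size=50G \
    --direct=1 --ioengine=libaio --iodepth=4 \
    --runtime=900 --time_based \
    --status-interval=30
```

If the write bandwidth falls off a cliff after the first several minutes and never recovers, that is a strong hint you are looking at an SMR drive.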


I guess I’ve finally found the reason for my heavy load in this thread, and at the same time discovered that I don’t have the disk I thought I had.

Brief story: I started a node about a year ago with one of the disks listed at the top of this thread. It went relatively well, with an occasional alert on the load but nothing too worrying, until a few weeks ago when the disk usage reached ~65%. The load alerts became more frequent until the disk came to a sudden stop and I started to investigate. The load was skyrocketing at that point. And then I found this thread.

After reading it, I learned about the storage2.max-concurrent-requests setting. Is it the only thing that can be adjusted to try to mitigate the heavy writes?
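
(For context, this is the line I mean in the node's config.yaml — the path depends on the install, and the value shown is only a placeholder, not something I'm recommending:)

```bash
# Relevant line in the storagenode's config.yaml (for Docker it lives in the
# directory mounted as /app/config). 0 means unlimited; a small value caps how
# many simultaneous uploads the node accepts so the SMR drive isn't buried.
# The value 10 is only a placeholder.
#
#   storage2.max-concurrent-requests: 10
#
# Restart the node afterwards so the change takes effect, e.g. for Docker
# (assuming the container is named "storagenode"):
docker restart storagenode
```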

Yeah, it sucks that they are not upfront about it. At least these days you can find the info when you know to look for it.

There isn’t much you can do other than that setting. Other options require other HDDs. Make sure you use the noatime mount option if you use Linux though (see the sketch below).
Another thing you can do is move the DBs to another drive (preferably an SSD, but not required). You could also run a second node on another HDD, which will spread the write load.
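
For the noatime part, a rough sketch of what that looks like (the UUID, filesystem, and mount point are placeholders for whatever your node disk actually is):

```bash
# /etc/fstab entry for the node's data disk with noatime, so reads no longer
# trigger metadata writes that update access times. UUID, filesystem and
# mount point below are placeholders.
#
#   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/storj  ext4  defaults,noatime  0  2

# Apply it without a reboot by remounting:
sudo mount -o remount,noatime /mnt/storj

# Verify the option is active:
findmnt -no OPTIONS /mnt/storj
```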


Thanks for the hints. The noatime option alone has already cut the peak load by around 25%. For the DB files, I guess they are all the *.db, *.db-wal and *.db-shm files from the storage directory?

Have a look at these instructions: How to move DB's to SSD on Docker
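
To answer the question about the files: yes, the databases are the *.db files in the storage directory (the -wal and -shm files are transient SQLite journals that go with them). A very rough sketch of the first steps, with all paths and the container name as placeholders — follow the linked topic for the authoritative procedure:

```bash
# Stop the node first so the SQLite databases are no longer being written to.
# The container name "storagenode" and all paths are placeholders.
docker stop -t 300 storagenode

# The databases are the *.db files in the storage directory; the matching
# -wal/-shm files are transient write-ahead-log/shared-memory files.
ls -lh /mnt/storj/storage/*.db*

# Copy them to the SSD location you will point the node's database directory
# at, then reconfigure and restart the node as described in the linked guide.
mkdir -p /mnt/ssd/storagenode-dbs
cp -p /mnt/storj/storage/*.db /mnt/ssd/storagenode-dbs/
```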
