Change from SMR to SSD drive

Hi, I recently found out that my node is running on an HDR SMR disk since October 2019. I decided to move it to an SSD. Is it enough to move and start it and the network will verify the performance of my node again or will it be better to create a new node with a new identity?

I wouldn’t buy an SSD to move your node to; SSDs have a lot of limitations as well. I run SMR drives and they do just fine.

A lot of people here on the forum say to avoid SMR disks. Even my own tests show that they have very low performance.

You should first check that your SMR disk is actually causing issues. An SSD is kind of the other extreme and likely a waste of money. You’d be better off spending that money on a much larger HDD.
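One way to do that check is to watch the drive’s busy percentage under real node load. A minimal sketch using Linux’s `/proc/diskstats` (the device name `sda` is a placeholder; substitute your SMR drive):

```python
# Minimal drive-saturation check via /proc/diskstats (Linux only).
# NOTE: "sda" is a placeholder device name - substitute your SMR drive.
import time

def io_ticks(dev, text):
    """Return milliseconds spent doing I/O for `dev` from /proc/diskstats text."""
    for line in text.splitlines():
        parts = line.split()
        if len(parts) > 12 and parts[2] == dev:
            return int(parts[12])  # field 10 after the name: time spent doing I/Os (ms)
    raise ValueError(f"device {dev!r} not found")

def busy_percent(dev, interval=5.0):
    """Sample twice; a result near 100% means the drive is saturated."""
    with open("/proc/diskstats") as f:
        t0 = io_ticks(dev, f.read())
    time.sleep(interval)
    with open("/proc/diskstats") as f:
        t1 = io_ticks(dev, f.read())
    return 100.0 * (t1 - t0) / (interval * 1000.0)

if __name__ == "__main__":
    try:
        print(f"sda busy: {busy_percent('sda', interval=1.0):.1f}%")
    except (OSError, ValueError):
        print("no 'sda' device here - run this on the node host")
```

`iostat -x 1` from the sysstat package reports the same figure as `%util`; sustained values near 100% mean the drive can’t keep up with the load.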

Especially considering that adding another node on another CMR HDD would cut the load on the SMR HDD in half, which may just be enough for that one to survive fine as well. Don’t go overboard on spending.

Yes, but that doesn’t mean buy an SSD. It means buy CMR/PMR HDDs.


What is an HDR disk?

D is next to R on the keyboard, that’s all :smiley:

I would be interested in seeing how a node would perform on an SSD.
I sure have trouble getting enough disk IO to get beyond 85% upload success rates, and I can immediately see them drop with added array activity, so I have to assume it’s only a matter of getting good enough random read/write I/O.

Of course there will be a point of diminishing returns. I dunno if it’s a good or a bad idea to run a node on only an SSD; it’s by far not the cheapest option. But an SSD might do more than double the work of an HDD in terms of storagenode success rates. Then again, there’s the whole debate about how much success rates really matter.

In the end, capacity and not speed is what we might end up being paid for.
Of course you can always move a node off an SSD later, since in 99.9% of cases it will be smaller than even a medium-sized HDD today.

It would be a fun experiment if I had a spare SSD that was large enough.
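On moving a node off an SSD: the usual advice is repeated `rsync -a` passes, with the node stopped for the final one. A rough Python equivalent of that idea (the paths are hypothetical examples, not real mounts):

```python
# Rough sketch of copying node data to another drive before switching over.
# Usual advice is repeated `rsync -a` passes; this mimics that in Python.
# Paths are hypothetical examples - stop the node before the final pass.
import shutil

def copy_node(src, dst):
    """Copy the node's data; safe to re-run to pick up new pieces (like rsync -a)."""
    shutil.copytree(src, dst, dirs_exist_ok=True)

if __name__ == "__main__":
    try:
        copy_node("/mnt/ssd/storagenode", "/mnt/hdd/storagenode")
    except OSError:
        print("example paths don't exist here - adjust to your mounts")
```

Running it more than once only copies what changed underneath, which is why the node can stay online for all but the last pass.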

SMR is only a real major issue if you are running it in RAID.

Some RAID controllers (software and hardware based) treat SMR’s slow write speeds as a fault in the drive (unless they are SMR-aware) and mark it “bad”, causing the controller either to fail the array or to start rebuilding onto a spare drive. So if you have more than one SMR drive in an array, you could end up with multiple “failed” SMR drives and lose the entire array.

If you are using it by itself, then there is no real issue apart from the write times.

My SMR drive has been under continuous 100% load since I installed it.

Also, the node it is in does seem to get less data than before (when it had a PMR drive).

Is yours constantly at full load? Strange… There are some tests running, but not for everyone…?
I have a node in its 14th month and 0% load…
How do you do that?

I think this is an exaggeration, because mine hasn’t been at 100% since I installed it.

It has been 100% EVERY time I manually checked it since I installed the SMR drive (every 2nd day maybe).

Since there has been no ingress traffic to this node since the 4th of May, there is no drive load anymore either.

I have a node with a 4TB SMR hard drive (3.5TB used) on an RPi 4 that hasn’t ever been at 100%. Maybe your issue is that you need to vacuum the databases.
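(The databases in question are the node’s SQLite `.db` files. A minimal sketch of vacuuming them, assuming they sit in the node’s storage directory; stop the storagenode first, and note the path below is a hypothetical example:)

```python
# Sketch: compact the node's SQLite databases with VACUUM.
# Stop the storagenode first; the directory below is a hypothetical example.
import glob
import sqlite3

def vacuum_all(db_dir):
    """Run VACUUM on every .db file in db_dir, reclaiming free pages."""
    for path in sorted(glob.glob(f"{db_dir}/*.db")):
        con = sqlite3.connect(path)
        con.execute("VACUUM")  # rewrites the file without the dead space
        con.close()
        print("vacuumed", path)

if __name__ == "__main__":
    vacuum_all("/mnt/storagenode/storage")  # adjust to your storage directory
```

The same can be done per file with the sqlite3 CLI, e.g. `sqlite3 bandwidth.db "VACUUM;"`.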

What do you mean by that?

Not all SMR drives are made equal. It’s very well possible some models don’t get saturated, but others do. That doesn’t mean those models will also do well if the load gets higher. I’d still try to avoid purchasing them for Storj.

SMR drives should at least work well enough once the node is full, or if you have more nodes behind the same IP so the nodes share the load (50% each).
Using ZFS with caches might help too, but that is a bit more advanced.

Since mine is an enterprise-class drive, I’m not too worried about it being utilized all the time.

Also, I don’t care about probably exceeding some warranty-related amount of data written per day, so I’ll just see. I just hope the drop in inbound traffic is not related to the drive, but it did start some time after swapping out the HDD.

If the drop was related to your drive, you should see a higher canceled rate because the drive can’t keep up and therefore loses more races. At least that’s my assumption.

My node is getting 15 Mbit/s at the moment (it was higher the last few days) and a success rate of 40%, which seems to be normal for my setup and geographic location (that’s just for reference).
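The “losing races” point can be made concrete: a transfer only counts as successful if this node finishes among the fastest, so the success rate is simply wins over total attempts. A toy sketch with made-up counts:

```python
# Toy success-rate arithmetic (made-up numbers, not from a real node).
def success_rate(successful, canceled, failed):
    """Share of transfers this node actually won the race for."""
    total = successful + canceled + failed
    return 100.0 * successful / total if total else 0.0

# A slow drive loses more races: the canceled count rises and the rate falls.
print(success_rate(4000, 5800, 200))  # -> 40.0, like the 40% mentioned here
print(success_rate(8500, 1400, 100))  # -> 85.0, a faster setup wins more
```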

Which enterprise hard drive is SMR though?

Seagate Exos E series
