PSA: Beware of HDD manufacturers submarining SMR technology into HDDs without any public mention

There’s no way this is an exhaustive list. I have a 3.5" 6TB WD Black (WD60EDAZ-11BMZB0) that claims to support TRIM (there’s no reason a CMR drive would), and it does stall for 5-10 seconds when writing continuously.
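
Since TRIM support on a spinning disk is such a strong tell, here is a minimal detection sketch (Linux-only, reading sysfs; the heuristic is mine, not an official vendor test):

```python
import pathlib

# Heuristic: a rotational drive that advertises discard (TRIM) support
# is very likely drive-managed SMR, since CMR drives have no use for TRIM.
for queue in pathlib.Path("/sys/block").glob("sd*/queue"):
    rotational = (queue / "rotational").read_text().strip() == "1"
    # discard_max_bytes > 0 means the kernel accepts discard/TRIM
    # commands for this device
    supports_trim = int((queue / "discard_max_bytes").read_text()) > 0
    if rotational and supports_trim:
        print(f"{queue.parent.name}: rotational + TRIM -> likely SMR")
```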

Yep I know :wink:

I like playing around with stuff. The HDDs in the second node are pooled via Windows Storage Spaces; that might be the wonkier part compared to using an SMR drive, but we’ll see…

From what I understand about Windows Storage Spaces, it’s one of the few pieces of software on which I could see an SMR HDD working just fine… Storage Spaces is supposed to load balance based on disk efficiency…
But I haven’t tried it out… Storage Spaces sounds kinda amazing though… don’t get me wrong, I like having ZFS, but Microsoft does make some very user-friendly software…

Had 8 hours of downtime today: a long, continuous fight to upgrade and make my server work again.
2 or 3 hours of hardware work and 5 hours puzzling over why my damn OS was just offline. I must have gone through every configuration file in nano so many times, only to find out that I had switched to a static IP since the last reboot 14 days ago, which made it want a netmask in the interfaces file.
Such a rookie mistake lol. I do find it a bit odd that when a configuration file contains ip = 0.0.0.0 and you change it to an actual IP address, you then need to add more lines to the config file…
Why wouldn’t people just make it simple and have a netmask of 0.0.0.0 mean auto-configuration or whatever… it doesn’t have to be anything fancy… hell, it could be commented out with a little description.
F’ing Linux
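
For reference, a minimal static stanza for a Debian-style /etc/network/interfaces (the interface name and addresses below are placeholder examples, not the actual config); switching the method from dhcp to static is exactly what makes the extra address/netmask lines mandatory:

```
# /etc/network/interfaces -- example static configuration
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1

# the DHCP equivalent needs none of the address lines:
# iface eth0 inet dhcp
```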

Actually, I don’t use any SMR drives with MS Storage Spaces; the SMR drive is in a different PC.

Yeah, I got that… still, it made me think that SMR might actually work well for Storj when pooled on Windows Storage Spaces, using that load-balancing feature where it can basically do RAID over unevenly sized drives and/or allocate data based on disk speeds…
Or at least it sounded quite brilliant when it was explained to me… make no mistake though… don’t expect an SMR drive to do much aside from sequential, half-duplex workloads.

https://toshiba.semicon-storage.com/ap-en/company/news/news-topics/2020/04/storage-20200428-1.html

Thanks for the links posted here. I’ve added the lists that were published by WD and Toshiba to the top post. If anyone finds official word from Seagate on which models use SMR, please let me know. So far, they’re just taking digs at WD, instead of being transparent about their own use.

WD has now listed which models use SMR on their product pages as well (e.g. for the Red line).

Have a look at the top post again :wink:

I ordered a Seagate Barracuda 6TB before discovering this thread; it should arrive in the second half of May (due to COVID-19, Amazon is delaying shipping of non-essential goods).

I will let you know how it performs when it arrives.

Cancel the order and get a CMR 8TB if you can. I wouldn’t support the actions of these companies by buying these models if you can avoid it, especially when the 8TB model probably doesn’t cost much more.

Thanks, I just canceled my order.

The problem here is not merely choosing a model within budget or size limits, but knowing for certain whether a model is CMR or SMR, since vendors have started removing this information from datasheets.

I’m now trying to choose a PMR/CMR drive and my choice is currently between WD Red 8TB and Seagate Ironwolf 8TB.

Regarding WD, I was easily able to find a formal statement on their website (WD Red Pro NAS Internal Hard Drive HDD 3.5" | Western Digital; scroll down to the specifications).

But on Seagate’s side, I wasn’t able to find anything equivalent on their website. The only information, linked in the first post of this thread, is the following:

Should I trust that statement? It really seems like a marketing message. If they stated that, why not put this information on their website? Maybe to get more people buying SMR drives?

As @KernelPanick said, we shouldn’t support companies that aren’t acting in a transparent way, so I’m tempted to buy a WD, even if it’s a little more expensive than the IronWolf and (it seems) has slightly lower performance (RPM and throughput) and is a little louder.

Moreover, I’ve noticed that Seagate is using the bel as the measure for acoustic level, which is 10 times the decibel, the more widely used unit. So their number is 1/10 of the decibel (dB) figure found on WD datasheets (e.g. a drive rated at 2.8 bels is as loud as one rated at 28 dB). Is this another way to disorient the customer?

What do you think?

You can trust the statement from Seagate. They’d be in quite a bit of trouble if they lied about that. However, the transparency argument holds: they have yet to list all devices with SMR, while both competitors have.

As for noise, the IronWolf tends to be a bit noisier and faster, and generates a little more heat than the WD Red; it probably draws a bit more power as well. The differences are minimal though. Both are a solid choice.

It’s completely fair to not want to support Seagate at this point. I don’t like how they responded to the SMR thing at all.

I’m wondering how the Node software handles a disk that has a hard time responding: does it start rejecting incoming pieces while the disk is busy (which means rejecting uploads from clients: “sorry, can’t store this piece right now, come back later…”), or does it keep accepting requests up to the point where the host OS’s I/O cache is full, at which point everything starts crumbling down and the Node may crash, or skip pieces it should have stored but didn’t… leading to failed audits and eventual disqualification?

In short: are nodes resilient when the storage hard disk gets very slow?

It will just have canceled uploads. The uplink selects 110 nodes and uploads pieces to them in parallel; once 80 finish, all remaining transfers are canceled.
Your slow node will simply store almost no pieces until it is ready to accept pieces again.
You can read more here:
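
To see why a stalling drive ends up storing almost nothing, here is a toy simulation of that long-tail cancellation (the 110/80 numbers come from the description above; the latency distributions are invented purely for illustration):

```python
import random

NODES_SELECTED = 110   # nodes the uplink uploads to in parallel
PIECES_NEEDED = 80     # transfers kept; the rest are canceled
TRIALS = 10_000

slow_node_wins = 0
for _ in range(TRIALS):
    # 109 healthy nodes vs. one node stalling on writes (e.g. a
    # saturated SMR drive); times are arbitrary illustrative units
    latencies = [random.uniform(0.1, 1.0) for _ in range(NODES_SELECTED - 1)]
    slow = random.uniform(2.0, 8.0)
    cutoff = sorted(latencies + [slow])[PIECES_NEEDED - 1]
    if slow <= cutoff:
        slow_node_wins += 1

print(f"slow node kept the piece in {slow_node_wins / TRIALS:.1%} of uploads")
```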

Serve The Home did some extra testing on the SMR WD Reds specifically in the context of RAID rebuilds.

That clears it up, thanks for the link. Thank God I got an “old”-generation (CMR) WD Red :smiley:
But, from what I understood, this performance degradation only happens with ZFS filesystems, right?

They approached it a little weirdly by hammering on the specifics of ZFS, but the slowdown results from constant load saturating the entire CMR cache, which will impact any kind of rebuild. I would expect similar results with RAID5 or RAID6, or actually even just with peak loads from Storj. The point is that these disks can be saturated by sustained high loads, and RAID rebuilds are just one common example of that.
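
A back-of-the-envelope model shows the mechanism: drive-managed SMR disks stage incoming writes in a CMR cache region and destage them to the shingled zones in the background, so once sustained writes outrun the destage rate and fill the cache, host throughput collapses to the destage speed. All sizes and rates below are assumptions for illustration, not measured values:

```python
CACHE_MB = 40 * 1024   # assumed CMR cache region: 40 GB
FAST_MBPS = 180        # host write speed while the cache has room
DESTAGE_MBPS = 25      # assumed sustained rate into the shingled zones

cache_used = 0.0
for minute in range(10):
    if cache_used < CACHE_MB:
        rate = FAST_MBPS
        # the cache fills at the difference between inflow and destaging
        cache_used = min(CACHE_MB, cache_used + (FAST_MBPS - DESTAGE_MBPS) * 60)
    else:
        rate = DESTAGE_MBPS  # cache full: host sees shingled-zone speed
    print(f"minute {minute}: effective write ~ {rate} MB/s")
```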

I think they just wanted to make it clear that ZFS is all they tested.

2 Likes