Why are USB drives not recommended?

What’s wrong with USB?

It depends on the hardware used, but there are known issues:

  • sometimes shared bandwidth;
  • a tendency to overheat and shut down under prolonged load;
  • instability without an external power supply;
  • very often, low-quality controllers;
  • often enough, lower quality of the external drives themselves.
2 Likes

But the advantages…
  • Amazing hotplug
  • Data disk failure never takes out the OS, because hotplug is expected
  • Variable bandwidth just by changing the cable
  • Controller is paired with the disk by the manufacturer, so it’s 100% compatible
  • Virtually unlimited number of disks
  • Simple, cheap cabling
  • Low noise, low power, low cost

Is this a joke post?

  • Amazing hotplug
    • All modern storage servers, and most if not all motherboards, support this at the SATA level, where it’s more robust than USB’s implementation
  • Data Disk Failure won’t take out OS
    • This won’t happen with a regular hard disk mounted as hot-pluggable either
  • Controller is paired with disk by manufacturer and so 100% compatible
    • … unless the user decides to shuck drives, put in other drives, or it’s an enclosure shipped with no disks. The USB controller just presents a stepping stone to SATA in the end, so this is yet another SPOF (single point of failure).
  • Virtually unlimited number of disks
    • USB’s 7-bit addressing allows at most 127 devices per controller, and hubs count against that too. This can be reached in less than two disk shelves. We have users, today, on this forum, who would run into this limit
  • Simple, cheap cabling
    • No? Each hard disk represents a wall wart and a USB connection. Above ~10 hard disks, each group of 10 also represents a USB hub with its own wall wart. This gets insanely messy after just a few disks. A disk shelf with ~60 disks can potentially mean a single power cable and a single uplink. That is simple and cheap cabling.
  • Low noise
    • Possibly
  • Low power
    • I can assure you that a backplane fed from a high-wattage, high-efficiency PSU will use less power than a tonne of individual low-efficiency wall warts.
  • Low cost
    • Interesting point. I’d love to see a cost breakdown of ~60 disks in a disk shelf vs ~60 standalone USB enclosures with their respective support structure.
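The cabling argument can be made concrete with a back-of-the-envelope component count. This is only a sketch: the 10-drives-per-hub grouping and 60-drive shelf are the illustrative assumptions from the post above, not measured figures.

```python
# Rough component count for N standalone USB drives vs. one disk shelf.
# Assumptions (illustrative only): each USB drive needs its own wall wart
# and data cable, and every group of 10 drives needs a powered hub with
# its own wall wart.

def usb_parts(n_drives, drives_per_hub=10):
    hubs = -(-n_drives // drives_per_hub)  # ceiling division
    return {
        "data_cables": n_drives + hubs,  # drive-to-hub links plus hub uplinks
        "wall_warts": n_drives + hubs,   # one per drive, one per hub
        "hubs": hubs,
    }

def shelf_parts(n_drives, drives_per_shelf=60):
    shelves = -(-n_drives // drives_per_shelf)
    return {"power_cables": shelves, "uplinks": shelves}

print(usb_parts(60))   # 66 cables, 66 wall warts, 6 hubs
print(shelf_parts(60)) # 1 power cable, 1 uplink
```

For 60 drives the USB route needs dozens of bricks and cables, while the shelf collapses to two connections, which is the point being made.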
6 Likes

I am using all kinds of hardware, from external USB to server-grade SAS, and I can’t see a difference tbh. And with hashstore, Storj becomes even less demanding on the disk system, so just use what you have.

However, I agree that quality can be a problem with cheap USB stuff.

I haven’t been lucky with the reliability of external USB HDDs + hubs… but I’m just one dude, so am not going to plead that case.

I will say that learning about SAS has worked out very well. Take a $20 HBA, a $20 expander, and some cheap cables… and not only will it handle all the HDDs you can stuff in a case, you can start daisy-chaining into the hundreds of devices. It just works.

It’s not that USB is bad. It’s that for storage needs SAS is better.

Without a context this sounds like USB to me… :sweat_smile:

Hubs also take addresses, so it’s ~110 theoretically per controller if each hub is a true 1→7 expander. But you’ll be limited by bandwidth and IOPS first. You’d saturate the bandwidth of a USB 3.2 Gen 2x2 (20 Gbps) host with 12 modern HDDs (~200 MB/s each), and at >50 devices you’d get some degree of USB-level scheduling contention, with IOPS dropping.
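Both numbers in this post can be sanity-checked with a quick sketch. The 127-address budget follows from USB’s 7-bit addressing; the 200 MB/s per-drive figure and the true 1→7 hubs are the assumptions stated above.

```python
# Check the two limits: the USB address budget when every hub is a true
# 1->7 expander, and the drive count that saturates a USB 3.2 Gen 2x2
# (20 Gbps) host at ~200 MB/s per modern HDD.

USB_ADDRESSES = 127  # 7-bit addressing, address 0 reserved

def max_drives(addresses=USB_ADDRESSES, ports_per_hub=7):
    # Grow the drive count until drives + the hubs needed to attach them
    # no longer fit in the address space (each hub takes one address).
    drives = 0
    while True:
        hubs = -(-(drives + 1) // ports_per_hub)  # ceil(drives+1 / 7)
        if (drives + 1) + hubs > addresses:
            return drives
        drives += 1

drives_at_saturation = (20_000 // 8) // 200  # 20 Gbps -> 2500 MB/s -> drives

print(max_drives())          # 111, consistent with the "~110" figure
print(drives_at_saturation)  # 12
```

So the theoretical ceiling is 111 drives per controller under these assumptions, and raw bandwidth runs out far earlier, at a dozen busy disks.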

And regardless of the number of controllers, CPU usage will be pretty high unless you have a very high-quality host chip, which will likely cost as much as, if not more than, a decent SAS controller. You can saturate a single CPU core with interrupts with around 50 drives doing work simultaneously, and the cheaper controllers can’t spread that load over many cores.

Here I was quite surprised when I checked. It turns out the power supplies shipped with at least WD Elements drives are >95% efficient at the wattage the device draws. A pretty good result. They basically benefit from knowing exactly what wattage will be used and can optimize for that.

Still, that’s 50 dedicated power bricks instead of zero.

1 Like

AH-HA! So you agree, USB is the best! :smiley:

Seriously though, do you consider 100+ disks a limitation for a storagenode?

I’m thinking more like 10 is more than I’ll ever need. Most PCs have a few USB ports on the front and back. Each controller will usually manage 20-30 drives, but I would add more USB controllers if, for instance, I were doing Chia instead of storagenode.

1 Like

Good luck :partying_face:
Let us know when it breaks, and which of the obvious reasons it was:
Cable connections, bandwidth limitations, bandwidth pre-allocation on the bus, IDs changing, speed downgrades, disk resets and hung IO, or just plain unexplained instability (not a complete list at all).

Also..

Throughput limitations and bus instability will make this unusable when a few of them try to max out IOPS at the same time.

I’ve been storjing on USB for something like 5 years. From memory, the things that broke were an NVMe node (weird short reads) and a 4TB 2.5" disk (suspected shingled).

Now, maybe I had a USB hub break; I can’t remember if that was Storj or Chia, but you know, you just swap it out for any other USB hub.

The thing that did lose my storagenodes was losing internet. It was connected by a USB dongle, but it was the ADSL side that broke.

USB is slow, and external HDDs need additional fans, while inside a PC case you already have everything needed. Also, the power supply in a PC is much better and more reliable than the ones used for USB cases. I agree there are USB cases that have it all in the best shape, real luxury for an HDD, but usually no one uses them because they are expensive.

2 Likes

I used USB for the first 6 months or so of Storj as well. I tried everything from PCIe to onboard controllers, multiple mainboards, and different OSes, over external USB disks from both poor and quality vendors, in cheap and expensive external chassis.

Some were less prone to causing issues than others, but in general all had bad performance as soon as more than one disk on the same port was pulling high IO. Most caused intermittent connectivity and timeouts, and all developed bad cable or connector issues over time. I even tried different filesystems: ext4, XFS, MD, ZFS, NTFS, etc. Still no permanent luck.

I ended up going back to what we know works: LSI JBOD controllers and a good PSU with quality power cables. Haven’t seen an issue since, and performance is not limited. For my combined workload (not just Storj), which requires high volumes of small reads/writes, I’ve opted for ZFS with high ARC allocations and small blocks on a special vdev where appropriate.
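For anyone curious what that ZFS layout looks like in practice, here is a minimal sketch. The pool name, device names, and the 64K threshold are made-up examples; tune them to your own hardware and workload.

```shell
# Hypothetical sketch: RAIDZ2 data vdev plus a mirrored "special" vdev
# on NVMe for metadata and small blocks. Device names are placeholders.
zpool create tank raidz2 sda sdb sdc sdd sde sdf \
    special mirror nvme0n1 nvme1n1

# Steer records of 64K and smaller onto the special vdev
# (must be smaller than the dataset's recordsize to take effect).
zfs set special_small_blocks=64K tank

# Allow a large ARC (Linux; value in bytes, here ~16 GiB).
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
```

The idea is that the flood of small metadata and small-file IO lands on flash, while bulk data stays on the spinning RAIDZ2 vdev.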

But, I guess it’s down to expectations in the end?

6 Likes

Yay, some fun bantering! :heart:

The latter doesn’t count: a drive put in a USB enclosure would still be a broken drive. And what’s an “NVMe node”?

I have a broken hub (still using it as a charger, but out of 10 ports only two now work somewhat reliably), a faulty USB cable, and a cable that was randomly downgrading from USB3 to USB2. And a bunch of broken power supplies.

That said, I did find the aforementioned WD Elements components decently reliable, so as long as I’m using only WD-provided cables connected directly to the motherboard, it’s not bad.

On the other hand, I’ve never had a SATA or SAS component failure… except for a 15-year-old, pre-SSD-era SAS controller that was handling SSD traffic. Way past its intended lifetime and bandwidth, and even then it only gave up after ~3 years of operation in that setup.

2 Likes

Everything labeled “external” is more expensive than “internal”, and, regarding drives, there are no enterprise external drives to my knowledge, so they are not made for 24/7 workloads. They are designed for moving files and storing backups. If you go with an external case plus an HDD of your choice, it’s hard to impossible to find a case that will last 5 to 10 years with reliable performance.
But why are you planning a farm of nodes in one location? That’s the main problem.

Not always true. In the past I bought a few WD MyBook external drives for less than the equivalent internal drives. On top of that, I sold the enclosures on eBay.

Hi all, this is my first post, I’m glad to be here :waving_hand:

I started as an SNO about a month ago and just wanted to share my experience with external HDDs over USB. I’ve only been here for a short period, but I’ve also been running the same setup for Chia farming for 3 years without issues.

My setup is the following:

  • Mini PC with two USB 3.0 ports
  • 2x WD My Book Duo cases with 20TB each (10TB per disk) - 40TB total
  • Each WD My Book Duo case has its own power supply
  • No USB hubs involved, one USB port per My Book Duo case
  • All of them are placed on a cooling stand with a single fan

Not all external HDDs over USB are bad by default. Cheap products give them a bad name, but with the original cables and no additional complexity they can work quite well.

2 Likes

Hello @shunta,
Welcome to the forum!

Some external cases like yours are also often positioned by the vendor as a small NAS. In that case they use higher-quality components, unlike just any other external USB drive, which is usually not intended for 24/7 use.
However, due to the higher quality, the price is higher too, and the full setup may become more expensive than an internal drive, especially at scale.
But sometimes you have no choice, like with a single-board computer such as a Raspberry Pi. :person_shrugging:

1 Like

Minor nitpick: hubs usually aren’t 1→7. A “7-port hub” is typically two 1→4 hubs, with one plugged into the other, so it actually counts double against the address limit.

I know. There are some rare true 1→7 hubs, like this chip, so I went with them as the more optimistic case.
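The nitpick is easy to quantify. Re-running the address-budget calculation with the pessimistic assumption that every "7-port hub" is two cascaded chips costing two addresses (127 usable addresses, 7 drive ports per hub, both figures from the posts above):

```python
# Max drives per USB controller under two hub models: a true 1->7 hub
# costing one address, vs. a "7-port" hub built from two cascaded 1->4
# chips costing two addresses.

def max_drives(addresses=127, ports_per_hub=7, addrs_per_hub=1):
    # Grow the drive count until drives plus hub address overhead
    # no longer fit in the 7-bit address space.
    drives = 0
    while True:
        hubs = -(-(drives + 1) // ports_per_hub)  # ceiling division
        if (drives + 1) + hubs * addrs_per_hub > addresses:
            return drives
        drives += 1

print(max_drives(addrs_per_hub=1))  # 111: optimistic, true 1->7 hubs
print(max_drives(addrs_per_hub=2))  # 98: cascaded two-chip "7-port" hubs
```

So the double address cost knocks the theoretical ceiling from 111 drives down to 98, though as noted earlier, bandwidth and IOPS give out long before either limit matters.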