SATA vs SAS controllers and drives

Are there any real benefits from using SAS drives, instead of SATA drives, for storagenodes?
Is there a better way to use a SAS controller? Like having 2x SATA drives on it, instead of 1x SAS drive?
Any other hardware/software tips for running SAS vs SATA?

For the drives, no, there's no real difference between SAS and SATA. In the old days SAS drives used to be more robust or faster, but they're basically identical now, except that some SAS drives support dual-port interfaces for redundancy. Ironically, used SAS drives are often a bit cheaper than SATA.

For the controllers, a SAS controller can handle more. Many common controllers drive 8x or 16x drives, and they can plug into SAS expanders or SAS backplanes to handle hundreds of drives. (But if something says it will only handle 8x SAS drives, it will also only handle 8x SATA; no magical doubling.)

Meanwhile, it's a struggle to get more than six SATA connections at once.

For the controllers, used enterprise SAS controllers (LSI, etc.) are often very robust and reliable, if a bit hot. SATA expansion cards are often cheap and janky, and you may need to be more careful about OS support.

SAS cabling is more complicated because there are different connector types for different generations, and internal vs. external variants. The breakout cables can terminate in either SATA or SFF-8482 (SAS) ends, and the cables can be a little expensive. SATA data cables, by contrast, are cheap and ubiquitous.


Yep, pretty much that. If you want lots of drives, an 82885T expander costs $25 and lets you turn a single port on your HBA into 20 or 24 drives (depending on firmware), plus external ports to daisy-chain even more expanders. SAS is the only manageable way to get lots (20+) of drives on one host. Also, if you have a server chassis, you can often get a backplane with an expander built in, which reduces cabling mess (and cable cost) as well.
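To put rough numbers on that scaling, here's a back-of-envelope sketch. The connector and bay counts below are illustrative assumptions, not the specs of any particular card:

```shell
# Back-of-envelope: drives reachable from one 8-lane SAS HBA with two
# 4-lane connectors, one 24-bay expander/backplane on each connector.
# All counts are illustrative assumptions.
hba_connectors=2
bays_per_expander=24
echo "Direct-attach max: 8 drives"
echo "Via expanders: $((hba_connectors * bays_per_expander)) drives"
```

And that's before daisy-chaining a second layer of expanders off the first.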

OK, so you daisy-chain expanders and connect many drives, but they're all connected to the motherboard through a single interface (I imagine it's PCIe). Don't those drives all share the same interface, which limits their IOPS and throughput? Or is it some special interface?
And the same for the base controller that all the others are linked to?


The controller uses PCIe.
The drives share bandwidth if you daisy-chain. Typically a SAS2 backplane supports up to 6 Gb/s per drive; you can put in a 12 Gb/s SAS drive and it might run at 6 Gb/s or at 12 Gb/s, I'm really unsure on this one. And a single SAS2 cable (SFF-8087/8088) can do 24 Gb/s.

Newer backplanes are SAS3, which is 12 Gb/s per drive and 48 Gb/s per cable.
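Those per-cable figures follow from the lane math. A quick sketch, assuming the common 4-lane SFF-8087/8088 cables (some cables carry 8 lanes):

```shell
# Per-cable bandwidth = lanes * per-lane speed.
# Assumes 4-lane cables, the common case for SFF-8087/8088.
lanes=4
sas2_gbps=6    # Gb/s per SAS2 lane
sas3_gbps=12   # Gb/s per SAS3 lane
echo "SAS2 cable: $((lanes * sas2_gbps)) Gb/s"
echo "SAS3 cable: $((lanes * sas3_gbps)) Gb/s"
```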

I'm using a SAS2 external port connected to 2x 24-bay NetApp shelves and I don't see any problems with it. I feel like I could add another 24-bay shelf and not lose any performance. It's not like I'm hammering the drives 24/7, and some drives will be idle while others are busy.

I was told there is negligible difference between NVMe and SATA SSD when it comes to boot times. Is there any truth to that?

Depends on the Windows version: older Windows is more lightweight, so it goes faster. On Windows 11 there's no real difference in load time between SATA and NVMe. Linux, I don't know. But I had to move the databases from SATA SSD to NVMe on some of my servers because "database locked" errors happened. It may not have been the drive's fault, though, but the SATA controller, because the nodes are on the same controller.


NVMe SSDs and SATA SSDs are in two different leagues!

To echo some other replies: SAS HDDs have no advantages over SATA for use with Storj nodes. However, SAS HBAs and expanders are massively more scalable when it comes to adding many HDDs to a system: you can get into hundreds-of-HDDs-per-HBA territory for cheap.

Like using one PCIe slot for an HBA ($25) and adding an expander (another $25; it needs power but not an actual PCIe slot) gets you to 28 internal and 8 external HDDs for the price of the cables to attach them. And you can daisy-chain SAS connections to attach more!

Windows 10 boots up in about 45 s to 1 min on a SATA SSD, or even 1.5 min with all the AV, VGA utilities, etc., depending on what you run at startup.
On NVMe it boots in 15 s to the GUI and under 1 min with all the software. I believe my Win 11 machine with 16 GB RAM on a Ryzen 3600 loads everything in 45 s. I'll have to check. Those are numbers remembered from the Win 10 days when I installed my first NVMe. Anyway, it's fast, and RAM matters too.
I don't know how to clock my headless Ubuntu Server (32 GB RAM, on NVMe, running 2 nodes in Docker).
Any suggestions?
I guess one way is trying to access the dashboard from another computer until one pops up.
Maybe logging the time when the OS is fully loaded and comparing it with the time of the power-button push, but I don't know how to script this.


I would note down the time you pushed the power button, then get the uptime using the top command. Deduct the uptime from the current time to get the moment the system finished booting. That moment minus the time you pushed the power button = the time it took to boot.

Maybe someone else has a better alternative. (Windows guy here)

It's difficult to hit those limits on spinning rust, especially with random workloads. A single SAS3 link gives 12 Gb/s, and one cable is generally 4 links (sometimes 8).

But look at it this way - your internet connection is going to be a lot slower than the SAS link from the HBA to the expander (let alone the PCIe link to the HBA), so even if it does bottleneck based on the theoretical maximum sequential throughput of the drives alone, it doesn’t matter because that data has nowhere to go.
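As a worked example of why sequential saturation is hard to reach: here's the rough math for one SAS2 x4 link. Every figure is an assumption for illustration (24 Gb/s raw, 8b/10b encoding overhead, ~250 MB/s sequential per modern HDD); random workloads deliver far less per drive, so in practice many more drives fit before the link bottlenecks.

```shell
# Rough saturation math for one SAS2 x4 link. Assumed figures:
# 24 Gb/s raw, 8b/10b encoding leaves 80% usable, ~250 MB/s per HDD sequential.
raw_gbps=24
usable_mbps=$((raw_gbps * 1000 * 8 / 10 / 8))   # Gb/s -> MB/s after encoding overhead
drive_mbps=250
echo "Usable link bandwidth: ${usable_mbps} MB/s"
echo "HDDs to saturate it (sequential): $((usable_mbps / drive_mbps))"
```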

Not everything is measured by internet speed: filewalkers, GC, etc. generate a lot of IOPS and traffic, and the data itself is only part of it. Also, a lot of RAID cards have a minimum stripe size: you may need 1 KB but the minimum stripe is 32 KB.

Linux guy here

uptime can be taken from the “uptime” command:

 19:14:19 up 4 days, 20:06,  1 user,  load average: 0.99, 0.78, 0.51

system boot time can be taken from “systemd-analyze time”:

systemd-analyze time
Startup finished in 7.615s (firmware) + 1.533s (loader) + 17.953s (kernel) + 10.922s (userspace) = 38.024s reached after 10.862s in userspace.
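To script the "power button to fully booted" measurement asked about above, one sketch, assuming a Linux box with /proc/uptime (the power-button timestamp itself still has to be noted by hand, since the OS can't see it):

```shell
# Compute when the kernel started: current epoch time minus /proc/uptime.
# Subtract your hand-noted power-button timestamp from this to get the
# firmware + bootloader portion; systemd-analyze covers the rest.
boot_epoch=$(awk -v now="$(date +%s)" '{printf "%d", now - $1}' /proc/uptime)
date -d "@${boot_epoch}"   # human-readable boot moment (GNU date)
```

On systems with procps, `uptime -s` prints the same boot moment directly.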