All-flash nodes viable in the next 5 years?

If I’m doing the rough math correctly: these guys could build a 2PB setup in about 4U, only pulling 1000W? And since it’s flash, they’d have the IO to support several nodes per drive?
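A quick sanity check of that rough math. The drive count, per-drive capacity, and power draw here are my own assumptions (no datasheet), just to see if the claim is plausible:

```python
# Back-of-the-envelope check of the "2PB in 4U at ~1000W" claim.
# Assumed: 64 drive bays in 4U, 32TB per SSD, ~15W active draw per drive.
DRIVES = 64
TB_PER_DRIVE = 32
WATTS_PER_DRIVE = 15

total_pb = DRIVES * TB_PER_DRIVE / 1000   # decimal TB -> PB
total_watts = DRIVES * WATTS_PER_DRIVE

print(f"{total_pb:.2f} PB raw, ~{total_watts} W for the drives alone")
# -> about 2 PB and ~960 W, so the claim holds (host power not included)
```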

Of course that looks to be about $200k+ in disks today… but in a few years, once enterprises start to swap them out and they hit eBay/ServerPartDeals… maybe we’ll be running them too?

I can’t wait! :heart_eyes:

1 Like

They are really big. Just yesterday I was surprised to find out that 32TB SSDs exist, and now these! Man! I only knew of 8TB models. But the question is: how many rewrites can they handle? How many bits per cell are there?
Of course they are not for Storj as new drives because of ROI, but yeah, maybe when they hit eBay as used drives the price will be somewhat appealing. But it’s very important how much life remains in them.



Ha! So their endurance rating is essentially “yes” :slight_smile:

1 Like

<1 DWPD is actually fairly low endurance for an enterprise SSD.
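For anyone following along, DWPD (Drive Writes Per Day) and TBW (Terabytes Written) are the same endurance rating in different units. A rough conversion, assuming the typical 5-year enterprise warranty window and using the 8TB/2880 TBW figures mentioned in this thread as an example:

```python
# Convert a TBW endurance rating to DWPD (Drive Writes Per Day),
# assuming a 5-year warranty period (typical for enterprise SSDs).
def tbw_to_dwpd(tbw, capacity_tb, warranty_years=5):
    return tbw / (capacity_tb * 365 * warranty_years)

# Example: an 8 TB drive rated for 2880 TBW
print(f"{tbw_to_dwpd(2880, 8):.2f} DWPD")  # -> 0.20 DWPD, well under 1
```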

Unless this kind of SSD gets to be cheaper than hard drives, no real point. You don’t need performance for storj, nor are these more power efficient than hard drives. On top of that, you have to factor in the PCIe lanes needed to run a ton of NVMe - you either need a full-blown server platform or a bunch of PCIe switches, both of which increase cost and power.

3 Likes

I did a rough estimation: with an 8TB Samsung (2880 TBW) at 450€ NEW, you have a chance to reach ROI after ~5 years (for the drive alone, roughly 37 months after it’s full).
That comes with a lot of IFs, but the writes are not the problem here, not even if you also put the DB, orders, and (redirected) logs on it.
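A sketch of how that ~37-month figure can come out. The payout rate is my assumption (roughly $1.50 per TB-month for stored data, egress income ignored, and EUR treated as approximately equal to USD for back-of-envelope purposes):

```python
# Rough ROI estimate for a 450 EUR, 8 TB drive, once it is full.
# Assumed payout: ~1.50 per TB-month stored; egress ignored.
price = 450
capacity_tb = 8
rate_per_tb_month = 1.5

income_per_month = capacity_tb * rate_per_tb_month  # income when full
months_to_roi = price / income_per_month
print(f"~{months_to_roi:.0f} months to ROI after the drive is full")
# -> close to the ~37 month figure above
```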

Storj does not rewrite; it only writes, deletes, and writes again.

He’s probably talking about the flash itself: TBW is based on cells being rewritten, which is separate from what appears to be happening at the filesystem level.

1 Like

The filesystem plays no role here; it’s in the SSD itself.

OK, Total Bytes Written?

In fact, Storj does not modify a blob file (aside from what the filesystem does), so NOTHING in blobs gets rewritten.

Rewrites happen only if at least one bit in a cell changes, causing the SSD to read that block, modify it, and write it back (possibly somewhere else, because of wear leveling).
The number of writes a cell can take is not endless, so rewrites are worse than plain writes, but ALL writes count toward the wear.
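That read-modify-write behavior is usually summarized as a write-amplification factor (WAF): the SSD writes more to NAND than the host sends, which burns the TBW budget faster. A small illustration with made-up numbers:

```python
# How long a TBW endurance budget lasts under a given write-amplification
# factor (WAF). Rewrites and small random writes raise the WAF because the
# SSD must read, modify, and rewrite whole blocks. Numbers are illustrative.
def endurance_years(tbw, host_tb_per_day, waf):
    nand_tb_per_day = host_tb_per_day * waf  # actual NAND wear per day
    return tbw / nand_tb_per_day / 365

# Assumed: a 2880 TBW drive receiving 0.5 TB/day of host writes
print(f"WAF 1.0: {endurance_years(2880, 0.5, 1.0):.1f} years")
print(f"WAF 3.0: {endurance_years(2880, 0.5, 3.0):.1f} years")
```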

Yes, I was referring to their activity in the context of their initial use, not in Storj. Maybe the client that buys them new has a more write-heavy workload than the Storj use case, with many writes, deletes, and writes of the same cells (rewrites :grin:) wearing them down. TBW is somewhat misleading. Imagine you store about 50 TB of data and don’t change it for years, so it occupies the same cells, and use the remaining 10-12 TB to write files, delete, and write again. After some use those 12 TB of cells get burned out, and your drive becomes a full drive of only 50 TB capacity.
I don’t know what DWPD means.
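To put a number on that 50 TB thought experiment: if all churn lands on the remaining cells, their wear budget alone caps the drive’s life. This is a worst case, since real firmware spreads the load with static wear leveling (as mentioned later in the thread); the per-cell cycle count is an assumption:

```python
# Worst-case sketch: static data pins most cells, so all churn wears the rest.
capacity_tb = 62
static_tb = 50
pe_cycles = 3000                     # assumed program/erase cycles per cell

churn_tb = capacity_tb - static_tb   # 12 TB of cells absorbing all writes
writes_before_worn_tb = churn_tb * pe_cycles
print(f"{writes_before_worn_tb} TB of writes before the "
      f"{churn_tb} TB churn region wears out")
```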

It’s Difficult to Make Predictions, Especially About the Future :wink:

Seriously, who would have thought a year ago that Samsung would slash NAND production by 50% and also want to raise prices by 50% in 2024?

Or that Seagate would break the 20TB HDD barrier that soon and offer production-ready 30TB HDDs?

Or that WD plans to add a flash cache (similar to SSHD, but they don’t want to call it that) to offer 50TB HDDs that hold the metadata in flash? A 50TB drive that can hold all the metadata for the filewalker in flash? Wow, that sounds like a drive made for the Storj workload!

In conclusion, we have enough space; we just need Storj clients and more, more, more data :heart_eyes:. You start small, maybe in v2 or the beginning of v3, with “what you have”: a Raspberry Pi and a 500GB laptop HDD… years pass, and you look with pride at your HDD farm of Raspberry Pis, mini PCs, NASes, servers, UPSes, “I don’t know what the heck that is” and other contraptions that spin 1-2PB of data, and at your electric bill :joy:.

They move the data around silently, internally, over invisible reserve cells: wear leveling and (over)provisioning.