If I’m doing the rough math correctly: these guys could build a 2PB setup in about 4U, only pulling 1000W? And since it’s flash, they’d have the IO to support several nodes per drive?
Of course, that looks to be about $200k+ in disks today… but in a few years, once enterprises start to swap them out and they hit eBay/ServerPartDeals… maybe we’ll be running them too?
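A quick back-of-envelope check of that claim. Every figure below is an assumption on my part (16 × ~128TB U.2 drives, ~25W each under load, ~400W of host overhead, a rough street-price guess), not something from a spec sheet:

```python
# Back-of-envelope check of the 2PB / 4U / ~1000W claim.
# All figures are assumptions: 16 x ~128TB U.2 NVMe drives,
# ~25W per drive under load, ~400W for the host platform itself.
drives = 16
tb_per_drive = 128
watts_per_drive = 25
host_watts = 400
usd_per_drive = 12_500  # rough street-price guess

capacity_pb = drives * tb_per_drive / 1000
total_watts = drives * watts_per_drive + host_watts
total_cost = drives * usd_per_drive

print(f"{capacity_pb:.2f} PB, ~{total_watts} W, ~${total_cost:,}")
# -> 2.05 PB, ~800 W, ~$200,000 in drives
```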
They are really big. I was surprised to find out yesterday that 32TB SSDs exist, and now these! Man! I only knew of 8TB models. But the question is: how many rewrites can they handle? How many bits per cell do they use?
Of course they are not for Storj when new, because of ROI, but yeah, maybe when they hit eBay as used drives… maybe the price will be somewhat appealing. But it’s very important how much life remains in them.
<1 DWPD is actually fairly low endurance for an enterprise SSD.
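For context (since the question comes up below): DWPD is “drive writes per day” over the warranty period, and converting it to TBW is simple arithmetic. The capacity, rating, and 5-year warranty in this sketch are assumptions:

```python
# DWPD (drive writes per day) -> TBW, assuming the usual 5-year warranty.
def dwpd_to_tbw(capacity_tb: float, dwpd: float, years: float = 5) -> float:
    return capacity_tb * dwpd * 365 * years

# e.g. a hypothetical ~123TB drive rated at 0.6 DWPD:
print(f"{dwpd_to_tbw(122.88, 0.6):,.0f} TBW")  # -> ~134,554 TBW
```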
Unless this kind of SSD gets cheaper than hard drives, there’s no real point. You don’t need the performance for Storj, nor are these more power-efficient than hard drives. On top of that, you have to factor in the PCIe lanes needed to run a ton of NVMe: you either need a full-blown server platform or a bunch of PCIe switches, both of which increase cost and power (see the lane math below).
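The lane budget is easy to see with a sketch, assuming the typical x4 link per NVMe drive:

```python
# PCIe lane budget; x4 per drive is the usual U.2/M.2 NVMe link width.
drives = 16
lanes_per_drive = 4
print(drives * lanes_per_drive)  # -> 64 lanes: full server platform
                                 # territory, or PCIe switches instead
```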
Did a rough estimate: with an 8TB Samsung (2,880 TBW) at 450€ NEW!, you have a chance to get ROI after ~5 years (for the drive alone; ~37 months after it’s full).
That’s a big if, with a lot of ??? in it, but the writes are not the problem here. Not even if you also keep the DBs, orders, and (redirected) logs on it.
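For the curious, here is roughly how that ~37-month figure falls out. The payout rate is an assumption (in the ballpark of historical Storj storage payouts), and EUR is treated as roughly equal to USD:

```python
# Rough break-even math for the 450-euro 8TB drive above.
# Assumptions: ~1.5 USD per TB-month storage payout (ballpark of
# historical Storj rates), EUR ~= USD, node already full.
price_eur = 450.0
capacity_tb = 8.0
payout_per_tb_month = 1.5

monthly_income = capacity_tb * payout_per_tb_month  # ~12/month
print(f"~{price_eur / monthly_income:.1f} months to break even once full")
# -> ~37.5 months, matching the estimate above
```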
Storj does not rewrite; it only writes, deletes, writes.
The filesystem plays no role here; it’s in the SSD itself.
OK, Total Bytes Written?
In fact, Storj does not modify a blob file (filesystem metadata aside), so NOTHING in blobs gets rewritten.
Rewrites happen whenever even a single bit in a cell changes, causing the SSD to read that block, modify it, and write it back (maybe somewhere else, because of wear leveling).
The writes a cell can take are not endless, so rewrites are worse than plain writes, but ALL writes count toward the wear.
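A worst-case illustration of that read-modify-write cycle (block and write sizes here are illustrative assumptions, not from any particular drive):

```python
# Worst-case write amplification from read-modify-write:
# the SSD rewrites a whole erase block for a tiny logical change.
# Sizes below are illustrative assumptions.
erase_block_kb = 1024   # NAND erase block
logical_change_kb = 4   # what the host actually changed

amplification = erase_block_kb / logical_change_kb
print(f"worst case: {amplification:.0f}x the wear of the logical write")
# -> 256x: 4KB changed, 1MB of cells cycled
```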
Yes, I was referring to their activity in the context of their initial use, not in Storj. Maybe the client that gets them new has a heavier workload for them than the Storj use case, with many writes, deletes, and writes of the same cells (rewrites) wearing them down. TBW is somewhat misleading. Imagine you store something like 50TB of data and don’t change it for years, so it occupies the same cells, and you use the remaining 10-12TB to write files, delete, write again. After some use, those 12TB of cells get burned out, and your drive effectively becomes a full drive of only 50TB capacity.
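Putting illustrative numbers on that scenario (note this is the pessimistic extreme: real drives also rotate cold data via static wear leveling):

```python
# Effective endurance when all churn lands on a small free area.
# Numbers are illustrative; real drives also move cold data around.
capacity_tb = 62
static_tb = 50
churn_area_tb = capacity_tb - static_tb   # 12TB takes every write
pe_cycles = 1000                          # assumed P/E cycles per cell

print(f"{churn_area_tb * pe_cycles:,} TB written before the free area dies")
# -> 12,000 TB, vs 62,000 TB if the wear were spread over every cell
```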
I don’t know what DWPD means.
In conclusion: we have enough space, we just need Storj clients and more, more, more data. You start small, maybe in v2 or the beginning of v3, with “what you have”: a Raspi and a 500GB laptop HDD… years pass, and you look with pride at your HDD farm of Raspis, mini-PCs, NASes, servers, UPSes, “I don’t know what the heck that is” boxes, and other contraptions that spin 1-2PB of data, and at your electric bill.