I don’t know what I meant. Your answer was perfectly clear; I don’t really understand how I misread it.
I see your answer in another comment, with your pool of multiple 4-disk-wide RAIDZ1 vdevs and a mirrored special metadata device in front. And man, the Intel P3600 SSDs were nice for their time. Still are. I drooled so hard over those back in the day.
What would happen if you exhausted the space on your special meta device vDev?
This is probably why our views diverge. I’m oversimplifying a bit, but the primary load is part of a collection of proxy services. So while it’s best for the data on each disk to remain stable (because refilling it takes time, and performance is lower while it’s empty), any disk can “just die” and be replaced. That’s what I meant by saying it rewards capacity: there’s no reason to give up space to parity or mirroring. As long as any particular HDD always has about 500GB free… Storj can use the rest. It’s also why ZFS was chosen in the first place: the online scrubs let you confirm each proxy is still “clean”.
I’m still hoping hashstore does something clever. But… I’m not switching until it ships as the default. What I get now, with millions of .sj1 files, is an understandable and observable filesystem beneath them. No funny business…
If there is no space left on the special device, new metadata will be written to the data vdevs, the same way it would be without a special device.
If you expect to use up the entire SSD, it would make sense to over-provision it slightly to reduce wear (with vendor tools or by partitioning it manually). Or, better yet, replace the SSDs with larger ones before they fill up.
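If it helps, here is a minimal sketch of keeping an eye on how full the special vdev is getting, so you know when to over-provision or swap in larger drives. It is Python driving the zpool CLI; the pool name "tank" and the OpenZFS 0.8+ output layout (where special vdevs sit under a "special" class row in `zpool list -v`) are assumptions, not anything from this thread:

```python
import subprocess

POOL = "tank"  # hypothetical pool name; substitute your own

# Per-vdev capacity report; -H gives tab-separated rows, -p gives raw byte counts.
out = subprocess.run(
    ["zpool", "list", "-v", "-H", "-p", POOL],
    capture_output=True, text=True, check=True,
).stdout

in_special = False
for line in out.splitlines():
    fields = [f.strip() for f in line.split("\t")]
    name = fields[0]
    # Class separator rows mark where the special / log / cache vdevs begin.
    if name in ("special", "logs", "cache", "dedup", "spare", "spares"):
        in_special = (name == "special")
        continue
    if in_special and len(fields) >= 4 and fields[1].isdigit():
        size, alloc = int(fields[1]), int(fields[2])
        pct = 100 * alloc / size if size else 0.0
        print(f"{name}: {alloc / 2**30:.1f} GiB of {size / 2**30:.1f} GiB used ({pct:.1f}%)")
```

Running something like this periodically and alerting well before the mirror fills seems worthwhile, since metadata that spills onto the HDD vdevs will not migrate back to the SSDs on its own.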
Makes super sense, thank you. I can’t imagine choosing a metadata array that’s too small to begin with, but I can totally see a future where the array it’s accelerating has grown enough to make the former inadequate.
Would you, in that case, remove one drive from the mirror, replace it with a larger one, rebuild, then do the same with the other, and finally expand the “new” mirror?
Yes, you can increase the size of the special vdev just like any other vdev: replace the drives one by one and resilver.
Unlike many other filesystems, zpool replace allows the disk being replaced to remain in the pool until the resilver completes, so fault tolerance is not diminished for the duration of the replacement.
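A rough sketch of that one-at-a-time swap, again as Python driving the zpool CLI. The pool name and device paths are placeholders, not anyone’s real layout; the point is just that `zpool replace` handles the "old disk stays until the resilver finishes" part for you:

```python
import subprocess
import time

POOL = "tank"  # placeholder pool name
SWAPS = [      # placeholder device paths: (old special-mirror member, larger replacement)
    ("/dev/disk/by-id/old-ssd-1", "/dev/disk/by-id/new-ssd-1"),
    ("/dev/disk/by-id/old-ssd-2", "/dev/disk/by-id/new-ssd-2"),
]

def resilvering(pool: str) -> bool:
    # `zpool status` reports "resilver in progress" while a resilver is running.
    status = subprocess.run(["zpool", "status", pool],
                            capture_output=True, text=True, check=True).stdout
    return "resilver in progress" in status

for old, new in SWAPS:
    # The old member stays attached until the resilver completes, so the
    # special mirror keeps its redundancy throughout the replacement.
    subprocess.run(["zpool", "replace", POOL, old, new], check=True)
    while resilvering(POOL):
        time.sleep(60)

# With autoexpand=on (or `zpool online -e` per new device) the mirror then
# grows into the larger drives' capacity.
```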
Former: you conserve ports, and power consumption stays roughly the same.
Latter: vdevs share the load, so if you have such a massive amount of IO that your SSDs get overwhelmed, adding another one may make sense. However, with the Intel solutions you may not have PCIe lanes available by then. On the other hand, perhaps SATA SSDs are still better than nothing.
As an anecdotal reference, on my arrays the special device usage varies between 0.3 and 0.7% of used storage. For example, on one of the systems with 70TB of used space I see 547 GB allocated on the special device. So it’s pretty achievable.
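For anyone sizing a special device from numbers like these, a quick back-of-the-envelope along the same lines; the 0.3 to 0.7% range comes from the anecdote above, not a guarantee, and the real ratio depends on things like recordsize and the special_small_blocks setting:

```python
# Rough special-device sizing from an assumed metadata ratio (illustration only).
def special_estimate_gb(used_tb: float, low: float = 0.003, high: float = 0.007) -> tuple[float, float]:
    used_gb = used_tb * 1000
    return used_gb * low, used_gb * high

for used_tb in (70, 150, 300):
    low, high = special_estimate_gb(used_tb)
    print(f"{used_tb} TB used -> roughly {low:.0f}-{high:.0f} GB of metadata")
```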
Hi, just letting you know that all that above-average traffic this month, all month, did nothing to my payment. I’m getting less and less for the same nodes. Months ago it was steady, around $100; now it’s 15-20% less. I’m not happy, just letting you know.
(Because my cash-out to fiat starts at $100.)
Edit: @Julio Well, my nodes are very old, like from 2019/2020, so probably a lot of that “free tier” data got purged, probably more than new data came in.
Yes, now that you mention it, I see the payment arrived this afternoon. Checking the price, Storj is already down 4%.
It was almost the same $ amount as last month here.
Groovy.
Storj needs to make its global network more attractive to potential customers. Rather than hiding its Select network from potential customers, the company should focus on getting the global network audited and certified. This is crucial if the large, high-value potential customers it wants are currently unable to use the network because it lacks compliance with commonly required standards. Until this issue is addressed, the network’s potential for growth and success in that field will be very limited.
Looking back, I had a bump every month. The March/April payments this year went in the wrong direction, but I think that is because during Feb/March I converted all my nodes to hashstore, making them very slow, so I lost a lot of races.
I think the large ingress bandwidth is partially or mostly canceled out by faster deletion. It’s unfortunately the worst kind of traffic from a profit perspective: it uses bandwidth and IOPS but generates little storage revenue. I mean, the impact is modest, though.