Updates on Test Data

If you’re talking about @littleskunk’s gerrit link, this is about piece expiration, not GC. Yes, pieces were removed only 24 hours after they expired. If this patch gets merged, the delay will be configurable, with 1 hour as the default.
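Conceptually it’s just a grace period on top of the expiry timestamp. A toy sketch (in Python, purely illustrative; the real collector is Go code, and these names are made up):

```python
from datetime import datetime, timedelta, timezone

# Illustrative only, not Storj's actual code. The grace period becomes
# configurable, defaulting to 1 hour instead of the previous 24.
GRACE_PERIOD = timedelta(hours=1)

def pieces_to_delete(pieces, now=None):
    """Return pieces whose expiry plus the grace period has already passed."""
    now = now or datetime.now(timezone.utc)
    return [p for p in pieces if p["expires_at"] + GRACE_PERIOD <= now]
```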

…and…

The Storj business model is based on the fact that SNOs take the risk of unused or nonperforming hardware. If you are a SNO, you have to accept this. Complaining is useless. If you don’t accept this fact, there’s no point in operating nodes.

Well, apparently there are people like that, which is why Storj still works.

SNOs started this, not Storj.

How’s the regular Storj salary? I do hope they pay you well; dealing with SNOs on forums should warrant some better health benefits.

I think nobody knew, even Storj. It’s not possible to predict upfront what kind of customers a startup will attract.

This was actually a somewhat recent discovery on social media.

The new drives offer a 1M-hour MTBF and only a 300 TB/year workload rate, which WD defines as follows: “Workload Rate is defined as the amount of user data transferred to or from the hard drive.”

This would mean the traffic you quote (500 Mbps) would exhaust the workload limit on writes alone in about 55 days.
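For anyone checking the math (assuming sustained ingress at the full 500 Mbps, and WD’s decimal terabytes):

```python
# Back-of-the-envelope: sustained 500 Mbps of writes vs. a 300 TB/year rating.
mbps = 500
bytes_per_day = mbps / 8 * 1e6 * 86_400   # 62.5 MB/s -> ~5.4 TB/day
days_to_limit = 300e12 / bytes_per_day    # decimal TB, as drives are rated
print(f"{bytes_per_day / 1e12:.2f} TB/day, limit reached in {days_to_limit:.1f} days")
# -> 5.40 TB/day, limit reached in 55.6 days
```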

I’ve stated this before, but I do wish SNOs also got per-piece write revenue to balance this.

I don’t think this is how it works.

Rube Goldberg nodes?

Yeah, also works.

I’ll save this quote for later.

Yeah, this is funny; good observation. I suspect the Algorithm should keep separate counters for small and big pieces. My nodes were usually winning way more big pieces than small ones, being located somewhat outside the customer-hot areas (so higher latency), but with decent bandwidth.
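Pure speculation on my part, but something along these lines; the names and the 64 KiB boundary are made up for illustration, not Storj’s actual selection code:

```python
from collections import defaultdict

SMALL_PIECE_LIMIT = 64 * 1024  # hypothetical boundary between "small" and "big"

# One win/loss tally per (node, size class) instead of a single blended tally.
wins = defaultdict(lambda: {"won": 0, "total": 0})

def record_race(node_id: str, piece_size: int, won: bool) -> None:
    size_class = "small" if piece_size <= SMALL_PIECE_LIMIT else "big"
    tally = wins[(node_id, size_class)]
    tally["total"] += 1
    tally["won"] += int(won)

def win_rate(node_id: str, size_class: str) -> float:
    tally = wins[(node_id, size_class)]
    return tally["won"] / tally["total"] if tally["total"] else 0.0
```

A node like mine would then show a poor small-piece win rate (latency-bound) but a decent big-piece rate (bandwidth-bound), instead of one blended number.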

I am kinda close for historical reasons: I used to operate in multiple locations, but had to bring my nodes back together after losing access. Though I’m already down to ~40 TB, hosted behind an ISP-provided potato router that couldn’t deal with the traffic.

Not with these piece sizes, I think.
