Updates on Test Data

But you already have the line. You don’t pay for the data transfer out of pocket. $15 is better than 0. Anything is better than zero. You don’t have any extra costs. You are literally handed $15 at the end of the month. Why is it not enough?

2 Likes

Why pick such a high value? Maybe you should calculate with 1 hour instead? Isn’t that more realistic?

4 Likes

Tests are continuing, and it appears they will continue for some time. Changes are being made and tested at the satellite level as well as on the node side.

This is all being done in anticipation of similar load and operations from customers.

9 Likes

So multiply this by 3 and you have $0.60/month.

That is too much. Let's try to get this down to $0 per month with 1 Gbit/s of constant uploads. Let's go extreme here. I mean, we are already far away from the TTL that is actually being uploaded, so we might as well run the math with nanoseconds.
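(For scale, a rough sketch of the extreme being joked about here, assuming the usual payout model where ingress itself is unpaid and only data at rest earns roughly $1.50/TB-month; the 1 Gbit/s rate and the nanosecond TTL are just the illustrative numbers from this post, nothing from the actual tests:)

```python
# Illustrative only: constant 1 Gbit/s ingress with a (sarcastic) nanosecond TTL.
# Assumes ingress is unpaid and stored data earns ~$1.50 per TB-month.
ingress_tb_per_s = 1 / 8 / 1000            # 1 Gbit/s expressed in TB/s
ttl_seconds = 1e-9                          # the nanosecond TTL from the joke

tb_written_per_month = ingress_tb_per_s * 3600 * 24 * 30   # ~324 TB written
avg_tb_stored = ingress_tb_per_s * ttl_seconds              # effectively zero at rest

payout_per_month = avg_tb_stored * 1.50
print(f"{tb_written_per_month:.0f} TB written, ${payout_per_month:.12f} paid")
```

That is the scenario being mocked: hundreds of terabytes written per month for a storage payout that rounds to zero.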

6 Likes

Ah, my sarcasm detector was broken earlier. But you missed the point I made. I was not talking about the current tests but rather about a malicious customer who could do this.

Same here. Did some math lately. It is not economically viable for me to accept more than 300 TB of uploads per year and get paid less than ~400 USD per year, due just to wear on HDDs and SSD caches from disk writes (I'm not even taking bandwidth into account here). At full utilization this translates to an expectation that, on average (TTL or not), pieces will live for 27 days. The 30-day TTL that the current tests use is pretty close, but TTL data is still not the majority of what my nodes are storing, so it's probably fine for now. But I'm watching closely.
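(Where that 27-day figure comes from, spelled out; the ~22 TB capacity is back-solved from the quoted numbers rather than stated, so treat it as an assumption:)

```python
# Back-of-the-envelope for the 27-day average piece lifetime above.
capacity_tb = 22            # assumed total node capacity (back-solved, not stated)
ingress_tb_per_year = 300   # the most ingress the poster is willing to accept

# At full utilization and steady state, every TB written displaces a TB that
# expired or was deleted, so average piece lifetime = capacity / ingress rate.
avg_piece_lifetime_days = capacity_tb / ingress_tb_per_year * 365
print(f"average piece lifetime: {avg_piece_lifetime_days:.1f} days")   # ~26.8
```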

These numbers could be improved quite a lot at the cost of non-trivial engineering effort.

The protocol accepts TTL with subsecond precision, and there's nothing in the node code that would reject a TTL of, let's say, 1 second. Even better: a node will gracefully accept an already-expired piece. No idea about the satellite, though.

5 Likes

Waiting for Roxor's comment: “let the data floow im getting riiiiCcch!!! :money_mouth_face::money_mouth_face::money_mouth_face::moneybag:”

6 Likes

Show…

Me…

'Da Monaaaaaaaay! :euro: :coin: :moneybag: :money_mouth_face: :dollar: :heavy_dollar_sign: :pound:

You guys can stay here and argue about non-existent micro TTLs, used by attackers that don’t exist, paid for with money nobody would part with. I’ll take all the ones. All the zeroes. And even some twos if the price is right!

TL;DR; Wen Lambo :question:

3 Likes

This is something we need to bring to the table for real; the wear and bandwidth are a lot. And with low TTLs etc., I’m having a hard time even understanding how you will have many SNOs left. The calculations do not work in favor of the SNO even in the slightest. How will you then scale up when you get your customers, Storj?

5 Likes

The delusion is over 10000!!

2 Likes

This is another reason why we need to distribute nodes widely rather than concentrating them in one physical location.

1 Like

Fast forward to the present: we don’t have any ternary logic system on the market…
So it will take more than 5 years.

What are your calculations for a 10TB HDD that takes 2 years to fill and lasts 6 years (based on the “expectancy of a hard drive being alive at six years is 88%” figure)? Wouldn’t a SNO still be up over $500 after all costs?
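(A rough sketch of that scenario; the payout rate, drive price, and power figures are assumptions for illustration, not numbers from the thread:)

```python
# Rough net for a 10 TB drive that fills over 2 years and lives for 6.
# Assumptions: $1.50/TB-month storage payout, egress income ignored,
# ~$180 drive cost, ~6 W average draw at $0.25/kWh.
storage_rate_per_tb_month = 1.50

fill_months, full_months, capacity_tb = 24, 48, 10
# Linear fill averages half capacity over the fill period, then stays full.
tb_months = capacity_tb / 2 * fill_months + capacity_tb * full_months   # 600

revenue = tb_months * storage_rate_per_tb_month        # ~$900
drive_cost = 180
power_cost = 6 / 1000 * 24 * 365 * 6 * 0.25            # ~$79 over 6 years
print(f"net over 6 years: ${revenue - drive_cost - power_cost:.0f}")    # ~$640
```

Even ignoring egress income entirely, that lands comfortably above $500.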

If we’ve got over 22000 nodes and they’re all doomed to lose money, it’s our moral imperative to let them know! :wink:

2 posts were merged into an existing topic: Avg disk space used dropped with 60-70%

I don’t see how distribution answers this concern? Spreading wear over more devices, is that what you mean?

Probably the money isn’t right yet.

Absolutely correct. I propose Storj buys us the drives and the host systems to put them in, installs air conditioners in our in-house server rooms, and also pays for multiple uplinks to different ISPs (with alternate physical routes, ofc). They also need to pay for our UPS and yearly maintenance. While we are at it, we may as well ask for a diesel generator and a refueling priority contract, you know, like real datacenters do.

All sarcasm aside, let’s have a look at a 20TB Ultrastar (I honestly don’t know why WD doesn’t just rebrand them to “SATA Storj Drive” and “SAS Storj Drive”, without even mentioning capacity or anything else):

These drives (like all others) have a 550 TB/year workload rating. That means you can write 550 TB per year, which also means you can re-write the entire drive 27.5 times per year. That works out to re-writing the drive about twice a month, or once every couple of weeks (literally), with 3.5 re-writes to spare. And that’s staying within the manufacturer’s “recommendations”; here is what the manufacturer actually has to say about it:

Projected values. Final MTBF and AFR specifications will be based on a sample population and are estimated by statistical measurements and acceleration algorithms under typical operating conditions, typical workload and 40°C device-reported temperature. Derating of MTBF and AFR will occur above these parameters, up to 550TB/year and 60°C (device-reported temperature). MTBF and AFR ratings do not predict an individual drive’s reliability and do not constitute a warranty.
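To spell out the arithmetic behind the re-write figures above (values taken from the quoted 20TB Ultrastar spec, nothing assumed beyond them):

```python
# Full-drive re-writes allowed by the workload rating of a 20 TB drive.
workload_rating_tb_per_year = 550
capacity_tb = 20

rewrites_per_year = workload_rating_tb_per_year / capacity_tb    # 27.5
rewrites_per_month = rewrites_per_year / 12                      # ~2.3, i.e. one every ~2 weeks
print(rewrites_per_year, round(rewrites_per_month, 1))
```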

Do you (@flwstern) really expect that all data going forward will be 1-nanosecond-TTL data?

4 Likes

@Mitsos shhhh! They’ll hear you!

(If you start telling people micro-TTLs are unrealistic theoretical cases, and TBW ratings for HDDs and SSDs have more to do with warranty lengths than how long the drives will actually live… then more people will see how Storj can make them money. We need them to leave the project in fear! :scream: More for us!)

4 Likes