Updates on Test Data

Ah, my sarcasm detector was broken earlier. But you missed the point I made. I was not talking about the current tests but rather about a malicious customer who could do this.

Same here. Did some math lately. It is not economically viable for me to accept more than 300 TB of uploads per year while getting paid less than ~400 USD per year, due just to the wear on HDDs and SSD caches from disk writes; I'm not even taking bandwidth into account here. At full utilization this translates to an expectation that, on average (TTL or not), pieces will live for 27 days. The 30-day TTL that the current tests use is pretty close, but TTL data is still not the majority of what my nodes are storing, so it's probably fine for now. But I'm watching closely.
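As a sanity check, the 27-day figure falls out of dividing stored capacity by yearly ingress. A minimal sketch; the ~22 TB node capacity is my assumption (it is not stated above), while the 300 TB/year ingress is the figure from the post:

```python
# Average piece lifetime at full utilization = capacity / yearly ingress.
# capacity_tb (~22 TB) is an assumed node size, not a figure from the post;
# 300 TB/year is the ingress quoted above.
capacity_tb = 22.2
ingress_tb_per_year = 300.0

avg_piece_lifetime_days = capacity_tb / ingress_tb_per_year * 365
print(round(avg_piece_lifetime_days, 1))  # → 27.0
```

Any node size plugged into the same formula gives the corresponding break-even lifetime, which is why the 30-day test TTL lands so close to the 27-day estimate.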

These numbers could be improved quite a lot at the cost of non-trivial engineering effort.

The protocol accepts TTLs with subsecond precision, and there's nothing in the node code that would reject a TTL of, let's say, 1 second. Even better: a node will gracefully accept an already expired piece. No idea about the satellite, though.
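A sketch of why an already expired piece is harmless on the node side (the function and data shapes here are mine for illustration, not the actual storagenode code): the piece is stored unconditionally, and expiry only matters when the next garbage-collection pass sweeps it up.

```python
from datetime import datetime, timedelta, timezone

def expired_pieces(pieces: dict) -> list:
    """Return IDs of pieces whose TTL has elapsed (None = no TTL).
    Illustrative only -- not the real storagenode GC logic."""
    now = datetime.now(timezone.utc)
    return [pid for pid, exp in pieces.items() if exp is not None and exp <= now]

now = datetime.now(timezone.utc)
pieces = {
    "a": now + timedelta(minutes=10),  # short TTL: accepted like any other
    "b": now - timedelta(hours=1),     # already expired on arrival
    "c": None,                         # no TTL at all
}
print(expired_pieces(pieces))  # → ['b']
```

Under this model a 1-second (or even negative) TTL is just an expiration timestamp like any other; nothing in the accept path needs to special-case it.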

5 Likes

Waiting for Roxor's comment: “let the data floow im getting riiiiCcch!!! :money_mouth_face::money_mouth_face::money_mouth_face::moneybag:”

6 Likes

Show…

Me…

'Da Monaaaaaaaay! :euro: :coin: :moneybag: :money_mouth_face: :dollar: :heavy_dollar_sign: :pound:

You guys can stay here and argue about non-existent micro TTLs used by attackers that don’t exist paid for by money nobody would part with . I’ll take all the ones. All the zeroes. And even some twos if the price is right!

TL;DR; Wen Lambo :question:

3 Likes

This is something we need to bring to the table for real; the wear and bandwidth is a lot. And with low TTLs etc., I'm having a hard time understanding how you will have a lot of SNOs left. The calculations don't work in the SNO's favor in the slightest. How will you scale up when you get your customers, Storj?

5 Likes

The delusion is over 10000!!

2 Likes

This is another reason why we need to distribute nodes widely rather than concentrating them in one physical location.

1 Like

Fast forward to the present: we don't have any ternary logic system on the market…
So it will take more than 5 years.

What are your calculations for a 10 TB HDD that takes 2 years to fill and lasts 6 years (per the “expectancy of a hard drive being alive at six years is 88%” figure)? Wouldn't a SNO still be up over $500 after all costs?
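A back-of-the-envelope version of that question. The $1.50/TB-month storage payout and the ~$200 drive cost are my assumptions, not figures from this thread; egress income, electricity, and the 12% failure risk are left out for simplicity:

```python
# Assumed inputs (not from the thread): $1.50/TB-month payout, $200 drive.
capacity_tb = 10
fill_years = 2
total_years = 6
payout_per_tb_month = 1.50
drive_cost_usd = 200.0

# Linear fill: average 5 TB stored for the first 24 months,
# then the full 10 TB for the remaining 48 months.
tb_months = (capacity_tb / 2) * (fill_years * 12) \
    + capacity_tb * ((total_years - fill_years) * 12)
gross = tb_months * payout_per_tb_month
net = gross - drive_cost_usd
print(tb_months, gross, net)  # → 600.0 900.0 700.0
```

Under these assumptions the drive earns roughly 600 TB-months, about $900 gross and $700 net, so "up over $500 after all costs" looks plausible even with some electricity and failure risk shaved off.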

If we’ve got over 22000 nodes and they’re all doomed to lose money, it’s our moral imperative to let them know! :wink:

2 posts were merged into an existing topic: Avg disk space used dropped with 60-70%

I don’t see how distribution answers this concern? Spreading wear over more devices, is that what you mean?

Probably the money isn’t right yet.

Absolutely correct. I propose Storj buys us the drives, the host systems to put them in, also installs air conditioners in our in-house server rooms, and also pays for multiple uplinks to different ISPs (with alternate physical routes, ofc). They also need to pay for our UPS and yearly maintenance. While we are at it, we may as well ask for a diesel generator and a re-fueling priority contract, you know, like real datacenters do.

All sarcasm aside, let’s have a look at a 20TB Ultrastar (I honestly don’t know why WD doesn’t just rebrand them to “SATA Storj Drive” and “SAS Storj Drive”, without even mentioning capacity or anything else):

These drives (like all others) have a 550 TB/year workload rating. That means you can write 550 TB per year, which also means you can rewrite the entire drive 27.5 times per year. Which logically leads to rewriting the drive 2 times per month, or once every couple of weeks (literally), with 3.5 rewrites to spare. And that's staying within the “recommendations” of the manufacturer; actually, this is what the manufacturer has to say about it:

Projected values. Final MTBF and AFR specifications will be based on a sample population and are estimated by statistical measurements and acceleration algorithms under typical operating conditions, typical workload and 40°C device-reported temperature. Derating of MTBF and AFR will occur above these parameters, up to 550TB/year and 60°C (device-reported temperature). MTBF and AFR ratings do not predict an individual drive’s reliability and do not constitute a warranty.
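The workload-rating arithmetic above, spelled out (20 TB capacity and the 550 TB/year rating are the figures from the post):

```python
# Full-drive rewrites allowed per year under the manufacturer's
# 550 TB/year workload rating for a 20 TB drive.
capacity_tb = 20
workload_tb_per_year = 550

rewrites_per_year = workload_tb_per_year / capacity_tb
days_per_rewrite = 365 / rewrites_per_year
print(rewrites_per_year, round(days_per_rewrite, 1))  # → 27.5 13.3
```

So a rewrite every ~13 days stays inside the rating, matching the "once every couple of weeks" framing above.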

Do you (@flwstern) really expect that all data going forward will be 1-nanosecond-TTL data?

4 Likes

@Mitsos shhhh! They’ll hear you!

(If you start telling people micro-TTLs are unrealistic theoretical cases, and TBW ratings for HDDs and SSDs have more to do with warranty lengths than how long the drives will actually live… then more people will see how Storj can make them money. We need them to leave the project in fear! :scream: More for us!)

4 Likes

I'm with you on that. Are we a conspiracy now?

To be fair, my Seagate drives aren't the best. But I don't see the point of giving them an easier workload. They will have to deal with this, or they will get replaced with something that can. It is as simple as that. They are in the system to make as much money as possible before they die, and if I can get $120 in a single month, I will take it. I am sure even these bad Seagate drives will survive another year or so, and that would be enough payout to replace them all.

3 Likes

“CDN-style” data storage needs a different price than long-term TTL data. Furthermore, we need to mark this kind of data differently and divide nodes into groups: long-term TTL nodes (slow) and fast nodes for short TTLs, paid differently of course. We all started with potatoes (hey… you told us!)
This is what happens in large datacenters… some data is stored on tape, other data sits behind SSD caching. Trucks filled with potatoes, and bullet trains with fresh food. Some will prefer to earn less with less stress on bandwidth… others will set up bullet trains.

10 posts were split to a new topic: Online score is dropped and node is suspended

Are those enterprise-grade or IronWolfs?

Seagate Exos

(20 chars to fill)