Updates on Test Data

Test data, for the most part. It's space reservation for some big customers that might sign with Storj in the coming months.
I have nodes on the same machines with GE (graceful exit) on Saltlake (the test sat), so I can easily compare the ingress from real clients and from the test sat.

…of your monitoring, or at least a router, I hope? If it's the storagenode dashboard, then, well, it's inaccurate in that version,
see

Okay, I’ve relied on the dashboard for the past four years and thought it was reliable. I’ll check later if the hard drive might be full. That would of course be an explanation. Thanks for the hint.

It was, but then we decided to optimize it… And then the Community convinced us to revert the change. So, once your node upgrades, your bandwidth usage graph should be accurate again.

1 Like

I checked and you are right… 5 GB are left. Thx for the help

3 Likes

This is the minimum free-space safety check. So, I suppose, your databases are not updated with the current usage. I believe it's better to fix that on your side.

In general it's not a problem, because we have this safety check, but the numbers on your pie chart likely do not mirror the truth.
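
To illustrate what such a minimum-free-space safety check might look like, here is a small Go sketch. This is only an illustration, not the actual storagenode code: the 5 GB threshold (taken from the number mentioned above), the data directory path, and the function names are assumptions.

```go
// Illustrative sketch of a "minimum free space" check; NOT storagenode code.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// minFreeBytes mirrors the ~5 GB figure mentioned above; the real threshold
// lives in the storagenode implementation and may differ.
const minFreeBytes = 5 * 1000 * 1000 * 1000

// freeBytes returns the free space (in bytes) on the filesystem containing path.
// Linux-only, since it relies on statfs.
func freeBytes(path string) (uint64, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		return 0, err
	}
	return st.Bavail * uint64(st.Bsize), nil
}

func main() {
	free, err := freeBytes("/mnt/storagenode") // hypothetical data directory
	if err != nil {
		panic(err)
	}
	if free < minFreeBytes {
		fmt.Printf("refusing new uploads: only %d bytes free\n", free)
		return
	}
	fmt.Printf("accepting uploads: %d bytes free\n", free)
}
```

The point is simply that the node checks the real free space on disk as a last line of defense, independently of whatever the (possibly stale) databases report.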

Ahhh, I always wondered when you would crack. :sweat_smile:
Nice to have you on board.

3 Likes

Pretty sure @IsThisOn has “been onboard” for quite a while now :slight_smile:

1 Like

I stand corrected. Welcome! :hugs:

3 Likes

It’s OK. We can have all the optimism because we know you’re always there to piss on our bonfire and bring it back down… :wink:

EDIT: but seriously, it’s good to have dissenting voices, so thanks for that.
This is just fun for me and I am in a very enviable position where buying a drive here and there doesn’t really matter to me. I get to learn a lot of Linux and there are some lovely (and less lovely but no less interesting) and very clever people here who have taught me a lot.

So even if it folds tomorrow I’ll still bag it as a win :smiley:

9 Likes

This is how they get you, little by little, step by step… today a drive, tomorrow one more drive, and at the end of the year you ask yourself: “How the hell did I get 1PB of storage?” :man_facepalming:t2:
:rofl::rofl::rofl::rofl::rofl:

4 Likes

And potentially next year “What the hell do I do with 1PB of storage?” :joy:

8 Likes

I believe this gif could explain it.

1 Like

It might look like things break left and right, but I bet it's a small fraction of a percent of all nodes; it just looks BIG on the forum. If the network works, that is the true indicator.
I like this site: https://status.storjstats.info/
Edit: I mean this site: https://status.storj.io/
Nice that there are more; everything is in the stats. You've got even more, Storj team? ;D
The more the better to show off :smile:

No worries: https://www.youtube.com/watch?v=gkLCE1LI_9M
Sorry, I had to.
Let's only add good value to the topic from now on :smile:

1 Like

I think you probably need to see it a bit more from their perspective. Also, assume they’re not all dumb.
Reliability and integrity of data were more important than performance to begin with, especially in the first iterations of the network, which were not much more than a proof of concept. Sync may have come from that concern. Also, it wasn't a problem at the traffic levels back then.
Gradually, as the network grows and the technology proves itself, gets tweaked, and becomes more reliable, you can start (cautiously, I think) moving toward performance optimisation.
Perhaps it is not happening as fast as you think it should, but it doesn’t seem like an unreasonable approach to me.

4 Likes

I just want to reiterate that I am in no way affiliated with Storj and I am very much not a techie.

The expansion factor is indeed there for that, assuming everything works well and all nodes do what they should.
From a risk management approach it makes sense to reduce the risk of data corruption on the nodes when you’re still not entirely sure whether everything works as it should.
You are looking at this in hindsight so I think it may be biasing your judgment somewhat.
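
As a rough illustration of where that expansion factor comes from, assuming the often-quoted Reed-Solomon numbers (29 pieces needed to reconstruct a segment, roughly 80 pieces kept per segment) — these exact values are an assumption and may differ from the current network defaults:

```go
// Back-of-the-envelope expansion factor from assumed Reed-Solomon parameters.
package main

import "fmt"

func main() {
	const required = 29.0 // pieces needed to rebuild a segment (assumed)
	const stored = 80.0   // pieces the satellite aims to keep (assumed)

	fmt.Printf("expansion factor ≈ %.2fx\n", stored/required) // ≈ 2.76x
}
```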

2 Likes

Precisely. They went down the risk-avoidance route.

You still made the change years after Storj turned on sync, so you were benefiting from hindsight already.
Unless I completely failed to understand your post :slight_smile:

2 Likes

It does if you don’t know that the expansion works as planned.

Yes, because:
1 - You already had 3 years of evidence that sync is not necessary (they didn't have that when they turned it on at the start).
2 - You don't really care if data integrity is compromised. As an SNO, that is not our direct problem, so you have no incentive to be cautious. They do. That makes for different risk-benefit conclusions between you and them.

2 Likes

My understanding of synchronous vs asynchronous writes suggests that async may be marginally riskier in dodgier setups.
But as I said earlier, I am not a techie, so I will have to bow to your likely superior knowledge.
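
For anyone curious, here is a toy Go example of the difference being discussed: a "durable" write that syncs to disk before returning versus an "async" write that leaves the data in the OS page cache. This is just a sketch under my own assumptions, not Storj's actual write path.

```go
// Toy illustration of sync vs async writes; NOT Storj code.
package main

import "os"

// writePiece writes data to path. With durable=true it forces the data to
// stable storage before reporting success; with durable=false the data may
// sit in the page cache and be lost on a power cut — the "marginally riskier
// on dodgier setups" part.
func writePiece(path string, data []byte, durable bool) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	if _, err := f.Write(data); err != nil {
		return err
	}
	if durable {
		return f.Sync() // fsync: durable, but slower
	}
	return nil // async: rely on the OS to flush eventually
}

func main() {
	_ = writePiece("/tmp/piece.bin", []byte("example piece data"), true)
}
```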

1 Like

You forgot ants on the motherboard… (sorry, I'm currently near the Equator… so these insects are sometimes as big as my foot…).

which is impossible if you use cryptographic algorithms; the upload/download would simply fail. What's the point here?
We do not ask every node individually. The uplink will request pieces from 39 nodes and start downloading; as soon as the first 29 are finished, all remaining requests get canceled (including those from nodes which might have altered their pieces). Then it will reconstruct the segment using Reed-Solomon erasure coding, then decrypt the object (if the client even has the encryption key…).
So… if some nodes return garbage, the others should return something useful. Yeah, I know, the probability and so on… But well, it has been proven over the last 10 years. And it still works, independent of any insect invasion.
Sorry about the insects one more time… there are a lot of them…
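
For illustration, here is a very rough Go sketch of that "request 39, keep the first 29, cancel the rest" long-tail pattern described above. The fetchPiece helper, the fake latencies, and the overall structure are placeholders for illustration, not the real uplink code.

```go
// Rough sketch of long-tail cancellation during a download; NOT uplink code.
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

const (
	requested = 39 // nodes asked for a piece (from the post above)
	needed    = 29 // pieces required for Reed-Solomon reconstruction
)

// fetchPiece is a stand-in for downloading one erasure-coded piece from a node.
func fetchPiece(ctx context.Context, node int) ([]byte, error) {
	select {
	case <-time.After(time.Duration(rand.Intn(200)) * time.Millisecond):
		return []byte(fmt.Sprintf("piece-from-node-%d", node)), nil
	case <-ctx.Done():
		return nil, ctx.Err() // this request lost the race and got canceled
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	results := make(chan []byte, requested)
	for n := 0; n < requested; n++ {
		go func(node int) {
			if piece, err := fetchPiece(ctx, node); err == nil {
				results <- piece
			}
		}(n)
	}

	// Keep only the first `needed` pieces, then cancel the slow remainder.
	var pieces [][]byte
	for len(pieces) < needed {
		pieces = append(pieces, <-results)
	}
	cancel()

	fmt.Printf("got %d pieces, segment can now be reconstructed\n", len(pieces))
	// Reed-Solomon reconstruction and decryption would happen here.
}
```

The design point is that a slow (or misbehaving) node simply loses the race: its request is canceled once enough good pieces have arrived, and the segment is rebuilt from whatever 29 pieces came back first.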