Updates on Test Data

The history doesn’t mean anything. Policies are changing every day.
If the ISP considers that you cost it more money to keep than to let go, it will let you go.

2 Likes

Storj doesn’t need us? Eh, what? :slight_smile:

And how are the ISP issues related to Storj?
Storj needs you! Please fix the problems and participate.

In my area, back then, ISPs were not only sending you data-usage warnings… but also passing along copyright-infringement notices from the companies that owned the IP (who were watching what was being seeded).

If a few SNOs receive data-usage warnings now, because the system is actually being used, it’s no big deal. SNOs can tune back their usage or upgrade their plan, and even if they leave, there are tens of thousands of other nodes happy to take over.

1 Like

Yes Sir!
A big SNO’s place is in Storj Select.
Storj Public is for small and medium SNOs.

And personally? I think the network would be better off if there were no big SNOs at all.
The more small SNOs, the better the connectivity and the more decentralized the network: a win-win-win.

Edit: "no no, look:"

@Alexey @tylkomat
You misunderstood what a Big SNO is.

One is not a Big SNO if one cannot go to Storj Select.
If one looks like a Big SNO because they have PBs of data, then most likely they are renting space in a datacenter, acting as a middleman, and cheating on the /24 IP rule.
A datacenter could sign up for Storj Select by itself. The role of such a middleman is valid only until the datacenter itself decides to take their place and their cut of the profit.

If you are big, go for Storj Select; that means you are a datacenter yourself, or you are riding on one (and running PBs of data in one location under many different IPs is forbidden for Storj Public, but fine for Storj Select).

@tylkomat
No, they will become larger, up to medium maybe. You cannot be Big if you are not in a datacenter, or don’t have your own datacenter in the garage. But in a garage, like Chia farms, you would have to circumvent the /24 IP rule for it to make sense, and that’s forbidden for Storj Public.

So there is a limit to how big a SNO can grow.
More than by HDDs, it is limited by public IPs.
A home or office can still get a few public IPs from its ISP (a commercial small-business connection).
To some degree, there is no point in adding more HDDs if you don’t have more IPs; it will all simply stay empty longer, consuming power.

Actually, I was WRONG. Small SNOs are not a win-win-win,
it’s a 4x win!

For Storj’s network (more internet connections, more decentralization, fewer strikes)
For Storj’s network customers (more speed, better access, durability and reliability)
For SNOs (more traffic, HDDs fill up faster; and if payouts go up, more small SNOs will join*)
For ISPs (distributes the load more evenly, lets them sell higher-tier plans)

*And if small SNOs somehow want to join en masse, that means less room for big SNOs on Public Storj, which again is better for the network overall if the big ones go to Select (a lower percentage of nodes circumventing the rules).

I mean ONLY if small SNOs join en masse; that means there has to be a reason to, visible to many people. There are more potential small SNOs than big ones, so small SNOs would outnumber big ones, even if big ones want to join more as well, because see this:

The question is how to make small SNOs join en masse.
IMHO the only way is to raise the pay rate per TB of storage.
That will bring people in, in numbers.
Big circumventors would also like to expand, but they would be outnumbered by small nodes in the mass rush.

And I even suspect that the overall percentage of occupied space held by rule-circumventing nodes would fall.
More space would go to honest, small SNOs.
And they would be better rewarded.

  • IF they are better rewarded…

No… Storj Select is only for those who are compliant with SOC2/ISO 27001, not just for any “big nodes”…

1 Like

Anyone can become a big SNO if they stay long enough. Over time hard drives will fill and new ones will be bought. It’s just a matter of time. Of course, the currently large ones will get larger in that time as well.

2 Likes

Apologies if I missed something among the 940 posts in this thread, but is testing also running on the Select network, or only on the regular Storj network?

1 Like

Just out of curiosity, when your testing stores petabytes of data, do you also test reading it back? Is it checked that everything is stored correctly, like downloading it back and byte-comparing it with the original files?

Looking at my traffic graphs: no. There is an audit process which checks random pieces, and that is operational, but so far data is only being uploaded, not downloaded.

Someone may have a datacenter (complete with multiple uplinks, generators, etc.), but not have the required documents. Storj Select is less about big or small SNOs and more about having the needed documents.

In theory, I could drag a server to a “real datacenter” that has the documents, but it seems like a lot of work for not a lot of gain (unless I had petabytes of storage and there were enough customers using it).

Just public+SLC. The performance tuning they’ve done means public is faster than Select (for now).

And it doesn’t sound like download speeds are something they’re trying to tune for their large potential customer. In fact, since the customer is specifically looking at TTL uploads… probably most data will never be downloaded at all: just stored for a minimum amount of time, then deleted.

Only on the Public network, as far as I know. Select is much slower (kind of expected, but well, SOC2 compliance doesn’t fall from the sky when it’s needed or, even worse, required/mandatory).
If you don’t need it, please use the Public Storj Network instead; it’s much faster!!

1 Like

The issue seems to be bufferbloat when the VPN connection gets overwhelmed.

I’m trying to figure out if I can configure CAKE or CoDel on the virtual interface.
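In case it helps, here is a minimal sketch of what I would try on Linux: replace the root qdisc on the tunnel interface with CAKE, falling back to fq_codel if the kernel doesn’t have CAKE. The interface name `tun0` and the 40mbit figure are placeholders, not taken from your setup; the bandwidth should sit a bit below the real VPN uplink rate so the queue builds up where CAKE can manage it instead of inside the VPN’s own buffers.

```python
# Sketch: put CAKE (or fq_codel as a fallback) on a VPN tunnel interface.
# Assumptions: Linux with iproute2, run as root; "tun0" and "40mbit" are
# placeholders for the actual VPN interface and its usable uplink rate.
import subprocess

IFACE = "tun0"        # hypothetical VPN interface name
BANDWIDTH = "40mbit"  # set slightly below the real VPN throughput

def set_qdisc(iface: str, bandwidth: str) -> None:
    try:
        # CAKE shapes traffic to the given rate, keeping the queue (and latency) short.
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", iface, "root",
             "cake", "bandwidth", bandwidth],
            check=True,
        )
    except subprocess.CalledProcessError:
        # Kernel without CAKE: fq_codel still fights bufferbloat,
        # but it does not shape the rate by itself.
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", iface, "root", "fq_codel"],
            check=True,
        )

if __name__ == "__main__":
    set_qdisc(IFACE, BANDWIDTH)
    # Show the resulting qdisc and its drop/mark statistics.
    subprocess.run(["tc", "-s", "qdisc", "show", "dev", IFACE], check=True)
```

The same two `tc` commands can of course be run by hand; the shaping-below-line-rate part is what actually moves the bufferbloat out of the overwhelmed VPN link.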

All my nodes reached 9-10 TB this month, up from 3 TB last month. But… the satellites’ average usage doesn’t say the same thing.
Again, a very big discrepancy between my node report and the satellite report… is this surplus data already deleted from the satellites’ point of view? Or is there a problem with them keeping track of the big inflow?

I’m curious whether the retain processes can’t keep up. They all finish, but… it takes time.

My nodes are showing 14 TB average per day but 15 TB in the Stored graph. So it’s “only” a 1 TB discrepancy.

Remember, those graphs show two different things. The second/bottom one shows what the node thinks is true right now, which should be close to what you see in the OS (+/- trash processing). The first/top one is TB·month from the satellite… which is the average-so-far this month of what you’re being paid for… and because it’s an average it includes all your earlier days with less filled space.

So in your case, even if you’re holding 10.38 TB now… when you add that up with all the lower-use days from June 1-9, it has only dragged the average-for-the-month up to 3.51 TB so far. Even if you didn’t get any new ingress, you’d see that TBm number get dragged higher each day as more 10.38 TB days get included in the average.

If your node didn’t grow, or grew/contracted slowly, those two graphs would show similar numbers. But when we have huge ingress or deletes, the TBm average takes time to catch up.
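To put rough numbers on that (the daily values below are invented just for illustration, not pulled from anyone’s dashboard, and the real accounting is done in byte-hours), the difference between the two graphs is basically “latest value” vs “mean of all days so far this month”:

```python
# Toy illustration of the "Stored" graph (current value) vs the satellite's
# average-so-far figure for a node that grew quickly this month.

# Stored TB at the end of each day so far this month (days 1..10, made up):
daily_stored_tb = [0.3, 0.6, 1.0, 1.5, 2.2, 3.0, 4.5, 6.0, 8.0, 10.38]

current_tb = daily_stored_tb[-1]                          # "Stored" graph
average_tb = sum(daily_stored_tb) / len(daily_stored_tb)  # "Average usage" graph

print(f"current stored: {current_tb:.2f} TB")  # 10.38 TB
print(f"average so far: {average_tb:.2f} TB")  # ~3.75 TB, lagging well behind
```

Keep appending 10.38 TB days to that list and the average creeps up toward the current value, which is the catching-up effect described above.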

3 Likes

Traffic changed on unvetted nodes on my side…

I see. I don’t know how it’s calculated, but I thought it showed the daily usage, not an average for the entire month, even though it says “monthly average” :rofl:.
Now it makes sense…

1 Like

So I really hit 10 TB on 6 nodes that are half a year old.
Damn… time for customising my Lambo. :star_struck:

3 Likes

It will all be gone after 4 weeks. Self-destructing data.

1 Like